Positive semidefinite decomposition, Laplacian eigenvalues, and the oriented incidence matrix
Suppose $A\in\mathbb{C}^{n\times n}$ is Hermitian and positive semidefinite with some decomposition $A=BB^*$, where $B=(b_{ij})\in\mathbb{C}^{n\times m}$ (not necessarily the Cholesky decomposition).
Question: is there a nice relationship between the row/column sums of $B$ and the eigenvalues of $A$? Specifically, can we obtain lower bounds for the largest eigenvalue of $A$ in one of the following forms (or similar in nature)?
(1) $\displaystyle\max_{1\leq j\leq n}\sum_{k=1}^m b_{jk} + \min_{1\leq k\leq m}\sum_{j=1}^n b_{jk} - 1 \leq \lambda_{\max}(A)$.
(2) $\displaystyle\max_{1\leq j\leq n}\sum_{k=1}^m|b_{jk}| + \min_{1\leq k\leq m}\sum_{j=1}^n |b_{jk}| - 1 \leq \lambda_{\max}(A)$.
Obviously $B$ would need to satisfy some special condition in (1) to make sure these sums are real. Perhaps this is too strong, but it would be useful to have such a relationship. Here is some possible motivation.
The Laplacian matrix of a simple graph $G$ can be written as $L(G)=B(G)B(G)^{\text{T}}$, where $B(G)$ is the oriented incidence matrix of $G$. The suggested lower bound (2) produces the well-known bound
$\Delta+2-1=\Delta+1\leq \lambda_{\max}(L(G))$.
So I suppose the question can be thought of as: is there a nice relationship between the oriented incidence matrix row/column sums and the Laplacian eigenvalues similar to (1) and (2)?
matrices linear-algebra spectral-graph-theory graph-theory
isn't there a scaling problem? if you rescale $B$ to $\alpha B$ for some $\alpha\in\mathbb{R}^+$, then the left sides of your inequalities scale like $\alpha$ while the right sides scale like $\alpha^2$, and you might have trouble in neighbourhoods of 0. How does your well-known bound deal with this? – Emilio Pisanty Apr 27 '12 at 22:50
2 Answers
Let n=m and let B have 1/2 in each entry of the first row and the rest zeros. Then your inequality reduces to (n-1)/2 < n/4, which is clearly false.
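The counterexample is easy to check numerically. A quick NumPy sketch with n = 5 confirms that the left side of (2) works out to (n − 1)/2 while the largest eigenvalue of A = BBᵀ is only n/4:

```python
import numpy as np

n = 5
B = np.zeros((n, n))
B[0, :] = 0.5                      # first row all 1/2, rest zeros
A = B @ B.T                        # A = B B^T is positive semidefinite

# Left side of inequality (2): max row sum + min column sum - 1
lhs = B.sum(axis=1).max() + B.sum(axis=0).min() - 1
lam_max = np.linalg.eigvalsh(A).max()

# lhs = 2.0 exceeds lam_max = 1.25, so the proposed bound fails
print(lhs, lam_max)
```

Here the only nonzero entry of A is the (1,1) entry n/4, so λmax(A) = n/4, while the row/column sums give (n − 1)/2 on the left.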
There is a large literature about the relations between the graph's degrees (i.e. the row sums of the incidence matrix; the column sums are all 0 and so less interesting) and the Laplacian eigenvalues. Are you interested in this? If yes, the Grone-Merris conjecture might be a place to start (it has been proved since, but I think the name stuck).
Hi Felix, I know of the Grone-Merris conjecture as well as Bai's work on it (if you want a neat reference that is quite new: homepages.cwi.nl/~aeb/math/ipm.pdf). However, I am more
interested in whether the matrix result above is true. – hypercube Mar 30 '12 at 20:27
2.1 Tangent Lines And Their Slopes
Calculus Of One Real Variable – By Pheng Kim Ving
Chapter 2: The Derivative – Section 2.1: Tangent Lines And Their Slopes
1. Tangent Lines And Their Slopes
Non-Vertical Tangent Lines
Thus, the equation of secant PQ is y = (6 + h)(x – 3) + 9. Similarly, T passes thru the point (3, 9) and has slope 6. Let
(x, y) be an arbitrary point of T. The equation of T is ( y – 9)/(x – 3) = 6, or y = 6(x – 3) + 9. In general, the equation
of the line passing thru the point (x[0], y[0]) and having slope m is:
y = m(x – x[0]) + y[0].
Fig. 1.1
Tangent T is limit of secant PQ as Q approaches P.
Note: Scales on the axes are different.
This equation is called the point-slope equation of the line ( because it involves a point and the slope). Recall that the
slope-intercept equation of a line is y = mx + b ( because it involves the slope and the y-intercept), where m is the slope
and b the y-intercept. We see that the equation y = mx + b is a special case of the equation y = m(x – x[0]) + y[0] where
x[0] = 0, so that y[0] is the y-intercept and in that notation is denoted by the letter b.
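As an illustration of the point-slope form, here is a small sketch (with a hypothetical `tangent_line` helper, not part of the text) that recovers y = 6(x – 3) + 9 for f(x) = x² at x = 3, approximating the slope with a symmetric difference quotient:

```python
def tangent_line(f, x0, h=1e-6):
    """Return (m, b) so that y = m*x + b is (approximately) the tangent
    to f at x0: slope from a symmetric difference quotient, intercept
    from the point-slope form y = m*(x - x0) + f(x0)."""
    m = (f(x0 + h) - f(x0 - h)) / (2 * h)
    y0 = f(x0)
    return m, y0 - m * x0

m, b = tangent_line(lambda x: x**2, 3.0)
print(round(m, 6), round(b, 6))   # ≈ 6.0 and -9.0, i.e. y = 6(x - 3) + 9
```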
Vertical Tangent Lines
Fig. 1.2
Tangent line to graph of y = x^1/3 at x = 0 is the y-axis, a vertical line.

The graph of y = x^1/3 is illustrated in Fig. 1.2. As Q approaches O, the secant OQ approaches the y-axis. The same thing occurs if Q approaches O from the part of the graph to the left of O. Thus, the graph has a vertical line – the y-axis in this case – as the tangent line at x = 0. The slope of that tangent line is:

then the graph of f has a vertical tangent line at x = x[0]. The equation of such a vertical tangent line is x = x[0].
Note that horizontal tangent lines are classified as non-vertical tangent lines. Their common slope is 0. The equation of a
horizontal tangent line to the graph of y = f(x) at (x[0], y[0]) is therefore y = y[0].
Where There Are No Tangent Lines
Fig. 1.3
Graph of y = |x| has no tangent line at x = 0.

The graph of y = |x| is sketched in Fig. 1.3. As Q[1] approaches O, the line OQ[1] stays the same, as part of the graph to the right of O. As Q[2] approaches O, the line OQ[2] stays the same, as part of the graph to the left of O. So, if the graph has a tangent line at x = 0, then there would be two distinct tangent lines there, the right and left parts of the graph, which contradicts the uniqueness of a tangent line. It follows that the graph has no tangent line at x = 0. Now:
Definition 1.1
│Suppose the function f(x) is continuous at x = x[0]. Then the quotient:│
If the tangent line to the graph of f(x) at x = a exists, then it's clear that it must be unique. So we define its slope by
using the (two-sided) limit, not one-sided limits, which may be different when they exist.
Definition 1.2
│The slope of a curve C at a point P is the slope of the tangent line to C at P if such a tangent line exists.│
Example 1.1
a. Find the tangent line to the graph of y = f(x) = x^2 at x = 3.
b. Find the tangent line to the graph of f at x = 0.
c. Use a graphing calculator or software to sketch a graph of y = g(x) = x^2/3.
d. Does the graph of g have a tangent line at x = 0?
a. When x = 3 we have y = 3^2 = 9. The slope of the tangent line at x = 3 is:
Thus the equation of the tangent line at x = 0 is y = 0(x – 0) + 0, or y = 0, which is the x-axis.
c. A graph of g is sketched in Fig. 1.4.
d. The difference quotient of g at x = 0 is:
Hence the graph of g has no tangent line at x = 0.
Slopes Of Perpendicular Lines
Suppose two lines T and N are perpendicular. If neither of them is vertical (and thus neither is horizontal), as shown in Fig. 2.1, then their slopes are negative reciprocals of each other. To see this, let the slope of T be m and that of N be n. Draw a vertical line cutting T at A and N at B. Angles PAH and BPH are equal because their sides are perpendicular. So right triangles PAH and BPH are similar. It follows that HA/PH = PH/HB. Now, m = HA/PH and n = – HB/PH. Thus, m = –1/n, which is the same as n = –1/m.

Fig. 2.1
Line N is normal to curve C at point P.
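The relation n = –1/m can be sanity-checked numerically: the direction vectors (1, m) and (1, n) of the two lines are perpendicular exactly when their dot product 1 + m·n vanishes. A one-line sketch with an arbitrary slope:

```python
# Direction vectors (1, m) and (1, n) are perpendicular iff 1 + m*n = 0,
# i.e. n = -1/m.
m = 2.5
n = -1.0 / m
print(n, 1 + m * n)   # ≈ -0.4 and 0.0
```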
Lines Normal To A Curve
A line is said to be normal to a curve at a point if it's perpendicular to the tangent line of the curve at that point. In Fig.
2.1, T is tangent to curve C at point P, and N is normal to curve C at point P. We have (slope of normal N ) =
–1/(slope of tangent T ).
Example 2.1
Find the equation of the normal line to the curve y = x^2 at the point (1, 1).
Let f(x) = x^2. The slope of the tangent line at the point (1, 1) is:
1. Find the equation of the tangent line to each of the following curves at the indicated point.
b. Let f(x) = ax^2 + bx + c. The slope of the tangent line is:
When x = u we have y = au^2 + bu + c. Thus the equation of the tangent line is y = (2au + b)(x – u) + (au^2 + bu +
c), or y = (2au + b)x – au^2 + c.
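The general formula y = (2au + b)x – au² + c can be verified for sample coefficients: the difference f(x) – tangent(x) collapses to a(x – u)², which vanishes to second order at x = u, confirming tangency. A sketch with hypothetical values of a, b, c, u:

```python
a, b, c, u = 2.0, -3.0, 1.0, 1.5
f = lambda x: a * x**2 + b * x + c
tangent = lambda x: (2 * a * u + b) * x - a * u**2 + c

# The tangent agrees with f at x = u ...
print(f(u) - tangent(u))   # 0.0
# ... and f(x) - tangent(x) = a*(x - u)^2 everywhere, a double zero at u.
for x in [0.0, 1.0, 2.0, 3.0]:
    assert abs((f(x) - tangent(x)) - a * (x - u)**2) < 1e-12
```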
2. Find the equation of the normal line to each of the following curves at the indicated point.
b. Let f(x) = ax^2 + bx + c. The slope of the tangent line is:
a. dom( g ) is the set of all non-negative real numbers.
b. The points are (0, 0), (1, 1), (4, 2), and (9, 3).
c. The slope of the tangent line is:
The slope of the normal line is – 4, and thus its equation is y = – 4(x – 4) + 2, or:
y = – 4x + 18.
d. The lines are drawn in the graph in part b.
4. Determine the slope of the curve y = x^2 – 1 at the point x = a. Find the equation of the tangent line with a slope of – 3
to that curve.
We'll show that the line y = – 3x – 13/4, which has a slope of –3, is tangent to the curve y = x^2 – 1.
Let f(x) = x^2 – 1. The slope of f at x = a is the same as the slope of the tangent line to f at x = a, so it is:
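Tangency of y = –3x – 13/4 can be confirmed by eliminating y: substituting into y = x² – 1 gives x² + 3x + 9/4 = 0, whose discriminant is zero, i.e. the line meets the parabola at a single double root x = –3/2. A quick check:

```python
# x^2 - 1 = -3x - 13/4  <=>  x^2 + 3x + 9/4 = 0
A, B, C = 1.0, 3.0, 2.25
disc = B * B - 4 * A * C
print(disc)               # 0.0: a double root, so the line is tangent
x_touch = -B / (2 * A)
print(x_touch, x_touch**2 - 1)   # point of tangency (-1.5, 1.25)
```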
5. Find all points on the curve y = x^3 – 3x where the tangent line is parallel to the x-axis.
We'll show that the tangent lines to the curve y = x^3 – 3x that are parallel to the x-axis are at the points (1, –2) and (–1, 2).
Let g(x) = x^3 – 3x. The slope of a tangent line to g at an arbitrary point x is:
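A quick numeric check of the claimed points (the limit of the difference quotient works out to 3x² – 3, which vanishes exactly at x = ±1):

```python
# Horizontal tangents of g(x) = x^3 - 3x occur where 3x^2 - 3 = 0.
g = lambda x: x**3 - 3 * x
roots = [-1.0, 1.0]                    # solutions of 3x^2 - 3 = 0
points = [(x, g(x)) for x in roots]
print(points)    # [(-1.0, 2.0), (1.0, -2.0)]
```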
volume of solid using double integrals
hi again. Find the volume of the given solid: bounded by the cylinders x^2+y^2=r^2 and y^2+z^2=r^2. thanks
you only need to find the volume of the solid in the first octant and then multiply the result by 8 to get the full volume. so you have $z=\sqrt{r^2 - y^2}$ and you need to integrate $z$ over $R: \ x^2+y^2 \leq r^2, \ x \geq 0, \ y \geq 0.$ so choosing a suitable order of integration we'll have: $V=8 \int_0^r \int_0^ {\sqrt{r^2 - y^2}} \sqrt{r^2 - y^2} \ dx \ dy=8\int_0^r(r^2 - y^2) \ dy = \frac{16r^3}{3}.$
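The inner integral evaluates to V = 16r³/3. A crude midpoint-rule check of the remaining one-dimensional integral for r = 1:

```python
# Midpoint-rule check of V = 8 * ∫_0^r (r^2 - y^2) dy = 16 r^3 / 3, r = 1.
r, N = 1.0, 400
h = r / N
V = 0.0
for i in range(N):
    y = (i + 0.5) * h          # midpoint of subinterval i
    V += 8.0 * (r * r - y * y) * h
print(V, 16 * r**3 / 3)        # both ≈ 5.3333
```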
Other work
Next: References Up: The LEGO library Previous: Number Theory
Another example of proof development in LEGO is the proof of the Chinese Remainder Theorem in [McKinna, 1992] which we are currently making consistent with the library so that it can be added to our
collection of examples.
Some more examples include a proof of strong normalization for system F [Altenkirch, 1993] and a formalization of Pure Type Systems [McKinna and Pollack, 1993] which may be added to the example
collection in the future.
Fri May 24 19:01:27 BST 1996
Rigorous techniques for 1D systems (1)
Seminar Room 1, Newton Institute
Measure spaces, measure preserving transformations, ergodic transformations, Birkhoff's ergodic theorem, Kingman's sub-additive ergodic theorem. Examples. Products of matrices and Lyapunov exponents:
elementary properties. Stationary sequences of random matrices. Products of stationary random matrices: examples. Lyapunov exponents of products of stationary sequences of matrices; the Oseledets's
multiplicative ergodic theorem (MOT). The idea of the proof of the MOT; the MOT for products of 2x2 matrices.
Effective Duration and Convexity [Archive] - Actuarial Outpost
04-26-2010, 03:19 PM
I just finished Hoffman exam #1, and there is a question on there (#11) that has me a bit confused about the proper use of the effective duration formula. The formula for effective duration is [P(down) - P(up)]/[2P * delta y]. In practice, I used the bond function on the calculator to move the yield up and down by .1 to get the P(up) and P(down). So in the case above my delta y would be .001.
Is this correct? The yield on the calculator is expressed as a semiannual (or annual) rate, is the delta y expressed as a semiannual rate as well, or should this be based on continuous compounding?
EG: annual coupons, 7% continuous yield, and I want to use .001 as delta y. In the calculator, the yield is converted to annual compounding ( e^(.07) = 1.0725), so enter a yield of 7.25 in the
calculator. Move the yield up to 7.35 to get P(up) and move the yield down to 7.15 to get P(down). Or should I have used e^(.071) = 1.0736 (7.36%) and e^(.069) = 1.0714 (7.14%) to get P(up) and P(down)?
Hope some of this makes sense.
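For what it's worth, the mechanics are easy to reproduce outside the calculator. The sketch below prices a hypothetical 10-year, 7% annual-coupon bond with an annually compounded yield and bumps that same yield by delta y = .001; the key point is just that delta y and the prices must live on the same yield scale, whichever compounding convention you quote in:

```python
def bond_price(y, coupon, n, face=100.0):
    """Price of a bond paying `coupon` annually for n years, with
    annually compounded yield y."""
    return sum(coupon / (1 + y)**t for t in range(1, n + 1)) + face / (1 + y)**n

# Hypothetical bond: 7% annual coupon, 10 years, priced at par.
y0, dy = 0.07, 0.001
p0 = bond_price(y0, 7.0, 10)
p_up = bond_price(y0 + dy, 7.0, 10)
p_dn = bond_price(y0 - dy, 7.0, 10)

eff_dur = (p_dn - p_up) / (2 * p0 * dy)
print(round(eff_dur, 4))   # ≈ 7.02, close to the modified duration
```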
st: RE: Negative eigen values in factor, pf command?
st: RE: Negative eigen values in factor, pf command?
From "Verkuilen, Jay" <JVerkuilen@gc.cuny.edu>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Negative eigen values in factor, pf command?
Date Tue, 28 Apr 2009 12:39:35 -0400
Jean-Gael Collomb wrote:
<<I am having trouble interpreting the results of a principle factor
analysis I am conducting. <snip> >>
This has nothing to do with Stata whatsoever and has to do with the
structure of the problem. Stata is honest about its reporting, unlike
far too many other stats programs, and generally doesn't hide issues
from you when there are problems. Anyway, confusing principal factor PF
with principal components is a common mistake, because the names are
similar. In fact, PF is PCA applied not to the correlation matrix, R,
but to
C = R - U
where U is an estimate of the variables' uniquenesses (unreliability, a
measure of the variables' error variances). The usual estimate used in
this kind of a procedure is
U = diag(1 - rsq_jj),
where rsq_jj is the multiple R-squared from the regression of variable j
on all other variables. R is guaranteed to be positive semi-definite. C
is not, and often isn't. Slight violations are no big deal. Substantial
ones, signaled by big negative eigenvalues, are a sign that the model
does not apply.
In a sense you can think about removing the uniquenesses as the opposite
of ridging. Ridging adds positive value to the main diagonal of a matrix
relative to the off-diagonal to push it towards being positive definite.
This deemphasizes the off-diagonal. Because the goal of a factor
analysis is to analyze what the variables have in common, which is
measured by the covariances (or correlations), removing the influence of
the diagonal helps focus on this in a way that PCA does not. See, e.g.,
Chapter 5 in
Lattin, J., Carroll, J. D. and Green, P. (2003). Analyzing
Multivariate Data. Duxbury Press.
(Aside: PF is an antiquated method that belongs on the dustbin of
history as far as I am concerned.)
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Thomas Bayes and Bayes’s Theorem
Very little is known about the life of Thomas Bayes. We don’t know whether he was born in 1701 or 1702 and we don’t know if the picture commonly associated with him has been misattributed. We do know
that, despite his poor publication record, Bayes was elected as a Fellow of the Royal Society.
But Bayes’s most famous work, “An Essay toward Solving a Problem is the Doctrine of Chances,” was brought to the attention of the Royal Society in 1763 (two years after his death) by his friend
Richard Price. The essay, the key to what we now know as Bayes’s Theorem, concerned how we should adjust probabilities when we encounter new data.
In The Signal And The Noise, Nate Silver explains:
Price, in framing Bayes’s essay, gives the example of a person who emerges into the world (perhaps he is Adam, or perhaps he came from Plato’s cave) and sees the sun rise for the first time. At
first, he does not know whether this is typical or some sort of freak occurrence. However, each day that he survives and the sun rises again, his confidence increases that it is a permanent
feature of nature. Gradually, through this purely statistical form of inference, the probability he assigns to his prediction that the sun will rise again tomorrow approaches (although never
exactly reaches) 100 percent.
The argument made by Bayes and Price is not that the world is intrinsically probabilistic or uncertain. Bayes was a believer in divine perfection; he was also an advocate of Isaac Newton's work, which had seemed to suggest that nature follows regular and predictable laws. It is, rather, a statement—expressed both mathematically and philosophically—about how we learn about the universe:
that we learn about it through approximation, getting closer and closer to the truth as we gather more evidence.
This contrasted with the more skeptical viewpoint of the Scottish philosopher David Hume, who argued that since we could not be certain that the sun would rise again, a prediction that it would
was inherently no more rational than one that it wouldn’t. The Bayesian viewpoint, instead, regards rationality as a probabilistic matter. In essence, Bayes and Price are telling Hume, don’t
blame nature because you are too daft to understand it: if you step out of your skeptical shell and make some predictions about its behavior, perhaps you will get a little closer to the truth.
Bayes’s Theorem
Bayes’s theorem wasn’t actually formulated by Thomas Bayes. Instead it was developed by the French mathematician and astronomer Pierre-Simon Laplace.
Laplace believed in scientific determinism — given the location of every particle in the universe and enough computing power we could predict the universe perfectly. However it was the disconnect
between the perfection of nature and our human imperfections in measuring and understanding it that led to Laplace’s involvement in a theory based on probabilism.
Laplace was frustrated at the time by astronomical observations that appeared to show anomalies in the orbits of Jupiter and Saturn — they seemed to predict that Jupiter would crash into the sun
while Saturn would drift off into outer space. These prediction were, of course, quite wrong and Laplace devoted much of his life to developing much more accurate measurements of these planets’
orbits. The improvements that Laplace made relied on probabilistic inferences in lieu of exacting measurements, since instruments like the telescope were still very crude at the time. Laplace
came to view probability as a waypoint between ignorance and knowledge. It seemed obvious to him that a more thorough understanding of probability was essential to scientific progress.
The Bayesian approach to probability is simple: take the odds of something happening, and adjust for new information. This, of course, is most useful in the cases where you have strong prior
knowledge. If your initial probability is off, the Bayesian approach is much less helpful.
In her book, The Theory That Would Not Die, Sharon Bertsch McGrayne lays out the Bayesian process:
We modify our opinions with objective information: Initial Beliefs + Recent Objective Data = A New and Improved Belief. … each time the system is recalculated, the posterior becomes the prior of
the new iteration. It was an evolving system, with each bit of new information pushed closer and closer to certitude.
Here is a short example, found in Investing: The Last Liberal Art, on how it works:
Let’s imagine that you and a friend have spent the afternoon playing your favorite board game, and now, at the end of the game, you are chatting about this and that. Something your friend says
leads you to make a friendly wager: that with one roll of the die from the game, you will get a 6. Straight odds are one in six, a 16 percent probability. But then suppose your friend rolls the
die, quickly covers it with her hand, and takes a peek. “I can tell you this much,” she says; “it’s an even number.” Now you have new information and your odds change dramatically to one in
three, a 33 percent probability. While you are considering whether to change your bet, your friend teasingly adds: “And it’s not a 4.” With this additional bit of information, your odds have
changed again, to one in two, a 50 percent probability. With this very simple example, you have performed a Bayesian analysis. Each new piece of information affected the original probability, and
that is a Bayesian inference.
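The dice updates are mechanical enough to script. A sketch using exact fractions, where each new statement conditions the posterior from the previous step:

```python
from fractions import Fraction

# Posterior over die faces, starting from a uniform prior.
faces = {f: Fraction(1, 6) for f in range(1, 7)}

def condition(prior, event):
    """Zero out faces outside `event` and renormalise the rest."""
    total = sum(p for f, p in prior.items() if f in event)
    return {f: (p / total if f in event else Fraction(0))
            for f, p in prior.items()}

faces = condition(faces, {2, 4, 6})   # "it's an even number"
print(faces[6])                       # 1/3
faces = condition(faces, {2, 6})      # "and it's not a 4"
print(faces[6])                       # 1/2
```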
“In its most basic form,” writes Silver, “it is just an algebraic expression with three known variables and one unknown one. But this simple formula can lead to vast predictive insights.”
“Bayes’s theorem,” Silver continues, “is concerned with conditional probability. That is, it tells us the probability that a theory or hypothesis is true if some event has happened.”
When our priors are strong, they can be surprisingly resilient in the face of new evidence. One classic example of this is the presence of breast cancer among women in their forties. The chance
that a woman will develop breast cancer in her forties is fortunately quite low — about 1.4 percent. But what is the probability if she has a positive mammogram?
Studies show that if a woman does not have cancer, a mammogram will incorrectly claim that she does only about 10 percent of the time. If she does have cancer, on the other hand, they will detect
it about 75 percent of the time. When you see those statistics, a positive mammogram seems like very bad news indeed. But if you apply Bayes’s Theorem to these numbers, you’ll come to a different
conclusion: the chance that a woman in her forties has breast cancer given that she's had a positive mammogram is still only about 10 percent. These false positives dominate the equation because
very few young women have breast cancer to begin with. For this reason, many doctors recommend that women do not begin getting regular mammograms until they are in their fifties and the prior
probability of having breast cancer is higher.
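Plugging the stated numbers into Bayes's theorem reproduces the roughly 10 percent figure. A minimal sketch:

```python
def posterior(prior, sensitivity, false_pos):
    """P(cancer | positive test) by Bayes's theorem: the numerator is the
    true-positive mass, the denominator the total positive-test mass."""
    p_pos = sensitivity * prior + false_pos * (1 - prior)
    return sensitivity * prior / p_pos

p = posterior(prior=0.014, sensitivity=0.75, false_pos=0.10)
print(round(p, 3))   # ≈ 0.096, i.e. roughly 10 percent
```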
Bayesian reasoning is counterintuitive and what I’ve extracted so far may not be sufficient enough for you to walk away with a working understanding.
Luckily, when doing research for this post, I stumbled on Eliezer Yudkowsky’s intuitive explanation (building upon the mammogram example above):
The most common mistake is to ignore the original fraction of women with breast cancer, and the fraction of women without breast cancer who receive false positives, and focus only on the fraction
of women with breast cancer who get positive results. For example, the vast majority of doctors in these studies seem to have thought that if around 80% of women with breast cancer have positive
mammographies, then the probability of a women with a positive mammography having breast cancer must be around 80%.
Figuring out the final answer always requires all three pieces of information – the percentage of women with breast cancer, the percentage of women without breast cancer who receive false
positives, and the percentage of women with breast cancer who receive (correct) positives.
To see that the final answer always depends on the original fraction of women with breast cancer, consider an alternate universe in which only one woman out of a million has breast cancer. Even
if mammography in this world detects breast cancer in 8 out of 10 cases, while returning a false positive on a woman without breast cancer in only 1 out of 10 cases, there will still be a hundred
thousand false positives for every real case of cancer detected. The original probability that a woman has cancer is so extremely low that, although a positive result on the mammography does
increase the estimated probability, the probability isn’t increased to certainty or even “a noticeable chance”; the probability goes from 1:1,000,000 to 1:100,000.
Similarly, in an alternate universe where only one out of a million women does not have breast cancer, a positive result on the patient’s mammography obviously doesn’t mean that she has an 80%
chance of having breast cancer! If this were the case her estimated probability of having cancer would have been revised drastically downward after she got a positive result on her mammography –
an 80% chance of having cancer is a lot less than 99.9999%! If you administer mammographies to ten million women in this world, around eight million women with breast cancer will get correct
positive results, while one woman without breast cancer will get false positive results. Thus, if you got a positive mammography in this alternate universe, your chance of having cancer would go
from 99.9999% up to 99.999987%. That is, your chance of being healthy would go from 1:1,000,000 down to 1:8,000,000.
These two extreme examples help demonstrate that the mammography result doesn’t replace your old information about the patient’s chance of having cancer; the mammography slides the estimated
probability in the direction of the result.
Part of the problem is the availability heuristic — we focus on what’s readily available. In this case that’s the newest information and the bigger picture gets lost. We fail to adjust the
probability to reflect new information.
The big idea behind Bayes’s theorem is that we continuously update our probability estimates.
Let’s take a look at another example, only this time we’ll do some basic algebra.
Consider a somber example: the September 11 attacks. Most of us would have assigned almost no probability to terrorists crashing planes into buildings in Manhattan when we woke up that morning.
But we recognized that a terror attack was an obvious possibility once the first plane hit the World Trade Center. And we had no doubt we were being attacked once the second tower was hit.
Bayes’s theorem can replicate this result.
For instances, say that before the first plane hit, our estimate of the possibility of terror attack on tall buildings in Manhattan was just 1 chance in 20,000, or 0.005 percent. However, we
would also have assigned a very low probability to a plane hitting the World Trade Center by accident. This figure can actually be estimated empirically: in the previous 25,000 days of aviation
over Manhattan prior to September 11, there had been two such accidents: one involving the Empire State building in 1945 and another at 40 Wall Street in 1946. That would make the possibility of
such an accident about 1 chance in 12,500 on any given day. If you use Bayes's theorem to run these numbers (see below), the probability we'd assign to a terror attack increased from 0.005
percent to 38 percent the moment that the first plane hit.
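Running Silver's numbers through the same formula (assuming, as the passage implies, that a plane strike is essentially certain given an attack) reproduces the jump to about 38 percent, and a second application shows why the second plane settled the question:

```python
def bayes(prior, p_event_if_true, p_event_if_false):
    """Posterior probability of the hypothesis given one observed event."""
    num = p_event_if_true * prior
    return num / (num + p_event_if_false * (1 - prior))

prior = 0.00005                 # 1 chance in 20,000 of an attack that day
p_hit_if_attack = 1.0           # assumed certain given an attack is underway
p_hit_if_accident = 1 / 12500   # empirical accident rate per day

after_first = bayes(prior, p_hit_if_attack, p_hit_if_accident)
print(round(after_first, 3))    # ≈ 0.385, Silver's 38 percent

after_second = bayes(after_first, p_hit_if_attack, p_hit_if_accident)
print(round(after_second, 4))   # ≈ 0.9999: near certainty
```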
Weigh the evidence
Tim Harford, adds:
Bayes’ theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the
fact that there has been no warming trend over the last decade or so? Sceptics react with glee, while true believers dismiss the new information.
A better response is to use Bayes’ theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability
in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming – but only a little.
The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes’ theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend
to either dismiss new evidence, or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way.
Here is another example, this time from Quora. A reader poses the question, “What does it mean when a girl smiles at you every time she sees you?” Another reader, using Bayes’s Theorem replies:
The probability she likes you is
For example, suppose she just smiles at everyone. Then intuition says that fact that she smiles at you doesn’t mean anything one way or another. Indeed,
meaning that knowing that she smiles at you doesn’t change anything.
At the other extreme, suppose she smiles at everyone she likes, and only those she likes. Then
and she is certain to like you.
In the intermediate case, what you need to do is find the ratio of odds of smiling to people she likes to smiles in general, multiply by the percentage of people she likes, and there is your
The more she smiles in general, the lower the chance she likes you. The more she smiles at people she likes, the better the chance. And of course the more people she likes, the better your
chances are.
Of course, how to actually determine these values is a mystery I have never solved.
Decision Trees
In The Essential Buffett: Timeless Principles for the New Economy, Robert Hagstrom writes:
Bayesian analysis is an attempt to incorporate all available information into a process for making inferences, or decisions, about the underlying state of nature. Colleges and universities use
Bayes’s theorem to help their students study decision making. In the classroom, the Bayesian approach is more popularly called the decision tree theory; each branch of the tree represents new
information that, in turn, changes the odds in making decisions. “At Harvard Business School,” explains Charlie Munger, “the great quantitative thing that bonds the first-year class together is
what they call decision tree theory. All they do is take high school algebra and apply it to real life problems. The students love it. They're amazed to find that high school algebra works in real life."
The key is to look at the world as an ever shifting array of probabilities and to remember the limitations. One such limitation can be explained by Nassim Taleb in the Black Swan:
Consider a turkey that is fed everyday. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed everyday by friendly members of the human race “looking
out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.
Don’t walk away thinking the Bayesian approach will enable you to predict everything. One of the biggest problems is that the volume of information is increasing exponentially. Silver writes:
There is no reason to conclude that the affairs of man are becoming more predictable. The opposite may well be true. The same sciences that uncover the laws of nature are making the organization
of society more complex.
Bayes’s Theorem is part of the Farnam Street latticework of mental models.
The solution presented here is based on the solution for sicp-ex-1.7 and, similarly, uses the alternative strategy for the good-enough? predicate.
;; ex 1.8. Based on the solution of ex 1.7.
(define (square x) (* x x))
(define (cube-root-iter guess prev-guess x)
  (if (good-enough? guess prev-guess)
      guess
      (cube-root-iter (improve guess x) guess x)))
(define (improve guess x)
(average3 (/ x (square guess)) guess guess))
(define (average3 x y z)
(/ (+ x y z) 3))
;; Stop when the difference is less than 1/1000th of the guess
(define (good-enough? guess prev-guess)
(< (abs (- guess prev-guess)) (abs (* guess 0.001))))
(define (cube-root x)
(cube-root-iter 1.0 0.0 x))
;; Testing
(cube-root 1)
(cube-root -8)
(cube-root 27)
(cube-root -1000)
(cube-root 1e-30)
(cube-root 1e60)
;; this fails for -2 due to zero division :(
;; Fix: take absolute cuberoot and return with sign
;;(define (cube-root x)
;; ((if (< x 0) - +)(cube-root-iter (improve 1.0 (abs x)) 1 (abs x))))
(define (cube x)
(* x x x))
(define (improve guess x)
(/ (+ (/ x (square guess)) (* 2 guess)) 3))
(define (good-enough? guess x)
(< (abs (- (cube guess) x)) 0.001))
(define (cube-root-iter guess x)
  (if (good-enough? guess x)
      guess
      (cube-root-iter (improve guess x) x)))
(define (cube-root x)
(cube-root-iter 1.0 x))
(define (cube-root x)
(cube-root-iter 1.0 x))
(define (cube-root-iter guess x)
  (if (good-enough? guess x)
      guess
      (cube-root-iter (improve guess x) x)))
(define (good-enough? guess x)
(< (relative-error guess (improve guess x)) error-threshold))
(define (relative-error estimate reference)
  (/ (abs (- estimate reference)) (abs reference)))
(define (improve guess x)
(average3 (/ x (square guess)) guess guess))
(define (average3 x y z)
(/ (+ x y z) 3))
(define error-threshold 0.01)
This solution makes use of the fact that (in LISP) procedures are also data.
(define (square x) (* x x))
(define (cube x) (* x x x))
(define (good-enough? guess x improve)
(< (abs (- (improve guess x) guess))
(abs (* guess 0.001))))
(define (root-iter guess x improve)
  (if (good-enough? guess x improve)
      guess
      (root-iter (improve guess x) x improve)))
(define (sqrt-improve guess x)
(/ (+ guess (/ x guess)) 2))
(define (cbrt-improve guess x)
  (/ (+ (/ x (square guess))
        (* 2 guess))
     3))
(define (sqrt x)
(root-iter 1.0 x sqrt-improve))
(define (cbrt x)
(root-iter 1.0 x cbrt-improve))
Use the improved good-enough?:
(define (cube-roots-iter guess prev-guess input)
  (if (good-enough? guess prev-guess input)
      guess
      (cube-roots-iter (improve guess input) guess input)))
(define (good-enough? guess prev-guess input)
  (> 0.001 (/ (abs (- guess prev-guess))
              input)))
(define (improve guess input)
  (/ (+ (/ input (square guess))
        (* 2 guess))
     3))
(define (square x)
(* x x))
;;to make sure the first input of guess and prev-guess does not pass the predicate accidentally, use improve here once:
;;to make sure float number is implemented, use 1.0 instead of 1:
(define (cube-roots x)
(cube-roots-iter (improve 1.0 x) 1 x))
A regular hexagon with sides of 3" is inscribed in a circle. Find the area of a segment formed by a side of the hexagon and the circle. (Hint: remember Corollary 1--the area of an equilateral triangle is (1/4)s²√3.)
maybe you can find the radius of the circle by looking at one of the triangles that make up the inscribed hexagon. That will allow you to get the whole circle area, and 1/6 of that area falls in
the "pie piece" that includes the triangle. Then, if you have the area of that triangle, subtract it from the pie piece to get the segment in question.
the radius of that circle will be 3. so area = pi r^2 = 28.27. by joining the centre with each corner of the hexagon u'll get the area of each part. So Area of each part = 28.27/6 = 4.545.............(i) Area of the triangle = (3)^2 * sqrt(3)/4 = 3.9.............(ii) Now subtracting the area of equation (ii) from equation (i) u will get the result.
Area of circle with radius 3 = 3^2 pi = 9 pi. 1/6 of the circle = (9/6) pi = (3/2) pi. The triangle is equilateral with side 3, so the height is (3/2)sqrt(3). The area of that triangle is (1/2)bh = (1/2)(3)(3/2)sqrt(3) = (9/4)sqrt(3). Area of 1/6 circle - Area of triangle = (3/2)pi - (9/4)sqrt(3) = 4.712 - 3.897 = 0.815. That's what I got... but you need to double check the geometry and the
Ooops, i'm really sorry, JakeV8 is right. I've checked it right now. Actually, previously i made a calculation mistake..Sorry again..
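The arithmetic in the answers above can be checked in a few lines (a sketch; for a regular hexagon the circumradius equals the side length):

```python
import math

s = 3.0                                  # hexagon side
r = s                                    # circumradius of a regular hexagon
sector = math.pi * r ** 2 / 6            # one sixth of the circle's area
triangle = (math.sqrt(3) / 4) * s ** 2   # equilateral triangle (Corollary 1)
segment = sector - triangle

print(round(sector, 3), round(triangle, 3), round(segment, 3))
# 4.712 3.897 0.815
```

This confirms JakeV8's value of about 0.815 square inches for the segment.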
Quantum walk - physicists take atoms for a walk
Posted: March 10, 2010
(Nanowerk News) A team of physicists headed by Christian Roos and Rainer Blatt from the Institute of Quantum Optics and Quantum Information of the Austrian Academy of Sciences realize a quantum walk
in a quantum system with up to 23 steps. It is the first time that this quantum process using trapped ions is demonstrated in detail ("Realization of a Quantum Walk with One and Two Trapped Ions").
When a hiker comes to a junction s/he has to decide which way to take. All of these decisions, eventually, lead the hiker to the intended destination. When the hiker forgot the map, s/he has to make
a decision randomly and gets to the destination with more or less detours. In science this is called a random walk and can regularly be encountered in mathematics and physics.
In 1827, for example, the Scottish botanist Robert Brown found out that pollen grains show irregular fluttering vibrations on water drops. This effect is caused by a random motion of water molecules
– a phenomenon known in the scientific world as Brownian motion. Another example is the Galton board, which is used to demonstrate binomial distribution to students. On this board, balls are dropped
from the top and they repeatedly bounce either left or right in a random way as they hit pins stuck in the board.
[Figure: A Galton board, on which falling balls bounce left or right at random as they hit the pins, demonstrating binomial distribution. Photo: Antoine Taveneaux]
Atom takes a 'quantum walk'
The Innsbruck scientists have now transferred this principle of random walk to quantum systems and stimulated an atom to take a quantum walk: “We trap a single atom in an electromagnetic ion trap and
cool it to prepare it in the ground state,” explains Christian Roos from the Institute of Quantum Optics and Quantum Information (IQOQI).
“We then create a quantum mechanical superposition of two inner states and send the atom on a walk.“ The two internal states correspond to the decision of the hiker to go left or right. However,
unlike the hiker the atom does not really have to decide where to go; due to the superposition of the two states, both possibilities are presented at the same time.
“Depending on the internal state, we shift the ion to the right or to the left,” explains Christian Roos. “Thereby, the motional and internal state of the ion are entangled.“ After each step the
experimental physicists modify the superposition of the inner states by a laser pulse and again shift the ion to the left or right. The physicists can repeat this randomly controlled process up to 23
times, while collecting data about how quantum walks work. By using a second ion, the scientists extend the experiment, giving the walking ion the additional possibility to stay instead of moving to
the right or left.
Better understanding of natural phenomena
The statistic analysis of these numerous steps confirms that quantum walks differ from classical (random) walks. While, for example, the balls of a Galton board move away from the starting point
statistically very slowly, quantum particles spread much faster on their walk.
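The difference in spreading can be seen in a toy simulation (my own sketch of a standard discrete-time Hadamard walk, not the group's actual experiment): the classical walker's spread grows like the square root of the number of steps, while the quantum walker's grows linearly.

```python
import math
import random

def classical_std(steps, trials=4000, seed=1):
    """Monte Carlo estimate of the endpoint spread of a +/-1 random walk."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pos = sum(rng.choice((-1, 1)) for _ in range(steps))
        total += pos * pos               # mean endpoint is 0
    return math.sqrt(total / trials)     # ~ sqrt(steps)

def hadamard_std(steps):
    """Exact position spread of a discrete-time Hadamard quantum walk."""
    n = 2 * steps + 1                    # positions -steps .. steps
    left = [0j] * n                      # amplitude, coin state "move left"
    right = [0j] * n                     # amplitude, coin state "move right"
    left[steps] = 1 / math.sqrt(2)       # symmetric initial coin state
    right[steps] = 1j / math.sqrt(2)
    h = 1 / math.sqrt(2)
    for _ in range(steps):
        new_left, new_right = [0j] * n, [0j] * n
        for x in range(n):
            l = h * (left[x] + right[x])   # Hadamard coin flip ...
            r = h * (left[x] - right[x])
            if x > 0:
                new_left[x - 1] += l       # ... then conditional shift
            if x < n - 1:
                new_right[x + 1] += r
        left, right = new_left, new_right
    probs = [abs(left[x]) ** 2 + abs(right[x]) ** 2 for x in range(n)]
    mean = sum(p * (x - steps) for x, p in enumerate(probs))
    var = sum(p * (x - steps - mean) ** 2 for x, p in enumerate(probs))
    return math.sqrt(var)                # grows linearly in steps

print(classical_std(40))   # close to sqrt(40), about 6.3
print(hadamard_std(40))    # several times larger
```

With the (1, i)/√2 initial coin state the distribution is symmetric; other initial states give the characteristic lopsided Hadamard-walk profile, but the ballistic spreading is the same.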
These experiments, which have also been realized in a similar way in Bonn, Munich and Erlangen with atoms, ions and photons, can be applied to studying natural phenomena. For example, researchers
suspect that energy transport in plants works more efficiently because of quantum walks than it would with classical walks. In addition, quantum walks are important for developing quantum computer models that could solve ubiquitous problems. For example, applying quantum walks in such a model would help in finding quantum search algorithms that outperform their classical counterparts, as different directions could be chosen simultaneously.
Homework Problems
1. An object is observed to be moving at constant speed in a certain direction. Can you conclude that no forces are acting on it? Explain. [Based on a problem by Serway and Faughn.]
2. A car is normally capable of an acceleration of 3 m/s². If it is towing a trailer with half as much mass as the car itself, what acceleration can it achieve? [Based on a problem from PSSC Physics.]
3. (a) Let T be the maximum tension that the elevator's cable can withstand without breaking, i.e. the maximum force it can exert. If the motor is programmed to give the car an acceleration a, what is the maximum mass that the car can have, including passengers, if the cable is not to break? (b) Interpret the equation you derived in the special cases of a=0 and of a downward acceleration of magnitude g.
4. A helicopter of mass m is taking off vertically. The only forces acting on it are the earth's gravitational force and the force, F, of the air pushing up on the propeller blades. (a) If the helicopter lifts off at t=0, what is its vertical speed at time t? (b) Plug numbers into your equation from part a, using m=2300 kg, F=27000 N, and t=4.0 s.
5. In the 1964 Olympics in Tokyo, the best men's high jump was 2.18 m. Four years later in Mexico City, the gold medal in the same event was for a jump of 2.24 m. Because of Mexico City's altitude (2400 m), the acceleration of gravity there is lower than that in Tokyo by about 0.01 m/s². Suppose a high-jumper has a mass of 72 kg.
(a) Compare his mass and weight in the two locations.
(b) Assume that he is able to jump with the same initial vertical velocity in both locations, and that all other conditions are the same except for gravity. How much higher should he be able to jump in Mexico City? (Actually, the reason for the big change between '64 and '68 was the introduction of the "Fosbury flop.")
6. A blimp is initially at rest, hovering, when at t=0 the pilot turns on the motor of the propeller. The motor cannot instantly get the propeller going, but the propeller speeds up steadily. The steadily increasing force between the air and the propeller is given by the equation F=kt, where k is a constant. If the mass of the blimp is m, find its position as a function of time. (Assume that during the period of time you're dealing with, the blimp is not yet moving fast enough to cause a significant backward force due to air resistance.)
7 S. A car is accelerating forward along a straight road. If the force of the road on the car's wheels, pushing it forward, is a constant 3.0 kN, and the car's mass is 1000 kg, then how long will the car take to go from 20 m/s to 50 m/s?
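For problem 7, a quick worked check (my own sketch, not the book's solution):

```python
F = 3000.0             # forward force from the road, N (3.0 kN)
m = 1000.0             # car mass, kg
v0, v1 = 20.0, 50.0    # initial and final speeds, m/s

a = F / m              # Newton's second law: a = F/m = 3.0 m/s^2
t = (v1 - v0) / a      # constant acceleration, so t = dv / a
print(t)               # 10.0 (seconds)
```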
[Figure: Problem 6.]
S: A solution is given in the back of the book.
A difficult problem.
A computerized answer check is available.
A problem that requires calculus.
Multivariate Calibration
In a calibration problem we have accurately known data values (X) and responses to those values (Y). Responses are scaled and contaminated by noise (E), but easier to obtain. Given the calibration data (X, Y), we want to estimate new data values (X') when we observe a new response Y'. Using Brown's (Brown 1982) notation, we have the model
[tex]Y=\textbf{1}\alpha ^T + XB + E[/tex] (1)
[tex]Y'=\alpha ^T + X'^T B + E'[/tex] (2)
where the sizes of the matrices are Y (n×q), E (n×q), B (p×q), Y' (1×q), E' (1×q), X (n×p) and X' (p×1). [tex]\textbf{1}[/tex] is a column vector of ones (n×1). This is a bit less general than Brown's model (only one response vector for each X'). n is the length of the calibration data, q the length of the response vector, and p the length of the unknown X'. For example, if Y contains proxy responses to global temperature X, p is one and q is the number of proxy records.
In the following, it is assumed that columns of E are zero mean, normally distributed vectors. Furthermore, rows of E are uncorrelated. (This assumption would be contradicted by red proxy noise.) The
(qXq) covariance matrix of noise is denoted by G. In addition, columns of X are centered and have average sum of squares one.
Classical and Inverse Calibration Estimators
The classical estimator of X' (CCE (Williams 69); indirect regression (Sundberg 99); inverse regression (Juckes 06)) is obtained by forming the ML estimator with known [tex]B[/tex] and [tex]G[/tex] and then replacing [tex]B[/tex] by [tex]\hat{B}[/tex] and [tex]G[/tex] by [tex]\hat{G}[/tex], where
[tex]\hat{B}=(X^TX)^{-1}X^TY[/tex] (3a)
[tex]\hat{\alpha}^T=(\textbf{1}^T \textbf{1})^{-1}\textbf{1}^TY[/tex] (3b)
[tex]\hat{G}=(Y_c ^TY_c-\hat{B}^TX^TY_c)/(n-p-q) [/tex] (4)
([tex]Y_c=Y-\textbf{1}\hat{\alpha}^T[/tex] , i.e. centered Y ), yielding CCE estimator
[tex]\hat{X}'=(\hat{B} S^{-1}\hat{B}^T)^{-1}\hat{B}S^{-1}(Y'^T-\hat{\alpha})[/tex] (5)
[tex]S=Y_c^TY_c-\hat{B}^TX^TY_c[/tex] (6)
Another way to go is ICE (inverse calibration estimator (Krutchkoff 67); direct regression (Sundberg 99)): directly regress X on Y,
[tex]\hat{\hat{X}}'^T=(Y'-\hat{\alpha}^T)(Y_c^TY_c)^{-1}Y_c^TX[/tex] (7)
Note that nobody has yet said that these estimators are optimal in any sense. It turns out that if we have special prior knowledge of X' (Xs and Ys sampled from a normal population), ICE is optimal. An important note (yet without proof here) is that the sample variance of the reconstruction in the calibration period will be smaller than the variance of the target in the case of ICE, and larger with CCE. In the absence of noise, ICE and CCE yield (naturally) the same result. Update: see Gerd's link, and also note that ICE is a matrix-weighted average between CCE and the zero matrix (Brown82, Eq 2.21).
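To see the ICE shrinkage concretely, here is a toy sketch of the two estimators in the simplest case p=1, q=1, where eqs (3a), (5) and (7) reduce to scalars (my own example; all parameter values are made up):

```python
import random

rng = random.Random(7)
n = 200
alpha_true, b_true, sigma = 5.0, 2.0, 1.0        # made-up model parameters

# calibration data: x accurately known, proxy y = alpha + b*x + noise
x = [rng.gauss(0.0, 1.0) for _ in range(n)]
xbar = sum(x) / n
x = [xi - xbar for xi in x]                      # centre x, as in the setup
y = [alpha_true + b_true * xi + rng.gauss(0.0, sigma) for xi in x]

alpha_hat = sum(y) / n                           # eq (3b)
yc = [yi - alpha_hat for yi in y]
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, yc))
syy = sum(yi * yi for yi in yc)
b_hat = sxy / sxx                                # eq (3a): regress y on x

# a new response generated from a true x' = 3.0, outside calibration range
y_new = alpha_true + b_true * 3.0 + rng.gauss(0.0, sigma)

cce = (y_new - alpha_hat) / b_hat                # eq (5), scalar: invert the fit
ice = (y_new - alpha_hat) * sxy / syy            # eq (7): regress x on y

# identity in this scalar case: ice = cce * R^2, with R^2 < 1
r2 = sxy * sxy / (sxx * syy)
```

Here ICE equals CCE times the sample R², so it always pulls the estimate toward the calibration mean.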
Confidence Region for X’
Following Brown, we have a [tex](100-\gamma)[/tex] per cent confidence region: all X' such that
[tex](Y'^T-\hat{\alpha}-\hat{B}^TX')^TS^{-1}(Y'^T-\hat{\alpha}-\hat{B}^TX')/\sigma ^2(X')\leq (q/v)F(\gamma)[/tex] (8)
where [tex]F(\gamma)[/tex] is the upper [tex](100-\gamma)[/tex] per cent point of the standard F-distribution on q and v=(n-p-q) degrees of freedom and
[tex]\sigma ^2(X')=1+1/n+X'^T(X^TX)^{-1}X'[/tex] (9)
The form of this confidence region is very interesting, and it is important to note that letting [tex]\gamma[/tex] approach one the region degenerates to the CCE estimate [tex]\hat{X}'[/tex].
Update2: The central point of the region is NOT (AFAIK for now ;) ) the ML estimate, and the relation of the central point and CCE is, as per Brown,
[tex]C^{-1}D[/tex] (10), where
[tex]C=\hat{B}S^{-1}\hat{B}^T-(q/v)F(\gamma)(X^TX)^{-1}[/tex] (11)
[tex]D=\hat{B}S^{-1}(Y'^T-\hat{\alpha})[/tex] (12).
Often calibration residuals are used to generate CIs for proxy reconstructions. We'll see what is missing in that case:
I simulated proxy vs. temperature cases with q=40, n=79, and SNR=1 and SNR=0.01. With SNR=1 we get nice CIs (which agree quite well with the calibration residuals), but when SNR gets lower, the confidence region grows rapidly, and is soon unbounded from above! Yet, in the latter case the calibration residuals indicate relatively low noise. The dangerous situation is when the true X' is greater than the calibration X (the very thing hockey sticks are trying to prove wrong).
1. Direct usage of calibration residuals for estimating confidence intervals is a quite dangerous procedure.
2. The assumptions of ICE just do not hold in proxy reconstructions.
Brown 82: Multivariate Calibration, Journal of the Royal Statistical Society. Ser B. Vol. 44, No. 3, pp. 287-321
Williams 69: Regression methods in calibration problems. Bull. ISI., 43, 17-28
Krutchkoff 67: Classical and inverse regression methods of calibration. Technometrics, 9, 425-439
Sundberg 99: Multivariate Calibration – Direct and Indirect Regression Methodology
( http://www.math.su.se/~rolfs/Publications.html )
Juckes 06: Millennial temperature reconstruction intercomparison and evaluation
( http://www.cosis.net/members/journals/df/article.php?a_id=4661 )
This entry was written by uc00, posted on Jul 5, 2007 at 3:05 PM, filed under Uncategorized and tagged brown, calibration, multivariate, sundberg, uc.
Results 1 - 10 of 84
- IN PODS , 2002
Cited by 620 (19 self)
In this overview paper we motivate the need for and research issues arising from a new model of data processing. In this model, data does not take the form of persistent relations, but rather arrives
in multiple, continuous, rapid, time-varying data streams. In addition to reviewing past work relevant to data stream systems and current projects in the area, the paper explores topics in stream
query languages, new requirements and challenges in query processing, and algorithmic issues.
, 2005
Cited by 375 (21 self)
In the data stream scenario, input arrives very rapidly and there is limited memory to store the input. Algorithms have to work with one or few passes over the data, space less than linear in the
input size or time significantly less than the input size. In the past few years, a new theory has emerged for reasoning about algorithms that work within these constraints on space, time, and number
of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory and communication complexity. The applications for this scenario include IP network
traffic analysis, mining text message streams and processing massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on
the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [175].1
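As a concrete (and much simpler) illustration of the one-pass, small-memory regime these abstracts study, here is a hedged sketch of quantile estimation via reservoir sampling; this is not an algorithm from the cited papers, and its guarantee is only probabilistic rather than a deterministic εN bound:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """One pass, O(k) memory: uniform random sample of k items from a stream."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randrange(i + 1)      # item kept with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

def approx_quantile(stream, phi, k=1000, seed=0):
    """Approximate the phi-quantile without storing the whole stream."""
    s = sorted(reservoir_sample(stream, k, seed))
    return s[min(int(phi * len(s)), len(s) - 1)]
```

With k = 1000, the estimated median of a 100,000-element stream typically lands within a few percent of the truth while storing only 1,000 items.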
- In SIGMOD , 2001
An ε-approximate quantile summary of a sequence of N elements is a data structure that can answer quantile queries about the sequence to within a precision of εN. We present a new online ...
Cited by 183 (2 self)
, 1998
Cited by 156 (3 self)
In this paper we study the space requirement of algorithms that make only one (or a small number of) pass(es) over the input data. We study such algorithms under a model of data streams that we introduce here. We give a number of upper and lower bounds for problems stemming from query processing, invoking in the process tools from the area of communication complexity.
, 2001
Cited by 130 (8 self)
Histograms have been used widely to capture data distribution, to represent the data by a small number of step functions. Dynamic programming algorithms which provide optimal construction of these
histograms exist, albeit running in quadratic time and linear space. In this paper we provide linear time construction of 1 + epsilon approximation of optimal histograms, running in polylogarithmic
space. Our results extend to the context of data-streams, and in fact generalize to give 1 + epsilon approximation of several problems in data-streams which require partitioning the index set into
intervals. The only assumptions required are that the cost of an interval is monotonic under inclusion (larger interval has larger cost) and that the cost can be computed or approximated in small
space. This exhibits a nice class of problems for which we can have near optimal data-stream algorithms.
, 1998
Cited by 113 (2 self)
We present new algorithms for computing approximate quantiles of large datasets in a single pass. The approximation guarantees are explicit, and apply without regard to the value distribution or the
arrival distributions of the dataset. The main memory requirements are smaller than those reported earlier by an order of magnitude. We also discuss methods that couple the approximation algorithms
with random sampling to further reduce memory requirements. With sampling, the approximation guarantees are explicit but probabilistic, i.e., they apply with respect to a (user controlled) confidence
parameter. We present the algorithms, their theoretical analysis and simulation results. 1 Introduction This article studies the problem of computing order statistics of large sequences of online or
disk-resident data using as little main memory as possible. We focus on computing quantiles, which are elements at specific positions in the sorted order of the input. The φ-quantile, for φ ∈ [0,
- IEEE TKDE , 2003
Cited by 106 (2 self)
Abstract—The data stream model has recently attracted attention for its applicability to numerous types of data, including telephone records, Web documents, and clickstreams. For analysis of such
data, the ability to process the data in a single pass, or a small number of passes, while using little memory, is crucial. We describe such a streaming algorithm that effectively clusters large data
streams. We also provide empirical evidence of the algorithm’s performance on synthetic and real data streams. Index Terms—Clustering, data streams, approximation algorithms. 1
- In VLDB , 2002
Cited by 104 (13 self)
Order statistics, i.e., quantiles, are frequently used in databases both at the database server as well as the application level. For example, they are useful in selectivity estimation during query
optimization, in partitioning large relations, in estimating query result sizes when building user interfaces, and in characterizing the data distribution of evolving datasets in the process of data
- IN ACM SIGMOD '99 , 1999
Cited by 99 (1 self)
In a recent paper [MRL98], we had described a general framework for single pass approximate quantile finding algorithms. This framework included several known algorithms as special cases. We had identified a new algorithm, within the framework, which had a significantly smaller requirement for main memory than other known algorithms. In this paper, we address two issues left open in our earlier paper. First, all known and space-efficient algorithms for approximate quantile finding require advance knowledge of the length of the input sequence. Many important database applications employing quantiles cannot provide this information. In this paper, we present a novel non-uniform random sampling scheme and an extension of our framework. Together, they form the basis of a new algorithm which computes approximate quantiles without knowing the input sequence length. Second, if the desired quantile is an extreme value (e.g., within the top 1% of the elements), the space requirements of currently known algorithms are overly pessimistic. We provide a simple algorithm which estimates extreme values using less space than required by the earlier more general technique for computing all quantiles. Our principal observation here is that random sampling is quantifiably better when estimating extreme values than is the case with the median.
MathGroup Archive: October 2009 [00659]
RE: How to format Superscript and Subscript for a symbol to the same vertical level??
• To: mathgroup at smc.vnet.net
• Subject: [mg104254] RE: [mg104217] How to format Superscript and Subscript for a symbol to the same vertical level??
• From: Richard Hofler <rhofler at bus.ucf.edu>
• Date: Sun, 25 Oct 2009 01:06:46 -0400 (EDT)
• References: <200910240638.CAA07400@smc.vnet.net>
Hi Nasser,
You'll likely get a number of replies with the same answer: use one of the palettes in Mathematica 7.
For example,
(1) Basic Math Assistant palette, Typesetting, click on the first tab from the left. You'll see a button that does what you want.
(2) Writing Assistant palette: same steps
You can also do the same thing with the keyboard.
x, Ctrl-6, 2, Ctrl-5, 1
I hope this helps.
Richard Hofler
From: Nasser M. Abbasi [mailto:nma at 12000.org]
Sent: Sat 10/24/2009 2:38 AM
To: mathgroup at smc.vnet.net
Subject: [mg104254] [mg104217] How to format Superscript and Subscript for a symbol to the same vertical level??
Version 7
I wanted to write something like Superscript[Subscript[x, 1], 2] but the "2" and the "1" appear on the symbol x without one being pushed more than the other.
This is trivial to do in latex, but gave up trying to do it in Mathematica.
I am actually trying to use Mathematica more to type set some math inside a Text cell, and the above is one problem I find. I actually use the ... and type Ctrl-9 to open a math cell inside the text cell and type
x Ctrl ^2 spacebar Ctrl _1
and I get the "2" and the "1" staggered, when I want them at the same vertical level.
I looked at all the Palettes that come with Mathematica 7, but do not see such a pattern to use.
digitalmars.D - a<b<c?
xs0 <xs0 xs0.com>
I always wondered why no language includes such syntax (of those I know,
at least), but wouldn't it be possible to have this?
if (a < b <= c < d) ...
It looks much more obvious (and prettier and shorter) than
if ((a < b) && (b <= c) && (c < d)) ...
And this is also a common mathematical notation, afaik..
It is perfectly OK, if it is evaluated just the same as the long form
(with short-circuiting and everything).
I'm not sure whether to allow this:
if (a < b > c)
But I guess there is no harm, again it gets translated to (a<b) && (b>c)..
Feb 12 2005
zwang <nehzgnaw gmail.com>
xs0 wrote:
I always wondered why no language includes such syntax (of those I know,
at least), but wouldn't it be possible to have this?
if (a < b <= c < d) ...
This is a perfectly legal expression in D and many other languages, though semantically equivalent to (((a<b)<=c)<d) rather than ((a<b)&&(b<=c)&&(c<d)).
Feb 12 2005
if (a < b <= c < d) ...
This is a perfectly legal expression in D and many other languages, though semantically equivalent to (((a<b)<=c)<d) rather than ((a<b)&&(b<=c)&&(c<d)).
Well, I know that, but wouldn't it be much more natural if it was the second form? I can't think of a case where I'd want it compiled as (((a<b)<=c)<d), while the other case happens very frequently..
I think if this is something the parser can easily handle, it'd be a nice feature to have.. xs0
Feb 13 2005
Recently, in college, I wrote an intermediate compiler that compiled a language similar to Matlab to three-address code (3AC).
When I was doing the grammar that question came up too!
As you said, mathematicians use the notation: a < b < c, which means that b
belongs to the open interval ]a, c[ and is equivalent to (a < b) && (b < c).
However, imho, I think there is some ambiguity, because programming
languages implement boolean expressions and it can have another meaning:
(a < b) < c.
Which evaluates to: ((a < b) ? 1 : 0) < c
Miguel Ferreira Simoes
Feb 12 2005
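For what it's worth, Python does give a < b < c exactly the chained reading proposed in this thread, with b evaluated once and short-circuit evaluation. A quick check of the two interpretations discussed above:

```python
a, b, c = 3, 1, 5

# Chained reading (what the original post asks for):
# evaluated as (a < b) and (b < c), short-circuiting on the first False
chained = a < b < c          # 3 < 1 is False, so the whole expression is False

# C-style left-to-right reading: (a < b) yields a boolean,
# which is then compared (as 0 or 1) against c
c_style = (a < b) < c        # (3 < 1) -> False -> 0, and 0 < 5 -> True

print(chained, c_style)      # False True
```

The two readings give different answers here, which is exactly the ambiguity raised at the end of the thread.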
A traveling salesman approach for predicting protein functions
Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways.
Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae.
We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering
information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two
methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein.
Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful sample of using computer science knowledge and algorithms to study
biological problems.
With the development of the genome projects, the focus of research has shifted from studying a single gene or protein to analyzing groups of genes or proteins. In addition to the progress in genome
sequencing, researchers have also made great progress in the area called proteomics where people study proteins on the genome level based on their sequence data and their interaction information on a
large scale. Protein-protein interactions are very informative for protein function predictions. We speculate that proteins interacting with each other are within the same functional group or within
closely related functional groups. By this reasoning, if we have adequate protein-protein interaction information, we can try to predict the functions of uncharacterized proteins based on their
interacting neighbors that have been characterized. We can further predict the biological pathways of those proteins. With the rapid progress in identifying protein-protein interactions
systematically using two-hybrid experiments and mass spectrometry, we have collected a wealth of information on protein interactions. Here we study in the field of yeast protein clustering and
function prediction utilizing a combinatorial optimization tool. The results show that we can cluster the proteins based on their interaction patterns, and we can make predictions of their biological
functions based on those clustering. The prediction method works better than the traditional method based only on the direct neighbors of the query protein because our method adopts a global view and
hence makes better usage of the information available. With the success in predicting yeast protein functions with the interaction database currently available, we can anticipate continued success as
we get more accurate and larger collections of protein-protein interaction information from additional experimental results. Also, we can apply the same methodology to other more complicated
organisms, including humans.
In a previously reported study [1], the authors have tried to use graphical display to show protein interaction patterns, to identify protein clusters, and to predict protein functions through visual
inspection. This method has given us good intuition about how the proteins are related, but it is not a quantitative or systematic way to study the proteins, especially as the network gets larger.
Other workers in the field have been trying to utilize protein interaction information to predict protein functions using a direct neighbor approach – for a particular protein they try to identify
all its interacting neighbors and use a simple mechanism such as voting ("the majority rule") to determine which might be the most likely function [1]. However, many proteins of unknown functions
either do not have interacting partners that are characterized or have too few of them for us to trust the voting. This has limited the use of the direct neighbor approach or made its predictions
less accurate. In essence the traditional approach adopts a local view of the problem where we only look at the small region of the protein's immediate neighborhood. Here we try to develop an
approach that makes use of the global connectivity pattern of the protein interaction network.
The Traveling Salesman Problem (TSP) is a classic problem in the field of graph theory and combinatorial optimization. The Traveling Salesman Problem can be described as following: Given n cities
where the distances between any two cities are known, a traveling salesman wants to visit all n cities in a tour so that each city is visited exactly once and the total distance of traveling is
minimal [2]. The Traveling Salesman Problem is NP-hard and no polynomial algorithm has been found, but the optimal solutions can be approximated using methods like linear programming or heuristic
searching [3]. Among those solutions, the Concorde program [4] is a state-of-the-art program that has provided good quality solutions within reasonable computation time. Concorde is an award winning
TSP solver publicly available at http://www.tsp.gatech.edu/concorde.html.
The Traveling Salesman Problem has many applications in areas such as vehicle routing, job sequencing and data array clustering. The key to convert a data array clustering problem into a Traveling
Salesman Problem is to think of the rows and columns as the intermediate cities visited [2,5,6]. In order to make this intuitive notion concrete, we build a target function called the measure of
effectiveness (ME) [5] and transform the clustering problem into maximizing this target function, which is a typical global optimization equivalent to TSP. Because of the global and combinatorial
nature of the TSP, viewing protein clustering from the TSP perspective automatically makes use of the global information.
In this approach we download the yeast protein interaction database [7], describe the protein-protein interactions as a connectivity graph represented by the interaction matrix a[ij], and transform
the clustering problem into a Traveling Salesman Problem by using an auxiliary matrix (see methods section.)
When we input the auxiliary matrix to Concorde, we obtain the solution in the form of a permutation of the rows or columns. We re-arrange the rows and columns of the protein interaction matrix
according to this permutation and find the permutation produces a matrix with much better patterns of clustering. To quantitatively define the clusters, we compute the difference scores between each
two adjacent rows by counting how many corresponding cells are of different values. When we plot the difference scores along the permutated rows we find the scores have a distribution of dozens of
peaks over a flat baseline (Figure 1). For comparison, we show in Figure 2 that a random arrangement does not have this pattern.
Figure 1. Difference scores between adjacent proteins on the Traveling Salesman Rearrangement of the protein interaction matrix.
Figure 2. Difference scores between adjacent proteins on a random arrangement of the protein interaction matrix.
We analyze the distribution of the difference scores in Figure 1 and use the 95th percentile of the scores as a cut-off to define the boundaries of protein clusters. This way we use a cut-off score of
22 and define 75 clusters for the whole protein network. The cut-off is empirical in that we have tried different cut-off values and have found that a cut-off between 90% and 97% produces good
clustering and prediction. When the cut-off is too high, the clusters contain too many proteins some of which are not similar at all and this dilutes the useful information; on the other hand, when
the cut-off is too low, the clusters are too small so there is not enough information for protein function prediction.
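As a sketch of the boundary-finding step just described (illustrative Python, not the study's Perl; the function and variable names are ours): compute the difference score between each pair of adjacent rows of the permuted matrix, then cut wherever the score exceeds a percentile threshold.

```python
def cluster_boundaries(matrix, perm, pct=95):
    """Split a permuted interaction matrix into clusters by cutting
    between adjacent rows whose difference score (the number of
    columns where the two rows disagree) exceeds the pct-th
    percentile of all adjacent-row scores."""
    rows = [matrix[i] for i in perm]
    scores = [sum(a != b for a, b in zip(r1, r2))
              for r1, r2 in zip(rows, rows[1:])]
    cutoff = sorted(scores)[min(len(scores) - 1, len(scores) * pct // 100)]
    clusters, current = [], [perm[0]]
    for i, s in enumerate(scores):
        if s > cutoff:          # a peak in the difference scores
            clusters.append(current)
            current = []
        current.append(perm[i + 1])
    clusters.append(current)
    return clusters
```

On a toy matrix with two identical-row blocks, a low cut-off splits the two blocks apart, while a cut-off at the maximum score leaves everything in one cluster, mirroring the trade-off described above.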
We download the protein function catalogue [7]. To get an idea of how the clustering information can contribute to the protein function prediction, we first use the following strategy to predict the
functions for a particular protein: for all other proteins in the same cluster, if it is a protein with known functions, we take the vote from that protein and increment the frequency count of each
of those functions. We sort the frequency list of the functions and get the top three of the list. We consider the top three our predictions for the most likely functions of the query protein. We use
this strategy and the direct neighbor approach [1,8] separately on all known proteins and compare the predictions with the true functions respectively. For each protein with known functions, we
compare its true functions with the three predictions we give; if any of the protein's functions belongs to the top three predictions we count that protein as correctly predicted. We look at the
prediction accuracy for proteins of different degrees (a protein's degree in the network is defined as the number of its immediate neighbors) and we see the advantage of the global optimization and
clustering. The direct neighbor approach cannot predict for a protein with no characterized neighbors because there is no voting input. Plus, if the protein has very few characterized neighbors, the
votes are too sparse to give accurate predictions. The Traveling Salesman's approach, on the other hand, can make meaningful predictions in such situations because it allows us to get some useful
information from other proteins in the cluster even if the query protein does not have many characterized immediate neighbors. We can see from Figure 3 that when the protein's degree is small, the
global prediction method can produce predictions of significantly higher accuracy than the direct neighbor approach. However, when the protein's degree is high, the direct neighbor approach makes
slightly better predictions, probably because for such a protein there is already enough information to make good predictions by the direct neighbor approach while the global method introduces extra
noise. Figure 3 gives encouraging information because the majority of the proteins have low degrees (About 30% of the proteins have degree of 1). Plus, those lower degree proteins are usually the
less characterized ones of the most interest to biologists. Therefore, the advantage our method has on lower degree proteins can be very helpful for predicting biological functions for those proteins.
Figure 3. Prediction accuracy comparison. Comparison of prediction accuracies between the direct neighbor approach and the new global optimization approach using the TSP algorithms.
The analysis above gives us a better understanding of the behaviors of the two approaches. To get the best of both worlds, we use a combination of the two approaches in our prediction. We try various
combinations of the two using either the protein's degree or the protein's similarity with the query protein in terms of interaction patterns as the cut-off. We find the following rule simple and
effective – when the protein's degree is lower than 4, we use the TSP approach; when the protein's degree is greater than or equal to 4, we use the direct neighbor approach. This way we get a
prediction accuracy of 69.72% as compared to the 64.81% from the direct neighbor approach alone.
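A minimal sketch of this combined rule (illustrative Python; the data-structure names `neighbors`, `cluster_of`, and `functions` are ours, and as in the study the prediction is the top three vote-getters):

```python
from collections import Counter

def predict_functions(protein, neighbors, cluster_of, functions,
                      top=3, degree_cutoff=4):
    """Hybrid rule from the text: a protein of degree < 4 collects
    votes from every characterized protein in its TSP cluster;
    a protein of degree >= 4 collects votes from its direct
    neighbors only. Returns the `top` most frequent functions."""
    direct = neighbors[protein]
    voters = direct if len(direct) >= degree_cutoff else cluster_of[protein]
    votes = Counter(f for p in voters if p != protein
                    for f in functions.get(p, ()))
    return [f for f, _ in votes.most_common(top)]
```

A low-degree protein thus still gets meaningful votes from its cluster even when it has no characterized direct neighbor, which is where the global method gains its advantage.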
These results show that our method can give better predictions of protein functions by incorporating the global optimization algorithms. This is especially useful when the protein has no or very few
characterized neighbors directly interacting with it. Since we utilize global information of the interactions instead of concerning only the direct interactions in the neighborhood, the method is
more robust with regard to local inaccuracies or incompleteness of the information.
One concern of this method is the accuracy of the topology of the network. We need to take into account the false positives and false negatives of the protein-protein interactions obtained from
experiments. Incorrect input data affect prediction accuracy and we expect to get better predictions when the input data has better quality.
Another determinant of the method's prediction accuracy is the fineness of the classification. The more coarse-grained the classification, the fewer degrees of freedom there are in the network, and
therefore the more likely we will have a correct prediction. On the other hand, to gain more insight from the predictions, we would like the method to be able to predict protein functions on a more
fine-grained level.
At this point, we use the first level GO (Gene Ontology) classification for our prediction based on the information we can get from the protein function catalogue. In the future, with more protein
annotation and protein-protein interaction data available, we will apply our prediction system on different levels of classification and find a level at which we can predict with meaningful accuracy
and the predictions will be most insightful for further biological studies.
Certainly, when we predict the functions for uncharacterized proteins, the ultimate way to validate the predictions is to perform biological experiments. However, the predictions produced by
computational methods can give us a good place to begin the experimental exploration and can hence reduce the amount of bench work needed. Instead of speculating wild guesses for an uncharacterized
protein's functions, we can form some educated hypotheses and perform experiments to test those hypotheses. The clustering information can also give us some insights into biological pathways because
proteins functioning in the same pathway tend to interact with each other and fall into the same cluster. Therefore the global optimization clustering method can help us either to better understand
some pathways or to find the missing pieces in them.
The combinatorial optimization tools we use here can easily be used on larger data sets. Given that the Concorde program has solved a 24,978-city TSP problem to the optimum [4], we can expect it to
solve the TSP problems obtained from the protein-protein interaction matrices of most organisms. When we get adequate protein-protein interaction information for other organisms, we can use the same
methodology to predict protein functions and biological pathways for those organisms.
The reason why our TSP solver based clustering performs better than traditional gene clustering algorithms such as hierarchical clustering or nearest neighbor tree clustering is that the latter
methods are essentially greedy bottom-up algorithms where they progressively combine the most similar nodes or clusters at each step till a tree of clusters is built [12]. Greedy algorithms adopt
locally best decisions at each step and are likely to face very costly moves at later stages. For this reason, greedy algorithms tend to produce sub-optimal solutions especially for larger problems.
In contrast, in our approach, we use Concorde the TSP solver to find the globally optimal solution for the TSP equivalent of our clustering problem. Aiming at global optimization, our method works
better especially in the context that there are thousands of nodes (proteins) to be clustered.
Most recently, Climer and Zhang have proposed the TSPCluster algorithm [12], which is an improved TSP-based approach to optimal rearrangement clustering. Their algorithm produces optimal solutions
when we have known the number of clusters (k) we are going to cluster the data into and the goal is to find the cluster borders optimally [12]. The rationale is that if we know the number of clusters
beforehand, we can use that information, introduce dummy nodes, and modify the objective function to minimize the total intra-cluster dissimilarity while tolerating large inter-cluster dissimilarity [13]. Their algorithm works better in situations where we know in advance the range of values for the number of clusters k that are of interest, and we can try a few k values in that range. An example
of such situations would be to determine the locations of a few distribution centers based on population clustering [12]. In the more explorative situations like we have now where we do not know how
many clusters the proteins are going to be clustered into based on their interaction information, it is better to use our algorithm to globally cluster the proteins and use that clustering
information for protein function prediction. After we have performed the study by our algorithm and found out the viable number of clusters k, we can further apply the TSPCluster algorithm with that
k value and some nearby values to additionally optimize the clustering.
The success of the method relies on the insight that we need to get information, not only from the protein's immediate neighbors, but also from other components more remotely related. Our method is
still a simple one in that we adopt a simple rule where we use the clustering information for proteins with small numbers of neighbors and use direct voting for proteins with more neighbors. If we
try to perceive the protein interaction relationship with a more integrated view, we can see that a protein can have direct neighbors, indirect neighbors with a certain number of "bridge" proteins,
non-neighbors in the same cluster, and non-neighbors in different clusters. If we assign different weights to those relationships according to the distances of how the proteins are related, and we
fine-tune the weights based on our training sets, we hope to get a more sophisticated and more accurate prediction system.
In this study we have performed yeast protein clustering and function prediction utilizing a combinatorial optimization tool. Our results show that we can cluster the proteins based on their
interaction patterns, and that we can make predictions of the biological functions of uncharacterized proteins based on the clustering. The clustering reveals the global patterns of protein-protein
interactions within and across functional classes. Although the clustering is not an exact replica of the protein-protein interactions in a proteome, it can be used as a base for protein function
prediction. Our prediction method works better than the traditional method based only on the direct neighbors of the query protein in terms of prediction accuracy and prediction robustness with
regard to local inaccuracies or incompleteness. The advantage is more prominent when the protein has very few characterized immediate neighbors or no such neighbors at all. The success of our method
lies in the fact that it adopts a global view and hence makes better use of the information available.
Our approach is the first one to use the Traveling Salesman Problem, a classical and well studied computer problem and a combinatorial optimization tool, to study the protein-protein interactions
from a global point of view. The results show that this approach is a promising one to be used as a general tool to predict functions of uncharacterized proteins. This is a successful sample of using
computer science knowledge and algorithms to study biological problems. With the success of being able to predict yeast protein functions more accurately based on the yeast protein database currently
available, we can anticipate continued success as we get more complete protein-protein interaction information from additional experimental results. Also, we can apply the same methodology to other
more complicated organisms, including humans.
Data and software
We downloaded the yeast protein interaction database and yeast protein function catalogue from the Comprehensive Yeast Genome Database (http://mips.gsf.de/genre/proj/yeast/). We downloaded the Concorde TSP solver from the Concorde home page (http://www.tsp.gatech.edu//concorde/downloads/downloads.htm). We ran the Concorde program on a Sun® Ultra 10 workstation with a total of 32 GB memory.
Our protein function prediction algorithm was implemented in the Perl programming language and was run on an Intel® Xeon processor 2.80 GHz with 2.00 GB RAM installed with the Microsoft Windows operating system. We used Microsoft Excel and the SAS® software package for data analysis.
Transformation of the protein clustering to a Traveling Salesman Problem
Let ρ indicate the permutation of both the rows and the columns because the interaction matrix is a symmetric one. The Measure of Effectiveness (ME) represents the overall similarity and it is the objective function to be maximized. ME is calculated as follows [2,9]:

$$ME(\rho) = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} a_{\rho(i)\rho(j)}\left[a_{\rho(i)\rho(j+1)} + a_{\rho(i)\rho(j-1)} + a_{\rho(i+1)\rho(j)} + a_{\rho(i-1)\rho(j)}\right],$$

where out-of-range terms are taken to be zero. With the symmetry between the rows and columns, this function reduces (up to a constant factor) to

$$ME(\rho) = \sum_{i=1}^{n}\sum_{j=1}^{n-1} a_{\rho(i)\rho(j)}\, a_{\rho(i)\rho(j+1)}.$$
Therefore, the network clustering problem becomes a combinatorial optimization problem where the optimal clustering corresponds to the configuration or permutation ρ where ME(ρ) is maximal. This amounts to a Traveling Salesman Problem looking for an optimized permutation ρ with the distance matrix being $d_{ij} = -\sum_{k=1}^{n} a_{ik}\,a_{jk}$ [2,9,10]. We use the formula $C_{ij} = K + d_{ij}$, where $K = \max_{i,j}\sum_{k=1}^{n} a_{ik}\,a_{jk}$, to make sure the matrix cells are non-negative numbers. By this analysis, we transform the problem of rearranging the protein-protein interaction matrix into a Traveling Salesman Problem which can be represented by a new matrix $C$, where $C_{ij} = K - \sum_{k=1}^{n} a_{ik}\,a_{jk}$. We call this new matrix the auxiliary matrix [11]. The solution to this Traveling Salesman Problem gives us the permutation we need to rearrange the protein interaction matrix.
Authors' contributions
Olin Johnson conceived of the study, participated in its design and revised the manuscript. Jing Liu carried out the design, implementation and data analysis and drafted the manuscript. Both authors
read and approved the final manuscript.
Sign up to receive new article alerts from Source Code for Biology and Medicine | {"url":"http://www.scfbm.org/content/1/1/3","timestamp":"2014-04-16T19:47:41Z","content_type":null,"content_length":"89616","record_id":"<urn:uuid:8a64e755-7267-45ab-9b5c-f4c971dd763e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
Video Library
Since 2002 Perimeter Institute has been recording seminars, conference talks, and public outreach events using video cameras installed in our lecture theatres. Perimeter now has 7 formal presentation
spaces for its many scientific conferences, seminars, workshops and educational outreach activities, all with advanced audio-visual technical capabilities. Recordings of events in these areas are all
available On-Demand from this Video Library and on Perimeter Institute Recorded Seminar Archive (PIRSA). PIRSA is a permanent, free, searchable, and citable archive of recorded seminars from relevant
bodies in physics. This resource has been partially modelled after Cornell University's arXiv.org.
How should we think about quantum computing? The usual answer to this question is based on ideas inspired by computer science, such as qubits, quantum gates, and quantum circuits. In this talk I will
explain an alternate geometric approach to quantum computation. In the geometric approach, an optimal quantum computation corresponds to "free falling" along the minimal geodesics of a certain
Riemannian manifold.
The anatomy of a black hole. Learning Outcomes: • What are the mass requirements for a star to become a black hole? • The anatomy of a Schwarzschild black hole, including the singularity and the
event horizon. • What a traveller would experience if he orbited a black hole, or had the bad luck to fall through the event horizon.
The physical attributes of a black hole and what types of physical evidence astronomers use the locate them. Learning Outcomes: • What are the physical requirements for a star to become a black hole,
and what properties of that star remain after the black hole is formed?• The types of black holes, including: the Schwarzschild black hole, the Reissner-Nordström black hole, the Kerr black hole, and
the Kerr-Newman black hole.• What a traveller would experience if he orbited one of these more general black holes, or fell through to the singularity.
An introduction to a few of the major scientists who applied Einstein's ideas to better understand the life cycle of various stars. Learning Outcomes: • How Subrahmanyan Chandrasekhar resolved the
paradox of the white dwarf star, and how Walter Baade and Fritz Zwicky described the dynamics of neutron stars. • Yakov Zel'dovich develops the nuclear chain reaction that is the engine that keeps
stars burning.
The mathematical predictions made by scientists tell a story of the life and death of stars. Learning Outcomes: • How the Hertzsprung-Russel diagram describes the life cycle of stars. • Depending on
its mass, how a star ends its life as a white dwarf star, a neutron star, or a black hole, and where super novas fit in. • How the mathematical predictions of white dwarf stars, super novas, and
neutron stars are slowly verified by the advancement of the astronomical equipment used by astronomers. | {"url":"https://www.perimeterinstitute.ca/video-library?title=&page=641","timestamp":"2014-04-21T16:42:03Z","content_type":null,"content_length":"63111","record_id":"<urn:uuid:3450c7e7-ae61-421a-b24f-c34d1c9d4f09>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
Josephine, TX Algebra Tutor
Find a Josephine, TX Algebra Tutor
...I’ve tutored over 100 hours of SAT and ACT prep, including over 30 hours of SAT and ACT Reading. The SAT Critical Reading section requires strong vocabulary skills, particularly for the
sentence completion portion. I include vocabulary lessons and sentence completion practice in my tutoring sessions to help students gain confidence in answering vocabulary-related questions.
15 Subjects: including algebra 1, algebra 2, reading, geometry
...My expertise is in math of all levels. I believe that prior knowledge frequently gets lost when learning new material. By supporting new learning with acquired skills, students do better on
10 Subjects: including algebra 1, geometry, ASVAB, SAT math
...With the ability to do that, you can solve similar problems on your quizzes, lab reports, tests, exams (SAT, SAT 2). If you have problems in Gen. Chemistry, Gen. Physics, Algebra, Calculus I
and II, I could be a resource for you.
24 Subjects: including algebra 2, algebra 1, English, calculus
...I also tutor high school math. I help students with their understanding of new and complicated concepts. My aim, as a tutor, is to help students to solve the problems with their best abilities
without overwhelming them.
19 Subjects: including algebra 1, algebra 2, physics, chemistry
...I am confident that I can assist students with any biology concepts. I am certified to teach any science course for grades 6-12, but have concentrated on teaching physics for the last 4 years
of my 20 years in teaching. I have completed training in AP physics and am currently taking classes at SMU to obtain a Master Physics Teacher certificate.
6 Subjects: including algebra 1, chemistry, physics, biology
Related Josephine, TX Tutors
Josephine, TX Accounting Tutors
Josephine, TX ACT Tutors
Josephine, TX Algebra Tutors
Josephine, TX Algebra 2 Tutors
Josephine, TX Calculus Tutors
Josephine, TX Geometry Tutors
Josephine, TX Math Tutors
Josephine, TX Prealgebra Tutors
Josephine, TX Precalculus Tutors
Josephine, TX SAT Tutors
Josephine, TX SAT Math Tutors
Josephine, TX Science Tutors
Josephine, TX Statistics Tutors
Josephine, TX Trigonometry Tutors
Nearby Cities With algebra Tutor
Anna, TX algebra Tutors
Campbell, TX algebra Tutors
Celeste algebra Tutors
Commerce, TX algebra Tutors
Cumby algebra Tutors
Elmo, TX algebra Tutors
Merit algebra Tutors
Nevada, TX algebra Tutors
Princeton, TX algebra Tutors
St Paul, TX algebra Tutors
Van Alstyne algebra Tutors
West Tawakoni, TX algebra Tutors
Westminster, TX algebra Tutors
Weston, TX algebra Tutors
Wills Point algebra Tutors
Algebraic integers
April 11th 2009, 02:35 PM #1
Suppose $a,b \in \mathbb{R}$ are such that the complex number $a+bi$ is a root of unity and $p(a)=0$ for some monic polynomial $p(x) \in \mathbb{Z}[x].$ Show that $a \in \{-1,0,1 \}.$
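A sketch of one standard argument (not from the thread), using the constant term of the minimal polynomial:

```latex
% p monic in Z[x] with p(a) = 0 makes a an algebraic integer, so its
% minimal polynomial m(x) = x^d + \dots + c_0 lies in Z[x].
% If a + bi = e^{2\pi i k/n}, then a = \cos(2\pi k/n), and each Galois
% conjugate of a has the form \cos(2\pi j/n) \in [-1,1]. Hence
|c_0| = \prod_j \lvert \cos(2\pi j/n) \rvert \le 1
  \quad\Longrightarrow\quad c_0 \in \{-1, 0, 1\}.
% c_0 = 0 forces m(x) = x, so a = 0; |c_0| = 1 forces every conjugate
% to have absolute value exactly 1, and a real number in [-1,1] of
% absolute value 1 is \pm 1, so a = \pm 1.
```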
Error analysis of implicit functions
up vote 2 down vote favorite
I'm trying to do propagation of error using the linearized variance method (assuming independent variables, thus no need for the covariance terms):
$$\sigma^2_f = \sum_{k=1}^{n} \left(\frac{\partial f}{\partial x_k}\right)^2 \sigma^2_{x_k}$$
However, I have a nasty function that doesn't give me a clear-cut explicit definition of my variable. For simplicity, I will just give a small example of what I'm trying to accomplish. Take the following:
$$x - y = e^f + e^{2f} + e^{2x}$$
This is algebraically impossible to solve explicitly for $f = f(x,y)$. So, if I wanted to find the variance of $f$, I had two ideas (one of them backfired...). First, make a new function equal to zero:
$$g(x,y,f) = e^f + e^{2f} + e^{2x} + y - x = 0$$
That way I could easily find the partial derivatives, and the variance of this new function would be zero, since its value always equals zero.
$$\sigma^2_g = 0$$
Unfortunately, this backfired on me (after I did the 26 partial derivatives, ouch) as you can see with
$$\sigma^2_g = \left(\frac{\partial g}{\partial x}\right)^2 \sigma^2_x + \left(\frac{\partial g}{\partial y}\right)^2 \sigma^2_y + \left(\frac{\partial g}{\partial f}\right)^2 \sigma^2_f$$
If you set the variance of g to zero, then you could solve for the variance of f, and be a happy camper! Wrong. Because all the terms are squared, there is NO WAY they can add up to be zero unless
they are all identically zero. That really messed me up.
The other idea I had was just to take the original equation and perform the partial differentiation with respect to $x$ and $y$ on each side, then solve for the partial derivative quantities. That would require me to do a complete overhaul of all my work.
My question then: is there any way to use the first method I thought of, just modifying my steps? Or maybe a third way? If not, then I will be surprised, since mathematics usually has a way to solve such twisted scenarios. Please advise soon, as I need to finish this up quickly.
st.statistics fa.functional-analysis differential-equations ap.analysis-of-pdes
3 Answers
Alas, your second idea is the only correct one to use here. The big (and irreparable) problem with your first idea is that you tried to use the formula for the variance of a function of several independent variables in the case when they are severely dependent (namely, for $x,y,f=f(x,y)$). So, however sad it is, you've got to redo everything from the beginning.

Except you can't just naively divide by the partial differentials; you have to multiply by the inverse Jacobian to find the correct equation.
Actually, I figured it out with a stroke of luck last night. I was so set on using an implicit function that I forgot the rest of the implications that go along with that.

The Implicit Function Theorem provides a method to perform implicit differentiation:

$\frac{\partial f}{\partial x} = -\frac{\displaystyle\frac{\partial g}{\partial x}}{\displaystyle\frac{\partial g}{\partial f}}$

This relationship is often used in manipulating total differentials. I used it a lot in my thermodynamics class, which does this kind of calculation all the time. So instead of re-doing everything, I just had to use the existing partials I had and divide them by the appropriate partial as shown above. Hooray! All hard work was not lost! Thanks anyway for your thoughts and contributions.
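The implicit-differentiation route can also be sanity-checked numerically: solve $g(x,y,f)=0$ for $f$ by bisection, form the partials of $g$ analytically, and compare $-g_x/g_f$ against a finite difference of the numerical solution. A minimal sketch (the sample point and uncertainties below are arbitrary, chosen so that a root exists):

```python
from math import exp, sqrt

def g(x, y, f):
    # the implicit relation rearranged to zero: g(x, y, f) = 0
    return exp(f) + exp(2 * f) + exp(2 * x) + y - x

def solve_f(x, y, lo=-10.0, hi=10.0):
    # g is strictly increasing in f, so bisection finds the unique root
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(x, y, mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def sigma_f(x, y, sigma_x, sigma_y):
    # linearized variance using implicit partials: f_x = -g_x/g_f, f_y = -g_y/g_f
    f = solve_f(x, y)
    g_f = exp(f) + 2 * exp(2 * f)
    f_x = -(2 * exp(2 * x) - 1) / g_f
    f_y = -1.0 / g_f
    return sqrt((f_x * sigma_x) ** 2 + (f_y * sigma_y) ** 2)
```

At, e.g., $(x,y)=(0.1,-3)$ the analytic $f_x$ agrees with a central finite difference of `solve_f` to several digits, confirming that dividing the existing partials of $g$ is all that was needed.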
| {"url":"http://mathoverflow.net/questions/4074/error-analysis-of-implicit-functions","timestamp":"2014-04-21T15:12:08Z","content_type":null,"content_length":"58156","record_id":"<urn:uuid:8ffe26d1-ba7b-476b-b6ca-ff4676eae4b79>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Jinjin Li, Zhaowen Li
A Positive Answer to Velichko's Question
We give a positive answer to Velichko's question in which the quotient and $s$-map is replaced by a sequence-covering and $cs$-map. In addition, let $X$ have a star-countable $k$-network; then $X$ is a
sequence-covering and $cs$-image of a locally separable metric space if and only if $X$ is a sequence-covering and $cs$-image of a metric space.
Sequence-covering maps, $cs$-mappings, cs-networks, $k$-networks, compact-countable covers, star-countable collections, cosmic spaces, $\aleph_{0}$-spaces.
MSC 2000: Primary 54E99, 54C10. Secondary 54D55 | {"url":"http://www.emis.de/journals/GMJ/vol13/13-2-10.htm","timestamp":"2014-04-21T02:30:56Z","content_type":null,"content_length":"1184","record_id":"<urn:uuid:cfb3c1da-9d9d-48de-997e-e19d5bc62475>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Valley Stream Geometry Tutor
...Therefore, I make sure students are on grade level, know grammar rules, can comprehend, can effectively communicate, can read clearly and concisely, speak well and can edit their own work. I
have written many educational articles for a number of computer magazines and newspapers. I have assiste...
17 Subjects: including geometry, reading, English, writing
...I received A grades in math up to Calculus during undergrad. I have tutored many hours of SAT Math, as well as all of the relevant high school material. I received my bachelor's of science in
biology from SUNY Geneseo.
24 Subjects: including geometry, chemistry, ASVAB, physics
...It is often the course where students become acquainted with symbolic manipulations of quantities. While it can be confusing at first (eg "how can a letter be a number?"), it can also broaden
your intellectual scope. It's a step out of the morass of arithmetic into a more obviously structured way of thinking.
25 Subjects: including geometry, chemistry, physics, calculus
...I hope you like what you see and I look forward to working with you.I am currently certified to teach early childhood, which includes pre-kindergarten until 2nd grade. I'm certified in general
education and special education. I received my M.S.Ed from Hunter College.
19 Subjects: including geometry, reading, writing, algebra 1
...Proficiency with Physics Laws and Concepts. Prepare a perfect base knowledge to absorb complex problems in further studies Habit of keeping myself updated, learning something new everyday.
Competitive exams have always drawn the best output from me.
9 Subjects: including geometry, physics, GRE, algebra 1 | {"url":"http://www.purplemath.com/valley_stream_ny_geometry_tutors.php","timestamp":"2014-04-19T17:02:12Z","content_type":null,"content_length":"24044","record_id":"<urn:uuid:71b4f588-6b3a-4c90-8c3f-1afe58292d89>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"} |
Addition Formulae
January 10th 2008, 01:48 PM #1
Jan 2008
Addition Formulae
Hey guys, im really stuck on a question from my maths homework, mainly because in school we have never done an addition formula with more than two terms in each set of brackets, any help would be
greatly appreciated.
This is what i have so far:
(cosP + cosG + cosF)^2 + (sinP + sinG + sinF)^2
= (cos^2 P + 2cosPcosGcosF + cos^2 F) + (sin^2 P + 2sinPsinGsinF + sin^2 F)
= 1 + 1 + (2cosPcosGcosF) + (2sinPsinGsinF)
= 2 + 2(cosPcosGcosF + sinPsinGsinF)
= 2 + 2cos(P - G - F)
= 2[1 + cos(P - G - F)]
It is not at all clear from what you have posted what the objective is.
But your algebra is lacking.
$\left( {\cos (x) + \cos (y) + \cos (z)} \right)^2 =$$\cos ^2 (x) + \cos ^2 (y) + \cos ^2 (z) + 2\cos (x)\cos (y) + 2\cos (x)\cos (z) + 2\cos (y)\cos (z)$.
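Carrying the full expansion through for three angles yields the identity $(\cos P+\cos G+\cos F)^2+(\sin P+\sin G+\sin F)^2 = 3+2[\cos(P-G)+\cos(P-F)+\cos(G-F)]$, not $2[1+\cos(P-G-F)]$. A quick numerical spot-check (the angle values are arbitrary):

```python
from math import sin, cos

P, G, F = 0.3, 1.1, 2.5  # arbitrary test angles, in radians

lhs = (cos(P) + cos(G) + cos(F)) ** 2 + (sin(P) + sin(G) + sin(F)) ** 2
rhs = 3 + 2 * (cos(P - G) + cos(P - F) + cos(G - F))
claimed = 2 * (1 + cos(P - G - F))  # the expression derived in the first post
```

The first two agree to machine precision, while the claimed form does not, which confirms the algebra slip pointed out above.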
Thanks for putting it so bluntly
Sorry about the clarity of my last post, i didnt really know how to put it into a clear question since the question in my homework only says "solve". Its on Addition Formulae though so i assumed
someone here would know how to do it.
I dont really understand what youve said there, i typed the equations into my calculator using random numbers and the expanded version comes out with a syntax error, is that just me inputting it
wrong or something?? Sorry im really bad at maths.
You still have not told us what the question is asking!
But thank you for being honest. You have actually made a point that I have said many times.
“If someone does not know mathematics, calculators or computers are useless”.
Now to your basic problem. I think that the help that you really need is not available in any forum such as this one. You need a one-on-one tutorial.
January 10th 2008, 02:26 PM #2
January 10th 2008, 03:06 PM #3
Jan 2008
January 10th 2008, 03:43 PM #4 | {"url":"http://mathhelpforum.com/trigonometry/25878-addition-formulae.html","timestamp":"2014-04-18T12:34:52Z","content_type":null,"content_length":"40740","record_id":"<urn:uuid:a3b6d04b-6aa2-426e-9c19-a2bae7bb51ad>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vector how to find the equation..
here is the question that I have, and I am confused:
find the equation of the straight line which is perpendicular to the plane
and which goes through the point (1,1,7)
is the point (5,7,15) on this line?
that maybe is something!? | {"url":"http://mathhelpforum.com/calculus/90699-vector-how-find-equation.html","timestamp":"2014-04-18T07:48:53Z","content_type":null,"content_length":"49575","record_id":"<urn:uuid:c3de81c0-a4cf-49c1-9a9f-643c3a24a945>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00370-ip-10-147-4-33.ec2.internal.warc.gz"} |
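Since the plane itself is omitted above, here is a generic sketch with a hypothetical normal vector $\mathbf n=(2,3,4)$ (an assumption, not taken from the question): a line perpendicular to a plane uses the plane's normal as its direction, and a point lies on the line exactly when its displacement from the known point is parallel to that normal.

```python
def cross(u, v):
    # cross product of two 3-vectors
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def point_on_line(q, p, n):
    # q lies on the line through p with direction n iff (q - p) x n = 0
    d = tuple(qi - pi for qi, pi in zip(q, p))
    return cross(d, n) == (0, 0, 0)

# line through (1, 1, 7) perpendicular to a plane with (assumed) normal (2, 3, 4)
print(point_on_line((5, 7, 15), (1, 1, 7), (2, 3, 4)))
```

With this particular normal the displacement $(4,6,8)=2\,(2,3,4)$ is parallel to $\mathbf n$, so $(5,7,15)$ would lie on the line; for a different plane the same check applies unchanged.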
sampling without replacement
August 5th 2010, 10:43 PM #1
Mar 2010
sampling without replacement
Say I have fifty objects and 1 is the object of interest. With replacement I can model this as a geometric distribution to find the expectation of the number of Bernoulli trials required until I pick the object of interest. But if there is no replacement then the probabilities change, so how do I model this scenario in order to find quantities such as expectation and variance?
Please be more specific. Are you drawing a sample of fixed size (5 objects, say), or do you have something else in mind like drawing objects until you draw your special object?
i have n marbles. All are white except for one which is blue.If i continually choose one marble at a time without replacement, what is the expectation until i choose the blue?
I think i have figured it out, because my notes have something on this type of distribution, so i have solved the question, but i still don't fully understand the logic, which is annoying.
i have n marbles. All are white except for one which is blue.If i continually choose one marble at a time without replacement, what is the expectation until i choose the blue?
I think i have figured it out, because my notes have something on this type of distribution, so i have solved the question, but i still don't fully understand the logic, which is annoying.
Let X be the random variable of the number of white marbles chosen before a blue marble is chosen.
Draw a tree diagram to an extend where it's sufficient for you to see a pattern.
P(X=1) = ((n-1)/n) × (1/(n-1)) = 1/n
P(X=2) = ((n-1)/n) × ((n-2)/(n-1)) × (1/(n-2)) = 1/n
so every value of X from 0 to n-1 has probability 1/n, and the expectation is $\sum_{k=0}^{n-1} k\cdot\frac{1}{n} = \frac{n-1}{2}$
so if you have 50 marbles, you are expected to pick (50-1)/2 white marbles before the blue marble is picked.
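The tree-diagram argument shows each draw position for the blue marble is equally likely, i.e. $P(X=k)=1/n$ for $k=0,\dots,n-1$, so $E[X]=(n-1)/2$. A quick Monte-Carlo check of that conclusion (a sketch; the trial count and seed are arbitrary):

```python
import random

def mean_whites_before_blue(n, trials=20000, seed=1):
    # every arrangement is equally likely, so the blue marble's draw
    # position is uniform on 0..n-1, and X equals that position
    random.seed(seed)
    return sum(random.randrange(n) for _ in range(trials)) / trials
```

For n = 50 the estimate hovers around (50−1)/2 = 24.5, as derived above.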
August 6th 2010, 01:47 PM #2
August 6th 2010, 06:33 PM #3
Mar 2010
August 7th 2010, 12:20 AM #4
MHF Contributor
Sep 2008
West Malaysia | {"url":"http://mathhelpforum.com/statistics/152899-sampling-without-replacement.html","timestamp":"2014-04-17T21:32:40Z","content_type":null,"content_length":"38733","record_id":"<urn:uuid:809033cd-cb62-4207-a676-51b9135edf0d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
uniform continuity
Let $f:\mathbb{R}^{2} \to \mathbb{R}$ be defined as $f(x,y)=\text{max}\{|x|,|y|\}$. Prove that f is uniformly continuous
You can also show that the function is Hölder continuous on $\mathbb{R}^2$; that is, $|f(\mathbf{y})-f(\mathbf{x})| \le C|\mathbf{y}-\mathbf{x}|^{\lambda}$ for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^2$, where $C$ and $\lambda$ are nonnegative real constants. If a function is Hölder continuous, then it is uniformly continuous.
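A sketch of the estimate with $C=\lambda=1$ (Lipschitz, hence Hölder): since $|\max\{a_1,a_2\}-\max\{b_1,b_2\}|\le\max\{|a_1-b_1|,|a_2-b_2|\}$ for any reals, taking $a_i=|x_i|$, $b_i=|y_i|$ and using the reverse triangle inequality gives

```latex
|f(\mathbf{x})-f(\mathbf{y})|
  \le \max\bigl\{\,\bigl||x_1|-|y_1|\bigr|,\ \bigl||x_2|-|y_2|\bigr|\,\bigr\}
  \le \max\{\,|x_1-y_1|,\ |x_2-y_2|\,\}
  \le |\mathbf{x}-\mathbf{y}|.
```

The bound is uniform in $\mathbf{x},\mathbf{y}$, so uniform continuity follows with $\delta=\varepsilon$.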
Mathematical English Usage
Mathematical English Usage - a Dictionary
We must now bring dependence on $d$ into the arguments of [9].
The proof will be divided into a sequence of lemmas.
We can factor $g$ into a product of irreducible elements.
Other types fit into this pattern as well.
This norm makes $X$ into a Banach space.
We regard (1) as a mapping of $S^2$ into $S^2$, with the obvious conventions concerning the point $\infty$.
We can partition $[0,1]$ into $n$ intervals by taking ......
The map $F$ can be put $\langle$brought$\rangle$ into this form by setting ......
The problem one runs into, however, is that $f$ need not be ......
But if we argue as in (5), we run into the integral ......, which is meaningless as it stands.
Thus $N$ separates $M$ into two disjoint parts.
Now (1) splits into the pair of equations ......
Substitute this value of $z$ into $\langle$in$\rangle$ (7) to obtain ......
Replacement of $z$ by $1/z$ transforms (4) into (5).
This can be translated into the language of differential forms.
Implementation is the task of turning an algorithm into a computer program.
| {"url":"http://www.impan.pl/cgi-bin/dict?into","timestamp":"2014-04-21T12:33:36Z","content_type":null,"content_length":"4932","record_id":"<urn:uuid:2e1d40b1-57c0-4b23-a613-c3b51a0c4177>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert torr to micron mercury [0 °C] - Conversion of Measurement Units
›› Convert torr to micron mercury [0 °C]
›› More information from the unit converter
How many torr in 1 micron mercury [0 °C]? The answer is 0.00100000015001.
We assume you are converting between torr and micron mercury [0 °C].
You can view more details on each measurement unit:
torr or micron mercury [0 °C]
The SI derived unit for pressure is the pascal.
1 pascal is equal to 0.00750061673821 torr, or 7.50061561303 micron mercury [0 °C].
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between torrs and microns mercury.
Type in your own numbers in the form to convert the units!
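The quoted factors can be turned directly into a conversion routine by passing through the SI unit (the pascal); the constants below are the ones stated on this page.

```python
# pressure-unit factors from this page, both relative to the pascal
TORR_PER_PASCAL = 0.00750061673821
MICRON_HG_PER_PASCAL = 7.50061561303

def torr_to_micron_hg(torr):
    pascals = torr / TORR_PER_PASCAL
    return pascals * MICRON_HG_PER_PASCAL

def micron_hg_to_torr(micron_hg):
    pascals = micron_hg / MICRON_HG_PER_PASCAL
    return pascals * TORR_PER_PASCAL
```

1 micron of mercury comes out to about 0.001 torr, matching the 0.00100000015001 figure above.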
›› Definition: Torr
The torr is a non-SI unit of pressure, named after Evangelista Torricelli. Its symbol is Torr.
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
| {"url":"http://www.convertunits.com/from/torr/to/micron+mercury+%5B0+%C2%B0C%5D","timestamp":"2014-04-18T03:37:34Z","content_type":null,"content_length":"20101","record_id":"<urn:uuid:468162f3-f9df-4c9f-a281-a9fbaaeeb831>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Dual of a Basis for a Hopf Algebra Contained in all Dually Paired Hopf Algebras
For an infinite-dimensional Hopf algebra $H$, a Hopf algebra $H'$ non-degenerately dually paired with it, and a choice of basis $e_i$ of $H$, is the dual basis $e^i$ (defined of course by $e^i(e_j) = \delta_{ij}$) contained in $H'$?
I am interested in the specific case of $SU_q(N)$ and the dually paired Hopf algebra $\mathfrak{sl}_N$.
quantum-groups qa.quantum-algebra hopf-algebras
1 Answer
No. Let $k$ be a field of characteristic $0$. Consider the symmetric algebra on one generator $k[x]$, with comultiplication $x \mapsto x\otimes 1 + 1\otimes x$. It has a Hopf pairing with itself, given by $\langle x^m,x^n\rangle = n!\,\delta_{m=n}$. Then consider the basis of $k[x]$ given by expansion around $1$, i.e. $e_n = (x-1)^n$. The dual basis, if it exists, includes $e^0$ such that $\langle e^0, (x-1)^n\rangle = \delta_{0,n}$. So suppose that $e^0 = \sum a_n x^n$; then: $$ \begin{aligned} 1 & = a_0 \\ 0 & = a_1 - a_0 \\ 0 & = 2a_2 - 2a_1 + a_0 \\ \dots & \phantom= \dots \\ 0 & = \sum_{k=0}^n (-1)^{n-k}\, k!\, a_k \binom{n}{k} \\ \dots & \phantom= \dots \\ \end{aligned} $$ The solution is that all $a_n = 1/n!$, i.e. $e^0 = \sum x^n/n! = \exp(x)$. But this is not a polynomial.
In the case you ask about, you will similarly have some bases with dual bases, and some without.
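The linear system can be checked numerically: with $\langle x^m,x^k\rangle = k!\,\delta_{mk}$, pairing $\sum_m a_m x^m$ against $(x-1)^n$ gives $\sum_{k=0}^n (-1)^{n-k}\binom{n}{k}\,k!\,a_k$, and the coefficients $a_k = 1/k!$ of $\exp(x)$ do satisfy $\langle e^0,(x-1)^n\rangle=\delta_{0,n}$. A small sketch:

```python
from math import comb, factorial

def pairing_with_basis(n, a):
    # <sum_m a(m) x^m, (x-1)^n> under <x^m, x^k> = k! * delta_{mk}
    return sum((-1) ** (n - k) * comb(n, k) * factorial(k) * a(k)
               for k in range(n + 1))

a_exp = lambda k: 1 / factorial(k)  # coefficients of exp(x)
```

Only finitely many coefficients enter each pairing, which is why the dual "vector" can exist formally while failing to be a polynomial.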
| {"url":"http://mathoverflow.net/questions/56679/dual-of-a-basis-for-a-hopf-algebra-conatined-in-all-dually-paired-hopf-algebras","timestamp":"2014-04-20T11:30:38Z","content_type":null,"content_length":"51256","record_id":"<urn:uuid:0a6a856b-5028-469b-bb2c-a8646aea773a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
What are the different types of triangles?
• 11 months ago | {"url":"http://openstudy.com/updates/5188885de4b0f910f30a8ab7","timestamp":"2014-04-17T04:05:00Z","content_type":null,"content_length":"58732","record_id":"<urn:uuid:46371bdc-c1b9-4efa-92d1-d588f1a84265>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Is physical space three dimensional? Mathematical perspectives...
Replies: 4 Last Post: Feb 4, 2013 10:04 PM
Messages: [ Previous | Next ]
Re: Is physical space three dimensional? Mathematical perspectives...
Posted: Feb 4, 2013 10:04 PM
get rid of the lightconeheads;
use quaternions, where "t is the real,
scalar parameter; thank you."
> of quaternions, or "the first-get vector mechanics,
> wherefrom we get *all* of the lingo thereof."
thus increasing the concentration of iron,
decreases the bio-availablity of all of the other trace-elements;
we cannot be sure that all 92 of them are not ultimately required,
although most of them are known to be so,
viz. molybdenum, vanadium, etc., for animal nutrition;
read a dogfood label & create a recipe!
> >There's no reason not to increase the size of the tests.
> If it was done in measured amounts
> and increases the edible fish population,
the Copenhagenskoolers have merely "reified" the math
of the mere probablities, which are no different in effect
than they are for flipping a coin;
til you actually look at the result,
it might be funny. and, if you try
to look at anything that is "not on your scale,"
either interatomic or intergalactic, you will find odd,
"quantized" behaviors that are amenable to "QM" ...
as well as to a modicum of good sense.
thus quoth:
observation/measurement) forcing change on the electron's position or
momentum. I've heard the phenomenon explained in this way many times,
with the unexplained caveat that this implies a fundamental randomness
to the universe, rather than just the nature of trying to observe it.
the speed of lightwaves' propagation is dependent
solely upon the index of refraction of the medium,
which is related to its a)
composition and b)
density; not the *sotto voce* velocity
of some ur-newtonian corpuscles,
already set-up to violate Snell's law (of refraction) by Sir Isaac.
that was really dumb.
> basically, he's hide-bound by certain things he's learned in the past
> (like momentum = m*v) and won't let them go.
typically obfuscating nondimensional analysis,
you twosome. space is clearly three-dimensional
for several applications (surveying & navigation e.g.),
and it is clearly "more complicated that that,"
for both interatomic & intergalactic processes,
hence the enormous efficacy of stringtheory
for appreciating the many "quantized" phenomena,
beginning with the Kaluza theory.
however, the "compactification to a string"
of Klein may not be necessary;
it is simply abstruse math-speak;
it may not be "hidden variables," but
there are "hidden dimensionalities" or spaces,
at least in a formal sense.
it is not that Kaluza was wrong, it's just that
no-one knew how to treat of such dimensionality,
other than through phase-spaces (like Hamiltonians and
Lagrangians) -- with the notable exception
of quaternions, or "the first-get vector mechanics,
wherefrom we get *all* of the lingo thereof."
> > Th. Kaluza agreed with Einstein and in 1921 tried
> > to explain SRT using 5D space.
Date Subject Author
2/2/13 Re: Is physical space three dimensional? Mathematical perspectives... socratus@bezeqint.net
2/4/13 Re: Is physical space three dimensional? Mathematical perspectives... Brian Q. Hutchings
2/4/13 Re: Is physical space three dimensional? Mathematical perspectives... Brian Q. Hutchings | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2432391&messageID=8248954","timestamp":"2014-04-16T17:03:09Z","content_type":null,"content_length":"21535","record_id":"<urn:uuid:8da7b9c4-7217-4d30-ae0e-689fa0efca9b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00155-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics 1152q > Bayer > Notes > 11.2 - Series | StudyBlue
| {"url":"http://www.studyblue.com/notes/note/n/112-series-/file/106318","timestamp":"2014-04-21T12:12:45Z","content_type":null,"content_length":"35985","record_id":"<urn:uuid:baafac3e-0360-401b-bf2e-ab58bed29749>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Random variables. An RVar is a sampleable random variable. Because probability distributions form a monad, they are quite easy to work with in the standard Haskell monadic styles. For examples, see
the source for any of the Distribution instances - they all are defined in terms of RVars.
class Monad m => RandomSource m s
A source of entropy which can be used in the given monad.
See also MonadRandom.
Minimum implementation is either the internal getRandomPrimFrom or all other functions. Additionally, this class's interface is subject to extension at any time, so it is very, very strongly
recommended that the randomSource Template Haskell function be used to implement this function rather than directly implementing it. That function takes care of choosing default implementations for
any missing functions; as long as at least one function is implemented, it will derive sensible implementations of all others.
To use randomSource, just wrap your instance declaration as follows (and enable the TemplateHaskell, MultiParamTypeClasses and GADTs language extensions, as well as any others required by your
instances, such as FlexibleInstances):
$(randomSource [d|
instance RandomSource FooM Bar where
{- at least one RandomSource function... -}
Monad m0 => RandomSource m0 (m0 Double)
Monad m0 => RandomSource m0 (m0 Word64)
Monad m0 => RandomSource m0 (m0 Word32)
Monad m0 => RandomSource m0 (m0 Word16)
Monad m0 => RandomSource m0 (m0 Word8)
Monad m => RandomSource m (GetPrim m)
class Monad m => MonadRandom m where
A typeclass for monads with a chosen source of entropy. For example, RVar is such a monad - the source from which it is (eventually) sampled is the only source from which a random variable is
permitted to draw, so when directly requesting entropy for a random variable these functions are used.
Minimum implementation is either the internal getRandomPrim or all other functions. Additionally, this class's interface is subject to extension at any time, so it is very, very strongly recommended
that the monadRandom Template Haskell function be used to implement this function rather than directly implementing it. That function takes care of choosing default implementations for any missing
functions; as long as at least one function is implemented, it will derive sensible implementations of all others.
To use monadRandom, just wrap your instance declaration as follows (and enable the TemplateHaskell and GADTs language extensions):
$(monadRandom [d|
instance MonadRandom FooM where
getRandomDouble = return pi
getRandomWord16 = return 4
{- etc... -}
type RVar = RVarT Identity
An opaque type modeling a "random variable" - a value which depends on the outcome of some random event. RVars can be conveniently defined by an imperative-looking style:
normalPair = do
u <- stdUniform
t <- stdUniform
let r = sqrt (-2 * log u)
theta = (2 * pi) * t
x = r * cos theta
y = r * sin theta
return (x,y)
OR by a more applicative style:
logNormal = exp <$> stdNormal
Once defined (in any style), there are several ways to sample RVars:
• In a monad, using a RandomSource:
runRVar (uniform 1 100) DevRandom :: IO Int
• In a monad, using a MonadRandom instance:
sampleRVar (uniform 1 100) :: State PureMT Int
• As a pure function transforming a functional RNG:
sampleState (uniform 1 100) :: StdGen -> (Int, StdGen)
(where sampleState = runState . sampleRVar)
data RVarT m a
A random variable with access to operations in an underlying monad. Useful examples include any form of state for implementing random processes with hysteresis, or writer monads for implementing
tracing of complicated algorithms.
For example, a simple random walk can be implemented as an RVarT IO value:
rwalkIO :: IO (RVarT IO Double)
rwalkIO = do
lastVal <- newIORef 0
let x = do
prev <- lift (readIORef lastVal)
change <- rvarT StdNormal
let new = prev + change
lift (writeIORef lastVal new)
return new
return x
To run the random walk it must first be initialized, after which it can be sampled as usual:
rw <- rwalkIO
x <- sampleRVarT rw
y <- sampleRVarT rw
The same random-walk process as above can be implemented using MTL types as follows (using import Control.Monad.Trans as MTL):
rwalkState :: RVarT (State Double) Double
rwalkState = do
prev <- MTL.lift get
change <- rvarT StdNormal
let new = prev + change
MTL.lift (put new)
return new
Invocation is straightforward (although a bit noisy) if you're used to MTL:
rwalk :: Int -> Double -> StdGen -> ([Double], StdGen)
rwalk count start gen =
flip evalState start .
flip runStateT gen .
sampleRVarTWith MTL.lift $
replicateM count rwalkState
MonadTrans RVarT
MonadPrompt Prim (RVarT n)
Monad (RVarT n)
Functor (RVarT n)
Applicative (RVarT n)
MonadIO m => MonadIO (RVarT m)
MonadRandom (RVarT n)
runRVarTWith :: forall m n s a. RandomSource m s => (forall t. n t -> m t) -> RVarT n a -> s -> m a
"Runs" an RVarT, sampling the random variable it defines.
The first argument lifts the base monad into the sampling monad. This operation must obey the "monad transformer" laws:
lift . return = return
lift (x >>= f) = (lift x) >>= (lift . f)
One example of a useful non-standard lifting would be one that takes State s to another monad with a different state representation (such as IO with the state mapped to an IORef):
embedState :: (Monad m) => m s -> (s -> m ()) -> State s a -> m a
embedState get put = \m -> do
s <- get
(res,s) <- return (runState m s)
put s
return res
The ability to lift is very important - without it, every RVar would have to either be given access to the full capability of the monad in which it will eventually be sampled (which, incidentally,
would also have to be monomorphic so you couldn't sample one RVar in more than one monad) or functions manipulating RVars would have to use higher-ranked types to enforce the same kind of isolation
and polymorphism.
sampleRVarTWith :: forall m n a. MonadRandom m => (forall t. n t -> m t) -> RVarT n a -> m a
sampleRVarTWith lift x is equivalent to runRVarTWith lift x StdRandom. | {"url":"http://hackage.haskell.org/package/rvar-0.2.0.1/docs/Data-RVar.html","timestamp":"2014-04-17T20:14:20Z","content_type":null,"content_length":"22054","record_id":"<urn:uuid:7687261a-c659-48db-9d01-ce57801c9d2e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
the Philosophy of Organism
Bell's Theorem and the Theory of Relativity
----An Interpretation of Quantum Correlation at a Distance based on the Philosophy of Organism------
Yutaka Tanaka
This paper starts with the observation that the combination of the so-called EPR argument and Bell's theorem reveals one of the most paradoxical features of quantum reality, i.e. the non-separability
of two contingent events. If we accept the conclusion of the revised EPR argument together with Bell's theorem, we are necessarily led to the denial of local causality which was presupposed by the
original version of Einstein's criticism against quantum physics. As the concept of local causality is a cornerstone of Einstein's theory of relativity, we next consider the problem of compatibility
between the theory of relativity and quantum physics. Popper's proposal of going back to Lorentz's theory is examined and rejected because the quantum correlation of EPR is not to be interpreted as
"an action at a distance' which we can control and use as the operational definition of absolute simultaneity. An inquiry into something like aether as hidden reality behind the theory of relativity
is considered as retrogressive as the so-called hidden variable theory of quantum physics. Accepting the non-separability of local elements of reality as the undeniable fact, we discuss the
possibility of a realistic interpretation of quantum physics which transcends scientific materialism and classical determinism. As an example of such projects, Stapp's theory is examined with respect
to a Whiteheadian process philosophy which provides the metaphysical background for his realistic interpretation of quantum physics. Finally, we present another version of quantum metaphysics based
on "the philosophy of organism" which is broad enough to include observer and observed, local causality and non-local correlation, space and time, and potentiality and actuality in the inseparable
unity of physical reality.
I. Einstein's Criticism of Quantum Mechanics and Bell's Theorem
The experimental test of Bell's theorem which the French physicist Alain Aspect conducted in 1982 attracted the attention of those who were interested in philosophical problems of quantum physics.^
(1) This experiment manifested one of the most paradoxical characteristics of the quantum system, namely the non-separability of two contingent events, concerning the correlation of polarized photon
pairs at a distance. Both philosophers and physicists were reminded of the celebrated debate between Bohr and Einstein about the completeness of quantum mechanics in the 1930s.^(2) The imaginary
experiment, which Einstein used in his polemics against the alleged completeness of quantum mechanics, became a real one through the progress of technology. The combination of conceptual analysis and
experimental tests revived the controversy about the philosophical status of quantum physics in the new light. The test of Bell's theorem became a starting point for refreshed research into the
nature of quantum phenomena for those who ventured on a new cosmology beyond a positivistic or pragmatic interpretation of quantum formulae.^(3) As philosophers and physicists do not seem to
appreciate the meta-theoretical significance of Einstein's criticism of the Copenhagen interpretation of quantum mechanics, I shall first reconsider the so-called EPR argument which Einstein
presented with his collaborators, Podolsky and Rosen, and then evaluate this argument in the light of experimental tests of Bell's theorem.
The original form of the EPR argument was summed up by Einstein and his coauthors as follows^(4):
In a complete theory there is an element corresponding to each element of reality. A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty,
without disturbing the system. In quantum mechanics in the case of two physical quantities described by non-commuting operators, the knowledge of one precludes the knowledge of the other. Then either
(1) the description of reality given by the wave function is not complete or (2) these two quantities cannot have simultaneous reality. Consideration of the problem of making predictions concerning a
system on the basis of measurements made on another system that had previously interacted with it leads to the result that if (1) is false then (2) is also false. One is thus led to conclude that the
description of reality as given by a wave function is not complete.
We can write the above argument in the form of a syllogism which contains two propositions.
Proposition C (the completeness of quantum mechanics): Quantum mechanics is complete in the sense that there are no hidden parameters which explain the statistical data in a deterministic way.
Proposition S (the simultaneous reality of complementary physical quantities):
The complementary physical quantities, to which the canonical conjugate operators correspond in the standard formulation of quantum physics, have simultaneous reality in the sense that we can predict
with certainty their values without disturbing the system.
The formal structure of the EPR argument is as follows:
The Major Premise: C → ¬S
The Minor Premise: C → S
The Conclusion: ¬C
This argument is sometimes called the EPR paradox, for it says that if we admit the completeness of quantum physics, then we are necessarily led into the contradiction (S ∧ ¬S).
It is noteworthy that the semantic structure of the EPR argument against the completeness of quantum mechanics is similar to Goedel's argument against the completeness of formalized arithmetic, for
Goedel proved that if a formalized system of arithmetic is consistent, then it cannot be complete. As the criteria of completeness are different between formalized arithmetic and physics, this
similarity only holds in an analogous sense, but it helps us to understand the meta-physical aspect of the EPR argument. Bohr seemed to understand this aspect of the argument, for he once said that
he could see no reason why the prefix "meta" should be reserved for logic and mathematics and why it was anathema in physics.^(5) The Bohr-Einstein debate was essentially meta-physical in the sense
that they tackled the aporias of quantum physics at and beyond the boundary of human observation.
The EPR argument was not generally accepted as valid by his contemporary physicists, because it was interpreted as an argument against the indeterminacy principle established by Heisenberg. Though
Einstein's earlier arguments against the Copenhagen interpretation aimed at pointing out a possibility of measuring two complementary physical quantities beyond the limit of exactitude imposed by the
indeterminacy principle, the purpose of the EPR argument was not the refutation of this principle, but essentially the semantic claim that if we accept the completeness of quantum physics, then we
are, through considering a suitable imaginary experiment, necessarily led to the contradiction of both accepting and not accepting the indeterminacy principle.
The imaginary experiment in the EPR argument involved a system of two particles described by a single entangled wave function.
The Assumption L (the separability of local elements of reality):
A physical system is separable into two or more parts which are causally independent of each other at any given instant. The observation of the one cannot causally influence that of the other in so
far as the four-dimensional distance between them is space-like (dx^2 + dy^2 + dz^2 - c^2dt^2 > 0).
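Assumption L can be stated operationally: two measurement events are causally independent whenever their separation is space-like in the above sense. A minimal numerical sketch (the 12 m / 10 ns figures are illustrative, roughly the scale of a polarization-correlation apparatus):

```python
# Check the space-like separation condition from Assumption L,
# with the sign convention used in the text:
# dx^2 + dy^2 + dz^2 - c^2 dt^2 > 0  means space-like.
C = 299_792_458.0  # speed of light, m/s

def is_spacelike(dx, dy, dz, dt):
    return dx**2 + dy**2 + dz**2 - (C * dt)**2 > 0

# Detectors 12 m apart with a 10 ns timing window: light covers only
# about 3 m in 10 ns, so no signal can connect the two measurements.
print(is_spacelike(12.0, 0.0, 0.0, 10e-9))  # True
```

Events failing this test lie inside each other's light cones and may be causally connected in the ordinary relativistic sense.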
The reformed EPR argument shows that the alleged incompleteness is proved only under the assumption of L.
The Major Premise: C → ¬S
The Minor Premise: (C ∧ L) → S
The Conclusion: ¬(C ∧ L), i.e. C → ¬L
Though Einstein mentioned this assumption toward the end of his paper, he took it for granted because the breakdown of L was so unreasonable for him. If the principle of local causality did not hold,
then the partial description of the whole universe would be, strictly speaking, impossible on account of the dubious concept of a closed system in the level of quantum phenomena.
It was Bohm^(6) who first explicitly stated that the assumption L was incompatible with the current theoretical structure of quantum mechanics. He even said that the name "quantum mechanics" might be
a misnomer because "mechanics" is necessarily associated with L, i.e. the separability of local elements of reality.^(7)
A local hidden variable theory can be given a precise formal expression.^(8) The earlier impossibility proof was, however, not conclusive in the case of a hidden variable theory which does not share the axioms of quantum mechanics.^(9)
Bell proposed a crucial experiment to decide between quantum mechanics and a local hidden variable theory.^(10) He proved that there is a limit on the extent of correlation of statistical results that can be
expected for any type of local hidden variable theory. The limit is expressed in the form of inequality which is now called the Bell inequality. Bell showed that quantum mechanics sometimes violates
this inequality, especially in the correlation at a distance in the imaginary experiment of the EPR argument. This experiment became realizable when we replace the original version of the EPR
experiment with the measurement of spin-components of two spin-1/2 particles or with the measurement of polarization of two photons. Real tests of the Bell inequality have been carried out by many
groups of investigators.^(11) The most conclusive was done by Aspect (1982) and the result was that the Bell inequality was really violated.
Aspect utilized the correlated photon pairs I and II. The results +1 and -1 are assigned to linear polarizations parallel or perpendicular to the orientation of the polarizer, and this orientation is
characterized by unit vectors a and b.
The quantum state of the whole system can be expressed as the following superposition:
Then we can calculate the probability of each photon's polarization along a given direction: P[+](a) = P[-](a) = P[+](b) = P[-](b) = 1/2.
Let E[QM](a,b) be the coefficient of correlation between two quantum events at the polarizers a and b:
E[QM](a,b) = P[++](a,b) + P[--](a,b) - P[+-](a,b) - P[-+](a,b) = cos 2(a,b).
If a = b, then E[QM](a,b) = 1, which means perfect positive correlation.
This kind of perfect correlation seems miraculous if we assume the completeness of quantum mechanics and admit the coincidence between two contingent events, for we may wonder how one photon "knows"
which channel was chosen at the last moment for the other. The "miracle" would disappear if we succeeded in making a locally deterministic model for the above simultaneous perfect correlation. Such a
model has to assume a hidden causal mechanism which predetermines both results of measurement. This causal mechanism can be represented by local hidden variables.
For simplicity we assume one hidden variable λ, and that a and b are neither parallel nor perpendicular.
Let the functions A(a,λ) and B(b,λ) determine the measured values of polarization at the polarizers a and b respectively:
A(a,λ) = 1 or -1; B(b,λ) = 1 or -1.
Then the coefficient of correlation E(a,b) is given by the statistical expectation value with respect to the distribution ρ(λ) of the hidden variable: E(a,b) = ∫ A(a,λ)B(b,λ)ρ(λ) dλ.
With respect to four different directions a, a', b, b', we define the quantity S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
Then we can prove the inequality -2 ≤ S ≤ 2.
This is the Bell inequality which was tested by Aspect's experiment.^(12)
Quantum physics shows that S[QM] based on E[QM] does not satisfy this inequality. In the experimental situation in which the angles between the polarizers are (a,b) = (a',b) = (a',b') = 22.5°,
(a,a') = (b,b') = 45°, and (a,b') = 67.5°, we get S[QM] = 2√2 ≈ 2.83, which invalidates the Bell inequality.
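The arithmetic behind this violation can be checked directly from the quantum correlation E[QM](a,b) = cos 2(a-b). A short sketch in Python, with polarizer orientations chosen to reproduce the angle differences quoted above:

```python
import math

def E_qm(a_deg, b_deg):
    # Quantum correlation for polarization-entangled photon pairs:
    # E_QM(a, b) = cos 2(a - b), polarizer angles given in degrees.
    return math.cos(math.radians(2 * (a_deg - b_deg)))

# Orientations giving (a,b) = (a',b) = (a',b') = 22.5 deg and
# (a,b') = 67.5 deg, as in the text.
a, a_prime, b, b_prime = 0.0, 45.0, 22.5, 67.5

S = E_qm(a, b) - E_qm(a, b_prime) + E_qm(a_prime, b) + E_qm(a_prime, b_prime)
print(round(S, 4))  # 2.8284, i.e. 2*sqrt(2) > 2
```

Each of the four terms contributes +√2/2, so S = 2√2, exceeding the bound of 2 that any local hidden variable theory must respect.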
As quantum mechanics and any kind of local hidden variable theory predict different statistical results, the above experiment may well be called a crucial experiment. As the result was for quantum
mechanics, we must conclude that we cannot make quantum mechanics complete by introducing local hidden variables.
The violation of the Bell inequality means that the combined proposition (¬C ∧ L), i.e. a local hidden variable theory, is false,
for such a local hidden variable theory cannot explain the correlation at a distance in the system of two particles which have previously interacted with each other. If we admit both the validity of
the EPR argument and the experimental test of Bell's theorem, we have to abandon L, i.e. the separability of local elements of reality.
The Conclusion of the Reformed EPR Argument: ¬(C ∧ L)
The Experimental Test of Bell's Theorem: ¬(¬C ∧ L)
The Final Conclusion: ¬L
We must notice that the final conclusion is independent of our attitude toward the completeness versus incompleteness problem of quantum mechanics.
We cannot prove, as Einstein intended to do, the incompleteness of quantum mechanics as a result of the falsified premise L, but this falsification itself depends logically upon the validity of the
EPR argument and empirically upon the violation of the Bell inequality, for the validity of the argument is one thing and the truth of its conclusion is quite another.
Moreover, the EPR argument makes us reconsider the nature of the indeterminacy principle, for there seems to exist no mechanical interference between the observer and the observed in the imaginary
experiment concerned. This principle was originally interpreted by Heisenberg as the inevitable inexactitude of measurement due to uncontrollable mechanical interactions between the observer and the
observed, but once we have verified the simultaneous correlation between distant events and admit the non-separability of local elements of reality, we must amend Heisenberg's interpretation in such a
way that the indeterminacy principle holds primarily on the level of the definition of quantum phenomena where the observer and the observed are not separable from each other. Bohr seemed to
anticipate this view in his reply to the EPR argument^(13):
Of course there is in a case like that just considered no question of a mechanical disturbance of the system under investigation during the last critical stage of the measuring procedure. But even at
this stage there is essentially the question of an influence on the very conditions which define the possible types of predictions regarding the future behavior of the system.
Bohr, however, rejected the semantic criterion of completeness and reality in the EPR argument, and chose to talk only about quantum phenomena which we can define through the macroscopic apparatus of
observation. Bohr's standpoint was that quantum mechanics did not require the depth structure under quantum phenomena, but certainly not that it was ontologically self-sufficient for the
world-description. Rather, classical physics was to Bohr indispensable for the definition of quantum phenomena. It was important for Bohr to recognize that "however far the phenomena transcend the
scope of classical physical explanation, the account of all evidence must be expressed in classical terms".^(14) So quantum phenomena need classical physics for their definition in terms of
experimental apparatus, whereas any single classical model of reality cannot exhaust the varieties of quantum phenomena. Bohr's philosophy of complementarity was pragmatic and provisional in the
sense that it sidestepped the difficult problem of quantum measurement, i.e. how to describe "the collapse of the wave function" within the framework of quantum mechanics.^(15) If we want to get a
unified picture of macroscopic and microscopic reality, we must present a suitable framework of ontology which can assimilate the main characteristics of quantum physics, especially the
non-separability of local elements of reality.
II. Quantum Correlation and the Theory of Relativity
Some philosophers and physicists, facing the breakdown of locality, proposed going back to the problem situation before Einstein. Bell himself suggested a possibility of the restoration of the
absolute framework presupposed by Lorentz's theory of electrons and aether, because "behind the scenes something is going faster than light."^(16) Popper more explicitly stated this possibility^(17):
It is only now, in the light of the new experiments stemming from Bell's work, that the suggestion of replacing Einstein's interpretation by Lorentz's can be made. If there is action at a distance,
then there is something like absolute space. If we now have theoretical reasons from quantum theory for introducing absolute simultaneity, then we would have to go back to Lorentz's interpretation.
Popper's opinion, however, is very dubious when we reexamine Lorentz's own comparison of his and Einstein's interpretation of the "Lorentz" transformation in the supplements of his Theory of
Electrons.^(18) Whereas Lorentz derived this transformation through considering the relation between the true and universal frame of reference (S) and an apparent and local one (S'), Einstein
abolished the very distinction between true and apparent or between absolute and relative. The Lorentz transformation became the symmetric interrelation between two inertial systems in the theory of
relativity. The crucial difference between the two interpretations is as follows:
(1) The contraction of a measuring rod and the delay of a clock were, according to Lorentz, caused by an electron's movement through aether absolutely at rest. Lorentz explained away these "weird"
effects by appealing to aether as a hidden reality. The constant velocity of light was to Lorentz a paradoxical fact to be explained away by ad hoc hypotheses about the unknown causal mechanism of aether.
(2) Einstein considered the contraction of a measuring rod and the delay of a clock, not as causal effects of unknown reality, but as the symmetric effects between S and S' which should be
interpreted to be derived from the definition of space-time metric. If we rely on S, then we must say the measuring rod of S' contracts and the clock of S' delays. Symmetrically, if we rely on S',
then we must also say the measuring rod of S contracts and the clock of S delays. The hidden causal mechanism was, to Einstein, not only useless, but also contradictory because mathematical formulae
of the Lorentz transformation exclude the non-symmetric interpretation. The constant velocity of light was, however paradoxical it might seem, not to be explained away as an exceptional phenomenon,
but to be accepted as the universal principle which made it possible to reconstruct Newtonian mechanics in combination with the principle of relativity.
It is noteworthy that the relation between Einstein's theory of relativity and Lorentz's theory of aether was similar to that between quantum mechanics and the hidden variable theory. This similarity
suggests that the methodology of special relativity was more revolutionary and akin to quantum mechanics than Lorentz's, and that modeling the correlations of quantum mechanics on an essentially
classical and pre-relativistic theory like Lorentz's seems fruitless and retrogressive.
Moreover, there are several arguments against the restoration of the absolute frame of reference. The simultaneous correlation in quantum physics is different from a Newtonian type of action at a
distance. The former is probabilistic and non-controllable whereas the latter is deterministic and controllable. So we cannot send information with superluminal speed on the basis of the distant
simultaneous correlation in quantum physics. We cannot acquire information through the random sequence of measured values at one side without comparing them with the results of the other side. As the
coincidence of two contingent events cannot be used for sending information with superluminal speed for the purpose of synchronizing two clocks at a distance, the empirical test of Bell's theorem
does not make Einstein's theory of relativity invalid through the alleged discovery of a prohibited action. We may theoretically introduce absolute simultaneity, but we do not have any experimental
arrangement to detect the existence of the absolute frame of reference.
Instead of the restoration of an abolished classical theory, Stapp made a radically progressive trial of introducing something like absolute time by supposing the deep structure below Lorentz
invariant phenomena.^(19) This structure was described by him as that of events which have the absolutely linear order of "coming into existence". Stapp's theory had an ontological background
provided by Hartshorne's version of process metaphysics, according to which the ultimate realities are events and the whole universe has a cumulative structure of creative advance with a cosmic
simultaneous "front" of actuality. The purpose of Stapp's theory was to ensure both the macroscopic causality properties with Lorentz-invariance and all of quantum theory on the basis of his
metaphysics of events. We may say that Stapp replaces the classical concept of aether with the absolute world of events which are logically prior to space-time. The main characteristic of Stapp's
theory was that he adopted the absolute and universal concept of existence in which what comes into existence does not depend on a space-time standpoint, whereas Einstein's theory of relativity
relied on the relative and local concept of existence in which what comes into existence depends on a space-time standpoint. As the breakdown of the Bell inequality requires some events to depend on
other events whose positions lie outside their backward light-cones, Stapp postulated that the sequence of actualized events should be well-ordered even in the case of spatially distant events.
Though I agree with Stapp that the ontological framework of events is necessary for the unified picture of the world, I do not think he is justified in introducing the absolutely well-ordered
structure of events. Einstein's theory of relativity, which only admits the partially-ordered structure of events, seems more plausible in consideration of the Bell-Aspect experiment.
In the simplest cases of Bell's phenomena there are four events E[0], E[1], E[2], and E[3] whose locations L[0], L[1], L[2], and L[3] lie in four well-separated experimental areas A[0], A[1], A[2],
and A[3]. If all events lie in the well-ordered sequence of occurrence as Stapp assumed, there must be an unambiguous temporal order between E[1] and E[2]: one of the two events must be prior to the
other. Suppose E[1] is prior to E[2]. Then E[2] depends on what the experimenter in A[1] has decided to do, whereas E[1] is independent of what the experimenter in A[2] will decide to do. Stapp thus
reduced the "simultaneous" correlation between E[1] and E[2] to the unilateral influence of one upon the other. The difficulty of the above picture is that there does not seem to be any experimental
apparatus to determine which is prior, E[1] or E[2]. Though we may guess that an influence or superluminal signal must have gone from L[1] to L[2], or from L[2] to L[1], we do not know which one is
the cause of the other. There is a remnant of classical causality in Stapp's model, in which the mutuality or interdependence of quantum phenomena totally disappears. In other words, Stapp's model
does not seem to consider the "individuality" of the quantum system which Bohr emphasized in his doctrine of complementarity between space-time coordination and causality. This "individuality" can be
expressed as the organic interdependence between parts of the quantum system: the whole may be in a definite state, i.e. may have as definite properties as quantum theory permits, without its parts
being in definite states. The two particles of the imaginary experiment in the EPR argument and the two photons of Aspect's experiment are examples of the inseparable parts of an "individual"
organism. In this organic unity there cannot be a determinate causal order between all parts of the whole. In the above case there remains an essential ambiguity of causal order between E[1] and
E[2], because their correlation is symmetrical and not detectable until we monitor and record it in L[3], i.e. the common causal future of L[1] and L[2]. This ambiguity is characteristic of the
relativistic framework of space-time, and any attempt of restoring the absolute framework tends to violate not only the principle of relativity but also the principle of complementarity between
space-time coordination and causality.
In the next section I will present another model which aims at synthesizing the principle of relativity and quantum correlation on the basis of the philosophy of organism. In this model events are,
as in Stapp's and Hartshorne's process metaphysics, basic ontological categories from which material objects and space-time are derived. The background philosophy of organism is more similar to
Whitehead's own cosmology than to Stapp's and Hartshorne's revised version, for the fundamental vision of Whitehead's philosophy is, as Nobo clearly explicated,^(20) the mutual immanence of discrete
events regardless of their temporal relationship, whereas "process" philosophers seem to stress only the immanence of earlier events in later ones. We will find that the immanence of later
events in earlier ones and contemporaries in each other are indispensable for the understanding of quantum correlation. The "organic" model of quantum reality is also similar to the Hua-yen Buddhist
doctrine of simultaneous interfusion and interpenetration signifying unity-in-multiplicity, for it rejects the notion of independent self-existence which Hua-yen Buddhists called svabhava in their
doctrines of pratitya-samutpada (interdependent origination).^(21) The concept of the absolute frame of reference should be replaced with the idea of thoroughgoing relativity: we need not postulate
the absolutely unique temporal order. Even the absolute world of four-dimensional space-time as prefixed reality in Einstein's theory of relativity should be abolished if we take into account the
complementarity between space-time coordination and causality. If we are, as Bohr aptly stated, simultaneously actors as well as spectators on the great stage of life, the image of a scientist as an
outside spectator should be replaced with that of a participating observer inseparably involved in the object to be observed.
III. Quantum Correlation viewed from the Philosophy of Organism
The peculiarity of quantum correlation is caused by the so-called "collapse of the wave function". One of the unsolved problems of quantum mechanics concerns the nature of this discontinuous
phenomenon. The usual framework of quantum theory does not describe the process of collapse itself but simply accepts it as the result of measurement in the statistical data of observation. In other
words, the collapse of the wave function belongs, not to the object language of quantum formulae, but to the meta-language of quantum mechanics which correlates mathematical formulae and experimental
data. Many physicists have tried to enlarge the framework of quantum mechanics enough to give a unified description of observer and observed, i.e. the microscopic measured system and the macroscopic
measuring apparatus, but there seems not to be a unanimous resolution of this conundrum.
d'Espagnat pointed out the enigma of the "collapse of the wave function" as follows:^(22)
The puzzle with which we have to struggle is constituted by the fact that, since the wave function is a non-local entity, its collapse is a non-local phenomenon. According to the formalism, this
phenomenon propagates instantaneously. In that sense we may say that the wave packet reduction is a non-covariant process. Again, this would create no difficulty if, like the reduction of
probabilities in classical phenomena, this collapse were of a purely subjective nature. But we have seen quite strong arguments in favor of the thesis that it is not.
d'Espagnat's comment that the wave collapse is not to be solved by a subjective interpretation of probability is important, for it excludes an easy "solution" of the conundrum by appealing to our
ignorance of initial conditions. Certainly, if we get new information about a system, then the probability distribution of the quantities which characterize the system changes discontinuously. The
discontinuous change in quantum physics cannot, however, be explained away by this kind of probabilistic argument. Such general arguments are unsatisfactory because they do not take into
consideration the peculiar characteristics of the quantum mechanical algorithm of probability. The probability wave and the probability amplitude represented by a complex number were totally unknown
before quantum physics. They behave in a very inconceivable way, as if they violated classical logic.
For example, the famous double slit experiment shows that even in the case of only one particle, say a photon, interference occurs between two mutually exclusive possibilities, i.e. the possibility
of the same particle's going through one slit A and the alternative possibility of its going through another slit B. So if we represent the third event, say the effect of the photon on the
photographic plate, with C, then the probability of C is not the sum of the probabilities of the two alternative routes through A and through B: the probability amplitudes, not the probabilities
themselves, are additive, and their cross term produces the interference.
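The additivity of amplitudes rather than probabilities can be made concrete with two complex amplitudes of equal magnitude and opposite phase (a sketch; the numerical values are illustrative):

```python
import cmath

# Amplitudes for the photon reaching a point C on the plate via slit A
# or via slit B (equal magnitudes; only the relative phase matters).
psi_A = 0.5 * cmath.exp(1j * 0.0)
psi_B = 0.5 * cmath.exp(1j * cmath.pi)  # half-wavelength path difference

p_separate = abs(psi_A)**2 + abs(psi_B)**2  # probabilities added: 0.5
p_interfere = abs(psi_A + psi_B)**2         # amplitudes added first

print(p_separate)   # 0.5
print(p_interfere)  # ~0.0: total destructive interference at this point
```

The classical mixture predicts probability 0.5 at this point; the quantum rule, adding amplitudes before squaring, predicts essentially zero, which is the dark fringe of the interference pattern.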
Finkelstein stresses the need of quantum logic as a non-Aristotelian logic in the description of the microscopic world just as we need a non-Euclidean geometry in the theory of general relativity.^
(23) I prefer to say that if we need quantum logic, then it must be a kind of modal logic with the distinction between real (objective) possibility and actuality, as in the above example of the double slit.
This phenomenon of probability interference shows that we have to face an objective probability reflecting the experimental situation rather than a subjective one reflecting only our ignorance of the
determinate fact. In other words, real possibility and actuality are inseparable from each other in quantum physics, and we must treat the collapse of the wave function as the objective transition
from real possibility to actuality.
The next problem is about the quantum transition itself. If the collapse of the wave function is an objective phenomenon, then is it "an action at a distance", i.e. a non-covariant phenomenon which
happens instantaneously? This problem is crucial to our consideration of the Bell correlation and the theory of relativity. In Section II we confirmed the fact that quantum correlation and the
principle of relativity are compatible, and we need not explain quantum correlation as the unilateral causal effect with superluminal speed. Einstein's theory of relativity was more progressive
than Lorentz's theory of aether in that Einstein introduced into physics a radically new perspective in which space and time are non-separable from each other.
It is regrettable that many discussions by physicists about the collapse of the wave function presuppose only a non-relativistic framework. A "simultaneous" correlation would be meaningless in the
relativistic framework, because such terminology implicitly assumes that there exists only the one time-system of classical physics. Non-relativistic quantum physics does not treat space and time
in their non-separable unity. Time appears only in the form of a parameter and does not take the role of an operator corresponding to an observable quantity, whereas spatial coordinates are
permitted the status of operators which characterize the quantum system. So if we describe the collapse of the wave function in the non-relativistic framework, we must say that it happens
instantaneously, i.e. non-locally with respect to space.
The dubious scenario roughly runs as follows: if the quantum system prepared at the time t[1] is measured at t[2], it changes its state continuously and causally for t[1] < t < t[2] according to
Schroedinger's equation, but at the moment t[2] the discontinuous irreversible event called "the collapse of the wave function" happens and its effect propagates instantaneously with superluminal
speed. The above picture is not relevant to the relativistic concept of space-time, because the very concept of simultaneity and instantaneous transmission does not make sense. The
non-separability of time from space means that non-locality of the collapse should be accepted, not only with respect to space but also with respect to time. The reason why temporal non-locality,
more exactly spatio-temporal non-locality has been ignored may be simply that the collapse of the wave function has been discussed mainly in the non-relativistic framework. Einstein himself seemed to
anticipate the problematic of spatio-temporal non-locality in his criticism of the indeterminacy principle, for he pointed out that "if we accept quantum physics, then it becomes impossible to
restrict the indeterminacy principle to the future ; we must admit the indeterminacy of the past as well."^(24)
This criticism was not so famous as the EPR argument, but it is of decisive importance when we discuss the collapse of the wave function as a non-local phenomenon in space-time.
An example of the indeterminate past was given by Wheeler in his famous discussion of the "delayed choice" experiment.^(25) We may use the same diagram to explain this experiment. In this diagram we
assume that the present choice is made at A[3]. The experimenter at A[3] can choose for one photon either the mode of non-interference or the mode of self-interference even after the photon has
passed through A[1] or A[2]. In this experiment, whether the photon has passed through (A[1] or A[2]) as a particle, or through (A[1] and A[2]) as a wave, depends on the present choice made at A[3].
Before the decision at A[3] the location of the particle was essentially indeterminate. What we can say of past space-time and past events is decided by choices made in the near past and now. Wheeler
discussed the possibility that the phenomena called into being by the present decision can reach backward in time, even to the earliest days of the universe. The above example shows that it makes
sense to state that events occur in the four dimensional framework of space-time. This occurrence itself does not take place in time as the fourth coordinate of space-time.
In Einstein's theory of relativity the concept of events is static in the sense that an event simply is and occupies a determinate location without any regard to other regions of space-time sub
specie aeternitatis. In the quantum indeterminism, on the other hand, the modified concept of events is dynamic in the sense that an event happens in the extensive continuum of space-time. We need
not postulate, as Stapp did, that all events of the whole universe constitute the well-ordered sequence with respect to this kind of happening in space-time, because it would make geneses of events
subordinate to space-time coordination to make becoming of events the fifth coordinate. The delayed choice experiment cannot be explained away by the introduction of anything like absolute time-order
because any theory compatible with relativity must retain the order of causality within a light cone.
We cannot call the delayed-choice "retroactive causality" because it does not make sense to say that we can "change" the past if we mean by the past something determinate ; rather we should say that
the past in the level of quantum description cannot be considered as totally determinate. The following analysis of quantum correlation is similar to that of Whitehead's analysis of "symbolic
reference" though Whitehead seems to use this term to explicate the structure of perception only in the high-grade organisms such as a human being. As the wave function is an essentially non-local
relational entity, the world itself has the structure of symbolism as well as that of causality.
The main difference of the proposed model from the Whiteheadian ontology is that this model does not take a single quantum event as the totally determinate individual. The specificity of any
attribute of the quantum event is, as Shimony clearly showed,^(26) always attained at the price of indefiniteness of other attributes on account of the indeterminacy principle. Every event is
complementarily described as an entity with respect to its actuality, and as a locus with respect to its potentiality. An event is a spatio-temporal entity, and a material body corresponds to the nexus
of events (world-tube) which has various characteristics such as energy, momentum, and other observable physical quantities. It is essential in this organic model that observables are adjectives of
event-nexus with alternative selective patterns of perspectives.
Two "elementary particles of the same kind are not two separate substances, but the same adjective which can have two contexts of actualization in different events. The fundamental relation of events
is called "self-projection". This term i introduced for the purpose of explaining both objective and subjective aspects which necessarily emerge in quantum organism, but it should not be understood
in psychological sense in which the self-projection of an observer has no objective correlate. The self-projection which I mean is a physical relation between events and signifies the organic unity
between the observer and the observed. We cannot observe microscopic events without their self-projections in the macroscopic measuring apparatus. What we observe, however, is not a mere shadow of
the separate self-existing substance, but in one sense a thing itself because every thing can exist only in the complex network of self-projections of events. What we observe depends on our choice of
measuring apparatus which reciprocally projects itself in the microscopic events by influencing the possible pattern of actualized contingency.
First I will sketch the formal structure of self-projection; a, b, c signify events which, as loci, "mirror" the universe according to their own perspectives, and as entities, project themselves into
every locus in the universe. There are two modes of self-projection: causal efficacy and mutual immanence.
a < b : a projects itself into b in the mode of causal efficacy
a ∼ b : a projects itself into b in the mode of mutual immanence
The mode of causal efficacy is cumulative; it is non-reflexive, non-symmetrical, and transitive:
(1) ¬(a < a)
(2) (a < b) → ¬(b < a)
(3) (a < b) ∧ (b < c) → (a < c)
The mode of mutual immanence is reflexive, symmetrical, and transitive:
(4) (a ∼ a); (a ∼ b) → (b ∼ a); (a ∼ b) ∧ (b ∼ c) → (a ∼ c)
Let α, β, γ signify classes of events. We can define the relativistic concepts of the past, the future, and the contemporaries of a given event in terms of self-projection in the mode of causal
efficacy:
Def. P(a) = {x | x < a}. P(a) is called the (causal) past of a.
Def. F(a) = {x | a < x}. F(a) is called the (causal) future of a.
Def. aCb ⟺ ¬(a < b) ∧ ¬(b < a). a is contemporaneous with b.
As the relation of contemporaneity is not always transitive, the existence of the uniquely-defined present of a given event is not guaranteed by the theory of relativity. We can introduce something
like a cosmological "present" in terms of a maximal class of mutually contemporary events instead:
Def. If a class δ of events satisfies the following conditions, it is called a contemporary duration of the universe:
(1) (∀a ∈ δ)(∀b ∈ δ)(aCb)   (2) (∀a)((∀b ∈ δ)(aCb) → a ∈ δ)
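To make the formal definitions concrete, here is a small illustrative model (my own toy example, not the author's): a finite transitive causal order on five events, with the causal past P(a), the causal future F(a), contemporaneity, and the maximum classes of mutually contemporary events computed by brute force.

```python
from itertools import combinations

# Toy causal order "a < b" on events 0..4 (transitive closure already included).
# Events 0 and 1 are causally unrelated contemporaries; both precede 3 and 4;
# event 2 is causally isolated from everything.
events = {0, 1, 2, 3, 4}
before = {(0, 3), (1, 3), (0, 4), (1, 4), (3, 4)}

def past(a):                 # P(a) = {x | x < a}
    return {x for x in events if (x, a) in before}

def future(a):               # F(a) = {x | a < x}
    return {x for x in events if (a, x) in before}

def contemporary(a, b):      # aCb: neither event lies in the other's causal past
    return a != b and (a, b) not in before and (b, a) not in before

def contemporary_durations():
    """Largest classes of pairwise-contemporary events (candidate 'presents')."""
    for size in range(len(events), 0, -1):
        found = [set(c) for c in combinations(sorted(events), size)
                 if all(contemporary(a, b) for a, b in combinations(c, 2))]
        if found:
            return found
    return []

assert past(4) == {0, 1, 3} and future(0) == {3, 4}
assert contemporary_durations() == [{0, 1, 2}]
```

With a richer causal order there can be many maximum classes, mirroring the text's point that relativity allows infinitely many contemporary durations.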
The relativity of simultaneity means that there are an infinite number of possibilities for a contemporary duration of the universe. As the arrow of local causality always passes from the past to the
future in every frame of reference, it cannot explain the quantum correlation of the Bell experiment which holds between two contemporaries. The contemporaneity as defined above is essentially a
negative (derivative) relation and also irrelevant to the explanation of positive correlation in quantum physics. The experimental test of Bell's theorem requires something positive to cover such a
correlation. Self-projection in the mode of mutual immanence is introduced to satisfy this requirement. This mode should be non-causal in the sense that it does not pass immediately from the past to
the future, but signifies a kind of mutual interpenetration among events in terms of which a composite system behaves as if it were one individual. Causal efficacy ranges from causal immanence to
causal influence.
Causal immanence holds between two temporally separated events in the isolated microscopic system with a small number of degrees of freedom, when the causal influences from the outside are
negligible. The relation of causal immanence is the basis of a deterministic description of the microscopic system before its interaction with the measuring apparatus.
The causal efficacy from the macroscopic system with a great number of degrees of freedom is called causal influence. It is practically impossible to give a deterministic description of the system on
the basis of the exact control of causal influences, which only permit statistical treatment of complex thermo-dynamical processes with an increasing entropy. The irreversible process of quantum
measurement, however, cannot be identified with the entropy-increasing process of thermodynamics, as Wigner showed in his argument against Daneri-Loinger-Prosperi's theory of measurement.^(27) When a
and b project themselves into each other in the mode of mutual immanence, they behave as if they were one individual on account of mutual immanence (the non-separability of quantum events). Even when
the two loci of a and b are spatially separated, these two loci as potentialities have an internal relation with each other with regard to certain characteristics (e.g. polarization or spin).
The mutual immanence disappears when the system is causally influenced from the outside system. The collapse of the wave function of a composite system may give a distant simultaneous correlation
when self-projections between contemporary parts of the system pass from the mode of mutual immanence to that of mutual transcendence (the disappearance of the term of phase interference between
them). Every event is organically related with the whole universe by symbolic correlation which integrates the two modes of self-projection. The distant correlation in quantum physics holds between
two contingent events with the same causal past immanent in both. This correlation does not mean the superluminal sending of information in terms of causality, but signifies the relation of mutually
self-projecting events which constitute the organic system. This system integrates two different modes of self-projection in the presented duration defined by the measuring apparatus. The whole
setting of the measuring apparatus determines the kind of simultaneous correlation which holds between contingent patterns of physical values measured in both parts. Each of two events with the same
immanent causal past can be seen as the symbol of the other as if they were two sides of the same coin.
1. Aspect, A., Grangier, P., and Roger, G., "Experimental tests of realistic local theories via Bell's theorem", Physical Review Letters, 47, 460-463 (1981); 49, 1804 (1982); "Experiments on
Einstein-Podolsky-Rosen type correlations with pairs of visible photons", in Quantum Concepts in Space and Time, Oxford Science Publications, 1-15 (1986).
2. d'Espagnat, B., "The quantum theory and reality", Scientific American, 241 (Nov.) 158-181 (1979).
3. Stapp, H.P., "Mind, Matter, and quantum mechanics", Foundations of Physics, 12, No. 4, 363-399 (1982).
4. Einstein, A., Podolsky, B., and Rosen, N., "Can quantum mechanical description of physical reality be considered complete?", Physical Review, 47, 777-780 (1935).
5. Honner, J., The Description of Nature: Niels Bohr and the Philosophy of Quantum Physics, Oxford University Press, 194 (1987).
6. Bohm, D., Quantum Theory, Prentice-Hall, Inc., 611-23 (1951)
7. Bohm, D., op. cit., Footnotes of Chap.
8. von Neumann, J., Mathematische Grundlagen der Quantenmechanik, Springer, Berlin (1932).
9. Bell, J.S., "On the problem of hidden variables in quantum mechanics", Review of Modern Physics, 38, 447-52 (1966).
10. Bell, J.S., "On the Einstein-Podolsky-Rosen Paradox", Physics, 1, 195-200 (1964), reprinted in Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, 14-21 (1987).
11. d'Espagnat, B., op. cit., 136 (1976).
12. Aspect, A., op. cit., 11 (1986).
13. Bohr, N., "Can quantum mechanical description of physical reality be considered complete ?", Physical Review, 48, 696-702 (1935).
14. Bohr, N., "Discussion with Einstein on epistemological problems in atomic physics", in Schilpp, P.A., ed., 200-241 (1949).
15. von Weizsaecker, C.F., "Komplementaritaet und Logik", Naturwissenschaften, 42, 525 (1955).
16. Bell, J.S., op. cit., 68-80 (1987); The Ghost in the Atom, ed. by Davies, P.C.W., and Brown, J.R., Cambridge University Press, 45-57 (1986).
17. Popper, K.R., Quantum Theory and the Schism in Physics, Hutchison, 30 (1982).
18. Lorentz, H.A., The Theory of Electrons, Teubner Leipzig, Addenda 72 (1915).
19. Stapp, H.P., "Bell's Theorem and World Process", Il Nuovo Cimento, vol. 29B, No. 2, 270-276 (1975); "Whiteheadian Approach to Quantum Theory and the Generalized Bell's Theorem", Foundations of Physics, vol. 9, Nos. 1/2, 1-25 (1979); "Quantum Mechanics, Local Causality, and Process Philosophy", Process Studies, vol. 7, No. 3, 173-182 (1977). Cf. Hartshorne, C., "Bell's Theorem and Stapp's Revised View of Space-Time", Process Studies, vol. 7, No. 3, 183-191 (1977); Jones, W.B., "Bell's Theorem, H.P. Stapp, and Process Theism", Process Studies, vol. 7, No. 4, 250-261 (1977).
20. Nobo, J.L., Whitehead's Metaphysics of Extension and Solidarity, State University of New York Press, Albany, 205-248 (1986).
21. Odin, S., Process Metaphysics and Hua-yen Buddhism, State University Press of New York, Introduction (1981).
22. d'Espagnat, B., Conceptual Foundations of Quantum Mechanics, W.A. Benjamin, Inc., Chapter 8 (1976).
23. Finkelstein, D., "Matter, space, and logic" in Boston Studies in the Philosophy of Science, v, 199-215 (1964).
24. Einstein, A., Tolman, R.C., Podolsky, B., "Knowledge of Past and Future in Quantum Mechanics", Physical Review, 37, 780-781 (1931).
25. Wheeler, J.A., "Law without law", in Quantum Theory and Measurement, ed. by Wheeler, J.A., and Zurek, W.H., Princeton University Press, 182-213 (1983).
26. Shimony, A., "Quantum Physics and the Philosophy of Whitehead", in Boston Studies in the Philosophy of Science, vol. 2, 307-330 (1965).
27. Daneri, A., Loinger, A., Prosperi, G.M., "Further remarks on the relations between statistical mechanics and quantum theory of measurement", Nuovo Cimento, 44B, 119-128 (1966); Jauch, J.M., Wigner, E.P., Yanase, M.M., "Some comments concerning measurements in quantum mechanics", Nuovo Cimento, 48B, 144-151 (1967).
Kings Point, NY Algebra Tutor
Find a Kings Point, NY Algebra Tutor
...I took the test July 27th 2013. I have been accepted to medical school and am matriculating in August, though I currently tutor full time. I am excited at the idea of helping other students to
overcome the stressful burden of the MCAT.
24 Subjects: including algebra 1, algebra 2, chemistry, biology
...Obviously, most people will not go on to become rocket scientists, but a solid understanding of fractions, decimals, and percentages is essential to functioning in everyday life. I taught
precalculus as a high school math teacher and am extremely comfortable with all the material. Even though t...
26 Subjects: including algebra 1, algebra 2, calculus, writing
...I think learning C is a good opportunity not only to learn a widely-used programming language, but also to explore the ideas behind programming in general. C specifically provides an excellent
chance to cement good programming practices, like memory management, garbage collection, and making efficient choices. I love programming.
37 Subjects: including algebra 1, algebra 2, chemistry, physics
I'm an RN who is taking some time off before hopefully starting graduate school. Tutoring would be a great way to keep busy without the major stresses I am used to in my field. My career has
taught me to be flexible, to think on my feet, and that there are many ways to accomplish the task at hand.
17 Subjects: including algebra 1, reading, writing, geometry
...My knowledge of government was cultivated during my time spent interning in a Congressional office in Washington, DC. My intended major is political science. I am extremely interested in
psychology, and I took AP Psychology.
43 Subjects: including algebra 1, algebra 2, English, writing
Related Kings Point, NY Tutors
Kings Point, NY Accounting Tutors
Kings Point, NY ACT Tutors
Kings Point, NY Algebra Tutors
Kings Point, NY Algebra 2 Tutors
Kings Point, NY Calculus Tutors
Kings Point, NY Geometry Tutors
Kings Point, NY Math Tutors
Kings Point, NY Prealgebra Tutors
Kings Point, NY Precalculus Tutors
Kings Point, NY SAT Tutors
Kings Point, NY SAT Math Tutors
Kings Point, NY Science Tutors
Kings Point, NY Statistics Tutors
Kings Point, NY Trigonometry Tutors
Nearby Cities With algebra Tutor
Glen Oaks algebra Tutors
Great Nck Plz, NY algebra Tutors
Great Neck algebra Tutors
Great Neck Estates, NY algebra Tutors
Great Neck Plaza, NY algebra Tutors
Kenilworth, NY algebra Tutors
Kensington, NY algebra Tutors
Manhasset algebra Tutors
Plandome, NY algebra Tutors
Port Washington, NY algebra Tutors
Russell Gardens, NY algebra Tutors
Saddle Rock, NY algebra Tutors
Sands Point, NY algebra Tutors
Thomaston, NY algebra Tutors
Whitestone algebra Tutors
Dice Probability
November 27th 2006, 04:04 PM #1
Junior Member
Nov 2006
Dice Probability
Hi. Is there a way to solve the following without actually listing out all the possibilities?
What is the probability of rolling a total sum of 8 using three dice?
Thank you!
Yes, there is a way.
There are 3 dice, and each die contributes a factor of $x+x^{2}+x^{3}+x^{4}+x^{5}+x^{6}$, so form the product $(x+x^{2}+x^{3}+x^{4}+x^{5}+x^{6})^{3}$.
Expand this out and check the coefficient of $x^{8}$.
That is the number of ways to sum to 8.
The coefficient is $21x^{8}$
Therefore, there are 21 ways.
There are $6^{3}=216$ possible rolls with 3 dice.
Probability is $\frac{21}{216}$
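Neither step of the expansion is shown above; here is a quick computational check (illustrative Python, not part of the original thread) of both the brute-force count and the generating-function coefficient:

```python
from itertools import product

# Brute force: count ordered rolls of three dice that sum to 8.
ways = sum(1 for roll in product(range(1, 7), repeat=3) if sum(roll) == 8)

# Generating-function check: coefficient of x^8 in (x + x^2 + ... + x^6)^3,
# computed by convolving the one-die coefficient list with itself three times.
die = [0] + [1] * 6          # coefficients of x^0 .. x^6 for a single die
poly = [1]
for _ in range(3):
    out = [0] * (len(poly) + len(die) - 1)
    for i, a in enumerate(poly):
        for j, b in enumerate(die):
            out[i + j] += a * b
    poly = out

assert ways == poly[8] == 21
probability = ways / 6**3    # 21/216
```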
Wow! Ingenious xD.
Now can u explain why that is? I don't see it.
Thanks again!
A text on advanced counting methods and/or generating functions may prove useful if you are curious of how it's derived.
Thank you. =]
This is a more direct approach for those not familiar with generating functions.
Let Z be the sum of three dice. Let Y be the sum of the first two dice.
$P(Z = 8|Y = k) = 1/6$, for k = 2, 3,...,7, and 0 elsewhere. Also it's easy to show that
$P(Y=k)=\frac{k-1}{6^{2}}$, for k = 2, 3,...,7 (we won't consider higher k's)
Hence we end up with
$P(Z=8)=\sum_{k=2}^{7}P(Z=8|Y=k)P(Y=k)=\frac{1}{6}\sum_{k=2}^{7}\frac{k-1}{6^{2}}=\frac{21}{216}$
Last edited by F.A.P; December 20th 2006 at 02:59 PM.
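The conditioning argument can likewise be verified with exact arithmetic (illustrative Python, not part of the original thread):

```python
from fractions import Fraction

# P(Y = k) for the sum Y of the first two dice; only k = 2..7 can still reach 8.
p_y = {k: Fraction(k - 1, 36) for k in range(2, 8)}
# P(Z = 8 | Y = k) = 1/6 for each such k, so by total probability:
p_z8 = sum(Fraction(1, 6) * p for p in p_y.values())
assert p_z8 == Fraction(21, 216)
```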
Math Forum Discussions
- User Profile for: noaddres_@_owhere.net
User Profile: noaddres_@_owhere.net
UserID: 630020
Name: Clark Smith
Registered: 3/29/10
Total Posts: 8
Show all user messages
Calculating the current in a solenoid
1. The problem statement, all variables and given/known data
A short-circuited solenoid of radius b with n turns rotates at an angular velocity ω about the diameter of one of the turns in a uniform magnetic field B. The axis of rotation is perpendicular to the magnetic field direction. The resistance and the inductance of the solenoid are equal to R and L, respectively. Find the current in the solenoid as a function of time.
2. Relevant equations
Hint eq: Φ = (n̂ · B) S, where n̂ is the unit normal to the loop and S its area.
3. The attempt at a solution
I can't seem to figure out where to start. In order to find the current in the solenoid, do I have to first start with finding the magnetic field (or a specific value of B) and then plug that into the above equation?
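A sketch of the standard route to the answer (my own outline, not from the thread; it assumes the coil's normal is parallel to B at t = 0, writes ω for the angular velocity, and keeps only the steady state after the transient proportional to e^{-Rt/L} has died out):

```latex
% Flux linkage of the n-turn coil of cross-section \pi b^2:
\Phi(t) = n B \pi b^{2} \cos\omega t
% Faraday's law gives the EMF around the short-circuited coil:
\mathcal{E}(t) = -\frac{d\Phi}{dt} = n B \pi b^{2}\,\omega \sin\omega t
% Kirchhoff's voltage law for the R-L loop:
L\,\frac{dI}{dt} + R\,I = n B \pi b^{2}\,\omega \sin\omega t
% Steady-state (particular) solution:
I(t) = \frac{n B \pi b^{2}\,\omega}{\sqrt{R^{2} + \omega^{2} L^{2}}}\,
       \sin(\omega t - \varphi),
\qquad \tan\varphi = \frac{\omega L}{R}
```

Substituting the trial solution back into the circuit equation confirms the amplitude and phase lag.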
FYI Events
When: Wednesday, December 05, 2012 3:15 PM - 4:15 PM
Where: Math Building : http://en.wikipedia.org/wiki/Hillel_Furstenberg : Colloquium Room 3206
Event Type(s): Colloquium
Speaker: Hillel Furstenberg (Hebrew University)
Title: Multiple Recurrence Phenomena for Non-amenable Groups and a Szemeredi-like Theorem for the Free Group
Szemeredi's theorem in combinatorial number theory asserts that
any subset of the integers having positive density contains arithmetic
progressions of any length. It turns out that this is equivalent to a
"multiple" recurrence statement for measure preserving transformations.
Together with Eli Glasner we show that this has an analogue for group
actions that are only measure preserving "on the average". By analogy with
the case of the integers, this multiple recurrence result leads to
a theorem guaranteeing existence of geometric progressions in non-
amenable groups. The result for a finitely generated free group can
be made quite explicit.
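As a purely illustrative aside (my own toy code, not part of the talk or abstract): for finite sets one can simply brute-force search for arithmetic progressions, and dense sets turn them up immediately while sparse sets like the powers of 2 need not contain even a 3-term one.

```python
def find_ap(s, k):
    """Return the first k-term arithmetic progression inside the finite set s (k >= 2), else None."""
    s = set(s)
    if not s:
        return None
    hi = max(s)
    for a in sorted(s):
        for d in range(1, (hi - a) // (k - 1) + 1):
            if all(a + i * d in s for i in range(k)):
                return [a + i * d for i in range(k)]
    return None

# A set of density 1/2 contains long progressions...
assert find_ap(range(0, 100, 2), 5) == [0, 2, 4, 6, 8]
# ...while the (zero-density) powers of 2 contain no 3-term progression.
assert find_ap({1, 2, 4, 8, 16, 32}, 3) is None
```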
Website: http://www-math.umd.edu/research/colloquium.html
For more information, contact:
Linette D Berry
Department of Mathematics
+1 301 405 5058
CS 188: Artificial Intelligence
CS 188: Artificial Intelligence, Spring 2008
• [5/28/08] You may pick up a copy of your final from Angie in 784 Soda. Any other unreturned homework may be picked up from Michael in 545 Soda.
• [5/22/08] All grades, including project 8 and the final exam, are now available on glookup.
• [5/20/08] Final Exam Solutions have been posted.
• [5/14/08] Review Sessions: Friday 6-9pm 310 Soda (Michael), Saturday 3-6pm 310 Soda (Ryan)
Some previous exams are linked below to help you prepare for the final. The final is comprehensive and will cover all material discussed this semester.
• [5/9/08] Final Exam "Cheat" Sheet Policy: You may bring a two-sided 8.5x11 sheet of paper with notes to the final (or two one-sided sheets). Bring a calculator to the final.
• [5/8/08] Homework 7 grades are now available on glookup. You can pick up your graded homework in section tomorrow or during office hours or review sessions next week.
• [5/5/08] Project 8 extended to be due Thursday 5/8 at 11:59pm.
• [4/30/08] Homework 7 solutions posted.
• [4/28/08] Project 8 extended to be due Tuesday 5/6 at 11:59pm.
• [4/24/08] Project 8: Face Detection and Digit Classification has been posted due Friday 5/2 at 11:59pm.
• [4/23/08] Grades for project 5 are in glookup. Please see newsgroup message for details on grading.
• [4/22/08] Solutions for projects 5 and 6 (Reinforcement Learning and Neural Networks) been posted on the projects page. You may want to take a look at the solution code for the Monte Carlo ES and
Q-learning algorithms if you had difficulty with the project. Grades for project 5 will be posted on glookup tomorrow (Wednesday).
• [4/20/08] Ryan's office hours have been cancelled for Monday. If you would like to schedule an appointment later this week, please send him an email at rwaliany@gmail.com
• [4/17/08] Homework 7: Search and Classification has been posted due 4/24 in class.
• [4/14/08] Homework 6 EXTENSION: Neural Networks has been extended to be due 4/17 at 11:59pm
• [4/9/08] Homework 6: Neural Networks has been posted due 4/15 at 11:59pm
• [4/3/08] Midterm solutions posted. See the project page.
• [3/23/08] We will accept late submissions for Project 5 up until Tuesday 4/1 11:59pm. However, there will be a 20% late penalty. Using two of your extension days makes today (3/23) at 11:59pm the
last chance to submit without incurring the late penalty
• [3/22/08] Please be sure to submit your code for Project 5, even if its not working. We will read it to give partial credit. Also note you're welcome to apply up to two of your late days to
Project 5.
• [3/21/08] Please see the newsgroup for clarifications on Monte Carlo ES, project 5 grading.
• [3/20/08] There will be no section this week (3/20 & 3/21).
• [3/13/08] Homework 5 due date extended to 3/21 at 11:59pm
• [3/13/08] The midterm will cover lecture material through 3/13
• [3/13/08] There will be a second review session led by Michael next week on Wednesday 3/19 from 6-9PM at (310 Soda).
• [3/10/08] There will be a review session led by Ryan this week on Wednesday from 6-9PM at (310 Soda).
• [3/10/08] Homework 5: Reinforcement Learning has been posted due 3/18 at 11:59pm
• [3/10/08] Homework 4 solutions posted.
• [3/05/08] The midterm will be Thursday March 20.
• [3/04/08] The reading for this week is chapters 3 and 4 of Reinforcement Learning by Sutton and Barto. The book is available online.
• [3/03/08] Homework 4 due date extended to Thursday 3/6 (due in class).
• [2/29/08] No submissions will be accepted after Monday (3/3/08) at 12:00am; the autograder will no longer respond to emails regarding p3.
• [2/29/08] Since many people had trouble with the programming assignment (p3), you may use an extra late day on it if you'd like to make corrections and resubmit. In addition, we will be lenient
in deducting points (approx 10% off) for late work provided it is submitted this weekend.
• [2/26/08] Homework 4: Utility and Decisions has been posted due 3/4 in class
• [2/25/08] Updated office hours this week: Monday 4-6pm in 246 Cory and Tuesday 5-7pm in 246 Cory. We've moved office hours since the programming assignment is due Tuesday 11:59pm. There will not
be Wednesday office hours this week.
• [2/23/08] Please DOWNLOAD the UPDATED version of hmm.py, your code will not pass the sanity checker without this.
• [2/21/08] Sample output for the test cases provided with homework 3 has been added to the homework 3 webpage.
• [2/19/08] Homework 3: HMM Programming Project has been posted due 2/26 11:59pm
• [2/19/08] The Disabled Students' Program (DSP) is looking for a note-taker for this class. This is a good opportunity to assist a fellow student and receive pay. You can fill out an application
at dsp.berkeley.edu or at the DSP Office, 260 Cesar Chavez right beside the Golden Bear Cafe. Contact dspnotes at berkeley dot edu with any questions.
• [2/17/08] Ryan has moved his office hours for this week to Monday at 4pm-6pm in the 5th floor lounge in Cory. Starting next week, his office hours will be on Monday from 4-6pm in 246 Cory to
accommodate homework questions before the due date.
• [2/15/08] Please see the newsgroup post about errors in the solutions included with the HMM Tutorial.
• [2/14/08] Correction made to homework 2: P(H=v|C=~c) should be 0.50 (not 0.40). The online version has been updated to reflect this change.
• [2/12/08] The Battleship Project Part 2 has been posted due 2/19 at the start of class. Please submit a hard copy in class (no electronic submissions).
• [2/5/08] Office hours for Michael updated (Wednesday 2-4pm, 551 Soda).
• [1/31/08] The Battleship Project Part 1 has been posted due 2/7. [Update: You may also hand in the homework in class on Thursday. If submitting electronically, use: submit p1]
• [1/31/08] Starting next week (Feb 5) class will be held in 306 Soda instead of 145 Dwinelle.
• [1/29/08] For some extra practice in understanding the applicability of the lecture material to future topics, you may look at this handout here, containing a probability reference sheet and some
derivations of the probability distribution for the Naive Bayes classifier and Hidden Markov Models.
• [1/29/08] The section times on the course info page have been corrected (Ryan's sections are on Thursday, not Tuesday).
• [1/29/08] The deadline for the Python tutorial has been extended to 1/31. You should receive a confirmation e-mail.
• [1/22/08] The python tutorial has been posted. Before doing the tutorial, have a look at the instructions for submitting assignments.
• [1/22/08] Waitlist: as of now, everyone in the waitlist should be admitted.
• [1/22/08] We are trying to reserve a lab on Friday for the Python tutorial, January 25th from 10am-3pm, in Soda 273. You can arrive and leave as you like; the tutorial should only take an hour or
two. You must complete the tutorial, but you may complete it at home if you choose.
• [1/22/08] Welcome to CS188! Check back for updates in the next few weeks.
Posts by
Posts by li
Total # Posts: 42
Early childhood education
An important aspect of socialization is helping children recognize, accept, sort out, identify, label, integrate, express, and cope with their a. satisfaction b. behaviors c. expression d. emotions.
I say it's d. emotions.
Early childhood education
What is one possible factor that may make a child vulnerable to stigma? a.being the same religion as the majority of the class. b. having brothers and sisters who were stigmatized c. getting a B on a
spelling test d. having a health condition
algebra 1 help
how do you factor it?
algebra 1 help
if the sides of the square are represented by a and b, what is the area of the remaining glass when the smaller square is cut from the larger square? write answer in factored form. I don't understand
because of the variables.
reading grammer and stuff
thank you
reading grammer and stuff
differece between subject pronoun, object pronoun, reflexive pronoun intensive pronoun. how are they used and examples. we've been learning about this but i'm still confused
algebra 1
what I don't understand is a problem that is similar to this but has ab: if the sides of the square are represented by a and b, what is the area of the remaining glass when the smaller square is cut
from the larger square? write answer in factored form
algebra 1
oh i meant to put 95 but i was in my own world :/
algebra 1
a diagram below shows an artistic design. if the length of the outer square is 12" and the length of the inner square is 7", what is the area of the remaining glass when the smaller square is cut
from the larger square? well i multiplied 12*12 and 7*7 that gives me 1...
algebra 1
x^2-4x-21=0 how do you solve this
what is inhumanity to man and give ex.
a. they are equal
d. chi square
I think it's median because by far the most commonly used measures of dispersion in the social sciences are variance and standard deviation. The range is the simplest measure of dispersion.
What are some adverbs that show condition? All I can find are adverb clauses that show condition.
Are there any errors in this sentence. I say it is correct. Consumers, who choose not to recycle, dump their waste in landfills.
I am confused because I think on the word contribute, tribute would be the root. I'm just confused about why this would change on contributions.
Would log be the root in ecologists? contributions con tribu (t)(e) ion s
In the word ecologists, is eco the root morpheme or ology? Are the morphemes eco, log, y, ist, s or ec, ology, ist, s ? This is very confusing for me. Any explanation on how to find the root would be
appreciated. For example, on contributions are the morphemes con, trib, ute, ...
Algebra I
How do you put into scientific notation -6 x 10 to the power of -5?
How do you put into scientific notation -6 x 10 to the power of -5?
f'(x) = ln(25x)/x between x = 1 and x = 12. f(x) = ??
Determine the area of the region bounded by f(x)=7*e^(2x) and the -axis, between x=0 and x=2 . what is The area of the region ??
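A quick numerical check of this area (my own sketch, not from the post: the antiderivative of 7e^{2x} is (7/2)e^{2x}, so the area is (7/2)(e^4 - 1)):

```python
import math

# Fundamental theorem of calculus on [0, 2] with F(x) = (7/2) e^(2x):
area = 7 / 2 * (math.exp(2 * 2) - math.exp(0))
assert 187.5 < area < 187.7
```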
Determine the equation of the tangent line at the indicated -coordinate f(x)=e^(-2x)*in(8x) for x=2 The equation of the tangent line in slope-intercept form is y= ?
$ 2631 is deposited into an account for 15 years. Determine the accumulation if interest is 8.01 % compounded (a) monthly, (b) daily, (c) continuously. (Round-off your answers to the nearest cent.)
The accumulation based on (a) monthly compounding is $ ; (b) daily compounding ...
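For reference, the standard compound-interest formulas can be evaluated directly (illustrative Python; only the figures from the post are used):

```python
import math

P, r, t = 2631.00, 0.0801, 15   # principal, annual rate, years

def compound(P, r, t, n):
    """Accumulation with interest compounded n times per year."""
    return P * (1 + r / n) ** (n * t)

monthly = compound(P, r, t, 12)
daily = compound(P, r, t, 365)
continuous = P * math.exp(r * t)

# More frequent compounding gives a (slightly) larger accumulation.
assert P < monthly < daily < continuous
```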
Find the average value of the function f(x)=8x^2-5x+6 , on the interval [3,5]. Find the value of x-coordinate at which the function assumes it's average value. what is the average value = to ? what
is the x coordinate = to ? Thanks
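A worked check of this problem (my own sketch: exact fractions for the integral, then the quadratic formula for the x-coordinate at which f equals its average value):

```python
import math
from fractions import Fraction

a, b = 3, 5

def F(x):
    """Antiderivative of f(x) = 8x^2 - 5x + 6, i.e. F(x) = 8x^3/3 - 5x^2/2 + 6x."""
    x = Fraction(x)
    return Fraction(8, 3) * x**3 - Fraction(5, 2) * x**2 + 6 * x

avg = (F(b) - F(a)) / (b - a)          # exact average value: 350/3
assert avg == Fraction(350, 3)

# Solve 8x^2 - 5x + 6 = 350/3, i.e. 24x^2 - 15x - 332 = 0; take the root in [3, 5].
x_star = (15 + math.sqrt(15**2 + 4 * 24 * 332)) / (2 * 24)
assert a <= x_star <= b
assert abs(8 * x_star**2 - 5 * x_star + 6 - float(avg)) < 1e-6
```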
A 42.3-L volume of methane gas is heated from 25°C to 76°C at constant pressure. What is the final volume of the gas?
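This is a Charles's-law calculation; a minimal check (temperatures must first be converted to kelvin):

```python
# Charles's law at constant pressure: V1 / T1 = V2 / T2.
V1 = 42.3             # L
T1 = 25 + 273.15      # K
T2 = 76 + 273.15      # K
V2 = V1 * T2 / T1     # roughly 49.5 L
assert abs(V2 - 49.54) < 0.05
```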
Answer the following questions for the function f(x)=sin^2(x/5) defined on the interval (-15.507923, 3.82699075). Remember that you can enter "pi" as part of your answer. a. What is f(x) concave
down on the region B. A global minimum for this function occurs at C. A...
Excel VBA
it looks like we share the same professor hahahah add me on zamanei1974 messenger, maybe we could solve it
What mass in grams of an aqueous hydrochloric acid solution that is 37.0% by mass hydrogen chloride is needed to supply 2.000 moles of hydrogen chloride?
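A sketch of the stoichiometry for the HCl question (plain Python; the molar mass of HCl, 36.46 g/mol, is an assumed rounded value):

```python
moles_needed = 2.000
molar_mass_hcl = 36.46      # g/mol (1.008 + 35.45), assumed rounding
mass_fraction = 0.370       # 37.0% HCl by mass

mass_hcl = moles_needed * molar_mass_hcl   # grams of pure HCl
mass_solution = mass_hcl / mass_fraction   # grams of solution needed
print(f"{mass_solution:.1f} g")
```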
If they maintain their speed, how far from each other will they be ... north at 2.5 mph and Josh walking east at 3 mph, how long until they meet?
which of the following is a suitable toy for a toddler
10th grade
A child throws a snowball with a horizontal velocity of 18m/s directly toward a tree, from a distance of 9m and a height above the ground of 1.5 m. After what interval does the ball hit the tree? At
what height above the ground will the snowball hit the tree? Determine the sno...
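For the snowball question, horizontal and vertical motion separate cleanly; a quick sketch (assuming g = 9.8 m/s² and a purely horizontal throw):

```python
g = 9.8            # m/s^2, assumed
vx = 18            # m/s, horizontal speed
d = 9              # m to the tree
h0 = 1.5           # m, launch height

t = d / vx                  # time to reach the tree
h = h0 - 0.5 * g * t**2     # height at impact (no initial vertical speed)
print(t, h)
```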
Physics - urgent, please help
In its final trip upstream to its spawning territory, a salmon jumps to the top of a waterfall 1.9m high. What is the minimum vertical velocity needed by the salmon at the end of this motion?
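For the salmon question, kinematics gives v = sqrt(2gh); a quick check (g = 9.8 m/s² assumed):

```python
g = 9.8                      # m/s^2, assumed
h = 1.9                      # m, waterfall height
v = (2 * g * h) ** 0.5       # from v^2 = 2gh at the top of the jump
print(f"{v:.1f} m/s")
```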
Physics - urgent ; test Monday
oh, ok, I see; I was making this waaay too complicated... Thank you! :)
Physics - urgent ; test Monday
Divers entertain tourists in Punta Cana by diving from a cliff 36 meters above water. Determine the landing speed; ignore air resistance and assume the object starts from rest. I'm supposed to be
using kinematics formulas. I tried using d=V(i)t+1/2at^2 and then plussing the...
S, Se Sb, Pb, Cs | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=li","timestamp":"2014-04-16T18:10:55Z","content_type":null,"content_length":"13862","record_id":"<urn:uuid:56e94832-d980-4eb4-ae30-1cdb8d0db3ca>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00194-ip-10-147-4-33.ec2.internal.warc.gz"} |
Palmer Township, PA Science Tutor
Find a Palmer Township, PA Science Tutor
...I took two semesters of an upper-level Probability and Statistics course at Kutztown University, receiving an A in both. I took a one-semester course in Differential Equations at Kutztown University in which I received an A.
16 Subjects: including physics, physical science, chemistry, calculus
...College chemistry instructor and tutor with excellent math skills to assist students in algebra 1 and 2, plus geometry and trigonometry. Current experience in tutoring GED math preparation.
36 Subjects: including biochemistry, pharmacology, Praxis, algebra 1
...During that time, the student successfully completed the course work with an A-. I worked as a 1:1 with the Bucks County IU #22 in a middle school classroom that was Autistic Support. I also
worked for 6 summers with the IU in their ESY program, in an Autistic support classroom. I hold an MS in plant physiology, and have taken genetics at the graduate level.
10 Subjects: including biology, physiology, ecology, special needs
I graduated from Penn State University with a degree in elementary education. I went into education because school wasn't always easy for me. I had a hard time keeping up with the rest of my
peers and a lot of the times the way the teachers taught the class material didn't make sense to me.
14 Subjects: including anatomy, public speaking, elementary (k-6th), study skills
...I graduated in May of 2013 from Ramapo College of New Jersey. I am certified to teach elementary school and am a test away from being certified to teach middle school math. I currently work in
a school as an instructional aide in Kindergarten and 2nd grade.
31 Subjects: including biology, nutrition, grammar, geometry
Related Palmer Township, PA Tutors
Palmer Township, PA Accounting Tutors
Palmer Township, PA ACT Tutors
Palmer Township, PA Algebra Tutors
Palmer Township, PA Algebra 2 Tutors
Palmer Township, PA Calculus Tutors
Palmer Township, PA Geometry Tutors
Palmer Township, PA Math Tutors
Palmer Township, PA Prealgebra Tutors
Palmer Township, PA Precalculus Tutors
Palmer Township, PA SAT Tutors
Palmer Township, PA SAT Math Tutors
Palmer Township, PA Science Tutors
Palmer Township, PA Statistics Tutors
Palmer Township, PA Trigonometry Tutors
Nearby Cities With Science Tutor
Alpha, NJ Science Tutors
Bethlehem, PA Science Tutors
Catasauqua Science Tutors
Easton, PA Science Tutors
Forks Township, PA Science Tutors
Freemansburg, PA Science Tutors
Glendon, PA Science Tutors
Harmony Township, NJ Science Tutors
Nazareth, PA Science Tutors
New Hanover Twp, PA Science Tutors
Phillipsburg, NJ Science Tutors
Riegelsville Science Tutors
Stockertown Science Tutors
Tatamy Science Tutors
West Easton, PA Science Tutors | {"url":"http://www.purplemath.com/Palmer_Township_PA_Science_tutors.php","timestamp":"2014-04-19T05:24:50Z","content_type":null,"content_length":"24531","record_id":"<urn:uuid:1121f297-bbce-4f14-ad10-04c57a2b4c52>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coordinate Geometry Ex.1
1. The points (1,3) and (5,1) are the opposite vertices of a rectangle. The other two vertices are on the line y=2x+c. Find c and the remaining vertices.
2. The consecutive sides of a parallelogram are 4x+5y=0 and 7x+2y=0. If the equation of one diagonal be 11x+7y=9, find the equation of the other diagonal.
3. One side of a rectangle lies along the line 4x+7y+5=0. Two of its vertices are (-3,1) and (1,1). Find the equations of the other three sides.
4. Find the equations of straight lines passing through (-2, -7) and having an intercept of length 3 between the straight lines 4x+3y=12 and 4x+3y=3.
5. Find the equation of the circle which passes through the point (2,0) and whose centre is the limit of the point of intersection of the lines 3x+5y=1 and (2+c)x+5c²y=1 as c tends to 1.
6. Two vertices of a triangle are (6,4) and (2,6). If the centroid of the triangle is (4,6), find the coordinates of the third vertex.
7. Find the circumcentre of the triangle whose vertices are (1,1), (2, -1) and (3,2).
8. Find the value of k such that the three points (k,2k), (2k,3k) and (3,1) are collinear.
9. Find the area of the quadrilateral whose vertices are (-1,-5), (2,-3), (1,2), and (-2,4).
10. Find the equations of the circles passing through (-4,3) and touching the lines x+y=2 and x-y=2.
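Problem 9 above can be checked with the shoelace formula; a minimal sketch (vertices taken in the order given):

```python
def shoelace_area(pts):
    """Polygon area from vertices listed in order (shoelace formula)."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

quad = [(-1, -5), (2, -3), (1, 2), (-2, 4)]
print(shoelace_area(quad))  # 21.0
```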
Character is who you are when no one is looking. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=6093","timestamp":"2014-04-20T23:31:09Z","content_type":null,"content_length":"9466","record_id":"<urn:uuid:e252f3f7-235e-40d9-9ce2-8dfc0e7f2780>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
eq lines and surd
December 10th 2007, 10:05 AM #1
Dec 2007
eq lines and surd
I have a picture of a line AB whose equation is y=4x-5 and which passes through the point B(2,3). The line BC is perpendicular to AB and cuts the x axis at C. I need to find the equation of line BC, which I have done, but I also need to find the x coordinate of C, and I'm not sure how to do it? thnx
Also, how could I express $\frac{\sqrt{3}}{6-3\sqrt{3}}$ in the form $p+q\sqrt{3}$?
I've now, with the help of you great people, been able to complete my practise paper, and I've practised and found some new tips lol thnx
We need more information. Where does BC cross AB? There are any number of lines perpendicular to AB.
In LaTeX code, [ math ] 6 - 3 \sqrt{3} [ /math ] (without typing the spaces between the [ ]).
$6 - 3\sqrt{3} = (6) + (-3)\sqrt{3}$
Is this what you are looking for?
I have a picture of a line AB whose equation is y=4x-5 and which passes through the point B(2,3). The line BC is perpendicular to AB and cuts the x axis at C. I need to find the equation of line BC, which I have done, but I also need to find the x coordinate of C, and I'm not sure how to do it? thnx
Also, how could I express $\frac{\sqrt{3}}{6-3\sqrt{3}}$ in the form $p+q\sqrt{3}$?
I've now, with the help of you great people, been able to complete my practise paper, and I've practised and found some new tips lol thnx
$BC: f(x) = - \frac{x}{4} + D$
Somewhere BC intercepts AB
So the equations of the two lines become equal at some point.
$- \frac{x}{4} + D = 4x - 5$
Simplify that and you'll find that the D-value in $f(x)$ is given by the following function:
$D = \frac{17}{4} x - 5$
Where the $x$ is the $x$-axis value where the two lines cross. From there you'll find the D-value of f(x).
When you have that, set $f(x) = 0$
Then solve for $x$ and you'll find the C-value that you're looking for.
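For what it's worth, here is a quick numerical sketch of the thread's two questions (plain Python). The surd part assumes the intended expression is $\sqrt{3}/(6-3\sqrt{3})$; that reading of the original post is a guess:

```python
# Line AB: y = 4x - 5, with B = (2, 3) on it.
m_ab = 4
m_bc = -1 / m_ab            # perpendicular slope = -1/4
# BC through B(2, 3): y - 3 = m_bc*(x - 2), i.e. y = -x/4 + 7/2
intercept = 3 - m_bc * 2    # 7/2
c_x = -intercept / m_bc     # where y = 0: the x-coordinate of C
print(c_x)  # 14.0

# Surd: sqrt(3) / (6 - 3*sqrt(3)); rationalizing with the conjugate
# (6 + 3*sqrt(3)) gives (6*sqrt(3) + 9)/9 = 1 + (2/3)*sqrt(3).
s3 = 3 ** 0.5
value = s3 / (6 - 3 * s3)
p, q = 1, 2 / 3
print(abs(value - (p + q * s3)) < 1e-12)  # True
```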
December 10th 2007, 10:12 AM #2
December 10th 2007, 10:21 AM #3
December 10th 2007, 10:27 AM #4
December 10th 2007, 10:30 AM #5 | {"url":"http://mathhelpforum.com/pre-calculus/24602-eq-lines-surd.html","timestamp":"2014-04-17T02:44:52Z","content_type":null,"content_length":"48957","record_id":"<urn:uuid:4313bf32-3537-42d6-806e-8879e9a1f36e>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Merrionette Park, IL Algebra 2 Tutor
Find a Merrionette Park, IL Algebra 2 Tutor
...My grades in my first two semesters of the IU theory sequence were both A+; I earned A's in subsequent honors theory courses. I'd be happy to tutor through the first year of undergraduate music
theory. I'm willing to tutor groups.
13 Subjects: including algebra 2, calculus, geometry, statistics
...The college counseling process has many important steps, all of which are crucial to student success. The first step in this process, of course, is working with the student to identify their
interests and goals. Once these interests and goals are identified, families can methodically sort the college brochures, based on whether they are highly competitive, competitive, or "safety" schools.
My work experience includes 22 years as an Information Technology professional, performing as a developer, analyst, team and project leader, and manager. This experience led to an opportunity as
Technology Coordinator at Hales Franciscan High School. I transitioned to teaching in 2003, obtained my...
22 Subjects: including algebra 2, geometry, algebra 1, GED
...Mathematics is my passion and the fundamentals are my specialty, from pre-algebra thru algebra and analytic geometry. Many award winning students in the field of Math. I stress the importance
knowing and speaking the language of mathematics.
8 Subjects: including algebra 2, geometry, algebra 1, SAT math
...Although I am heavily knowledgeable in the natural sciences, I am better at teaching algebra and writing. I have instructed many of my peers in college, nursing school, and high school on their
writing and they received A's on their essays. I began tutoring Algebra as a student in high school and continued to help my peers with Algebra and Calculus in College.
4 Subjects: including algebra 2, writing, algebra 1, prealgebra | {"url":"http://www.purplemath.com/Merrionette_Park_IL_Algebra_2_tutors.php","timestamp":"2014-04-18T18:33:27Z","content_type":null,"content_length":"24491","record_id":"<urn:uuid:62f786e4-8fbe-467e-aa21-59377700dcd2>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mutually Permutable Products of Finite Groups
ISRN Algebra
Volume 2011 (2011), Article ID 867082, 4 pages
Research Article
Mutually Permutable Products of Finite Groups
Department of Mathematics, Faculty of Science 14466, King Abdulaziz University, Jeddah 21424, Saudi Arabia
Received 19 June 2011; Accepted 10 July 2011
Academic Editors: M.Β Asaad, G.Β Buskes, G.Β Mason, and L.Β Vinet
Copyright © 2011 Rola A. Hijazi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Let $G$ be a finite group and let $G_1$, $G_2$ be two subgroups of $G$. We say that $G_1$ and $G_2$ are mutually permutable if $G_1$ is permutable with every subgroup of $G_2$ and $G_2$ is permutable with every subgroup of $G_1$. We prove that if $G = G_1G_2G_3$ is the product of three supersolvable subgroups $G_1$, $G_2$, and $G_3$, where $G_i$ and $G_j$ are mutually permutable for all $i$ and $j$ with $i \neq j$, and the Sylow subgroups of $G$ are abelian, then $G$ is supersolvable. As a corollary of this result, we also prove that if $G$ possesses three supersolvable subgroups whose indices are pairwise relatively prime, and $G_i$ and $G_j$ are mutually permutable for all $i$ and $j$ with $i \neq j$, then $G$ is supersolvable.
1. Introduction
Throughout this paper, $G$ will denote a finite group. We write $\pi(G)$ for the set of prime divisors of the order of $G$ and $|\pi(G)|$ for their number. In [1], Doerk determined the structure of minimal non-supersolvable groups (non-supersolvable groups all of whose proper subgroups are supersolvable). He proved that if $G$ is a minimal non-supersolvable group, then $G$ is solvable and $|\pi(G)| \leq 3$. Therefore, if $G$ is a minimal non-supersolvable group with $|\pi(G)| = 3$, then $G$ possesses three supersolvable subgroups $G_1$, $G_2$, $G_3$ such that $G = G_1G_2G_3$. In [2], Kegel proved that if $G = AB = AC = BC$, where $A$ and $B$ are nilpotent subgroups of $G$, and $C$ is a supersolvable subgroup of $G$, then $G$ is supersolvable. In [3], Asaad and Shaalan proved the following result: assume that $A$ and $B$ are supersolvable subgroups of $G$, $G'$ is nilpotent and $G = AB$. Assume further that $A$ is permutable with every subgroup of $B$ and $B$ is permutable with every subgroup of $A$ ($A$ and $B$ are mutually permutable). Then $G$ is supersolvable. Further, they proved the following result: Assume that $A$ and $B$ are supersolvable subgroups of $G$ with $G = AB$, and that every subgroup of $A$ is permutable with every subgroup of $B$ ($A$ and $B$ are totally permutable). Then, $G$ is supersolvable.
In this paper, we are interested in the following question.
Assume that $G = G_1G_2G_3$, where $G_1$, $G_2$, and $G_3$ are supersolvable subgroups and $G_i$ and $G_j$ are mutually permutable for all $i$ and $j$ with $i \neq j$. Is $G$ supersolvable? The answer is negative as the following example shows.
Example 1 (see [8, pages 8-9]). Let , where . The maps , , and , are automorphisms of and generate a subgroup of order 8 ( is isomorphic with a quaternion group). Take . Then , , and are normal
supersolvable subgroups of and , but is not supersolvable.
We prove the following result.
Theorem 1.1. If $G = G_1G_2G_3$ is the product of three supersolvable subgroups $G_1$, $G_2$, and $G_3$ such that $G_i$ and $G_j$ are mutually permutable for all $i$ and $j$ with $i \neq j$, and the Sylow subgroups of $G$ are abelian, then $G$ is supersolvable.
As a corollary of Theorem 1.1, we have the following.
Corollary 1.2. If $G$ possesses three supersolvable subgroups $G_i$ ($i = 1, 2, 3$) whose indices in $G$ are pairwise relatively prime, and $G_i$ is permutable with every subgroup of $G_j$ for all $i$ and $j$ with $i \neq j$, then $G$ is supersolvable.
We list here some basic results which are needed in this paper.
Lemma 2.1 (see [2]). Let the group $G = G_1G_2G_3$ be the product of three subgroups $G_1$, $G_2$, and $G_3$. If $G_1$, $G_2$, and $G_3$ have normal Sylow $p$-subgroups for a certain prime $p$, then $G$ also has a normal Sylow $p$-subgroup.
Lemma 2.2 (see [4]). Let $G = AB$ be a group such that $A$ and $B$ are mutually permutable. (a) If $A \cap B = 1$, then $A$ and $B$ are totally permutable. (b) $A \cap B$ is a quasinormal subgroup of $A$ and of $B$.
Lemma 2.3 (see [3]). Let $G = AB$ be a group such that $A$ and $B$ are totally permutable subgroups. If $A$ and $B$ are supersolvable subgroups of $G$, then $G$ is supersolvable.
Lemma 2.4 (see [5, page 213, Theorem 7.1.2]). If $H$ is a quasinormal subgroup of $G$, then $H$ is subnormal in $G$.
Lemma 2.5 (see [5, page 239, Theorem 7.7.1]). Let $G = AB$ be a group such that $A$ and $B$ are subgroups of $G$. If $H$ is a subnormal subgroup of $A$ and of $B$, then $H$ is subnormal in $G$.
Lemma 2.6 (see [6]). Let $G$ be a group and let $H$ be a quasinormal subgroup of $G$. Then, $H/H_G$ is nilpotent, where $H_G$ is the largest normal subgroup of $G$ contained in $H$.
Lemma 2.7 (see [7, page 29, Theorem 8.8(a)]). If $H$ is a subnormal subgroup of $G$ and $H$ is nilpotent, then $H \leq F(G)$, the Fitting subgroup of $G$.
Lemma 2.8 (see [2]). Let the group $G = G_1G_2G_3$ be the product of three subgroups $G_1$, $G_2$, and $G_3$. If $G_1$, $G_2$, and $G_3$ are nilpotent subgroups of $G$, then $G$ is nilpotent.
Lemma 2.9 (see [8, page 196, Theorem 5.1(15)]). If $G$ is a supersolvable group, then $G/O_{p'p}(G)$ is abelian of exponent dividing $p-1$ for all primes $p$.
Lemma 2.10 (see [8, page 6, Theorem 1.9]). Let $P$ be a normal Sylow $p$-subgroup of $G$. If $G/P$ is abelian of exponent dividing $p-1$, then $G$ is supersolvable.
Lemma 2.11 (see [8, page 5, Theorem1.6]). The commutator subgroup of a supersolvable group is nilpotent.
3. Proofs
Proof of Theorem 1.1. Assume that the result is not true, and let $G$ be a counterexample of minimal order. Since each $G_i$ is supersolvable, it follows that each $G_i$ has a normal Sylow $p$-subgroup, where $p$ is the largest prime dividing $|G|$. Then, by Lemma 2.1, $G$ has a normal Sylow $p$-subgroup, say $P$. Certainly, every proper quotient group of $G$ satisfies the hypothesis of the theorem. So every proper quotient group of $G$ is supersolvable by the minimal choice of $G$. But the class of all supersolvable groups is a saturated formation, so , , and . We argue that . If not, . Then, is a totally permutable product of and by Lemma 2.2(a). Then, by Lemma 2.3, is supersolvable, a contradiction.
Thus, . Analogously, and . Since is a mutually permutable product of and , it follows that is a quasinormal subgroup of and of by Lemma 2.2(b). Then, by Lemma 2.4, is a subnormal subgroup of and of .
Hence is a subnormal subgroup of by Lemma 2.5.
If , then, by Lemma 2.6, is nilpotent. Hence, is a subnormal nilpotent subgroup of . So by Lemma 2.7. Taken into consideration that is quasinormal in , and is abelian, it follows that is normal in
and so , a contradiction. Thus . Analogously, and . Set , and . Then , , and . Now, () as is a unique minimal normal subgroup of . Hence, ().
Now, we finish the proof of the theorem. Since is supersolvable, and (), it follows that () and is abelian. Hence, by Lemma 2.8, is nilpotent, and, since the Sylow subgroups of are abelian, it
follows that is abelian. On the other hand, by Lemma 2.9, () is of exponent dividing . Hence, is abelian of exponent dividing and so is supersolvable, by Lemma 2.10, a final contradiction completing
the proof of the theorem.
Proof of Corollary 1.2. Assume that the result is not true and let $G$ be a counterexample of minimal order. Let $P$ be a Sylow $p$-subgroup of $G$, where $p$ is the largest prime dividing $|G|$. Then, $P$ is normal in $G$ by Lemma 2.1. Certainly, every proper quotient group of $G$ satisfies the hypothesis of the corollary. So every proper quotient group of $G$ is supersolvable by the minimal choice of $G$. But the class of all supersolvable groups is a saturated formation, so , , and . Since $G_1$, $G_2$, and $G_3$ have coprime indices, we can assume that does not divide and does not divide . Then, and and so as . Then, and are abelian subgroups of by Lemma 2.11. This together with imply that the Sylow subgroups of $G$ are abelian. Now Theorem 1.1 implies that $G$ is supersolvable, a contradiction completing the proof of the corollary.
1. K. Doerk, “Minimal nicht überauflösbare, endliche Gruppen,” Mathematische Zeitschrift, vol. 91, pp. 198–205, 1966.
2. O. H. Kegel, “Zur Struktur mehrfach faktorisierter endlicher Gruppen,” Mathematische Zeitschrift, vol. 87, pp. 42–48, 1965.
3. M. Asaad and A. Shaalan, “On the supersolvability of finite groups,” Archiv der Mathematik, vol. 53, no. 4, pp. 318–326, 1989.
4. A. Carocca, “p-supersolvability of factorized finite groups,” Hokkaido Mathematical Journal, vol. 21, no. 3, pp. 395–403, 1992.
5. J. C. Lennox and S. E. Stonehewer, Subnormal Subgroups of Groups, Oxford Mathematical Monographs, The Clarendon Press/Oxford University Press, New York, NY, USA, 1987.
6. N. Itô and J. Szép, “Über die Quasinormalteiler von endlichen Gruppen,” Acta Scientiarum Mathematicarum, vol. 23, pp. 168–170, 1962.
7. K. Doerk and T. Hawkes, Finite Soluble Groups, vol. 4 of de Gruyter Expositions in Mathematics, Walter de Gruyter & Co., Berlin, Germany, 1992.
8. H. G. Bray, W. E. Deskins, D. Johnson et al., Between Nilpotent and Solvable, M. Weinstein, Ed., Polygonal Publ. House, Washington, NJ, USA, 1982.
Implicit Differentiation
December 15th 2009, 09:29 AM #1
Oct 2009
Implicit Differentiation
6. a) The equation of a curve defined implicitly is x^3y^2 = -3xy. Verify that the point (-1,-3) belongs to the curve. Find an equation of the tangent line to the curve at this point.
My work:
Step 1. Verify point. I did and got -9 = -9.
Step 2. Differentiate.
3x^2y^2 + 2x^3yy' = -3x -3xy'
Step 3. Sub pt into above equation.
This is where I am stuck... you see I am able to differentiate it properly but then when it comes time to sub the points in, I get a totally different answer.
I received tutoring about this question and was able to get it, but a week later here I am reviewing it and cannot. I keep getting dy/dx = 18/3 = 6. It's supposed to be -12 and I got that
once, but now I keep getting 6 for some reason. What could I be doing wrong?
My work:
a) 3x^2y^2dx + 2x^3ydy = -3ydx + -3xdy
b) Put dx's and dy's together. I get (2x^3y+3x)dy = (-3y-3x^2y^2)dx
c) dy/dx = (-3y-3x^2y^2)dx / (2x^3y+3x)dy
d) 9-27 / 6-3 = -18/3 = -6.
e) Teacher solution is -12. How come I get -6?
6. a) The equation of a curve is defined implicitly is x^3y^2 = -3xy. Verify that the point (-1,-3) belongs to the curve. Find an equation of the tangent line to the curve at this point.
My work:
Step 1. Verify point. I did and got -9 = -9.
Step 2. Differentiate.
3x^2y^2 + 2x^3yy' = -3x -3xy' (1)
Step 3. Sub pt into above equation.
This is where I am stuck... you see I am able to differentiate it properly but then when it comes time to sub the points in, I get a totally different answer.
I got received tutoring about this question and was able to get it, but a week later here I am reviewing it and cannot. I keep getting dy/dx = 18/3 = 6. It's supposed to be -12 and I got that
once, but now I keep getting 6 for some reason. What could I be doing wrong?
My work:
a) 3x^2y^2dx + 2x^3ydy = -3ydx + -3xdy
b) Put dx's and dy's together. I get (2x^3y+3x)dy = (-3y-3x^2y^2)dx
c) dy/dx = (-3y-3x^2y^2)dx / (2x^3y+3x)dy
d) 9-27 / 6-3 = -18/3 = -6.
e) Teacher solution is -12. How come I get -6?
Might want to check the term in red above. It's also easier to sub in your x and y values into (1) above in red and then solve for y'.
I think the x in red was added to correct a typo on my part but I do not understand why there is a (1) added in red.
Okay. Have you had the chance to verify my work below in c and d?
6. a) The equation of a curve is defined implicitly is x^3y^2 = -3xy. Verify that the point (-1,-3) belongs to the curve. Find an equation of the tangent line to the curve at this point.
My work:
Step 1. Verify point. I did and got -9 = -9.
Step 2. Differentiate.
3x^2y^2 + 2x^3yy' = -3x -3xy'
Step 3. Sub pt into above equation.
This is where I am stuck... you see I am able to differentiate it properly but then when it comes time to sub the points in, I get a totally different answer.
I got received tutoring about this question and was able to get it, but a week later here I am reviewing it and cannot. I keep getting dy/dx = 18/3 = 6. It's supposed to be -12 and I got that
once, but now I keep getting 6 for some reason. What could I be doing wrong?
My work:
a) 3x^2y^2dx + 2x^3ydy = -3ydx + -3xdy
b) Put dx's and dy's together. I get (2x^3y+3x)dy = (-3y-3x^2y^2)dx
c) dy/dx = (-3y-3x^2y^2)dx / (2x^3y+3x)dy
d) 9-27 / 6-3 = -18/3 = -6.
e) Teacher solution is -12. How come I get -6?
Outside of the red dx and dy which shouldn't be there, it's correct.
Okay so I am correct up until the point where I substitute the values (-1,-3) because once I do, I get -6 as you can see. The answer however is -12.
It's possible I am mixing a sign somewhere because 9+27 is 36 and 36/3 is 12.
I on the other hand get 9-27 and so that is -18, -18/3 is -6.
From what I can see though, I'm applying everything properly, how could I be getting -9 and not +9 in order to get 36/3?
Okay so I am correct up until the point where I substitute the values (-1,-3) because once I do, I get -6 as you can see. The answer however is -12.
It's possible I am mixing a sign somewhere because 9+27 is 36 and 36/3 is 12.
I on the other hand get 9-27 and so that is -18, -18/3 is -6.
From what I can see though, I'm applying everything properly, how could I be getting -9 and not +9 in order to get 36/3?
From what I see, you are correct. I can only think that you've written the problem down wrong or the answer that you're comparing yours with (i.e. -12) is wrong.
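A quick numeric check of the thread's conclusion (plain Python, just evaluating the implicit-derivative formula at (-1, -3)):

```python
x, y = -1, -3

# Check the point is on the curve x^3 y^2 = -3xy
assert x**3 * y**2 == -3 * x * y   # -9 == -9

# From implicit differentiation: dy/dx = (-3y - 3x^2 y^2) / (2x^3 y + 3x)
slope = (-3 * y - 3 * x**2 * y**2) / (2 * x**3 * y + 3 * x)
print(slope)  # -6.0
```

So the slope at (-1, -3) really is -6, which supports the poster's answer over the quoted -12.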
December 15th 2009, 10:35 AM #2
December 15th 2009, 10:46 AM #3
Oct 2009
December 15th 2009, 10:55 AM #4
December 15th 2009, 10:56 AM #5
Oct 2009
December 15th 2009, 11:27 AM #6
December 15th 2009, 11:40 AM #7
Oct 2009
December 15th 2009, 12:58 PM #8 | {"url":"http://mathhelpforum.com/calculus/120605-implicite-differentiation.html","timestamp":"2014-04-19T01:09:48Z","content_type":null,"content_length":"55809","record_id":"<urn:uuid:b8bbf153-0688-433d-a5a2-7dbc5cee7c8a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differential equations substitution
Solve $2y'y''=1+(y')^2$ by letting $y'=v$. Does $y''=v'$? So does $2vv'=1+v^2$?
The same variable with respect to which you are taking the derivative of y. Your problem mentions no independent variable - I would probably just go with x.
Not sure I would use the division symbol there. On the LHS, you have $dv/dx$, originally. When you multiply both sides of $\displaystyle{\frac{2v}{1+v^{2}}\,\frac{dv}{dx}=1}$ by $dx$, what are you
going to get? [EDIT]: acevipa edited post # 5. It's correct now.
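Carrying that through, a sketch of the full separation (with $C_1$ the integration constant and $C = e^{C_1} > 0$):

```latex
\frac{2v}{1+v^{2}}\,dv = dx
\quad\Longrightarrow\quad
\ln\!\left(1+v^{2}\right) = x + C_{1}
\quad\Longrightarrow\quad
(y')^{2} = v^{2} = Ce^{x} - 1,
```

and a second integration of $y' = \pm\sqrt{Ce^{x}-1}$ then recovers $y$.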
Last edited by Ackbeet; September 3rd 2010 at 02:56 AM.
Oh, I see. You edited your post. That looks good to me. | {"url":"http://mathhelpforum.com/differential-equations/155094-differential-equations-substitution.html","timestamp":"2014-04-16T21:00:14Z","content_type":null,"content_length":"57567","record_id":"<urn:uuid:cdc74ee4-8901-458a-ac70-ccf702a62d64>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
Origami and The Context Aware Panel: being a good neighbor
I was looking at these fantastic modular origami tesselations and thinking (as I tend to) “Vasari Panel? Sure!”
Turns out origami is hard! One of the first issues that comes up is that each panel needs to have some amount of awareness about what its neighbor is doing. In general, Revit/Vasari panels operate in
isolation, each one’s behavior having very little, if anything to do with the panels that it shares edges with. For most cases, this is fine, as the creation of seamless relationships can be handled
by referencing shared point normals. In this case, however, there turn out to be more than one place where geometry in PanelA needs to reference the same geometry as PanelB.
The kernel of this solution is a 9 point adaptive component that is repeated 4 different times over a surface. The center of a 9 bay grid defines the center of any given panel, and the remaining 8
reference the centers of all neighbors.
With this solution in mind, let’s take a look at our origami pattern
If you break the pattern down into repeated elements, at first you can do the standard thing of identifying a rectangular set of points.
Here I am identifying the low point of each module and just connecting the dots.
That module is built on a “seamless panel base”, where I just look at where the lines intersect an imagined cubic space projected off the surface of the form, and connect the dots.
Q.E.D., right? Now we can all go back to doing important work, please?
NO! This solution leaves out an important piece of information
Look carefully at the resulting pattern, and you will see a kink in what should be a straight line connection between the high points of the modules!
Each panel piece needs to REACH OVER into the neighboring panel and make a straight line connection between interior geometry in itself and interior geometry in the neighbor!
A minor detail? No! This simplicity is what lends clarity and readability to the pattern.
In a completely flat, regular system, this would be no problem, as there would be no kink.
But with irregular curvature, where angles are changing, you can’t pre-calculate where that reach-over is going to need to land.
What’s a girl to do?
If you conceive of each high, square area as the center of each module, each panel needs to know about 4 other similar centers:
But with this configuration of 5 points, you can’t determine the extents of each cell. For that you also need to get the geometry of the centers of the remaining 4 corner neighbors.
Therefore, for every cell that gets placed, it must have one point to identify its center, and 8 to identify its neighbors’ placements.
So you have a 9 point adaptive component where point #5 represents the center of one square we will model, and the remaining 8 points represent the neighboring cell centers
If you connect #5 diagonally to the corner neighbors, you can derive the lines out from the center that will define the lowest points of the pattern.
Imagine that points 2, 4, 6, and 8 will similarly be the center of their own high, square areas and connect them
Host points at the intersections and you will have the corners of the lowest points of the pattern.
These corners are where the panel will connect at the lowest point with its neighbor
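Stripped of the Revit/Vasari machinery, the "low corner" construction above is just a 2D segment intersection between lines drawn through cell centers. A hypothetical flat 3x3 layout (plain Python, not the Revit API; the numbering follows the 9-point component described above):

```python
def intersect(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 (2D, assumes non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Adaptive points 1..9 laid out keypad-style on a unit grid; point 5
# is this panel's center, 1/3/7/9 are corner neighbors, 2/4/6/8 edges.
grid = {n: ((n - 1) % 3, (n - 1) // 3) for n in range(1, 10)}

# Diagonal from the center (5) to a corner neighbor (1), crossed with
# the line joining the two edge neighbors on that side (2 and 4):
low_corner = intersect(grid[5], grid[1], grid[2], grid[4])
print(low_corner)  # (0.5, 0.5)
```

On a curved surface the centers are no longer coplanar, which is exactly why the panel has to carry all nine references instead of precomputing these corners.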
Ok, head hurting? Hang on, it gets worse. This is a fully 3 dimensional pattern, that offsets from an arbitrarily curving surface. So we need to attack the Z-direction by placing points on the
horizontal workplane of each adaptive point and offsetting the desired thickness.
Connect the diagonals again and we can define the high, square area.
Now we connect more dots to create the folds:
One last piece. We place 4 of these little monsters on our divided surface, overlapping so that each block of 9 control points relates to its neighbor. Note that there are multiple areas of alignment
in 3 dimensions of neighboring panels.
Select each of the 4 families, one at a time, and hit the Repeat button:
This will result in 4 different Repeater elements on the surface.
There are still 3 missing features that are not solved with this:
1. Edges only match up on surfaces that have planar panels: revolves, translation, single curvature, and flat surfaces. Paper will stretch, crinkle and bend a little; mine doesn’t.
2. The high, square areas do not rotate, that is, they stay in a gridded orthogonal relationship to each other. In the actual origami example, they rotate against each other.
3. Perhaps more interestingly, there is no necessity for the panel elements to remain dimensionally static while only changing their angular relationship. I think there is some interesting math to be done that might solve this last problem, and I can approximate it with some formulas that adjust the height of the family with changes in size of the cell, but it isn’t
(animation made with BoostYourBIM’s Image-O-Matic and Gimp)
Download the family from here. ContextPanel_seamless.rfa is a base family that can be used to make your own patterns that need to reference neighbors. ContextPanel_hosted has the origami panel module
hosted 4 times on a surface. Select each one and Repeat for each to cover the surface.
Suggestions for improvements or totally alternate approaches welcome!
16 comments:
1. You are MUCH to brilliant for me my friend... I will definitely have to take some time to digest this post.
2. Is this going to simulate folding like how Kangaroo already does?
3. Nope, this isn't a physics based simulation. This is using the logic of a modular origami to inform a parametric system. Pretty different thing.
4. Really cool post!
5. This comment has been removed by the author.
6. This comment has been removed by a blog administrator.
7. Hey Danny P. Sorry, I accidentally deleted your post. This book does look awesome:
8. Hi there - this is a fantastic post, I'm enthralled.
as an origami artist who specializes in tessellations, it's very interesting to see people approaching these topologies from a completely different viewpoint!
9. Hi Eric!
Wow, I'm a big fan, thanks for commenting, and my apologies for playing fast and loose with your honorable discipline :)
10. great !
11. head spinning stuff :O ... anyways, u think getting into origami might help sharpen ur designing acumen?? Sincere question :)
12. Stealthnyc,
Yep, nothing you are doing wrong. I bet you a donut the problem is that, if you look in the draw gallery, you are set to "draw on workplane". Switch it to "draw on face" or just type "DF".
13. Thanks!
14. When I'm trying to place host points at the intersections something goes wrong. The center snap is turned off, and it shows the intersection snapping, when I place points, but in the final
component lowest points still connected to centers of diagonal lines, so that lowest points of adjacent components a bit deviating from each other.
15. and which orientation type do You use for adaptive points?
16. to make the lowest points to be in the same place within adjacent components I used a little trick: two corners use the midpoint of 2-4 and 6-8, while two others use the midpoint of 1-5 and 5-9.
Thus adjacent components have their lowest points in the same places, though they are not at the intersection of the rectangles' diagonals.
Triangle Trig Question
June 3rd 2008, 05:22 PM
Triangle Trig Question
Last One!
A farmer has a triangular field with sides of 240 ft., 300 ft., and 360 ft. He wants to apply fertilizer to the field. If one 40-pound bag of fertilizer covers 6000 square feet, how many bags
must he buy to cover the field.
I drew a diagram and have a feeling I need to use these equations
Law of Cosines
Area of a triangle (1/2 ab sinC)
I just don't know how I would go about it
June 3rd 2008, 06:15 PM
Last One!
A farmer has a triangular field with sides of 240 ft., 300 ft., and 360 ft. He wants to apply fertilizer to the field. If one 40-pound bag of fertilizer covers 6000 square feet, how many bags
must he buy to cover the field.
I drew a diagram and have a feeling I need to use these equations
Law of Cosines
Area of a triangle (1/2 ab sinC)
I just don't know how I would go about it
Use Cosine rule to get the angle opposite one of the sides.
240^2 = 300^2 + 360^2 - 2*300*360CosA
CosA = (300^2 + 360^2 - 240^2)/(2*300*360)
A = Cos^-1(3/4)
Area = 0.5*300*360Sin(Cos^-1(3/4)) = 35718 sq ft
35718 / 6000 = 5.953
Therefore he needs 6 bags.
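The solution above is easy to check numerically; here is a short Python sketch of the same computation (the side labels follow the working in the post):

```python
import math

# Law of cosines for the angle opposite the 240 ft side, then the
# area formula (1/2)*b*c*sin(A), then round up to whole bags.
a, b, c = 240.0, 300.0, 360.0
cos_A = (b**2 + c**2 - a**2) / (2 * b * c)          # = 0.75
area = 0.5 * b * c * math.sin(math.acos(cos_A))     # ~ 35,718 sq ft
bags = math.ceil(area / 6000)                       # one bag covers 6,000 sq ft
print(round(area), bags)                            # 35718 6
```

Heron's formula gives the same area without computing the angle, which is a useful cross-check.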
Math Forum Discussions
Topic: How to get HU values from CT image
Replies: 3 Last Post: Sep 17, 2012 8:13 PM
Jeff Re: How to get HU values from CT image
Posted: Sep 17, 2012 8:13 PM
Posts: 66
Registered: 3/2/

Manoj,
I believe what you need to do is to convert the displayed/generated pixel values, which are greyscale values, into the corresponding Hounsfield Unit (HU). To do this you use the
following equation:
HU = pixel_value*slope + intercept
The intercept is found in the dicom header at 0028,1052 and the slope at 0028,1053. Plug these into the equation and you obtain the HU value which is simply a rescaling of the linear
attenuation coefficient.
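A minimal sketch of the conversion in Python. Note the standard DICOM convention is pixel*slope + intercept, with the intercept typically negative (e.g. -1024); with pydicom the two tags are exposed as `RescaleSlope` and `RescaleIntercept`. The numeric values below are typical examples, not taken from the thread:

```python
def to_hu(pixel_value, slope, intercept):
    """Linear rescale from a stored CT pixel value to Hounsfield Units."""
    return pixel_value * slope + intercept

# Typical CT data store slope = 1 and intercept = -1024, so a raw value
# of 0 corresponds to -1024 HU (air) and 1024 corresponds to 0 HU (water).
print(to_hu(0, 1.0, -1024))     # -1024.0
print(to_hu(1024, 1.0, -1024))  # 0.0
```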
Date Subject Author
7/6/12 How to get HU values from CT image Manoj Nimbalkar
7/6/12 Re: How to get HU values from CT image Matt J
7/7/12 Re: How to get HU values from CT image trison
9/17/12 Re: How to get HU values from CT image Jeff
How many people are employed at a certain manufacturing

chunjuwu (VP), 17 May 2004, 18:10:
How many people are employed at a certain manufacturing plant with an annual payroll of $2342000?
(1) Three-fourths of the employees are clerical, at an average salary of $16020.
(2) With 8 percent more employees, the payroll would equal $2548000.
Is this data sufficient? Thank you.

hallelujah1234 (Senior Manager):
I think the answer is C. Just a guess!

Virtual (Manager, Dallas, TX), 18 May 2004, 09:30:
chunjuwu wrote:
How many people are employed at a certain manufacturing plant with an annual payroll of $2342000?
(1) Three-fourths of the employees are clerical, at an average salary of $16020.
(2) With 8 percent more employees, the payroll would equal $2548000.
Is this data sufficient? Thank you.

I don't think it can be solved even by both combined. Answer is E.

carsen (Senior Manager, India):
Statement 1 tells us about the clerks: "Three-fourths of the employees are clerical, at an average salary of $16020." Therefore the remaining one-fourth... but what is the NUMBER? Hence, insufficient.
Statement 2: "With 8 percent more employees, the payroll would equal $2548000." From this we can get the payroll of the total employees, i.e. 1.08x = 2548000, through which we can calculate the payroll of all... but still we cannot get the whole number of employees. Not sufficient.
Combining both also does not give us the number. Hence the answer is E.
_________________
Giving another SHOT

hallelujah1234 (Senior Manager), 18 May 2004, 10:29:
Let x be the number of employees. Obviously, x should be a positive integer.
1. x is a multiple of 4. Insufficient.
2. x is a multiple of 25. Insufficient.
Combined 1 and 2: x is a multiple of 100.
(3x/4)*16020 + (x/4)*y = 2342000, or 3x*4005 + xy/4 = 2342000.
3*200*4005 > 2342000.
There is only one solution, that is, x = 100.
C is the answer.
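The integrality argument in the solution above can be brute-force checked with a few lines of Python (the search bound is arbitrary):

```python
PAYROLL = 2_342_000
CLERICAL_AVG = 16_020

# Statement 1: 3x/4 clerical workers must be a whole number, so 4 | x.
# Statement 2: 8% more employees, 0.08x, must be a whole number, so 25 | x.
# Combined: 100 | x, and the clerical wage bill (3x/4)*16020 alone cannot
# exceed the total payroll.
candidates = [
    x for x in range(1, 10_000)
    if x % 4 == 0 and x % 25 == 0
    and (3 * x // 4) * CLERICAL_AVG <= PAYROLL
]
print(candidates)  # [100]
```

Only x = 100 survives: already at x = 200 the clerical salaries total 150 * 16020 = $2,403,000, which exceeds the payroll.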
carsen:
Hi Hallelu,
To be honest, I cannot understand your explanation... the multiple of 4 and the multiple of 25 go way above my head. Would you be able to provide the source for the multiplication stuff? It's interesting (only if I can know your approach).
Thanks, bro

hallelujah1234:
3/4 of 6 people = 4.5 people.
8% of 10 people = 0.8 people.
I hope you'll get it.

chunjuwu:
I think the answer is E.
Set the total people = X, and let the nonclerical employees have the average salary Y.
From A, we can only get: 0.75X * 16020 + 0.25X * Y = 2342000.
From B, we can only get: 1.08X * each employee's salary = 2548000.
But we don't know what percentage of the employees are clerical or nonclerical; therefore, we cannot solve the problem.
The answer is E. Does anybody have other thoughts?

carsen:
Hi Hallelujah,
You have mentioned the numbers "6 people" and "10 people"...
3/4 of 6 people = 4.5 people.
8% of 10 people = 0.8 people.
I understand the number being a WHOLE number... thanks for that.
Now, coming back to your previous post: you have mentioned that "x is a multiple of 100"... well, it can be 200, 300, 400...
But in your last statement you took it to be 100. Why not the other multiples of 100? I hope I brought out the error in your reasoning. Hence the EXACT number cannot be determined with the information given, making E the answer.
Regards

hallelujah1234:
If the number of employees is more than or equal to 200, the total salary of the 3/4 clerical employees alone is greater than $2342000. Hence, except for 100, all other positive multiples of 100 are ruled out.

Virtual:
The answer (C) that Halle has given is correct and his solution perfect. Not sure if such a difficult DS can be part of the actual GMAT.
Great work, Halle. Cheers

Intern:
Beautiful solution, halle!
_________________
lets do it together....

carsen:
HATS OFF, HALLE. I agree, and the explanation is clear. Thanks for the insight. Pardon me if I had been stubborn in proving my point.
Regards

chunjuwu:
Way to go, Halle. You are right, thank you.

Senior Manager, 31 May 2004, 19:19:
hallelujah1234 wrote:
Let x be the number of employees. Obviously, x should be a positive integer.
1. x is a multiple of 4. Insufficient.
2. x is a multiple of 25. Insufficient.
Combined 1 and 2: x is a multiple of 100.
(3x/4)*16020 + (x/4)*y = 2342000, or 3x*4005 + xy/4 = 2342000.
3*200*4005 > 2342000.
There is only one solution, that is, x = 100.
C is the answer.

Don't mean to beat a dead horse here, but I missed this one. Where does the 200 in "3*200*4005 > 2342000" come in? Also, how do you make the leap to this equation in general? Thanks in advance.

carsen:
Ooooooo my gosh... I am confused here again and back to scratching my head. Can Halle or others be kind enough to explain step by step? Please. One request: please explain step by step... I get confused with the sudden insertion of 3/4 of 6, or 8% of 10, etc.
Thanks for the patience, guys.

hallelujah1234:
hallelujah1234 wrote:
If the number of employees is more than or equal to 200, the total salary of the 3/4 clerical employees alone is greater than $2342000. Hence, except for 100, all other positive multiples of 100 are ruled out.

The above explains why not 200, or any positive multiple of 100 except 100.

A new member (joined 02 Jun 2004):
I am new to this forum but not new to math; I got some awards in my school days. I see a lot of posts here asking for very detailed explanations. I have a piece of advice.
The best way to find the solution to a math problem is to analyze it yourself. Even if you don't get the answer, don't give up; try again from all possible angles, start from the basics, try again; timing is not the issue when you are in learning mode. Only then should you look for the real answer. Then you will understand what people are talking about. I felt members were asking too spoon-feedy questions (not this thread, though). Either they are looking at the forum questions without clearing the basic concepts, or they are jumping straight to the answers.
Dennis DeTurck, Herman Gluck, Rafal Komendarczyk, Paul Melvin, David Shea Vela-Vick and I have just put our paper “Triple linking numbers, ambiguous Hopf invariants and integral formulas for
three-component links” on the arXiv.
The basic background for the paper is as follows. In 1958 the astrophysicist Lodewijk Woltjer was studying the Crab Nebula when he discovered that there is a certain quantity which is constant for a
force-free magnetic field in a closed system (he had argued in an earlier paper that the magnetic fields in the Crab Nebula are force-free). This quantity turns out to be important in, e.g., plasma
physics. H. K. Moffatt coined the term “helicity” for this quantity and suggested that it measures the extent to which the field lines wrap and coil around each other.
Vladimir Arnol’d made this more rigorous in his 1973 paper “The asymptotic Hopf invariant and its applications” (which doesn’t appear to be available online) in which he demonstrated that helicity
can be thought of as an “average asymptotic linking number”. To explain what that means, I need to digress for a moment into knot theory.
For a mathematician, a knot is just a piece of string where the two ends have been glued together. This can be done in the obvious way or in not so obvious ways. Here’s Wikipedia’s table of all the
topologically distinct knots with up to 7 crossings (topologically distinct means you can’t get from one to the other without cutting the string or passing it through itself):
Knot theory is an incredibly rich field in mathematics which is fundamental to the study of three- and four-dimensional spaces and which has applications to everything from shoe-tying to quantum
A collection of two or more knots is called a link. The simplest non-trivial link is the Hopf link:
Here’s a slightly more complicated link (which I drew for this paper):
The simplest numerical description of a link is its linking number. I don’t really want to get into the precise definition of the linking number, but it’s easily illustrated by the following three
examples. First, a link with linking number zero:
Linking number 1:
Linking number 2:
Now, getting back to helicity, Arnol’d said that the helicity is an “average asymptotic linking number”. What does the “average asymptotic” business mean? Well, helicity is a property of magnetic
fields (or, more generally, of vector fields). Given a magnetic field, you could put two charged particles in the field at different points. Since they’re charged, the two particles will start
moving, tracing out two paths (called orbits of the field). If you keep track of the paths for a long time T, you’ll get two long curves in space; close them up and you’ll have two loops in space.
Now, a loop in space is just a knot, and two knots form a link, so there’s a linking number between these two loops. If you let T go to infinity and take an appropriate average, you get the average
asymptotic linking number of the two orbits, which Arnol’d tells us is equal to the helicity of the field.
This is all very nice, but there’s a slight problem. Helicity is a useful quantity in plasma physics in part because it provides a lower bound for the field energy (this was also proved by Arnol’d).
Any magnetic field has an intrinsic energy which has a tendency to decrease as the field evolves toward equilibrium. If the field has non-zero helicity, then you know that the intrinsic energy of the
field can’t decrease below a certain amount related to the helicity. This means that, if you know the helicity of a field, then you can determine what that field’s equilibrium energy will be.
The problem is that this doesn’t go both ways: having zero helicity doesn’t necessarily mean that the field can relax to a zero-energy state. So the question, posed by Arnol’d and Boris Khesin in
their book Topological Methods in Hydrodynamics, is this: are there “higher-order” helicities which would kick in when the ordinary helicity is zero and provide lower bounds for the field energy?
The problem, then, is to come up with a sensible definition for a higher-order helicity. Recall that Arnol’d showed that the ordinary helicity is an average asymptotic linking number. There is a
beautiful integral formula which computes the linking number of two closed curves discovered by Gauss back in 1833, known as the Gauss linking integral (brief aside: see the papers by DeTurck and
Gluck and Vela-Vick and me for generalizations of the Gauss linking integral). Using Arnol’d's approach, it’s fairly straightforward to derive the formula for helicity from the Gauss integral.
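As an illustration, the Gauss linking integral can be evaluated numerically for an explicit Hopf link. The sketch below is plain Python; the two circles and the grid size are chosen arbitrarily, and it recovers a linking number of magnitude 1:

```python
import math

def gauss_linking(curve1, curve2, n=200):
    """Midpoint-rule approximation of the Gauss linking integral:
    lk = (1/4pi) * double integral of (r1' x r2') . (r1 - r2) / |r1 - r2|^3.
    curve1, curve2 map a parameter in [0, 2*pi) to a point (x, y, z)."""
    h = 2 * math.pi / n
    eps = 1e-6

    def sample(curve):
        pts, vels = [], []
        for i in range(n):
            u = (i + 0.5) * h
            pts.append(curve(u))
            p, m = curve(u + eps), curve(u - eps)
            vels.append(tuple((a - b) / (2 * eps) for a, b in zip(p, m)))
        return pts, vels

    p1, v1 = sample(curve1)
    p2, v2 = sample(curve2)
    total = 0.0
    for r1, d1 in zip(p1, v1):
        for r2, d2 in zip(p2, v2):
            dx = (r1[0] - r2[0], r1[1] - r2[1], r1[2] - r2[2])
            cross = (d1[1] * d2[2] - d1[2] * d2[1],
                     d1[2] * d2[0] - d1[0] * d2[2],
                     d1[0] * d2[1] - d1[1] * d2[0])
            num = cross[0] * dx[0] + cross[1] * dx[1] + cross[2] * dx[2]
            dist = math.sqrt(dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2)
            total += num / dist ** 3
    return total * h * h / (4 * math.pi)

# Hopf link: a unit circle in the xy-plane and a unit circle in the
# xz-plane threaded through it.
c1 = lambda s: (math.cos(s), math.sin(s), 0.0)
c2 = lambda t: (1.0 + math.cos(t), 0.0, math.sin(t))
lk = gauss_linking(c1, c2)
```

Since the integrand is smooth and periodic for disjoint curves, the midpoint rule converges quickly; the sign of the result depends only on the chosen orientations.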
That works great for the ordinary helicity, so one might hope something similar will work to get to higher-order helicities. Remember that I said that the linking number is the simplest numerical
description (more precisely: topological invariant) of a link, but it’s certainly not the only one. In fact, the linking numbers are useless for one of the simplest three-component links, the
Borromean rings:
The salient feature of the Borromean rings is that it’s a non-trivial link (meaning it can’t be pulled apart without breaking one of the components), but deleting any one component causes the whole
thing to fall apart. The linking numbers between components are all zero, so you need a more sophisticated measure than the linking number to describe what’s going on.
This measure was provided by John Milnor in his senior thesis(!) in which he classified three-component links up to “link-homotopy”. Milnor came up with a new invariant which I’ll just call μ (though
the μ usually comes with various decorations). The definition of μ is actually quite unpleasant, but it is equal to 1 for the Borromean rings and 0 for the trivial three-component link.
Now, by analogy with the Arnol’d approach for ordinary helicity, you might hope that a suitable “average asymptotic μ invariant” would give a higher-order helicity. However, in order to get a useful
formula for such a higher-order helicity, you need some sort of integral formula for the μ invariant which is analogous to the Gauss integral. That’s one of the results in our paper (which I’ve
finally got back around to mentioning): we give an integral formula for the μ invariant in the cases where that makes sense.
To get there, we related Milnor’s μ invariant to another topological invariant, the Hopf invariant. The basic idea is this: any three-component link is fairly naturally associated to a map from a
space called the 3-torus (which naturally lives in 4-dimensional space) to the 2-sphere (think of the surface of the Earth). Such maps were classified (up to homotopy) by Lev Pontryagin in a 1941
paper; the key invariant defined by Pontryagin, denoted by ν, is a generalized (and somewhat ambiguous) version of the usual Hopf invariant and is typically called either the Hopf invariant or the
Pontryagin-Hopf invariant.
Our main result is that the μ invariant of a three-component link is equal to half the Pontryagin-Hopf invariant of the associated map. We have two different proofs of this, one purely topological
(including the picture at the top of this post) and one more algebraic (following a key insight of Nathan Habegger and Xiao-Song Lin).
Okay, that’s probably not enough detail for the mathematicians and probably way too much for everybody else, so I think I’ll stop here. | {"url":"http://www.sellingwaves.com/categories/geek-talk/","timestamp":"2014-04-18T08:04:33Z","content_type":null,"content_length":"86206","record_id":"<urn:uuid:35987058-16af-43b3-aa13-2a50fb4e115f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: How can estimate dicount rate for each household in the panel data
From Quang Nguyen <quangn@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject st: How can estimate dicount rate for each household in the panel data
Date Tue, 17 Feb 2009 08:26:39 -1000
Dear all:
I have the following sample data set:
Household x y t
I would like to estimate the discount rate for each household based on the formula x = y*exp(-r*t), where r is the discount rate. I know that in Stata we can use nl (x = y*exp(-{r}*t)) for each household. My questions are:
1. How can we ask Stata to do the above estimate for all households at the same time instead of one by one?
2. How can we save the ESTIMATED discount rates as a variable that can be used for later analysis? For instance, one may study the relationship between the household's discount rate and educational attainment.

Many thanks and Have A Great Day!
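As a language-neutral sanity check of the algebra: since x = y*exp(-r*t) implies ln(y/x) = r*t, each household's r can be recovered by a least-squares fit through the origin and stored per household for later use. A Python sketch with made-up data (the household IDs and numbers are invented for illustration):

```python
import math
from collections import defaultdict

# rows: (household, x, y, t), following the question's notation.
rows = [
    ("A", 90.0, 100.0, 1.0), ("A", 82.0, 100.0, 2.0),
    ("B", 95.0, 100.0, 1.0), ("B", 90.0, 100.0, 2.0),
]

# For x = y*exp(-r*t), ln(y/x) = r*t, so a least-squares fit through the
# origin gives r_hat = sum(t*ln(y/x)) / sum(t^2) within each household.
def discount_rates(rows):
    num, den = defaultdict(float), defaultdict(float)
    for hh, x, y, t in rows:
        num[hh] += t * math.log(y / x)
        den[hh] += t * t
    return {hh: num[hh] / den[hh] for hh in num}

rates = discount_rates(rows)
```

In Stata the same idea amounts to generating ln(y/x)/t, combining it within each household (or running nl by household), and merging the estimates back as a household-level variable for the follow-up regression.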
"My father gave me the greatest gift anyone could give another person,
he believed in me." - Jim Valvano
Calculus: Find K
You need to use integration by parts.

Put u = -y^2/2 then.

Is it \[\int\limits_{-\infty}^{\infty}ke^{-\frac{y^2}{2}}dy=1\]?

u = -y^2/2, find dy = ... ?

I used u = y^2.

But maybe it's easier with u = -y^2/2.

It's not smart to have that negative there. Do you agree? @hartnn ?

Oh, sorry... yeah, it's easier with e^u; the constants can be rearranged easily...

I got the wrong answer when evaluating the integral, and I won't do it again now. But this function is a Gaussian, and it is expected that the constant k in this case has the value 1/sqrt(2pi) = sqrt(2pi)/(2pi). That comes from statistical theory.

Yes, the answer is 1/sqrt(2*pi). I just don't follow how they got that answer.

Did you try that substitution and then integration by parts?

I was getting weird answers. I have not really touched calculus for over a year, so I could be messing up.

\(\int \frac{e^{-u}u^{-1/2}}{\sqrt2}du\) when 2u = y^2; yes, it's messy...

How would you continue from here? Can you show me?

You need to know the gamma function to solve that... I am so sorry, my internet is giving me connection problems...

Hmm, I know the gamma distribution function.

Ohhh, I see it now lol.

\( \Gamma(z) = \int_0^\infty e^{-t} t^{z-1} dt \)

Now you'll get it easily. Ask if you don't get it...

Thanks, let me try to work it out then. Thanks :)))))

\(\Gamma(1/2) = \sqrt{\pi}\) (you'll need it).

Ohhh, cool, never knew that.

I feel like I am missing the basics in math :(
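The closed-form answer can also be sanity-checked numerically. A short Python sketch using only the standard library; the integration range is truncated at ±10, where the Gaussian tail is negligible:

```python
import math

# Midpoint rule for the integral of exp(-y^2/2) over (approximately) the
# whole real line; the exact value is sqrt(2*pi) = sqrt(2)*Gamma(1/2),
# so the normalizing constant is k = 1/sqrt(2*pi).
def gaussian_mass(a=-10.0, b=10.0, n=100_000):
    h = (b - a) / n
    return h * sum(math.exp(-((a + (i + 0.5) * h) ** 2) / 2) for i in range(n))

mass = gaussian_mass()
k = 1.0 / mass
print(mass, k)  # ~2.5066 (= sqrt(2*pi)) and ~0.3989 (= 1/sqrt(2*pi))
```

As a bonus, `math.gamma(0.5)` returns sqrt(pi), confirming the Gamma(1/2) fact used in the thread.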
the encyclopedic entry of Baby universe
In physics, a wormhole is a hypothetical topological feature of spacetime that is fundamentally a 'shortcut' through space and time. Spacetime can be viewed as a 2D surface, and when 'folded' over, a
wormhole bridge can be formed. A wormhole has at least two mouths which are connected to a single throat or tube. If the wormhole is traversable, matter can 'travel' from one mouth to the other by
passing through the throat. While there is no observational evidence for wormholes, spacetimes containing wormholes are known to be valid solutions in general relativity.
The term wormhole was coined by the American theoretical physicist John Wheeler in 1957. However, the idea of wormholes had already been theorized in 1921 by the German mathematician Hermann Weyl in connection with his analysis of mass in terms of electromagnetic field energy.
The name "wormhole" comes from an analogy used to explain the phenomenon. If a worm is travelling over the skin of an apple, then the worm could take a shortcut to the opposite side of the apple's
skin by burrowing through its center, rather than travelling the entire distance around, just as a wormhole traveler could take a shortcut to the opposite side of the universe through a topologically
nontrivial tunnel.
The basic notion of an intra-universe wormhole is that it is a region of spacetime whose boundary is topologically trivial but whose interior is not simply connected. Formalizing this idea leads to definitions such as the following, taken from Matt Visser's Lorentzian Wormholes:

If a Minkowski spacetime contains a compact region Ω, and if the topology of Ω is of the form Ω ~ R x Σ, where Σ is a three-manifold of nontrivial topology, whose boundary has topology of the form ∂Σ ~ S², and if, furthermore, the hypersurfaces Σ are all spacelike, then the region Ω contains a quasipermanent intra-universe wormhole.
Characterizing inter-universe wormholes is more difficult. For example, one can imagine a 'baby' universe connected to its 'parent' by a narrow 'umbilicus'. One might like to regard the umbilicus as the throat of a wormhole, but the spacetime is simply connected.
Wormhole types

Intra-universe wormholes connect one location of a universe to another location of the same universe (in the same present time or not). A wormhole should be able to connect distant locations in the universe by creating a shortcut through spacetime, allowing travel between them that is faster than it would take light to make the journey through normal space.

Inter-universe wormholes connect one universe with another. This gives rise to the speculation that such wormholes could be used to travel from one parallel universe to another. A wormhole which connects (usually closed) universes is often called a Schwarzschild wormhole. Another application of a wormhole might be time travel. In that case, it is a shortcut from one point in space and time to another. In string theory, a wormhole has been envisioned to connect two D-branes, where the mouths are attached to the branes and are connected by a flux tube. Finally, wormholes are believed to be a part of spacetime foam.

There are two main types of wormholes: Lorentzian wormholes and Euclidean wormholes. Lorentzian wormholes are mainly studied in general relativity and semiclassical gravity, while Euclidean wormholes are studied in particle physics. Traversable wormholes are a special kind of Lorentzian wormhole which would allow a human to travel from one side of the wormhole to the other. Serguei Krasnikov suggested the term spacetime shortcut as a more general term for (traversable) wormholes and propulsion systems like the Alcubierre drive and the Krasnikov tube to indicate hyperfast interstellar travel.
It is known that (Lorentzian) wormholes are not excluded within the framework of
general relativity
, but the physical plausibility of these solutions is uncertain. It is also unknown whether a theory of
quantum gravity
, merging general relativity with
quantum mechanics
, would still allow them. Most known solutions of general relativity which allow for traversable wormholes require the existence of
exotic matter
, a theoretical substance which has negative
energy density
. However, it has not been mathematically proven that this is an absolute requirement for traversable wormholes, nor has it been established that exotic matter cannot exist.
Schwarzschild wormholes
Lorentzian wormholes known as Schwarzschild wormholes or Einstein-Rosen bridges are bridges between areas of space that can be modeled as vacuum solutions to the Einstein field equations by combining
models of a black hole and a white hole. This solution was discovered by Albert Einstein and his colleague Nathan Rosen, who first published the result in 1935. However, in 1962 John A. Wheeler and
Robert W. Fuller published a paper showing that this type of wormhole is unstable, and that it will pinch off instantly as soon as it forms, preventing even light from making it through.
Before the stability problems of Schwarzschild wormholes were apparent, it was proposed that quasars were white holes forming the ends of wormholes of this type.
While Schwarzschild wormholes are not traversable, their existence inspired Kip Thorne to imagine traversable wormholes created by holding the 'throat' of a Schwarzschild wormhole open with exotic
matter (material that has negative mass/energy).
Traversable wormholes
Lorentzian traversable wormholes would allow travel from one part of the universe to another part of that same universe very quickly or would allow travel from one universe to another. The
possibility of traversable wormholes in general relativity was first demonstrated by
Kip Thorne
and his graduate student
Mike Morris
in a 1988 paper; for this reason, the type of traversable wormhole they proposed, held open by a spherical shell of
exotic matter
, is referred to as a
Morris-Thorne wormhole
. Later, other types of traversable wormholes were discovered as allowable solutions to the equations of general relativity, including a variety analyzed in a 1989 paper by
Matt Visser
, in which a path through the wormhole can be made along which the traversing path does not pass through a region of exotic matter. However, in pure Gauss-Bonnet theory exotic matter is not needed in order for wormholes to exist; they can exist even with no matter. A type held open by negative-mass
cosmic strings
was put forth by Visser in collaboration with
Cramer et al.
, in which it was proposed that such wormholes could have been naturally created in the early universe.
Wormholes connect two points in spacetime, which means that they would in principle allow travel in time as well as in space. In a 1988 paper, Morris, Thorne and Yurtsever worked out explicitly how
to convert a wormhole traversing space into one traversing time.
Wormholes and faster-than-light travel
Special relativity applies only locally. Wormholes allow superluminal (faster-than-light) travel by ensuring that the speed of light is not exceeded locally at any time. While traveling through a wormhole, subluminal (slower-than-light) speeds are used. If two points are connected by a wormhole, the time taken to traverse it would be less than the time it would take a light beam to make the journey if it took a path through the space outside the wormhole. However, a light beam traveling through the wormhole would always beat the traveler. As an analogy, running around to the opposite side of a mountain at maximum speed may take longer
than walking through a tunnel crossing it. You can walk slowly while reaching your destination more quickly because the length of your path is shorter.
Wormholes and time travel
A wormhole could allow
time travel
. This could be accomplished by accelerating one end of the wormhole to a high velocity relative to the other, and then sometime later bringing it back;
relativistic time dilation
would result in the accelerated wormhole mouth aging less than the stationary one as seen by an external observer, similar to what is seen in the
twin paradox
. However, time connects differently through the wormhole than outside it, so that synchronized clocks at each mouth will remain synchronized to someone traveling through the wormhole itself, no
matter how the mouths move around. This means that anything which entered the accelerated wormhole mouth would exit the stationary one at a point in time prior to its entry. For example, if clocks at
both mouths both showed the date as 2000 before one mouth was accelerated, and after being taken on a trip at relativistic velocities the accelerated mouth was brought back to the same region as the
stationary mouth with the accelerated mouth's clock reading 2005 while the stationary mouth's clock read 2010, then a traveler who entered the accelerated mouth at this moment would exit the
stationary mouth when its clock also read 2005, in the same region but now five years in the past. Such a configuration of wormholes would allow for a particle's
world line
to form a closed loop in spacetime, known as a
closed timelike curve.
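The clock bookkeeping in the example above follows from ordinary special-relativistic time dilation. The Python sketch below is my own illustration, not from the article; the 0.866c speed is a hypothetical choice that makes the Lorentz factor exactly 2, reproducing the 2005-versus-2010 clock readings:

```python
import math

def time_dilation(speed_fraction, stationary_elapsed):
    """Elapsed time on a clock moving at v = speed_fraction * c,
    given the elapsed time measured by a stationary observer."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction**2)  # Lorentz factor
    return stationary_elapsed / gamma

# Hypothetical trip: both clocks read 2000 at departure; the stationary
# mouth ages 10 years (2000 -> 2010). At v ~ 0.866c the Lorentz factor
# is exactly 2, so the accelerated mouth ages only 5 years (reads 2005).
v = math.sqrt(3) / 2
print(round(time_dilation(v, 10.0), 6))  # 5.0
```

A traveler entering the accelerated mouth then exits the stationary mouth five years in the external observer's past, which is the setup the paragraph above describes.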
It is thought that it may not be possible to convert a wormhole into a time machine in this manner: some analyses using the semiclassical approach to incorporating quantum effects into general
relativity indicate that a feedback loop of virtual particles would circulate through the wormhole with ever-increasing intensity, destroying it before any information could be passed through it, in
keeping with the chronology protection conjecture. This has been called into question by the suggestion that radiation would disperse after traveling through the wormhole, therefore preventing
infinite accumulation. The debate on this matter is described by Kip S. Thorne in the book Black Holes and Time Warps. There is also the Roman ring, which is a configuration of more than one
wormhole. This ring seems to allow a closed time loop with stable wormholes when analyzed using semiclassical gravity, although without a full theory of quantum gravity it is uncertain whether the
semiclassical approach is reliable in this case.
Wormhole metrics
Theories of
wormhole metrics
describe the spacetime geometry of a wormhole and serve as theoretical models for time travel. An example of a (traversable) wormhole
is the following:
$ds^2 = -c^2\,dt^2 + dl^2 + \left(k^2 + l^2\right)\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)$
One type of non-traversable wormhole metric is the Schwarzschild solution:
$ds^2 = -c^2\left(1 - \frac{2GM}{rc^2}\right)dt^2 + \frac{dr^2}{1 - \frac{2GM}{rc^2}} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)$
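As a numerical aside (my own sketch, not part of the original article), the factor $1 - \frac{2GM}{rc^2}$ appearing in the Schwarzschild metric can be evaluated directly; it vanishes at the Schwarzschild radius $r_s = 2GM/c^2$, which marks the event horizon:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_factor(mass_kg, r_m):
    """The 1 - 2GM/(r c^2) coefficient from the metric above.
    It is zero at the event horizon and approaches 1 far away."""
    return 1.0 - 2.0 * G * mass_kg / (r_m * c**2)

M_sun = 1.989e30                         # solar mass, kg
r_s = 2.0 * G * M_sun / c**2             # Schwarzschild radius, ~2.95 km
print(r_s)                               # ~2954 m for the Sun
print(schwarzschild_factor(M_sun, r_s))  # ~0 (the horizon)
```

Far from the mass the factor approaches 1 and the metric reduces to flat spacetime, which is why the non-traversable character only shows up near the horizon.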
Wormholes in fiction
Wormholes are a popular feature of
science fiction
as they allow interstellar (and sometimes interuniversal) travel within human timescales. It is common for the creators of a fictional universe to decide that faster-than-light travel is either impossible or that the technology does not yet exist, but to use wormholes as a means of allowing humans to travel long distances in short periods. Military science fiction (such as
Wing Commander
games) often use a "jump drive" to propel a spacecraft between two fixed "jump points" connecting stellar systems. Connecting systems in a network like this results in a fixed "terrain" with choke
points that can be useful for constructing plots related to military campaigns. The Alderson points used by
Larry Niven
and
Jerry Pournelle
in
The Mote in God's Eye
and related novels are an example, although the mechanism does not seem to describe actual wormhole physics.
David Weber
has also used the device in the
and other books such as those based upon the
universe. Naturally occurring wormholes form the basis for interstellar travel in
Lois McMaster Bujold's
Vorkosigan Saga
. They are also used to create an Interstellar Commonwealth in
Peter F. Hamilton's
Commonwealth Saga
The concept of wormholes is used in The Wild Blue Yonder, a science fiction film by Werner Herzog.
Wormholes also play pivotal roles in science fiction where faster-than-light travel is possible though limited, allowing connections between regions that would be otherwise unreachable within
conventional timelines. Several examples appear in the Star Trek franchise, including the Bajoran wormhole in the Deep Space Nine series. In Star Trek: The Motion Picture in 1979 the USS Enterprise
(NCC-1701) was trapped in a wormhole caused by an imbalance in the calibration of the ship's warp engines when it first achieved warp speed.
In Carl Sagan's novel Contact and subsequent 1997 film starring Jodie Foster and Matthew McConaughey, Foster's character Ellie travels 26 light years through a series of wormholes to the star Vega.
The round trip, which to Ellie lasts 18 hours, passes by in a fraction of a second on Earth, making it appear she went nowhere. In her defense, Foster mentions an Einstein-Rosen bridge and tells how
she was able to travel faster than light and time. Analysis of the situation by Kip Thorne, on the request of Sagan, is quoted by Thorne as being his original impetus for analyzing the physics of wormholes.
Wormholes play major roles in the television series Farscape, where they are the cause of John Crichton's presence in the alien universe, and in the Stargate series, where stargates create a stable
artificial wormhole where matter is disintegrated, converted into energy, and is sent through to be reintegrated at the other side. In the science fiction series Sliders, a wormhole (or vortex, as it
is usually called in the show) is used to travel between parallel worlds, and one is seen at least once or twice in every episode. In the pilot episode it was referred to as an
"Einstein-Rosen-Podolsky bridge".
The central theme in the movie Donnie Darko revolves around Einstein-Rosen bridges.
Wormholes play a major role in the movie Jumper. The plot revolves around David, a kid who suddenly learns he can teleport himself from one place to another. In the movie, David can only teleport to
a place he has been. Once he jumps, the wormhole stays open for a few minutes but is not called a wormhole, David refers to it as a jump scar. NOTE: The other Jumper that David meets does call it a
"wormhole" once, when he is angered by David bringing the girl to his secret base in the desert.
In the Invader Zim episode "Room With A Moose", Zim tricks his classmates into a wormhole with two exits: a room with a moose (left) and Earth (right).
In Will Wright's video game Spore, Wormholes play an important part of the space stage. Once the player has obtained a 'wormhole key' they are able to take their ship into wormholes, found scattered
across the galaxy, which transport them through a semitransparent blue tunnel (similar to the tunnel seen occasionally in the series Stargate) to exit through another wormhole in some other part of
the galaxy. The Galactic Core is also some kind of wormhole which takes the player to what appears to be the center of the universe.
See also
External links
Research concludes there is no 'simple theory of everything' inside the enigmatic E8
The "exceptionally simple theory of everything," proposed by a surfing physicist in 2007, does not hold water, says Emory University mathematician Skip Garibaldi.
Garibaldi did the math to disprove the theory, which involves a mysterious structure known as E8. The resulting paper, co-authored by physicist Jacques Distler of the University of Texas, will appear
in an upcoming issue of Communications in Mathematical Physics.
"The beautiful thing about math and physics is that it is not subjective," says Garibaldi. "I wanted a peer-reviewed paper published, so that the scientific literature provides an accurate state of
affairs, to help clear up confusion among the lay public on this topic."
In November of 2007, physicist Garrett Lisi published an online paper entitled "An Exceptionally Simple Theory of Everything." Lisi spent much of his time surfing in Hawaii, adding a bit of color to
the story surrounding the theory. Although his paper was not peer-reviewed, and Lisi himself commented that his theory was still in development, the idea was widely reported in the media, under
attention-grabbing headlines like "Surfer dude stuns physicists with theory of everything."
Garibaldi was among the skeptics when the theory hit the news. So was Distler, a particle physicist, who wrote about problems he saw with Lisi's idea on his blog. Distler's posting inspired Garibaldi
to think about the issue more, eventually leading to their collaboration.
Lisi's paper centered on the elegant mathematical structure known as E8, which also appears in string theory. First identified in 1887, E8 has 248 dimensions and cannot be seen, or even drawn, in its
complete form.
The enigmatic E8 is the largest and most complicated of the five exceptional Lie groups, and contains four subgroups that are related to the four fundamental forces of nature: the electromagnetic
force; the strong force (which binds quarks); the weak force (which controls radioactive decay); and the gravitational force.
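The dimension count quoted above (248) can be checked with elementary combinatorics: E8 has 240 roots plus a rank of 8. The short Python sketch below is my own illustration (the article itself contains no code); it counts the two standard families of E8 roots:

```python
from itertools import combinations, product

# Family 1: vectors of the form +-e_i +- e_j with i < j in R^8:
# C(8, 2) index pairs times 4 sign choices = 112 roots.
family1 = len(list(combinations(range(8), 2))) * 4

# Family 2: vectors (+-1/2, ..., +-1/2) with an even number of
# minus signs among the 8 coordinates = 2^8 / 2 = 128 roots.
family2 = sum(1 for signs in product((1, -1), repeat=8)
              if signs.count(-1) % 2 == 0)

roots = family1 + family2   # 240 roots in total
rank = 8                    # dimension of the Cartan subalgebra
print(roots, roots + rank)  # 240 248
```

The 240 roots plus the 8-dimensional Cartan subalgebra give the 248 dimensions of the Lie algebra that the article refers to.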
In a nutshell, Lisi proposed that E8 is the unifying force for all the forces of the universe.
"That would be great if it were true, because I love E8," Garibaldi says. "But the problem is, it doesn't work as he described it in his paper."
As a leading expert on several of the exceptional Lie groups, Garibaldi felt an obligation to help set the record straight. "A lot of mystery surrounds the Lie groups, but the facts about them should
not be distorted," he says. "These are natural objects that are central to mathematics, so it's important to have a correct understanding of them."
Using linear algebra and proving theorems to translate the physics into math, Garibaldi and Distler not only showed that the formulas proposed in Lisi's paper do not work, they also demonstrated the
flaws in a whole class of related theories.
"You can think of E8 as a room, and the four subgroups related to the four fundamental forces of nature as furniture, let's say chairs," Garibaldi explains. "It's pretty easy to see that the room is
big enough that you can put all four of the chairs inside it. The problem with 'the theory of everything' is that the way it arranges the chairs in the room makes them non-functional."
He gives the example of one chair inverted and stacked atop another chair.
"I'm tired of answering questions about the 'theory of everything,'" Garibaldi says. "I'm glad that I will now be able to point to a peer-reviewed scientific article that clearly rebuts this theory.
I feel that there are so many great stories in science, there's no reason to puff up something that doesn't work."
More information: Paper: http://arxiv.org/abs/0905.2658
not rated yet Mar 26, 2010
Why not to consider some larger group, for example Monster group?
1 / 5 (1) Mar 26, 2010
We will just have to wait until this "proof" has been widely disseminated for a period of time to see whether or not it stands up.
3.5 / 5 (4) Mar 26, 2010
We will just have to wait until this "proof" has been widely disseminated for a period of time to see whether or not it stands up.
True. But given E8's origin my money is on Mr. Garibaldi.
2.7 / 5 (3) Mar 26, 2010
jonnyboy: Remember that it only takes one test that causes a theory to fail to make that theory wrong. However, it would take a large number of tests that do not disprove a theory to give us
confidence in the theory. In this case, a single test disproved the theory and everyone can now move on. Since even the originator of the original E8 theory agrees it fails this test, there is no
need to wait for more analyses. This one is dead and buried. Don't get me wrong, something similar but different might hold up to testing - but this one is (as the mythbusters say) BUSTED!
4.5 / 5 (2) Mar 26, 2010
.. that it only takes one test that causes a theory to fail to make that theory wrong...
This is very naive & schematic stance. Both relativity, both quantum mechanics violate mutualy or various observations, like cosmologic constant or vacuum energy density. Does it mean, these theories
are wrong? After all, E8 gauge group was introduced by string theory into physics - not by Garrett Lissi.
4 / 5 (4) Mar 26, 2010
.. that it only takes one test that causes a theory to fail to make that theory wrong...
This is very naive & schematic stance.
No, it's perfect scientific thinking based on the falsifiability of scientific theories.
Both relativity, both quantum mechanics violate mutualy
They are two different theories for two different aspects of physics. We don't have yet a grand unified theory.
or various observations, like cosmologic constant or vacuum energy density.
which are not observations, but conclusions.
Does it mean, these theories are wrong?
No. It means that these theories are no grand unified theories.
After all, E8 gauge group was introduced by string theory into physics - not by Garrett Lissi.
E8 is an object of mathematics and can not be questioned by physics. The new results falsify only the "simple theory of everything" and a "whole class of related theories" which are based on E8, but
not string theory.
5 / 5 (1) Mar 27, 2010
This proof is itself open for debate, it hasn't even been published yet.
"Peer review" does not equal fact!
Lisi sent in a testable theory, and if this paper holds up that's great! We can move on to other theories.
And "frajo" "PERFECT scientific thinking based on the falsifiability of scientific theories."
"PERFECT" "frajo" "PERFECT"!
I hate to see that word even used anywhere near physicists! And since when has "string theory" been "falsifiable"!
5 / 5 (1) Mar 27, 2010
"PERFECT scientific thinking based on the falsifiability of scientific theories."
"PERFECT" "frajo" "PERFECT"!
I hate to see that word even used anywhere near physicists!
You are right; as I'm no native speaker my English is not perfect yet. :)
And since when has "string theory" been "falsifiable"!
AFAIK not yet. But they are working on it.
1 / 5 (2) Mar 27, 2010
E8 is an object of mathematics and can not be questioned by physics.
Usage of E8 has a robust meaning in many physical theories, because quantum foam gets more dense under shaking like soap foam. Particle structure exchanging energy with others via bosons can be
considered as fractal mesh of closely packed hyperspheres, where hyperspheres representing particles of energy are sitting at the kissing points of hyperspheres, representing particles of matter.
Therefore the E8 Lie group answers the trivial question: "Which structure should have the tightest lattice of particles, exchanged/formed by another particles?". And such question has perfect meaning
even from classical physics point of view! Such question has a perfect meaning in theory, describing the most dense structure of inertial particles, which we can ever imagine, i.e. the interior of
black hole.
1 / 5 (2) Mar 27, 2010
The second interpretation of E8 gauge group is relevant for cosmic scale and so called ekpyrotic model of cosmology and so called shock wave cosmology as proposed by J. Smoller and B. Temple [PNAS,
This model considers, the current Universe generation is formed by interior of giant dense collapsar, which is behaving like black hole from outer perspective. This collapse was followed by phase
transition, which proceeded like crystallization from over-saturated solution by avalanche-like mechanism. During this, the approximately spherical zones of condensing false vacuum (branes) have
intersect mutually, and from these places the another vacuum condensation has started in sort of nucleation effect.
Now we can observe the residua of these zones as a dark matter streaks and the dodecahedron structure of these zones should correspond the E8 group geometry, as being observed from inside.
1 / 5 (2) Mar 27, 2010
As we can see, between various formal descriptions of Universe and their understanding at intuitive level exists certain barriers uncrossable by formal math, only by human imagination. Without it we
cannot say anything about E8 model relevance for physics, because we cannot imagine nothing particular behind it. In my opinion E8 gauge model isn't wrong at all - it's just a bit schematic. After
all, like every other formal theory. We aren't required to understand details, but we should understand their physical motivation, so I just making these theories accessible for people.
The schematic division of theories into good and wrong one doesn't work well here. At the moment, when such theory can predict something relevant about particle generations, it cannot be completely
wrong - but it shouldn't be overestimated as well. Such theory is simply just another tool for Universe understanding and its relevancy can be only measured by number of theorists, which will
extrapolate it further.
1 / 5 (1) Mar 28, 2010
Here another heroic and sad GUT attempt:
Unless you can convince mathematicians (or at least obtain a consensus) that there is no such concept as non-applicable (pure) mathematics, the rest of science's grand unifying efforts will remain
forever in limbo. And such an attempt, to convince the mathematical society, is being made with the research cited above.
Even if all humans were omni-linguistic - capable of all human language - past and present - rendering translation obsolete - interpretations remain for all other languages outside the human language
- the language of all other living entities and Nature itself.
Does information increase entropy?
not rated yet Mar 28, 2010
Here another heroic and sad GUT attempt
This article contains just a few abstract ideas and equations - with compare to Garrett's E8 model, which is fully fledged theory of many particle generations.
1 / 5 (1) Mar 29, 2010
So? What is your point? Perhaps one is incomplete - still in limbo. Garrett's Model is as close as you can get to DOA - pending peer review. After which, it will be.
5 / 5 (1) Mar 29, 2010
..Garrett's Model is as close as you can get to DOA..
String theory is waiting for its acceptation forty years. Now we're talking about theory, the acceptation/refusal of which we'll never live to see.
3 / 5 (2) Mar 29, 2010
With compare to string theory, E8 validity doesn't depend only to confirmations by some mathematicians. This theory is matching 226 known standard model particles to most of 248 symmetries of E8
group and Lisi is able to predict the existence and quantum numbers of 22 new particles, three of these were already predicted by another independent theory (Pati-Salam model).
In such a way, E8 is a heavy weight between existing theories, because every opponent of it must be able to explain at the same moment, if this theory is wrong, why it fits properties of two hundreds
particles so well. Because the very beautiful thing on math is, despite of what is saying or not, experiment always goes first.
Frankly, I wouldn't want to be at Diestler or Garibaldi place by now...
3 / 5 (2) Mar 29, 2010
E8 is a mathematical object. It's only 'validity' comes from it being free of anomalies - mathematical inconsistencies. Garrett's application of E8's math, to model and make predictions, is full of
anomalies - mathematical inconsistencies - regardless of how 'striking' its predictions and matching powers are with the current state of physics.
String theory went through the same evolution, it was full of anomalies - mathematical inconsistencies. Now string theory is mathematically consistent. A consistent mathematical model. String
theory's next step is to make it accessible to the scientific community through scientific method - a method that describes, along with other things, that falsifiability is essential to the
scientific method.
This is why string theory waits and waits and waits - and not because it is not mathematically consistent.
I would do ANYTHING to be in Diestlers' or Garibaldis' places right now. And Garrett should too.
There are no 'opponents' here. I see none.
not rated yet Mar 29, 2010
and not because it is not mathematically consistent.
String theory is indeed mathematically inconsistent, because it leads to an extremely large landscape of 10^500 possible solutions. For me it's even more substantial: string theory is inconsistent physically in at least two of its main postulates, the Lorentz symmetry of special relativity and the assumption of extra dimensions, because every extra dimension of 3D space-time would manifest by violation of Lorentz symmetry.
Regarding E8 theory, if some theory fits the properties of 226 particles more or less well at the same moment, it simply cannot be completely wrong - even though its formulation can suffer some inconsistencies. Problems in formulation can be corrected any time later. The REAL problems for E8 would occur if E8 violated some well-known experiments. I presume the fact that E8 is not a TOE is apparent to everybody, because our Universe simply doesn't appear like the root system of the E8 group - it's much more irregular.
1 / 5 (1) Mar 30, 2010
Surfer physicist responds to claims that his E8 theory of everything doesn't work
Quotations by
Quotations by Gottfried Leibniz
[about him:]
It is rare to find learned men who are clean, do not stink and have a sense of humour.
[attributed variously to Charles Louis de Secondat Montesquieu and to the Duchess of Orléans]
Nothing is more important than to see the sources of invention which are, in my opinion more interesting than the inventions themselves.
Quoted in J Koenderink, Solid Shape (Cambridge Mass. 1990).
Musica est exercitium arithmeticae occultum nescientis se numerare animi
The pleasure we obtain from music comes from counting, but counting unconsciously. Music is nothing but unconscious arithmetic.
From a letter to Goldbach, 27 April 1712, quoted in O Sacks, The Man who Mistook his Wife for a Hat
He who understands Archimedes and Apollonius will admire less the achievements of the foremost men of later times.
Quoted in G Simmons Calculus Gems (New York 1992).
In symbols one observes an advantage in discovery which is greatest when they express the exact nature of a thing briefly and, as it were, picture it; then indeed the labor of thought is wonderfully diminished.
Quoted in G Simmons Calculus Gems (New York 1992).
The art of discovering the causes of phenomena, or true hypothesis, is like the art of decyphering, in which an ingenious conjecture greatly shortens the road.
New Essays Concerning Human Understanding
Although the whole of this life were said to be nothing but a dream and the physical world nothing but a phantasm, I should call this dream or phantasm real enough, if, using reason well, we were
never deceived by it.
Quoted in J R Newman, The World of Mathematics (New York 1956).
The soul is the mirror of an indestructible universe.
The Monadology.
The imaginary number is a fine and wonderful resource of the human spirit, almost an amphibian between being and not being.
Quoted in A L Mackay, Dictionary of Scientific Quotations (London 1994)
Therefore, I have attacked [the problem of the catenary], which I had hitherto not attempted, and with my key [the differential calculus] happily opened its secret.
Acta eruditorum
Taking mathematics from the beginning of the world to the time of Newton, what he has done is much the better half.
Quoted in C B Boyer, A History of Mathematics (New York 1968)
It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used.
Miracles are not to be multiplied beyond necessity.
The dot was introduced as a symbol for multiplication by Leibniz. On July 29, 1698, he wrote in a letter to Johann Bernoulli: "I do not like X as a symbol for multiplication, as it is easily
confounded with x..."
Quoted in F Cajori, A History of Mathematical Notations (1928)
What is is what must be.
JOC/EFR April 2011
Physics Forums - View Single Post - Set of real numbers in a finite number of words
Yes, agreed. But mathematicians haven't stopped talking about uncountable sets, or started believing that the reals are secretly countable. When someone talks about the uncomputable reals or the
undefinable reals, their argument is not immediately dismissed with "Oh everyone knows that Skolem showed there is a countable model of the reals." NOBODY does that.
I am still puzzled by this use of downward Lowenheim-Skolem in this thread. The statement is made that most reals are unnameable. This is because there are only countably many names/algorithms but
uncountably many reals.
Then downward L-S is invoked to object that, well, actually, secretly, the reals are countable.
I have never heard of downward L-S used in this manner, to cut off discussion of the reals being uncountable. It is true that there is a nonstandard model of the reals that is countable, but that
model would not be recognizable to anyone as the usual real numbers.
The real numbers are uncountable. Are people in this thread now objecting to that well-known fact on the basis of downward Lowenheim-Skolem? That would be a great misunderstanding of L-S, in my opinion.
Is the next undergrad who shows up here to ask, "I heard that the reals are uncountable, I don't understand Cantor's proof," to be told, "Oh, don't worry about it, downward Lowenheim-Skolem shows
that the reals are countable."
That would not be mathematically correct.
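The uncountability fact being defended here rests on Cantor's diagonal argument, which can be illustrated finitely. The sketch below is my own, not from the thread: for any finite list of binary sequences, flipping the diagonal produces a sequence that differs from the n-th entry in position n, so it cannot appear anywhere in the list.

```python
def diagonal_escape(listed):
    """Given listed[n] = the n-th enumerated 0/1 sequence, return a
    sequence that differs from every listed[n] at position n."""
    return [1 - listed[n][n] for n in range(len(listed))]

attempt = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal_escape(attempt)
print(d)  # [1, 0, 1, 0] -- absent from the list by construction
assert all(d != row for row in attempt)
```

The same recipe applied to any purported enumeration of all infinite binary sequences (equivalently, of the reals) is exactly why no such enumeration exists, which is the point of Cantor's proof.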
yes, it is true that there is a "non-standard" model of the reals that is countable. presumably, this is meant to contrast with a "standard" model of uncountably many reals. the trouble is, the
"standard model" doesn't exist, at least not in the way people think it does.
what i mean is this: give me an example of a collection of things that satisfy ZFC that includes all sets. i mean, i'd like to know that SOME "version" or "example" of set theory is out there, it
would be reassuring. given the group axioms, for example, we can demonstrate that, oh, say the integers under addition satisfy the axioms. it is, to my understanding, an open question whether or not
there is a structure, ANY structure, that satisfies the ZFC axioms. at the moment, ZFC appears to describe the class of all sets (let's call this V), and since V is not a set, V is not a model for
ZFC (close, but no cigar).
that is to say: it's not logically indefensible to disallow calling any uncountable thing "a set". in this view of things, what cantor's diagonal argument shows is:
there exist collections of things larger than sets (which we certainly know is true anyway).
this is somewhat of a different question than the consistency of the ZFC axioms, although existence of a model would establish its consistency. since great pains have been taken to disallow
"inconsistent sets" (the last big push being restricting the axiom of comprehension, for which we previously had great hopes of defining a set purely in terms of its properties), the general
consensus is that ZFC is indeed "probably consistent" (it's been some time since anyone has found a "contradictory set").
downward L-S does not show "the real numbers" (with any of the standard constructions) are uncountable, rather, it shows that "the set of real numbers" might not be what we hope it is, in some
variant of set theory. indeed the Skolem paradox can be resolved by noting that any model of set theory can describe a larger model, which is what (i believe) current set theory DOES: it shows we
can't get by "with only sets", we need a background of things not regulated by the axioms (classes, and larger things).
in other words: there is a deep connection between cardinality, and "set-ness". what cardinals we are willing to accept, determines what things we are willing to call sets. and: what things we are
willing to call sets, affects a set's cardinality (cardinality isn't "fixed" under forcing).
1) only finite sets <--> countable universe (first notion of infinity as "beyond measure")
2) countable infinite sets <--> uncountable universe (infinity can now be "completed")
3) uncountably infinite sets <--> strongly inaccessible universe (an infinity beyond all prior infinities)
cantor took step (2) for us, and ever since, we have decided that that pretty much justifies step (3). note that even step (1) is not logically obvious, the axiom of infinity had to be added as an
axiom, because we desired the natural numbers to be a set, it does not follow from the other axioms. it is apparently known that (2) is logically consistent, and unknown if (3) is logically
consistent (but if (3) is assumed, then (2) follows).
geometrically, the situation seems to be thus: there seems to be a qualitative difference, between "continua" and "discrete approximations of them". the analog and digital worlds are different,
although at some levels of resolution, nobody cares.
going back to the real numbers: some mathematicians feel uncomfortable with uncountable sets, including the set (as usually defined) of the real numbers. there are some good philosophical (not
mathematical) reasons for feeling this way: most uncountable set elements are "forever beyond our reach", so why use them if we don't need them? perhaps the best answer is that having a wider context
(a bigger theory), often makes working in our smaller theory more satisfying: treating "dx" as a hyperreal number, makes proofs about differentiation more intuitive (where we only care about what
happens to the "real part").
knowing that sup(A) is a real number, means we can prove things about sets of real numbers in ways that would be difficult, if we had no such assurance. the "background" logic of our set theory
(which gets more complicated with uncountable sets) makes the "foreground" logic of the real numbers, easier to swallow. | {"url":"http://www.physicsforums.com/showpost.php?p=3765852&postcount=30","timestamp":"2014-04-19T17:33:37Z","content_type":null,"content_length":"13781","record_id":"<urn:uuid:f1a07a99-3050-4182-8608-943fc83a8b87>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00225-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question on Laws of Exponents
February 25th 2011, 06:06 AM #1
Feb 2011
Question on Laws of Exponents
Ok I'm at the end of my lesson and I have to prove if an expression is true or false.
eg: $(x+y)^{1/2} = x^{1/2}+y^{1/2}$
I know if it were $(xy)^{1/2}$ this would be equal to $x^{1/2}y^{1/2}$ but the addition sign is throwing me off.
I've looked at my rules and can't find anything like $(x+y)^2$ laws just the power of the product.
I hope I'm making sense :P
It's not true. You can check using the Binomial Series...
1. Re-write the expression as:
$\sqrt{x+y}=\sqrt{x} + \sqrt{y}$
2. Square both sides: the left side becomes $x+y$, while the right side becomes $x + 2\sqrt{xy} + y$. The two agree only when $2\sqrt{xy}=0$, so the equation is only true if x = 0 or y = 0
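A quick numeric spot-check makes the failure concrete (the values below are chosen arbitrarily):

```python
import math

x, y = 9.0, 16.0
lhs = math.sqrt(x + y)             # sqrt(25) = 5.0
rhs = math.sqrt(x) + math.sqrt(y)  # 3.0 + 4.0 = 7.0
print(lhs, rhs)                    # 5.0 7.0 -- not equal
assert lhs != rhs

# Equality does hold in the degenerate case y = 0:
assert math.sqrt(x + 0) == math.sqrt(x) + math.sqrt(0)
```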
wonderful thank you guys!
Feb 2011 | {"url":"http://mathhelpforum.com/algebra/172579-question-laws-exponents.html","timestamp":"2014-04-19T13:57:29Z","content_type":null,"content_length":"40837","record_id":"<urn:uuid:71aa3e78-ffc4-43dc-9c18-ee87221b2e27>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
And the winner is ...
Sep 2001
A new award to honour outstanding work in mathematics, comparable to the Nobel prizes in physics and other areas, has been set up by the Norwegian government.
There is already one top international prize in mathematics, the Fields Medal, often referred to as the "Nobel prize of mathematics". However, though its prestige in the mathematical community is
similar to that of the Nobel prizes, its public exposure has been far less. Partly this may be because Fields Medals are awarded only every four years. Also the medal is only available to
mathematicians under the age of 40. When Andrew Wiles announced his celebrated proof of Fermat's Last Theorem, he was just too old to be awarded one of the 1998 medals.
The Abel prize, which will be awarded annually, was announced by the Norwegian prime minister Jens Stoltenberg in August. An initial fund of NOK 200 million (US$22 million) will be established by the
Norwegian government in 2002, and the first prize, worth about US$500,000, will be awarded in 2003.
A prize in honour of Niels Henrik Abel (1802 - 1829) was first suggested in 1902 by King Oscar II of Sweden and Norway, but was never established, as the union between the two countries was
dissolved. Abel was a Norwegian mathematician who made a significant impact on mathematics despite his short and tragic life. At the age of 16 he extended Euler's binomial theorem, and at 19 he
proved that there is no general algebraic solution for quintic equations, a problem that had baffled mathematicians for centuries.
Mr Stoltenberg said that the Abel Prize was an expression of the importance of mathematics, and was intended to present the field with a prize on the highest level. The prize has the support of the
International and European Mathematical Societies and recipients will be chosen by an independent committee of international mathematicians. The Abel Prize will be established in 2002, the 200th
anniversary of Abel's birth. | {"url":"http://plus.maths.org/content/and-winner","timestamp":"2014-04-17T13:04:56Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:b056828a-f827-4438-be2f-1d0cf760ef05>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00260-ip-10-147-4-33.ec2.internal.warc.gz"} |
Yonkers Algebra Tutor
Find a Yonkers Algebra Tutor
...Depending on the level, about half of the questions are synonyms, and the other half are sentence completion. It is important in the sentence completion portion to find the important word in
the sentence. For example: "ALTHOUGH Daniel is an excellent math student, his recent test scores in Calc...
15 Subjects: including algebra 1, algebra 2, chemistry, calculus
...Sometimes it is hard to convey certain concepts or ideas in a way that is easy for the student to understand, and in a way that helps them to retain what they learned. But before a student can
begin to appreciate math, they first need to know why it is necessary. In my teaching experience I always made sure to connect the abstract concepts we were learning to a more practical
7 Subjects: including algebra 1, precalculus, algebra 2, geometry
...I am a member of Pi Alpha Theta, the national history honors society as well as a member of Pi Sigma Alpha, the honors society for government. I have been involved in tutoring since high school
and have been doing it professionally for 5 years. My specialties include test preparation (SAT/ACT), history, government, English, writing, and grammar, amongst others.
34 Subjects: including algebra 1, algebra 2, English, chemistry
...My teaching is based on assessing the potential of the tutee first to know his or her level in relevant subjects after which, I prepare a tutoring plan on individual basis in order to proceed
with him or her from a low difficulty level to a higher one. This would include an in depth approach to...
23 Subjects: including algebra 1, algebra 2, chemistry, precalculus
I have a bachelor's degree in economics and I currently tutor at Cerullo Learning Assistance Center (CLAC) at Bergen Community College. I have ten years of experience tutoring. I have experience
as a tutor in middle school, high school, and college level basic math, prealgebra, algebra 1-2, precalculus and trigonometry.
6 Subjects: including algebra 1, algebra 2, precalculus, trigonometry | {"url":"http://www.purplemath.com/Yonkers_Algebra_tutors.php","timestamp":"2014-04-21T15:21:29Z","content_type":null,"content_length":"24045","record_id":"<urn:uuid:d72e6118-310f-4e1e-be08-d092bec8bd7d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math in the Media
Visualizing the curvature tensor
A special section in Science for August 3, 2012 covered black holes, and included the "perspective" piece by Kip Thorne, "Classical Black Holes: The Nonlinear Dynamics of Curved Spacetime." Thorne
begins by describing a way that he, Robert Owen and collaborators (Physical Review Letters 106 151101) have devised "to visualize the Riemann curvature tensor, which embodies the curvature of
spacetime. Just as the electromagnetic field can be split into an electric field and a magnetic field, so the Riemann tensor can be split into a tidal field ${\cal E}$ that stretches and squeezes
anything it encounters, and a frame-drag ${\cal B}$ that twists adjacent inertial frames with respect to each other. ... Just as electric and magnetic fields can be visualized using field lines, ...
so ${\cal E}$ and ${\cal B}$ can each be described by three orthogonal sets of field lines, called (tidal) tendex lines for ${\cal E}$ and (frame-drag) vortex lines for ${\cal B}$."
The tendex lines for a stationary (non-rotating) spherical object "which could be the Earth, Moon, Sun or a non-spinning black hole." A test object falling towards the object is stretched in the red
directions and compressed in the blue.
Along with its own pattern of tendex lines, a spinning mass has a frame-dragging effect on space-time. This image shows the vortex lines around a black hole spinning at 95% of its maximum possible
rate (black arrow: spin axis). For an object falling along a red line, gyroscopes at top and bottom would precess counter-clockwise with respect to each other. Along a blue line, the relative
precession would be clockwise. Images adapted from Science 337 536.
Thorne uses this imagery to describe the results of simulations of black-hole collisions and close encounters. He remarks that the resulting patterns of gravitational waves should be observable by a
new generation of detectors coming on line in 2017.
Math in the plant physiology curriculum
An article in the online journal Bioscience Education addresses the perception that "Biology has often been considered the ideal career for students inclined to science but mathematically challenged"
and the difficulty that Biology students face with test questions involving a formula, a graph, or a table. The authors (A. Llamas, F. Vila and A. Sanz) based their study on ten years of instruction
in Plant Physiology at the University of Valencia, in Spain. They report that "the percentage of correct answers for questions requiring mathematical skills is 16% lower than for the corresponding
non-mathematical questions." In particular, questions involving math skills are almost twice as likely to be left unanswered. The authors' interpretation is that students lack what they call
"self-efficacy:" a familiarity with this kind of problem and some confidence that it can be mastered. They recommend strengthening the curriculum, "less in the sense of adding more subjects of
mathematics, but rather in increasing their practical use in the various experimental disciplines that use them as a tool." Llamas, Vila and Sanz' work was picked up in the "Editor's Choice" section
of Science, August 3, 2012, with the heading: "Math $+$ Science $=$ Success."
Is Algebra Necessary?
That was the title of an "opinion" piece published in the Sunday Review of the New York Times, July 29, 2012. The author, Andrew Hacker, is emeritus professor of political science at Queens College,
CUNY. The lead illustration shows desperate hands emerging from a sea of formulas, as the text begins: "A typical American school day finds some six million high school students and two million
college freshmen struggling with algebra. In both high school and college, all too many students are expected to fail. Why do we subject American students to this ordeal?" Hacker examines and
dismisses some of the standard reasons; he suspects that "that institutions and occupations often install prerequisites just to look rigorous ... Mathematics is used as a hoop, a badge, a totem to
impress outsiders and elevate a profession's status." Hacker distinguishes proficiency in algebra, which he exemplifies as the ability to prove $(x^2+y^2)^2 = (x^2-y^2)^2 + (2xy)^2$, from
quantitative literacy, which "clearly is useful in weighing all manner of public policies." "What is needed is not textbook formulas but greater understanding of where various numbers come from, and
what they actually convey." Along that line, he proposes that "mathematics teachers at every level could create exciting courses in ... 'citizen statistics'." These "would familiarize students with
the kinds of numbers that describe and delineate our personal and public lives." He remarks, "More and more colleges are requiring courses in 'quantitative reasoning.' In fact, we should be starting
that in kindergarten." At the same time, "mathematics departments can also create courses in the history and philosophy of their discipline, as well as its applications in early cultures. Why not
mathematics in art and music--even poetry--along with its role in assorted sciences? The aim would be to treat mathematics as a liberal art, making it as accessible and welcoming as sculpture or ballet."
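Hacker's sample identity is, incidentally, the classical generator of Pythagorean triples; a quick, purely illustrative Python check:

```python
# Verify (x^2 + y^2)^2 == (x^2 - y^2)^2 + (2xy)^2 over a grid of integers.
for x in range(1, 20):
    for y in range(1, 20):
        assert (x*x + y*y)**2 == (x*x - y*y)**2 + (2*x*y)**2

# With x > y > 0, the three squared quantities form a Pythagorean triple:
x, y = 2, 1
print(x*x - y*y, 2*x*y, x*x + y*y)   # 3 4 5
```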
New strategies for the Prisoner's Dilemma
According to an August 16, 2012 posting in the Physics Arxiv Blog of Technology Review ("The Emerging Revolution in Game Theory"), "The world of game theory is currently on fire." William Press
(Computer Science, UTA) and Freeman Dyson (IAS) have discovered "a previously unknown strategy for the game of prisoner's dilemma which guarantees one player a better outcome than the other. That's a
monumental surprise. Theorists have studied Prisoner's Dilemma for decades, using it as a model for the emergence of co-operation in nature. This work has had a profound impact on disciplines such as
economics, evolutionary biology and, of course, game theory itself. The new result will have impact in all these areas and more." The blog author sketches the rules of the game: "Alice and Bob have
committed a crime and are arrested. The police offer each one a deal--snitch and you go free while your friend does 6 months in jail. If both Alice and Bob snitch, they both get 3 months in jail. If
they both remain silent, they both get one month in jail for a lesser offence." "In a single game, the best strategy is to snitch because it guarantees that you don't get the maximum jail term.
However, the game gets more interesting when played in repeated rounds ... ." Until now, received wisdom ("based on decades of computer simulations and a certain blind faith in the symmetry of the
solution") has been that a tit-for-tat approach, in which each player copies the opponent's behavior in the previous round, was the best strategy; both opponents spend the same time in jail. But it
turns out that tit-for-tat is only one member of the family of "zero determinant strategies" discovered by Press and Dyson, that can make the other player spend "far more time in jail (or far less if
you're feeling generous)." See the Press and Dyson article, "Iterated Prisoner's Dilemma contains strategies that dominate any evolutionary opponent."
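The mechanics of the iterated game are easy to simulate. The sketch below is illustrative only: the payoffs follow the jail terms quoted above, and the strategies shown are classic tit-for-tat and always-defect, not the Press-Dyson zero-determinant strategies.

```python
# Months in jail for (my_move, their_move); 'S' = snitch, 'Q' = stay quiet.
JAIL = {('Q', 'Q'): (1, 1), ('Q', 'S'): (6, 0),
        ('S', 'Q'): (0, 6), ('S', 'S'): (3, 3)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return 'Q' if not their_hist else their_hist[-1]

def always_snitch(my_hist, their_hist):
    return 'S'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, jail_a, jail_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = JAIL[(a, b)]
        jail_a += pa
        jail_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return jail_a, jail_b            # lower totals are better

print(play(tit_for_tat, tit_for_tat))    # (10, 10): mutual cooperation
print(play(tit_for_tat, always_snitch))  # (33, 27): exploited only in round 1
```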
Tony Phillips
Stony Brook University
tony at math.sunysb.edu | {"url":"http://cust-serv@ams.org/news/math-in-the-media/09-2012-media","timestamp":"2014-04-24T09:19:47Z","content_type":null,"content_length":"18658","record_id":"<urn:uuid:bc84ab09-34d2-40c0-8f01-c7f7e29823e0>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contemporary Mathematics
1992; 167 pp; softcover
Volume: 140
ISBN-10: 0-8218-5153-5
ISBN-13: 978-0-8218-5153-1
List Price: US$48
Member Price: US$38.40
Order Code: CONM/140
This volume contains the refereed proceedings of the Special Session on Geometric Analysis held at the AMS meeting in Philadelphia in October 1991. The term "geometric analysis" is being used with
increasing frequency in the mathematical community, but its meaning is not entirely fixed. The papers in this collection should help to better define the notion of geometric analysis by illustrating
emerging trends in the subject. The topics covered range over a broad spectrum: integral geometry, Radon transforms, geometric inequalities, microlocal analysis, harmonic analysis, analysis on Lie
groups and symmetric spaces, and more. Containing articles varying from the expository to the technical, this book presents the latest results in a broad range of analytic and geometric topics.
Researchers and graduate students interested in the many fields related to geometric analysis.
"Another triumph of intellectual honesty."
-- The Bulletin of Mathematics Books and Computer Software
• C. A. Berenstein and E. C. Tarabusi -- On the Radon and Riesz transforms in real hyperbolic spaces
• J. Boman -- Holmgren's uniqueness theorem and support theorems for real analytic Radon transforms
• G. D. Chakerian and E. Lutwak -- On the Petty-Schneider theorem
• L. Ehrenpreis -- Nonlinear Fourier transform
• H. Goldschmidt -- On the infinitesimal rigidity of the complex quadrics
• A. Greenleaf and G. Uhlmann -- Microlocal analysis of the two-plane transform
• E. L. Grinberg -- Aspects of flat Radon transforms
• P. Kuchment -- On positivity problems for the Radon transform and some related transforms
• A. Meziani -- Cohomology relative to the germ of an exact form
• V. Oliker -- Generalized convex bodies and generalized envelopes
• E. T. Quinto -- A note on flat Radon transforms
• R. S. Strichartz -- Self-similarity on nilpotent Lie groups
• J. Zhou -- A kinematic formula and analogues of Hadwiger's theorem in space | {"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-140","timestamp":"2014-04-17T22:26:57Z","content_type":null,"content_length":"15850","record_id":"<urn:uuid:c0991bc7-0570-421a-a52a-2999b7b3de70>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: question on MLE d0 method for multilogit model with invariant r
Re: st: question on MLE d0 method for multilogit model with invariant regressors
From Ian Breunig <ianbreunig@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: question on MLE d0 method for multilogit model with invariant regressors
Date Sun, 11 Apr 2010 21:58:46 -0600
Your first model is a Multinomial logit. Your second model reminds me
of a Conditional logit. It may not be quite the same but it's in the
ballpark of what you're trying to do. Check out "McFadden's
conditional logit" as described on pages 43-45 in Maddala, G.S. (1983:
reprinted paperback 1999). "Limited Dependent and Qualitative
Variables in Econometrics". Cambridge University Press: NY, NY.
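For what it's worth, both probability formulas in the question are easy to prototype numerically before writing the -ml- evaluator. The Python sketch below is illustrative only (the function names and values are mine, not Stata syntax):

```python
import math

def mnl_probs(x, betas):
    """Multinomial logit: one shared regressor vector x and one
    coefficient vector per alternative (beta_1, beta_2, ...)."""
    scores = [sum(xi * bi for xi, bi in zip(x, b)) for b in betas]
    m = max(scores)                       # subtract max for numerical stability
    expd = [math.exp(s - m) for s in scores]
    tot = sum(expd)
    return [e / tot for e in expd]

def clogit_probs(xs, beta):
    """McFadden conditional logit: alternative-specific regressors
    x_1, x_2, ... and a single shared coefficient vector beta."""
    scores = [sum(xi * bi for xi, bi in zip(x, beta)) for x in xs]
    m = max(scores)
    expd = [math.exp(s - m) for s in scores]
    tot = sum(expd)
    return [e / tot for e in expd]

p = mnl_probs([1.0, 2.0], [[0.0, 0.0], [0.5, -0.2], [1.0, 0.1]])
print(p)   # three probabilities summing to 1
```

The fully general case, where both the regressors and the coefficients vary by alternative, combines the two and is sometimes called an alternative-specific (mixed) multinomial logit.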
On Sun, Apr 11, 2010 at 7:26 PM, Hey Sky <heyskywalker@yahoo.com> wrote:
> Hey, All
> I try to do a mle d0 estimation for multilogit model with invariant regressors and have searched internet but no luck. I wish I can get some help here. any suggestion is appreciated.
> here is the question:
> given a period, the probability of a person choose choice k is:
> pr(choice=k)=exp(x*beta_k)/(exp(x*beta_1) + exp(x*beta_2)+...)
> that is, the regresors are invariant, x, but the parameters are variant when choose diffecrent choice, beta_1, beta_2, etc.
> a little complicated case is when making the regressors and parameters both changes along with choices, i.e.
> pr(choice=k)=exp(x_k*beta_k)/(exp(x_1*beta_1) + exp(x_2*beta_2)+...)
> how to do it with the Stata MLE method? thanks for any clue you provide.
> Nan
> from Montreal
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2010-04/msg00596.html","timestamp":"2014-04-20T11:17:12Z","content_type":null,"content_length":"9603","record_id":"<urn:uuid:3aba9a1d-0bb6-43e2-8b5b-ae3a593cac57>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
The mathematics of Fibonacci's sequence
Nov 2001
The Fibonacci sequence is defined by the property that each number in the sequence is the sum of the previous two numbers; to get started, the first two numbers must be specified, and these are
usually taken to be 1 and 1. In mathematical notation, if the sequence is written $F_1, F_2, F_3, \ldots$, then
$$F_{n+1} = F_n + F_{n-1}, \qquad n \geq 2,$$
with starting conditions
$$F_1 = F_2 = 1.$$
The ratio of consecutive terms $F_n/F_{n+1}$ tends, as $n$ grows, to a limit $\tau$ satisfying the quadratic equation $\tau^2 + \tau - 1 = 0$. This quadratic equation has two roots; the one we need here is obviously between zero and one; it is
$$\tau = \frac{\sqrt{5}-1}{2} \approx 0.618\ldots,$$
the number known as the golden ratio (the reciprocal of the more commonly quoted $\phi = (1+\sqrt{5})/2$).
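The ratio of consecutive Fibonacci numbers $F_n/F_{n+1}$ converges quickly to this root between zero and one, $(\sqrt{5}-1)/2 \approx 0.618$; a short illustrative Python check:

```python
import math

def fib(n):
    """First n Fibonacci numbers, with F_1 = F_2 = 1."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

f = fib(30)
ratio = f[-2] / f[-1]          # F_{n-1} / F_n
tau = (math.sqrt(5) - 1) / 2   # positive root of x^2 + x - 1 = 0
print(ratio, tau)
assert abs(ratio - tau) < 1e-10
```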
The number $\tau$ has the continued fraction expansion:
$$\tau = \cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cdots}}},$$
in which every partial quotient equals 1.
In the theory of chaotic dynamical systems, $\tau$ is distinguished as, in a precise sense, the "most irrational" number: because every entry of its continued fraction is 1, it is the number least well approximated by rationals.
The spiral curve shown in the poster is a logarithmic spiral, a curve whose equation in polar coordinates is $r = a e^{b\theta}$, for constants $a$ and $b$.
The underlying reason for this may be found in many texts; see for example Conway, J. H. and Guy, R. K., The Book of Numbers, Springer-Verlag (1996), chapter 4.
About the author
Keith Moffatt is a fellow of the Royal Society and Director of the Isaac Newton Institute for Mathematical Sciences, a national and international visitor research institute at the University of Cambridge. His own research interests lie in the field of fluid dynamics.
Lamirada, CA Algebra 2 Tutor
Find a Lamirada, CA Algebra 2 Tutor
...I have taught this subject in the last 13+ years more times than I can remember, and I know which areas students struggle the most. I will teach Algebra 1 completely so this will become a
foundation so the student can succeed in Algebra 2, which builds upon the previous Algebra 1 knowledge. Als...
38 Subjects: including algebra 2, reading, English, ESL/ESOL
...I am a UCLA graduate with a BS/BA degree in Physiological Science and Study of Religion, and I will be starting physical therapy school in February 2014. I took Organic Chemistry courses at
UCLA, and I received an A. Organic Chemistry is generally a very tough class for many students, but I hav...
20 Subjects: including algebra 2, chemistry, calculus, writing
...Extensive experience on 1-1 tutoring as well. More than 10 years of experience in teaching math and calculus from middle school-level, high school-level and college-level students. Earned
Ph.D. degree in physical Chemistry.
10 Subjects: including algebra 2, chemistry, calculus, statistics
I have 15 years' experience in teaching students from elementary school to high school in various subjects including algebra, geometry, English, science, and social studies. I have directed
school plays and been involved in other after-school activities. I have worked with students of various skill-sets and levels including special needs students primarily those with ADD or ADHD.
18 Subjects: including algebra 2, English, reading, writing
...I look forward to working with you to achieve success in your learning! I have tutored several students, from middle school through college in algebra and related math courses. I received
perfect scores on the math sections of the PSAT, SAT, GRE, and GMAT exams.
38 Subjects: including algebra 2, reading, chemistry, statistics
Lamirada, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Lamirada_CA_algebra_2_tutors.php","timestamp":"2014-04-16T07:54:48Z","content_type":null,"content_length":"24150","record_id":"<urn:uuid:6edb0597-a148-4399-8ef2-4ae86ebaddb4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00394-ip-10-147-4-33.ec2.internal.warc.gz"} |
El Sobrante Algebra 2 Tutor
Find an El Sobrante Algebra 2 Tutor
...My PhD thesis is in numerical models for high performance computers. I also worked for several years for a company modeling hazards based on data driven computational models. I used Fortran for
many years in high performance computing for climate research models and non linear finite element systems.
41 Subjects: including algebra 2, calculus, geometry, statistics
...Another approach is to use clever acronyms or phrases as mnemonics tools. Of course, I do not impose one approach or the other on the student. I adapt to each student’s needs.
24 Subjects: including algebra 2, chemistry, calculus, physics
...I think this is a wonderful combination: I can relate to students, understand their frustrations and fears, and at the same time I deeply understand math and take great joy in communicating
this to reluctant and struggling students, as well as to able students who want to maximize their achieveme...
20 Subjects: including algebra 2, calculus, geometry, biology
...I never give up until we create the success the student needs. So many standardized tests are required these days – CAHSEE, PSAT, SAT I and SAT II subject tests, ACT and the many Advanced
Placement tests [AP Literature, AP History, AP Chemistry, AP Math, AP Physics, AP Languages ... and more]. S...
44 Subjects: including algebra 2, English, chemistry, reading
...Students are often waiting for the correct explanation that will get them over the hump and I can give that explanation. Just as there are hundreds of ways to prove the Pythagorean Theorem,
there's bound to be an explanation for you. Math isn't an abstract subject for geniuses, it's a practical tool set that just requires a lot of practice and hard work.
19 Subjects: including algebra 2, physics, calculus, writing | {"url":"http://www.purplemath.com/el_sobrante_algebra_2_tutors.php","timestamp":"2014-04-16T10:30:28Z","content_type":null,"content_length":"23990","record_id":"<urn:uuid:eab608d4-bf81-45cd-be42-c75960360ef5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kalman filter
The Kalman filter, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies,
and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone. More formally, the Kalman filter operates recursively on streams of noisy
input data to produce a statistically optimal estimate of the underlying system state. The filter is named for Rudolf (Rudy) E. Kálmán, one of the primary developers of its theory.
The Kalman filter has numerous applications in technology. A common application is for guidance, navigation and control of vehicles, particularly aircraft and spacecraft. Furthermore, the Kalman
filter is a widely applied concept in time series analysis used in fields such as signal processing and econometrics.
The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next
measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with
higher certainty. Because of the algorithm's recursive nature, it can run in real time using only the present input measurements and the previously calculated state and its uncertainty matrix; no
additional past information is required.
It is a common misconception that the Kalman filter assumes that all error terms and measurements are Gaussian distributed. Kalman's original paper derived the filter using orthogonal projection
theory to show that the covariance is minimized, and this result does not require the assumption that the errors are Gaussian.^[1] He then showed that the filter yields the exact conditional
probability estimate in the special case that all errors are Gaussian-distributed.
Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The underlying model is a
Bayesian model similar to a hidden Markov model but where the state space of the latent variables is continuous and where all latent and observed variables have Gaussian distributions.
Naming and historical development
The filter is named after Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele^[2]^[3] and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the University of
Southern California contributed to the theory, leading to it often being called the Kalman–Bucy filter. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman
filter. It was during a visit by Kalman to the NASA Ames Research Center that he saw the applicability of his ideas to the problem of trajectory estimation for the Apollo program, leading to its
incorporation in the Apollo navigation computer. This Kalman filter was first described and partially developed in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961).
Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as
the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. It is also used in the guidance and navigation systems of the NASA Space Shuttle and the attitude control and
navigation systems of the International Space Station.
This digital filter is sometimes called the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, non-linear filter developed somewhat earlier by the Soviet mathematician
Ruslan L. Stratonovich.^[4]^[5]^[6]^[7] In fact, some of the special case linear filter's equations appeared in these papers by Stratonovich that were published before summer 1960, when Kalman met
with Stratonovich during a conference in Moscow.
Overview of the calculation
The Kalman filter uses a system's dynamics model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate
of the system's varying quantities (its state) that is better than the estimate obtained by using any one measurement alone. As such, it is a common sensor fusion and data fusion algorithm.
All measurements and calculations based on models are estimates to some degree. Noisy sensor data, approximations in the equations that describe how a system changes, and external factors that are
not accounted for introduce some uncertainty about the inferred values for a system's state. The Kalman filter averages a prediction of a system's state with a new measurement using a weighted
average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated
uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated
uncertainty than either alone. This process is repeated every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman
filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.
Because the certainty of the measurements is often difficult to measure precisely, it is common to discuss the filter's behavior in terms of gain. The Kalman gain is a function of the relative
certainty of the measurements and current state estimate, and can be "tuned" to achieve particular performance. With a high gain, the filter places more weight on the measurements, and thus follows
them more closely. With a low gain, the filter follows the model predictions more closely, smoothing out noise but decreasing the responsiveness. At the extremes, a gain of one causes the filter to
ignore the state estimate entirely, while a gain of zero causes the measurements to be ignored.
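The role of the gain can be illustrated with the scalar update rule; the following sketch (illustrative names, not any library's API) shows the two extremes and a blend:

```python
# Minimal 1-D illustration of how the Kalman gain blends a prediction
# with a measurement. Names and numbers are illustrative only.

def update(x_pred, z, gain):
    """Weighted average: gain = 0 keeps the prediction, gain = 1 takes z."""
    return x_pred + gain * (z - x_pred)

x_pred, z = 10.0, 14.0                 # predicted state and new measurement
x_trusting = update(x_pred, z, 1.0)    # gain 1: estimate equals the measurement
x_ignoring = update(x_pred, z, 0.0)    # gain 0: measurement is ignored
x_blended  = update(x_pred, z, 0.4)    # intermediate gain: weighted average
```

With an intermediate gain the estimate lands between the prediction and the measurement, exactly as described above.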
When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices to handle the multiple dimensions involved in a single set of
calculations. This allows for representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.
Example application[edit]
As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few
meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though always remaining within a few meters of the real position. In addition, since the truck is expected to follow
the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a
technique known as dead reckoning. Typically, dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.
In this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the
physical laws of motion (the dynamic or "state transition" model) plus any changes produced by the accelerator pedal and steering wheel. Not only will a new position estimate be calculated, but a new
covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning estimate at high speeds
but very certain about the position when moving slowly. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of
uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, if the dead reckoning
estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back towards the real position but not disturb it to the point of becoming rapidly changing
and noisy.
Technical description and context[edit]
The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and
econometric applications from radar and computer vision to estimation of structural macroeconomic models,^[8]^[9] and is an important topic in control theory and control systems engineering. Together
with the linear-quadratic regulator (LQR), the Kalman filter solves the linear-quadratic-Gaussian control problem (LQG). The Kalman filter, the linear-quadratic regulator and the
linear-quadratic-Gaussian controller are solutions to what arguably are the most fundamental problems in control theory.
In most applications, the internal state is much larger (more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman
filter can estimate the entire internal state.
In Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function and the Kalman filter is a special case of combining linear belief functions on
a join-tree or Markov tree.
A wide variety of Kalman filters have now been developed, from Kalman's original formulation, now called the "simple" Kalman filter, to the Kalman–Bucy filter, Schmidt's "extended" filter, the
information filter, and a variety of "square-root" filters that were developed by Bierman, Thornton and many others. Perhaps the most commonly used type of very simple Kalman filter is the
phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly
any other electronic communications equipment.
Underlying dynamic system model[edit]
The Kalman filters are based on linear dynamic systems discretized in the time domain. They are modelled on a Markov chain built on linear operators perturbed by Gaussian noise. The state of the
system is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some
information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the observed outputs from the true ("hidden") state. The Kalman filter may
be regarded as analogous to the hidden Markov model, with the key difference that the hidden state variables take values in a continuous space (as opposed to a discrete state space as in the hidden
Markov model). Additionally, the hidden Markov model can represent an arbitrary distribution for the next value of the state variables, in contrast to the Gaussian noise model that is used for the
Kalman filter. There is a strong duality between the equations of the Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999)^[10] and Hamilton (1994), Chapter 13.^[11]
In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the framework of the Kalman
filter. This means specifying the following matrices: F[k], the state-transition model; H[k], the observation model; Q[k], the covariance of the process noise; R[k], the covariance of the observation
noise; and sometimes B[k], the control-input model, for each time-step, k, as described below.
The Kalman filter model assumes the true state at time k is evolved from the state at (k−1) according to
$\textbf{x}_{k} = \textbf{F}_{k} \textbf{x}_{k-1} + \textbf{B}_{k} \textbf{u}_{k-1} + \textbf{w}_{k}$
• F[k] is the state transition model which is applied to the previous state x[k−1];
• B[k] is the control-input model which is applied to the control vector u[k−1];
• w[k] is the process noise which is assumed to be drawn from a zero mean multivariate normal distribution with covariance Q[k].
$\textbf{w}_{k} \sim N(0, \textbf{Q}_k)$
At time k an observation (or measurement) z[k] of the true state x[k] is made according to
$\textbf{z}_{k} = \textbf{H}_{k} \textbf{x}_{k} + \textbf{v}_{k}$
where H[k] is the observation model which maps the true state space into the observed space and v[k] is the observation noise which is assumed to be zero mean Gaussian white noise with covariance R[k]:
$\textbf{v}_{k} \sim N(0, \textbf{R}_k)$
The initial state and the noise vectors at each step {x[0], w[1], ..., w[k], v[1], ..., v[k]} are all assumed to be mutually independent.
Many real dynamical systems do not exactly fit this model. In fact, unmodelled dynamics can seriously degrade the filter performance, even when the filter was supposed to work with unknown stochastic signals as inputs. The reason is that the effect of unmodelled dynamics depends on the input and can therefore drive the estimation algorithm to instability (divergence). Purely independent white noise signals, on the other hand, will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodelled dynamics is a difficult one and is treated in control theory under the framework of robust control.
The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state.
In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation $\hat{\textbf{x}}_{n\mid m}$ represents the estimate of $\textbf{x}$
at time n given observations up to and including time m.
The state of the filter is represented by two variables:
• $\hat{\textbf{x}}_{k\mid k}$, the a posteriori state estimate at time k given observations up to and including time k;
• $\textbf{P}_{k\mid k}$, the a posteriori error covariance matrix (a measure of the estimated accuracy of the state estimate).
The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the
previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the
state at the current timestep, it does not include observation information from the current timestep. In the update phase, the current a priori prediction is combined with current observation
information to refine the state estimate. This improved estimate is termed the a posteriori state estimate.
Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an
observation is unavailable for some reason, the update may be skipped and multiple prediction steps performed. Likewise, if multiple independent observations are available at the same time, multiple
update steps may be performed (typically with different observation matrices H[k]).
Predicted (a priori) state estimate $\hat{\textbf{x}}_{k\mid k-1} = \textbf{F}_{k-1}\hat{\textbf{x}}_{k-1\mid k-1} + \textbf{B}_{k-1} \textbf{u}_{k-1}$
Predicted (a priori) estimate covariance $\textbf{P}_{k\mid k-1} = \textbf{F}_{k-1} \textbf{P}_{k-1\mid k-1} \textbf{F}_{k-1}^{\text{T}} + \textbf{Q}_{k}$
Innovation or measurement residual $\tilde{\textbf{y}}_k = \textbf{z}_k - \textbf{H}_k\hat{\textbf{x}}_{k\mid k-1}$
Innovation (or residual) covariance $\textbf{S}_k = \textbf{H}_k \textbf{P}_{k\mid k-1} \textbf{H}_k^\text{T} + \textbf{R}_k$
Optimal Kalman gain $\textbf{K}_k = \textbf{P}_{k\mid k-1}\textbf{H}_k^\text{T}\textbf{S}_k^{-1}$
Updated (a posteriori) state estimate $\hat{\textbf{x}}_{k\mid k} = \hat{\textbf{x}}_{k\mid k-1} + \textbf{K}_k\tilde{\textbf{y}}_k$
Updated (a posteriori) estimate covariance $\textbf{P}_{k|k} = (I - \textbf{K}_k \textbf{H}_k) \textbf{P}_{k|k-1}$
The formula for the updated estimate and covariance above is only valid for the optimal Kalman gain. Use of other gain values requires the more complex formula found in the derivations section.
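The predict and update equations above can be transcribed almost verbatim with NumPy; the following is a minimal sketch (function and variable names are ours, not a standard API), followed by one step of a trivial 1-D constant-position model:

```python
import numpy as np

def predict(x, P, F, Q, B=None, u=None):
    """Predicted (a priori) state estimate and covariance."""
    x = F @ x if B is None else F @ x + B @ u
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, H, R):
    """A posteriori state estimate and covariance (optimal-gain form)."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # optimal Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# One cycle of a 1-D constant-position model with arbitrary numbers.
x0, P0 = np.array([0.0]), np.array([[1.0]])
F, Q = np.array([[1.0]]), np.array([[0.1]])
H, R = np.array([[1.0]]), np.array([[1.0]])
xp, Pp = predict(x0, P0, F, Q)
xu, Pu = update(xp, Pp, np.array([2.0]), H, R)
```

Note how the prediction inflates the covariance (added process noise) and the update shrinks it again, with the new estimate landing between the prediction and the measurement.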
If the model is accurate, and the values for $\hat{\textbf{x}}_{0\mid 0}$ and $\textbf{P}_{0\mid 0}$ accurately reflect the distribution of the initial state values, then the following invariants are
preserved: (all estimates have a mean error of zero)
• $\textrm{E}[\textbf{x}_k - \hat{\textbf{x}}_{k\mid k}] = \textrm{E}[\textbf{x}_k - \hat{\textbf{x}}_{k\mid k-1}] = 0$
• $\textrm{E}[\tilde{\textbf{y}}_k] = 0$
where $\textrm{E}[\xi]$ is the expected value of $\xi$, and covariance matrices accurately reflect the covariance of estimates
• $\textbf{P}_{k\mid k} = \textrm{cov}(\textbf{x}_k - \hat{\textbf{x}}_{k\mid k})$
• $\textbf{P}_{k\mid k-1} = \textrm{cov}(\textbf{x}_k - \hat{\textbf{x}}_{k\mid k-1})$
• $\textbf{S}_{k} = \textrm{cov}(\tilde{\textbf{y}}_k)$
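These invariants can be checked empirically: over many independent runs of a simple scalar model, the sample mean of the innovations should be close to zero. A quick Monte Carlo sketch (all model numbers here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
f, q, h, r = 1.0, 0.2, 1.0, 1.0       # scalar F, Q, H, R
runs, steps = 5000, 4

innovations = []
for _ in range(runs):
    x_true = rng.normal(0.0, 1.0)      # x_0 ~ N(0, 1)
    x_est, P = 0.0, 1.0                # initial estimate matches that distribution
    for _ in range(steps):
        x_true = f * x_true + rng.normal(0, np.sqrt(q))
        z = h * x_true + rng.normal(0, np.sqrt(r))
        # predict
        x_est, P = f * x_est, f * P * f + q
        # update
        y = z - h * x_est              # innovation (measurement residual)
        innovations.append(y)
        S = h * P * h + r
        K = P * h / S
        x_est, P = x_est + K * y, (1 - K * h) * P

mean_innovation = np.mean(innovations)
```

With 20,000 innovation samples the sample mean should be within a few hundredths of zero, consistent with $\textrm{E}[\tilde{\textbf{y}}_k] = 0$.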
Estimation of the noise covariances Q[k] and R[k][edit]
Practical implementation of the Kalman filter is often difficult because of the difficulty of obtaining good estimates of the noise covariance matrices Q[k] and R[k]. Extensive research has been done to estimate these covariances from data. One of the more promising approaches is the autocovariance least-squares (ALS) technique, which uses the autocovariances of routine operating data to estimate the covariances.^[12]^[13] GNU Octave code that calculates the noise covariance matrices using the ALS technique is available online under the GNU General Public License.
Optimality and performance[edit]
The Kalman filter is theoretically optimal when (a) the model matches the real system perfectly, (b) the entering noise is white and (c) the covariances of the noise are exactly known. Several methods for estimating the noise covariances have been proposed during past decades, including the ALS technique mentioned in the section above. After the covariances are identified, it is useful to evaluate the performance of the filter, i.e. whether it is possible to improve the state estimation quality. It is well known that, if the Kalman filter works optimally, the innovation sequence (the output prediction error) is white noise, so the whiteness of the innovations reflects the state estimation quality and evaluating the filter amounts to testing that whiteness. Several different methods can be used for this purpose; three optimality tests with numerical examples are described in ^[15].
Example application, technical[edit]
Consider a truck on perfectly frictionless, infinitely long straight rails. Initially the truck is stationary at position 0, but it is buffeted this way and that by random acceleration. We measure
the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of where the truck is and what its velocity is. We show here how we derive the model from
which we create our Kalman filter.
Since F, H, R and Q are constant, their time indices are dropped.
The position and velocity of the truck are described by the linear state space
$\textbf{x}_{k} = \begin{bmatrix} x \\ \dot{x} \end{bmatrix}$
where $\dot{x}$ is the velocity, that is, the derivative of position with respect to time.
We assume that between the (k−1) and k timestep the truck undergoes a constant acceleration of a[k] that is normally distributed, with mean 0 and standard deviation σ[a]. From Newton's laws of motion
we conclude that
$\textbf{x}_{k} = \textbf{F} \textbf{x}_{k-1} + \textbf{G}a_{k}$
(note that there is no $\textbf{B}u$ term since we have no known control inputs) where
$\textbf{F} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}$
$\textbf{G} = \begin{bmatrix} \frac{\Delta t^{2}}{2} \\ \Delta t \end{bmatrix}$
so that
$\textbf{x}_{k} = \textbf{F} \textbf{x}_{k-1} + \textbf{w}_{k}$
where $\textbf{w}_{k} \sim N(0, \textbf{Q})$ and
$\textbf{Q}=\textbf{G}\textbf{G}^{\text{T}}\sigma_a^2 =\begin{bmatrix} \frac{\Delta t^4}{4} & \frac{\Delta t^3}{2} \\ \frac{\Delta t^3}{2} & \Delta t^2 \end{bmatrix}\sigma_a^2.$
At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise v[k] is also normally distributed, with mean 0 and standard deviation σ[z].
$\textbf{z}_{k} = \textbf{H x}_{k} + \textbf{v}_{k}$
$\textbf{H} = \begin{bmatrix} 1 & 0 \end{bmatrix}$
$\textbf{R} = \textrm{E}[\textbf{v}_k \textbf{v}_k^{\text{T}}] = \begin{bmatrix} \sigma_z^2 \end{bmatrix}$
We know the initial starting state of the truck with perfect precision, so we initialize
$\hat{\textbf{x}}_{0\mid 0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$
and to tell the filter that we know the exact position, we give it a zero covariance matrix:
$\textbf{P}_{0\mid 0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$
If the initial position and velocity are not known perfectly the covariance matrix should be initialized with a suitably large number, say L, on its diagonal.
$\textbf{P}_{0\mid 0} = \begin{bmatrix} L & 0 \\ 0 & L \end{bmatrix}$
The filter will then prefer the information from the first measurements over the information already in the model.
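Putting the pieces together, the example can be simulated end to end; this sketch picks arbitrary values for Δt, σ_a and σ_z and compares the filtered position error with the raw measurement error:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, sigma_a, sigma_z, steps = 1.0, 0.2, 3.0, 200   # arbitrary illustrative values

F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
Q = G @ G.T * sigma_a**2
H = np.array([[1.0, 0.0]])
R = np.array([[sigma_z**2]])

x_true = np.zeros(2)
x_est, P = np.zeros(2), np.eye(2) * 1000.0         # large initial covariance L

errors_raw, errors_kf = [], []
for _ in range(steps):
    # simulate the truck: random acceleration, then a noisy position measurement
    a = rng.normal(0.0, sigma_a)
    x_true = F @ x_true + (G * a).ravel()
    z = H @ x_true + rng.normal(0.0, sigma_z, 1)
    # predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P
    errors_raw.append(abs(z[0] - x_true[0]))
    errors_kf.append(abs(x_est[0] - x_true[0]))
```

The average filtered position error comes out well below the average raw measurement error, and the initially huge covariance collapses once measurements arrive, as the text describes.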
Deriving the a posteriori estimate covariance matrix[edit]
Starting with our invariant on the error covariance P[k | k] as above
$\textbf{P}_{k\mid k} = \textrm{cov}(\textbf{x}_{k} - \hat{\textbf{x}}_{k\mid k})$
substitute in the definition of $\hat{\textbf{x}}_{k\mid k}$
$\textbf{P}_{k\mid k} = \textrm{cov}(\textbf{x}_{k} - (\hat{\textbf{x}}_{k\mid k-1} + \textbf{K}_k\tilde{\textbf{y}}_{k}))$
and substitute $\tilde{\textbf{y}}_k$
$\textbf{P}_{k\mid k} = \textrm{cov}(\textbf{x}_{k} - (\hat{\textbf{x}}_{k\mid k-1} + \textbf{K}_k(\textbf{z}_k - \textbf{H}_k\hat{\textbf{x}}_{k\mid k-1})))$
and $\textbf{z}_{k}$
$\textbf{P}_{k\mid k} = \textrm{cov}(\textbf{x}_{k} - (\hat{\textbf{x}}_{k\mid k-1} + \textbf{K}_k(\textbf{H}_k\textbf{x}_k + \textbf{v}_k - \textbf{H}_k\hat{\textbf{x}}_{k\mid k-1})))$
and by collecting the error vectors we get
$\textbf{P}_{k|k} = \textrm{cov}((I - \textbf{K}_k \textbf{H}_{k})(\textbf{x}_k - \hat{\textbf{x}}_{k\mid k-1}) - \textbf{K}_k \textbf{v}_k )$
Since the measurement error v[k] is uncorrelated with the other terms, this becomes
$\textbf{P}_{k|k} = \textrm{cov}((I - \textbf{K}_k \textbf{H}_{k})(\textbf{x}_k - \hat{\textbf{x}}_{k\mid k-1})) + \textrm{cov}(\textbf{K}_k \textbf{v}_k )$
by the properties of vector covariance this becomes
$\textbf{P}_{k\mid k} = (I - \textbf{K}_k \textbf{H}_{k})\textrm{cov}(\textbf{x}_k - \hat{\textbf{x}}_{k\mid k-1})(I - \textbf{K}_k \textbf{H}_{k})^{\text{T}} + \textbf{K}_k\textrm{cov}(\textbf{v}_k)\textbf{K}_k^{\text{T}}$
which, using our invariant on P[k | k−1] and the definition of R[k] becomes
$\textbf{P}_{k\mid k} = (I - \textbf{K}_k \textbf{H}_{k}) \textbf{P}_{k\mid k-1} (I - \textbf{K}_k \textbf{H}_{k})^\text{T} + \textbf{K}_k \textbf{R}_k \textbf{K}_k^\text{T}$
This formula (sometimes known as the "Joseph form" of the covariance update equation) is valid for any value of K[k]. It turns out that if K[k] is the optimal Kalman gain, this can be simplified
further as shown below.
Kalman gain derivation[edit]
The Kalman filter is a minimum mean-square error estimator. The error in the a posteriori state estimation is
$\textbf{x}_{k} - \hat{\textbf{x}}_{k\mid k}$
We seek to minimize the expected value of the square of the magnitude of this vector, $\textrm{E}[\|\textbf{x}_{k} - \hat{\textbf{x}}_{k|k}\|^2]$. This is equivalent to minimizing the trace of the a
posteriori estimate covariance matrix $\textbf{P}_{k|k}$. By expanding out the terms in the equation above and collecting, we get:
\begin{align} \textbf{P}_{k\mid k} & = \textbf{P}_{k\mid k-1} - \textbf{K}_k \textbf{H}_k \textbf{P}_{k\mid k-1} - \textbf{P}_{k\mid k-1} \textbf{H}_k^\text{T} \textbf{K}_k^\text{T} + \textbf{K}_k (\textbf{H}_k \textbf{P}_{k\mid k-1} \textbf{H}_k^\text{T} + \textbf{R}_k) \textbf{K}_k^\text{T} \\[6pt] & = \textbf{P}_{k\mid k-1} - \textbf{K}_k \textbf{H}_k \textbf{P}_{k\mid k-1} - \textbf{P}_{k\mid k-1} \textbf{H}_k^\text{T} \textbf{K}_k^\text{T} + \textbf{K}_k \textbf{S}_k \textbf{K}_k^\text{T} \end{align}
The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved we find that
$\frac{\partial \; \mathrm{tr}(\textbf{P}_{k\mid k})}{\partial \;\textbf{K}_k} = -2 (\textbf{H}_k \textbf{P}_{k\mid k-1})^\text{T} + 2 \textbf{K}_k \textbf{S}_k = 0.$
Solving this for K[k] yields the Kalman gain:
$\textbf{K}_k \textbf{S}_k = (\textbf{H}_k \textbf{P}_{k\mid k-1})^\text{T} = \textbf{P}_{k\mid k-1} \textbf{H}_k^\text{T}$
$\textbf{K}_{k} = \textbf{P}_{k\mid k-1} \textbf{H}_k^\text{T} \textbf{S}_k^{-1}$
This gain, which is known as the optimal Kalman gain, is the one that yields MMSE estimates when used.
Simplification of the a posteriori error covariance formula[edit]
The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the
right by S[k]K[k]^T, it follows that
$\textbf{K}_k \textbf{S}_k \textbf{K}_k^T = \textbf{P}_{k\mid k-1} \textbf{H}_k^T \textbf{K}_k^T$
Referring back to our expanded formula for the a posteriori error covariance,
$\textbf{P}_{k\mid k} = \textbf{P}_{k\mid k-1} - \textbf{K}_k \textbf{H}_k \textbf{P}_{k\mid k-1} - \textbf{P}_{k\mid k-1} \textbf{H}_k^T \textbf{K}_k^T + \textbf{K}_k \textbf{S}_k \textbf{K}_k^T$
we find the last two terms cancel out, giving
$\textbf{P}_{k\mid k} = \textbf{P}_{k\mid k-1} - \textbf{K}_k \textbf{H}_k \textbf{P}_{k\mid k-1} = (I - \textbf{K}_{k} \textbf{H}_{k}) \textbf{P}_{k\mid k-1}.$
This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low causing problems with numerical
stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above must be used.
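The distinction can be checked numerically: with the optimal gain the Joseph form and the short form coincide, while for an arbitrary (suboptimal) gain the short form is not even guaranteed to stay symmetric. A small sketch with made-up numbers:

```python
import numpy as np

P = np.array([[2.0, 0.5], [0.5, 1.0]])   # a priori covariance P_{k|k-1}
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
S = H @ P @ H.T + R
K_opt = P @ H.T @ np.linalg.inv(S)       # optimal Kalman gain

def joseph(P, K):
    """Joseph form: valid for any gain K."""
    A = np.eye(2) - K @ H
    return A @ P @ A.T + K @ R @ K.T

def simple(P, K):
    """Short form (I - K H) P: valid only for the optimal gain."""
    return (np.eye(2) - K @ H) @ P

# With the optimal gain, the two forms coincide.
P_j, P_s = joseph(P, K_opt), simple(P, K_opt)

# With an arbitrary suboptimal gain, only the Joseph form stays
# symmetric (and positive definite); the short form does not.
K_bad = np.array([[0.8], [0.0]])
P_j_bad, P_s_bad = joseph(P, K_bad), simple(P, K_bad)
```

This is why the Joseph form is preferred whenever a non-optimal gain is used or numerical symmetry must be preserved.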
Sensitivity analysis[edit]
The Kalman filtering equations provide an estimate of the state $\hat{\textbf{x}}_{k\mid k}$ and its error covariance $\textbf{P}_{k\mid k}$ recursively. The estimate and its quality depend on the
system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter.^[16] In the absence of reliable
statistics or the true values of noise covariance matrices $\textbf{Q}_{k}$ and $\textbf{R}_{k}$, the expression
$\textbf{P}_{k\mid k} = (\textbf{I} - \textbf{K}_k\textbf{H}_k)\textbf{P}_{k\mid k-1}(\textbf{I} - \textbf{K}_k\textbf{H}_k)^T + \textbf{K}_k\textbf{R}_k\textbf{K}_k^T$
no longer provides the actual error covariance. In other words, $\textbf{P}_{k\mid k} \neq E[(\textbf{x}_k - \hat{\textbf{x}}_{k\mid k})(\textbf{x}_k - \hat{\textbf{x}}_{k\mid k})^T]$. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual noise covariance matrices.^[citation needed] This sensitivity analysis describes
the behavior of the estimation error covariance when the noise covariances as well as the system matrices $\textbf{F}_{k}$ and $\textbf{H}_{k}$ that are fed as inputs to the filter are incorrect.
Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs to the estimator.
This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by $\textbf{Q}^{a}_k$ and $\textbf{R}^{a}_k$
respectively, whereas the design values used in the estimator are $\textbf{Q}_k$ and $\textbf{R}_k$ respectively. The actual error covariance is denoted by $\textbf{P}_{k\mid k}^a$, and $\textbf{P}_{k\mid k}$ as computed by the Kalman filter is referred to as the Riccati variable. When $\textbf{Q}_k \equiv \textbf{Q}^{a}_k$ and $\textbf{R}_k \equiv \textbf{R}^{a}_k$, this means that $\textbf{P}_{k\mid k} = \textbf{P}_{k\mid k}^a$. While computing the actual error covariance using $\textbf{P}_{k\mid k}^a = E[(\textbf{x}_k - \hat{\textbf{x}}_{k\mid k})(\textbf{x}_k - \hat{\textbf{x}}_{k\mid k})^T]$, substituting for $\widehat{\textbf{x}}_{k\mid k}$ and using the fact that $E[\textbf{w}_k\textbf{w}_k^T] = \textbf{Q}_{k}^a$ and $E[\textbf{v}_k\textbf{v}_k^T] = \textbf{R}_{k}^a$ results in the following recursive equations for $\textbf{P}_{k\mid k}^a$:
$\textbf{P}_{k\mid k-1}^a = \textbf{F}_k\textbf{P}_{k-1\mid k-1}^a\textbf{F}_k^T + \textbf{Q}_k^a$
$\textbf{P}_{k\mid k}^a = (\textbf{I} - \textbf{K}_k\textbf{H}_k)\textbf{P}_{k\mid k-1}^a(\textbf{I} - \textbf{K}_k\textbf{H}_k)^T + \textbf{K}_k\textbf{R}_k^a\textbf{K}_k^T$
While computing $\textbf{P}_{k\mid k}$, by design the filter implicitly assumes that $E[\textbf{w}_k\textbf{w}_k^T] = \textbf{Q}_{k}$ and $E[\textbf{v}_k\textbf{v}_k^T] = \textbf{R}_{k}$. Note that the recursive expressions for $\textbf{P}_{k\mid k}^a$ and $\textbf{P}_{k\mid k}$ are identical except for the presence of $\textbf{Q}_{k}^a$ and $\textbf{R}_{k}^a$ in place of the design values $\textbf{Q}_{k}$ and $\textbf{R}_{k}$ respectively.
Square root form[edit]
One problem with the Kalman filter is its numerical stability. If the process noise covariance Q[k] is small, round-off error often causes a small positive eigenvalue to be computed as a negative
number. This renders the numerical representation of the state covariance matrix P indefinite, while its true form is positive-definite.
Positive definite matrices have the property that they have a triangular matrix square root P = S·S^T. This can be computed efficiently using the Cholesky factorization algorithm, but more
importantly, if the covariance is kept in this form, it can never have a negative diagonal or become asymmetric. An equivalent form, which avoids many of the square root operations required by the
matrix square root yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·U^T, where U is a unit triangular matrix (with unit diagonal), and D is a diagonal matrix.
Between the two, the U-D factorization uses the same amount of storage and somewhat less computation, and is the most commonly used square root form. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,^[17]^:69 while on 21st-century computers they are only slightly more expensive.)
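The factorization itself is compact; the following sketch computes P = U·D·U^T for a small symmetric positive-definite matrix (a full square-root filter also propagates U and D through the predict and update steps, which is what the Bierman–Thornton algorithms do efficiently):

```python
import numpy as np

def ud_factorize(P):
    """Return unit upper-triangular U and diagonal d with P = U @ diag(d) @ U.T.

    Works on the upper triangle of a symmetric positive-definite P,
    peeling off one rank-1 term d[j] * u_j u_j^T per column, from the
    last column to the first.
    """
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    P = P.copy()
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / d[j]
        # remove column j's rank-1 contribution from the leading block
        for i in range(j):
            for k in range(i + 1):
                P[k, i] -= U[k, j] * d[j] * U[i, j]
    return U, d

P = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])
U, d = ud_factorize(P)
```

Because D is carried explicitly, the decomposition involves no square roots at all, which is precisely the attraction noted above.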
Efficient algorithms for the Kalman prediction and update steps in the square root form were developed by G. J. Bierman and C. L. Thornton.^[17]^[18]
The L·D·L^T decomposition of the innovation covariance matrix S[k] is the basis for another type of numerically efficient and robust square root filter.^[19] The algorithm starts with the LU
decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·L^T structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a
symmetric nonsingular matrix.^[20] Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned. The pivoting algorithm must retain any portion
of the innovation covariance matrix directly corresponding to observed state-variables H[k]·x[k|k-1] that are associated with auxiliary observations in y[k]. The L·D·L^T square-root filter requires
orthogonalization of the observation vector.^[18]^[19] This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).^[21]
Relationship to recursive Bayesian estimation[edit]
The Kalman filter can be considered one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming
measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming
measurements and a mathematical process model.^[22]
In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM).
Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.
$p(\textbf{x}_k\mid \textbf{x}_0,\dots,\textbf{x}_{k-1}) = p(\textbf{x}_k\mid \textbf{x}_{k-1})$
Similarly the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state.
$p(\textbf{z}_k\mid\textbf{x}_0,\dots,\textbf{x}_{k}) = p(\textbf{z}_k\mid \textbf{x}_{k} )$
Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as:
$p(\textbf{x}_0,\dots,\textbf{x}_k\mid \textbf{z}_1,\dots,\textbf{z}_k) = p(\textbf{x}_0)\prod_{i=1}^k p(\textbf{z}_i\mid \textbf{x}_i)p(\textbf{x}_i\mid \textbf{x}_{i-1})$
However, when the Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current
timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set.
This leads to the predict and update steps of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of
the probability distribution associated with the transition from the (k − 1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible $x_{k-1}$.
$p(\textbf{x}_k\mid \textbf{Z}_{k-1}) = \int p(\textbf{x}_k \mid \textbf{x}_{k-1}) p(\textbf{x}_{k-1} \mid \textbf{Z}_{k-1} ) \, d\textbf{x}_{k-1}$
The measurement set up to time t is
$\textbf{Z}_{t} = \left \{ \textbf{z}_{1},\dots,\textbf{z}_{t} \right \}$
The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state.
$p(\textbf{x}_k\mid \textbf{Z}_{k}) = \frac{p(\textbf{z}_k\mid \textbf{x}_k) p(\textbf{x}_k\mid \textbf{Z}_{k-1})}{p(\textbf{z}_k\mid \textbf{Z}_{k-1})}$
The denominator
$p(\textbf{z}_k\mid \textbf{Z}_{k-1}) = \int p(\textbf{z}_k\mid \textbf{x}_k) p(\textbf{x}_k\mid \textbf{Z}_{k-1}) d\textbf{x}_k$
is a normalization term.
The remaining probability density functions are
$p(\textbf{x}_k \mid \textbf{x}_{k-1}) = \mathcal{N}(\textbf{F}_k\textbf{x}_{k-1}, \textbf{Q}_k)$
$p(\textbf{z}_k\mid \textbf{x}_k) = \mathcal{N}(\textbf{H}_{k}\textbf{x}_k, \textbf{R}_k)$
$p(\textbf{x}_{k-1}\mid \textbf{Z}_{k-1}) = \mathcal{N}(\hat{\textbf{x}}_{k-1},\textbf{P}_{k-1} )$
Note that the PDF at the previous timestep is inductively assumed to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the
measurements, therefore the PDF for $\mathbf{x}_k$ given the measurements $\mathbf{Z}_k$ is the Kalman filter estimate.
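For scalar Gaussians, the integrals above have closed forms, and evaluating them reproduces the Kalman predict and update formulas exactly. A one-dimensional check with arbitrary numbers:

```python
# 1-D model: x_k = f*x_{k-1} + w, w ~ N(0, q);  z_k = h*x_k + v, v ~ N(0, r)
f, q, h, r = 1.2, 0.5, 1.0, 2.0       # arbitrary illustrative values
mu, var = 0.3, 1.0                    # p(x_{k-1} | Z_{k-1}) = N(mu, var)
z = 1.7

# Bayesian predict: the Chapman-Kolmogorov integral of two Gaussians
# is the convolution N(f*mu, f^2*var) * N(0, q).
mu_pred, var_pred = f * mu, f**2 * var + q

# Bayesian update: product of Gaussian likelihood and prior, renormalized
# (precisions add; means are precision-weighted).
var_post = 1.0 / (1.0 / var_pred + h**2 / r)
mu_post = var_post * (mu_pred / var_pred + h * z / r)

# Kalman-filter form of the same update
S = h * var_pred * h + r
K = var_pred * h / S
mu_kf = mu_pred + K * (z - h * mu_pred)
var_kf = (1 - K * h) * var_pred
```

The two routes give identical posteriors, which is the sense in which the Kalman filter is the closed-form solution of recursive Bayesian estimation for linear-Gaussian models.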
Information filter[edit]
In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively. These are defined as:
$\textbf{Y}_{k\mid k} = \textbf{P}_{k\mid k}^{-1}$
$\hat{\textbf{y}}_{k\mid k} = \textbf{P}_{k\mid k}^{-1}\hat{\textbf{x}}_{k\mid k}$
Similarly the predicted covariance and state have equivalent information forms, defined as:
$\textbf{Y}_{k\mid k-1} = \textbf{P}_{k\mid k-1}^{-1}$
$\hat{\textbf{y}}_{k\mid k-1} = \textbf{P}_{k\mid k-1}^{-1}\hat{\textbf{x}}_{k\mid k-1}$
as have the measurement covariance and measurement vector, which are defined as:
$\textbf{I}_{k} = \textbf{H}_{k}^{\text{T}} \textbf{R}_{k}^{-1} \textbf{H}_{k}$
$\textbf{i}_{k} = \textbf{H}_{k}^{\text{T}} \textbf{R}_{k}^{-1} \textbf{z}_{k}$
The information update now becomes a trivial sum.
$\textbf{Y}_{k\mid k} = \textbf{Y}_{k\mid k-1} + \textbf{I}_{k}$
$\hat{\textbf{y}}_{k\mid k} = \hat{\textbf{y}}_{k\mid k-1} + \textbf{i}_{k}$
The main advantage of the information filter is that N measurements can be filtered at each timestep simply by summing their information matrices and vectors.
$\textbf{Y}_{k\mid k} = \textbf{Y}_{k\mid k-1} + \sum_{j=1}^N \textbf{I}_{k,j}$
$\hat{\textbf{y}}_{k\mid k} = \hat{\textbf{y}}_{k\mid k-1} + \sum_{j=1}^N \textbf{i}_{k,j}$
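A sketch of this multi-measurement fusion step in NumPy; the `(H, R, z)` tuples and the two-state example below are assumptions chosen purely for illustration.

```python
import numpy as np

def info_update(Y_pred, y_pred, measurements):
    """Fuse N measurements by summing their information matrices and vectors."""
    Y, y = Y_pred.copy(), y_pred.copy()
    for H, R, z in measurements:       # each measurement: (H_k, R_k, z_k)
        Rinv = np.linalg.inv(R)
        Y += H.T @ Rinv @ H            # I_k = H^T R^-1 H
        y += H.T @ Rinv @ z            # i_k = H^T R^-1 z
    return Y, y

# two sensors, both observing the first component of a 2-state system
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
Y, y = info_update(np.eye(2), np.zeros(2),
                   [(H, R, np.array([1.0])), (H, R, np.array([1.2]))])
```

Note how the unobserved second state picks up no information: its row and column of `Y` are untouched by the sum.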
To predict with the information filter, the information matrix and vector can be converted back to their state-space equivalents, or alternatively the information-space prediction can be used.
$\textbf{M}_{k} = [\textbf{F}_{k}^{-1}]^{\text{T}} \textbf{Y}_{k-1\mid k-1} \textbf{F}_{k}^{-1}$
$\textbf{C}_{k} = \textbf{M}_{k} [\textbf{M}_{k}+\textbf{Q}_{k}^{-1}]^{-1}$
$\textbf{L}_{k} = I - \textbf{C}_{k}$
$\textbf{Y}_{k\mid k-1} = \textbf{L}_{k} \textbf{M}_{k} \textbf{L}_{k}^{\text{T}} + \textbf{C}_{k} \textbf{Q}_{k}^{-1} \textbf{C}_{k}^{\text{T}}$
$\hat{\textbf{y}}_{k\mid k-1} = \textbf{L}_{k} [\textbf{F}_{k}^{-1}]^{\text{T}}\hat{\textbf{y}}_{k-1\mid k-1}$
Note that if F and Q are time invariant these values can be cached. Note also that F and Q need to be invertible.
Fixed-lag smoother[edit]
The optimal fixed-lag smoother provides the optimal estimate of $\hat{\textbf{x}}_{k-N \mid k}$ for a given fixed-lag $N$ using the measurements from $\textbf{z}_{1}$ to $\textbf{z}_{k}$. It can be
derived using the previous theory via an augmented state, and the main equation of the filter is the following:
$\begin{bmatrix} \hat{\textbf{x}}_{t\mid t} \\ \hat{\textbf{x}}_{t-1\mid t} \\ \vdots \\ \hat{\textbf{x}}_{t-N+1\mid t} \end{bmatrix} = \begin{bmatrix} I \\ 0 \\ \vdots \\ 0 \end{bmatrix} \hat{\textbf{x}}_{t\mid t-1} + \begin{bmatrix} 0 & \ldots & 0 \\ I & 0 & \vdots \\ \vdots & \ddots & \vdots \\ 0 & \ldots & I \end{bmatrix} \begin{bmatrix} \hat{\textbf{x}}_{t-1\mid t-1} \\ \hat{\textbf{x}}_{t-2\mid t-1} \\ \vdots \\ \hat{\textbf{x}}_{t-N+1\mid t-1} \end{bmatrix} + \begin{bmatrix} K^{(0)} \\ K^{(1)} \\ \vdots \\ K^{(N-1)} \end{bmatrix} y_{t\mid t-1}$
where:
• $\hat{\textbf{x}}_{t\mid t-1}$ is estimated via a standard Kalman filter;
• $y_{t\mid t-1} = z(t) - H\hat{\textbf{x}}_{t\mid t-1}$ is the innovation produced considering the estimate of the standard Kalman filter;
• the various $\hat{\textbf{x}}_{t-i\mid t}$ with $i = 0,\ldots,N$ are new variables, i.e. they do not appear in the standard Kalman filter;
• the gains are computed via the following scheme:
$K^{(i)} = P^{(i)} H^{T} \left[ H P H^{T} + R \right]^{-1}$
$P^{(i)} = P \left[ \left[ F - K H \right]^{T} \right]^{i}$
where $P$ and $K$ are the prediction error covariance and the gains of the standard Kalman filter (i.e., $P_{t\mid t-1}$).
If the estimation error covariance is defined so that
$P_{i} := E \left[ \left( \textbf{x}_{t-i} - \hat{\textbf{x}}_{t-i\mid t} \right)^{*} \left( \textbf{x}_{t-i} - \hat{\textbf{x}}_{t-i\mid t} \right) \mid z_{1} \ldots z_{t} \right],$
then we have that the improvement on the estimation of $\textbf{x}_{t-i}$ is given by:
$P-P_{i} = \sum_{j = 0}^{i} \left[ P^{(j)} H^{T} \left[ H P H^{T} + R \right]^{-1} H \left( P^{(j)} \right)^{T} \right]$
Fixed-interval smoothers[edit]
The optimal fixed-interval smoother provides the optimal estimate of $\hat{\textbf{x}}_{k \mid n}$ ($k < n$) using the measurements from a fixed interval $\textbf{z}_{1}$ to $\textbf{z}_{n}$. This is
also called "Kalman Smoothing". There are several smoothing algorithms in common use.
The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing.^[23]
The forward pass is the same as the regular Kalman filter algorithm. These filtered state estimates $\hat{\textbf{x}}_{k\mid k}$ and covariances $\textbf{P}_{k\mid k}$ are saved for use in the
backwards pass.
In the backwards pass, we compute the smoothed state estimates $\hat{\textbf{x}}_{k\mid n}$ and covariances $\textbf{P}_{k\mid n}$. We start at the last time step and proceed backwards in time using
the following recursive equations:
$\hat{\textbf{x}}_{k\mid n} = \hat{\textbf{x}}_{k\mid k} + \textbf{C}_k ( \hat{\textbf{x}}_{k+1\mid n} - \hat{\textbf{x}}_{k+1\mid k} )$
$\textbf{P}_{k\mid n} = \textbf{P}_{k\mid k} + \textbf{C}_k ( \textbf{P}_{k+1\mid n} - \textbf{P}_{k+1\mid k} ) \textbf{C}_k^T$
$\textbf{C}_k = \textbf{P}_{k\mid k} \textbf{F}_k^T \textbf{P}_{k+1\mid k}^{-1}$
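A sketch of the backward pass, under the assumption that the forward pass has stored both the filtered quantities and the one-step predictions; the tiny scalar example (as 1×1 matrices) is illustrative.

```python
import numpy as np

def rts_smooth(xs, Ps, xps, Pps, F):
    # xs[k], Ps[k]  : filtered  x_{k|k},   P_{k|k}   (forward pass)
    # xps[k], Pps[k]: predicted x_{k+1|k}, P_{k+1|k} (forward pass)
    n = len(xs)
    xsm, Psm = list(xs), list(Ps)          # last step: smoothed = filtered
    for k in range(n - 2, -1, -1):         # backwards in time
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k])
        xsm[k] = xs[k] + C @ (xsm[k + 1] - xps[k])
        Psm[k] = Ps[k] + C @ (Psm[k + 1] - Pps[k]) @ C.T
    return xsm, Psm

F = np.array([[1.0]])
xs  = [np.array([0.0]), np.array([1.0])]
Ps  = [np.array([[1.0]]), np.array([[0.5]])]
xps = [np.array([0.0])]                    # x_{1|0}
Pps = [np.array([[1.2]])]                  # P_{1|0}
xsm, Psm = rts_smooth(xs, Ps, xps, Pps, F)
```

Smoothing can only reduce uncertainty: the smoothed covariance at each step is never larger than the filtered one.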
Modified Bryson–Frazier smoother[edit]
An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed interval smoother developed by Bierman.^[18] This also uses a backward pass that processes data saved from the Kalman
filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance.
The recursive equations are
$\tilde{\Lambda}_k = \textbf{H}_k^T \textbf{S}_k^{-1} \textbf{H}_k + \hat{\textbf{C}}_k^T \hat{\Lambda}_k \hat{\textbf{C}}_k$
$\hat{\Lambda}_{k-1} = \textbf{F}_k^T\tilde{\Lambda}_{k}\textbf{F}_k$
$\hat{\Lambda}_n = 0$
$\tilde{\lambda}_k = -\textbf{H}_k^T \textbf{S}_k^{-1} \textbf{y}_k + \hat{\textbf{C}}_k^T \hat{\lambda}_k$
$\hat{\lambda}_{k-1} = \textbf{F}_k^T\tilde{\lambda}_{k}$
$\hat{\lambda}_n = 0$
where $\textbf{S}_k$ is the residual covariance and $\hat{\textbf{C}}_k = \textbf{I} - \textbf{K}_k\textbf{H}_k$. The smoothed state and covariance can then be found by substitution in the equations
$\textbf{P}_{k\mid n} = \textbf{P}_{k\mid k} - \textbf{P}_{k\mid k}\hat{\Lambda}_k\textbf{P}_{k\mid k}$
$\textbf{x}_{k\mid n} = \textbf{x}_{k\mid k} - \textbf{P}_{k\mid k}\hat{\lambda}_k$
$\textbf{P}_{k\mid n} = \textbf{P}_{k\mid k-1} - \textbf{P}_{k\mid k-1}\tilde{\Lambda}_k\textbf{P}_{k\mid k-1}$
$\textbf{x}_{k\mid n} = \textbf{x}_{k\mid k-1} - \textbf{P}_{k\mid k-1}\tilde{\lambda}_k.$
An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix.
Minimum-variance smoother[edit]
The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely.^[24] This smoother is
a time-varying state-space generalization of the optimal non-causal Wiener filter.
The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by
$\hat{\textbf{x}}_{k+1\mid k} = (\textbf{F}_{k}-\textbf{K}_{k}\textbf{H}_{k})\hat{\textbf{x}}_{k\mid k-1} + \textbf{K}_{k} \textbf{z}_{k}$
${\alpha}_{k} = -\textbf{S}_k^{-1/2} \textbf{H}_{k}\hat{\textbf{x}}_{k\mid k-1} + \textbf{S}_k^{-1/2} \textbf{z}_{k}$
The above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass $\beta_{k}$ may be calculated by
operating the forward equations on the time-reversed $\alpha_{k}$ and time reversing the result. In the case of output estimation, the smoothed estimate is given by
$\hat{\textbf{y}}_{k\mid N} = \textbf{z}_{k} - \textbf{R}_{k}\beta_{k}$
Taking the causal part of this minimum-variance smoother yields
$\hat{\textbf{y}}_{k\mid k} = \textbf{z}_{k} - \textbf{R}_{k} \textbf{S}_k^{-1/2} \alpha_{k}$
which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that
the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly.
A continuous-time version of the above smoother is described in ^[25] and ^[26].
Expectation-maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often
uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation.^[27]
In cases where the models are nonlinear, step-wise linearizations may be used within the minimum-variance filter and smoother recursions (extended Kalman filtering).
Non-linear filters[edit]
The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The non-linearity can be associated either with the process model or with the observation
model or with both.
Extended Kalman filter[edit]
In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be non-linear functions, provided they are differentiable.
$\textbf{x}_{k} = f(\textbf{x}_{k-1}, \textbf{u}_{k}) + \textbf{w}_{k}$
$\textbf{z}_{k} = h(\textbf{x}_{k}) + \textbf{v}_{k}$
The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f
and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed.
At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the non-linear function around
the current estimate.
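When analytic derivatives are awkward, the Jacobians can also be approximated numerically; a forward-difference sketch (the example transition function is arbitrary):

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Forward-difference approximation of df/dx evaluated at x."""
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps                       # perturb one component at a time
        J[:, i] = (f(x + dx) - fx) / eps
    return J

# e.g. a non-linear transition model f(x) = (x0^2, sin(x1))
f = lambda x: np.array([x[0] ** 2, np.sin(x[1])])
J = jacobian(f, np.array([1.0, 0.0]))     # analytic Jacobian: [[2, 0], [0, 1]]
```

The resulting matrix plays the role of F (or H, for the observation model) in the standard covariance equations.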
Unscented Kalman filter[edit]
When the state transition and observation models—that is, the predict and update functions $f$ and $h$—are highly non-linear, the extended Kalman filter can give particularly poor performance.^[28]
This is because the covariance is propagated through linearization of the underlying non-linear model. The unscented Kalman filter (UKF) ^[28] uses a deterministic sampling technique known as the
unscented transform to pick a minimal set of sample points (called sigma points) around the mean. These sigma points are then propagated through the non-linear functions, from which the mean and
covariance of the estimate are then recovered. The result is a filter which more accurately captures the true mean and covariance. (This can be verified using Monte Carlo sampling or through a Taylor
series expansion of the posterior statistics.) In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself
(i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically).
As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.
The estimated state and covariance are augmented with the mean and covariance of the process noise.
$\textbf{x}_{k-1\mid k-1}^{a} = [ \hat{\textbf{x}}_{k-1\mid k-1}^{T} \quad E[\textbf{w}_{k}^{T}] \ ]^{T}$
$\textbf{P}_{k-1\mid k-1}^{a} = \begin{bmatrix} \textbf{P}_{k-1\mid k-1} & 0 \\ 0 & \textbf{Q}_{k} \end{bmatrix}$
A set of 2L + 1 sigma points is derived from the augmented state and covariance where L is the dimension of the state.
\begin{align} \chi_{k-1\mid k-1}^{0} & = \textbf{x}_{k-1\mid k-1}^{a} \\[6pt] \chi_{k-1\mid k-1}^{i} & = \textbf{x}_{k-1\mid k-1}^{a} + \left ( \sqrt{ (L + \lambda) \textbf{P}_{k-1\mid k-1}^{a} } \right )_{i}, \qquad i = 1,\dots,L \\[6pt] \chi_{k-1\mid k-1}^{i} & = \textbf{x}_{k-1\mid k-1}^{a} - \left ( \sqrt{ (L + \lambda) \textbf{P}_{k-1\mid k-1}^{a} } \right )_{i-L}, \qquad i = L+1,\dots,2L \end{align}
where $\left ( \sqrt{ (L + \lambda) \textbf{P}_{k-1\mid k-1}^{a} } \right )_{i}$ is the $i$th column of the matrix square root of $(L + \lambda) \textbf{P}_{k-1\mid k-1}^{a}$, using the definition that the matrix square root $A$ of matrix $B$ satisfies $B \triangleq A A^T. \,$
The matrix square root should be calculated using numerically efficient and stable methods such as the Cholesky decomposition.
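A Cholesky-based sigma-point generator, sketched in NumPy. `np.linalg.cholesky` returns a lower-triangular factor $A$ with $AA^T = B$, matching the definition above. As an assumption for brevity, the points below are interleaved (+/− per column) rather than listed in the order given above; this is harmless because all non-central sigma points carry the same weight.

```python
import numpy as np

def sigma_points(x, P, lam):
    """Return the 2L+1 sigma points for mean x and covariance P."""
    L = len(x)
    A = np.linalg.cholesky((L + lam) * P)   # lower triangular, A A^T = (L+lam) P
    pts = [x]                               # central point chi^0
    for i in range(L):
        pts.append(x + A[:, i])             # +i-th column offset
        pts.append(x - A[:, i])             # -i-th column offset
    return np.array(pts)

pts = sigma_points(np.zeros(2), np.eye(2), lam=1.0)
```

For a zero-mean state the sigma points are symmetric about the origin, so their plain average recovers the mean exactly.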
The sigma points are propagated through the transition function f.
$\chi_{k\mid k-1}^{i} = f(\chi_{k-1\mid k-1}^{i}) \quad i = 0,\dots,2L$
where $f : R^{L} \rightarrow R^{|\textbf{x}|}$. The weighted sigma points are recombined to produce the predicted state and covariance.
$\hat{\textbf{x}}_{k\mid k-1} = \sum_{i=0}^{2L} W_{s}^{i} \chi_{k\mid k-1}^{i}$
$\textbf{P}_{k\mid k-1} = \sum_{i=0}^{2L} W_{c}^{i}\ [\chi_{k\mid k-1}^{i} - \hat{\textbf{x}}_{k\mid k-1}] [\chi_{k\mid k-1}^{i} - \hat{\textbf{x}}_{k\mid k-1}]^{T}$
where the weights for the state and covariance are given by:
$W_{s}^{0} = \frac{\lambda}{L+\lambda}$
$W_{c}^{0} = \frac{\lambda}{L+\lambda} + (1 - \alpha^2 + \beta)$
$W_{s}^{i} = W_{c}^{i} = \frac{1}{2(L+\lambda)}$
$\lambda = \alpha^2 (L+\kappa) - L\,\!$
$\alpha$ and $\kappa$ control the spread of the sigma points. $\beta$ is related to the distribution of $x$. Normal values are $\alpha=10^{-3}$, $\kappa=0$ and $\beta=2$. If the true distribution of
$x$ is Gaussian, $\beta=2$ is optimal.^[29]
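A sketch of the weight computation with the default parameter values quoted above. Note that for small $\alpha$ the zeroth weights become large and negative; this is expected, and the weights still sum to one.

```python
def ukf_weights(L, alpha=1e-3, beta=2.0, kappa=0.0):
    """Sigma-point weights for the state (Ws) and covariance (Wc) estimates."""
    lam = alpha ** 2 * (L + kappa) - L
    Ws = [lam / (L + lam)] + [1.0 / (2 * (L + lam))] * (2 * L)
    Wc = [lam / (L + lam) + (1.0 - alpha ** 2 + beta)] + \
         [1.0 / (2 * (L + lam))] * (2 * L)
    return Ws, Wc

Ws, Wc = ukf_weights(L=2)
```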
The predicted state and covariance are augmented as before, except now with the mean and covariance of the measurement noise.
$\textbf{x}_{k\mid k-1}^{a} = [ \hat{\textbf{x}}_{k\mid k-1}^{T} \quad E[\textbf{v}_{k}^{T}] \ ]^{T}$
$\textbf{P}_{k\mid k-1}^{a} = \begin{bmatrix} \textbf{P}_{k\mid k-1} & 0 \\ 0 & \textbf{R}_{k} \end{bmatrix}$
As before, a set of 2L + 1 sigma points is derived from the augmented state and covariance where L is the dimension of the state.
\begin{align} \chi_{k\mid k-1}^{0} & = \textbf{x}_{k\mid k-1}^{a} \\[6pt] \chi_{k\mid k-1}^{i} & = \textbf{x}_{k\mid k-1}^{a} + \left ( \sqrt{ (L + \lambda) \textbf{P}_{k\mid k-1}^{a} } \right )_{i}, \qquad i = 1,\dots,L \\[6pt] \chi_{k\mid k-1}^{i} & = \textbf{x}_{k\mid k-1}^{a} - \left ( \sqrt{ (L + \lambda) \textbf{P}_{k\mid k-1}^{a} } \right )_{i-L}, \qquad i = L+1,\dots,2L \end{align}
Alternatively, if the UKF prediction has been used, the sigma points themselves can be augmented along the following lines:
$\chi_{k\mid k-1} := [ \chi_{k\mid k-1}^T \quad E[\textbf{v}_{k}^{T}] \ ]^{T} \pm \sqrt{ (L + \lambda) \textbf{R}_{k}^{a} }$
$\textbf{R}_{k}^{a} = \begin{bmatrix} 0 & 0 \\ 0 & \textbf{R}_{k} \end{bmatrix}$
The sigma points are projected through the observation function h.
$\gamma_{k}^{i} = h(\chi_{k\mid k-1}^{i}) \quad i = 0,\dots,2L$
The weighted sigma points are recombined to produce the predicted measurement and predicted measurement covariance.
$\hat{\textbf{z}}_{k} = \sum_{i=0}^{2L} W_{s}^{i} \gamma_{k}^{i}$
$\textbf{P}_{z_{k}z_{k}} = \sum_{i=0}^{2L} W_{c}^{i}\ [\gamma_{k}^{i} - \hat{\textbf{z}}_{k}] [\gamma_{k}^{i} - \hat{\textbf{z}}_{k}]^{T}$
The state-measurement cross-covariance matrix,
$\textbf{P}_{x_{k}z_{k}} = \sum_{i=0}^{2L} W_{c}^{i}\ [\chi_{k\mid k-1}^{i} - \hat{\textbf{x}}_{k\mid k-1}] [\gamma_{k}^{i} - \hat{\textbf{z}}_{k}]^{T}$
is used to compute the UKF Kalman gain.
$K_{k} = \textbf{P}_{x_{k}z_{k}} \textbf{P}_{z_{k}z_{k}}^{-1}$
As with the Kalman filter, the updated state is the predicted state plus the innovation weighted by the Kalman gain,
$\hat{\textbf{x}}_{k\mid k} = \hat{\textbf{x}}_{k\mid k-1} + K_{k}( \textbf{z}_{k} - \hat{\textbf{z}}_{k} )$
And the updated covariance is the predicted covariance, minus the predicted measurement covariance, weighted by the Kalman gain.
$\textbf{P}_{k\mid k} = \textbf{P}_{k\mid k-1} - K_{k} \textbf{P}_{z_{k}z_{k}} K_{k}^{T}$
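Assuming the predicted quantities above have already been computed, the UKF measurement update itself is only a few lines; the 1×1 example values are illustrative.

```python
import numpy as np

def ukf_update(x_pred, P_pred, z, z_pred, Pzz, Pxz):
    K = Pxz @ np.linalg.inv(Pzz)              # UKF Kalman gain
    x = x_pred + K @ (z - z_pred)             # innovation-weighted correction
    P = P_pred - K @ Pzz @ K.T                # covariance reduction
    return x, P

x, P = ukf_update(np.array([0.0]), np.array([[1.0]]),
                  z=np.array([1.0]), z_pred=np.array([0.0]),
                  Pzz=np.array([[2.0]]), Pxz=np.array([[1.0]]))
```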
Kalman–Bucy filter[edit]
The Kalman–Bucy filter (named after Richard Snowden Bucy) is a continuous time version of the Kalman filter.^[30]^[31]
It is based on the state space model
$\frac{d}{dt}\mathbf{x}(t) = \mathbf{F}(t)\mathbf{x}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{w}(t)$
$\mathbf{z}(t) = \mathbf{H}(t) \mathbf{x}(t) + \mathbf{v}(t)$
where $\mathbf{Q}(t)$ and $\mathbf{R}(t)$ represent the intensities of the two white noise terms $\mathbf{w}(t)$ and $\mathbf{v}(t)$, respectively.
The filter consists of two differential equations, one for the state estimate and one for the covariance:
$\frac{d}{dt}\hat{\mathbf{x}}(t) = \mathbf{F}(t)\hat{\mathbf{x}}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{K}(t) (\mathbf{z}(t)-\mathbf{H}(t)\hat{\mathbf{x}}(t))$
$\frac{d}{dt}\mathbf{P}(t) = \mathbf{F}(t)\mathbf{P}(t) + \mathbf{P}(t)\mathbf{F}^{T}(t) + \mathbf{Q}(t) - \mathbf{K}(t)\mathbf{R}(t)\mathbf{K}^{T}(t)$
where the Kalman gain is given by
$\mathbf{K}(t) = \mathbf{P}(t)\mathbf{H}^{T}(t)\mathbf{R}^{-1}(t)$
Note that in this expression for $\mathbf{K}(t)$ the covariance of the observation noise $\mathbf{R}(t)$ represents at the same time the covariance of the prediction error (or innovation) $\tilde{\mathbf{y}}(t)=\mathbf{z}(t)-\mathbf{H}(t)\hat{\mathbf{x}}(t)$; these covariances are equal only in the case of continuous time.^[32]
The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time.
The second differential equation, for the covariance, is an example of a Riccati equation.
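In the scalar case the Riccati equation can be integrated directly; a simple Euler sketch (the model values F = 0, H = 1, Q = 0.1, R = 1 are illustrative). With these values the covariance relaxes to the steady state √(QR):

```python
def riccati_steady_state(F, H, Q, R, P0=1.0, dt=1e-3, steps=20000):
    # Euler-integrate dP/dt = 2 F P + Q - K R K, with gain K = P H / R
    P = P0
    for _ in range(steps):
        K = P * H / R
        P += dt * (2.0 * F * P + Q - K * R * K)
    return P

P_inf = riccati_steady_state(F=0.0, H=1.0, Q=0.1, R=1.0)
```

For F = 0, H = 1 the equation reduces to dP/dt = Q − P²/R, whose fixed point is √(QR) ≈ 0.316 here.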
Hybrid Kalman filter[edit]
Most physical systems are represented as continuous-time models while discrete-time measurements are frequently taken for state estimation via a digital processor. Therefore, the system model and
measurement model are given by
\begin{align} \dot{\mathbf{x}}(t) &= \mathbf{F}(t)\mathbf{x}(t)+\mathbf{B}(t)\mathbf{u}(t)+\mathbf{w}(t), &\mathbf{w}(t) &\sim N\bigl(\mathbf{0},\mathbf{Q}(t)\bigr) \\ \mathbf{z}_k &= \mathbf{H}_k\mathbf{x}_k+\mathbf{v}_k, &\mathbf{v}_k &\sim N(\mathbf{0},\mathbf{R}_k) \end{align}
$\hat{\mathbf{x}}_{0\mid 0}=E\bigl[\mathbf{x}(t_0)\bigr], \mathbf{P}_{0\mid 0}=Var\bigl[\mathbf{x}(t_0)\bigr]$
\begin{align} &\dot{\hat{\mathbf{x}}}(t) = \mathbf{F}(t) \hat{\mathbf{x}}(t) + \mathbf{B}(t) \mathbf{u}(t) \text{, with } \hat{\mathbf{x}}(t_{k-1}) = \hat{\mathbf{x}}_{k-1\mid k-1} \\ \Rightarrow &\hat{\mathbf{x}}_{k\mid k-1} = \hat{\mathbf{x}}(t_k)\\ &\dot{\mathbf{P}}(t) = \mathbf{F}(t)\mathbf{P}(t)+\mathbf{P}(t)\mathbf{F}(t)^T+\mathbf{Q}(t) \text{, with } \mathbf{P}(t_{k-1}) = \mathbf{P}_{k-1\mid k-1}\\ \Rightarrow &\mathbf{P}_{k\mid k-1} = \mathbf{P}(t_k) \end{align}
The prediction equations are derived from those of continuous-time Kalman filter without update from measurements, i.e., $\mathbf{K}(t)=0$. The predicted state and covariance are calculated
respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step.
$\mathbf{K}_{k} = \mathbf{P}_{k\mid k-1}\mathbf{H}_{k}^T\bigl(\mathbf{H}_{k}\mathbf{P}_{k\mid k-1}\mathbf{H}_{k}^T+\mathbf{R}_{k}\bigr)^{-1}$
$\hat{\mathbf{x}}_{k\mid k} = \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k(\mathbf{z}_k-\mathbf{H}_k\hat{\mathbf{x}}_{k\mid k-1})$
$\mathbf{P}_{k\mid k} = (\mathbf{I} - \mathbf{K}_{k}\mathbf{H}_{k})\mathbf{P}_{k\mid k-1}$
The update equations are identical to those of the discrete-time Kalman filter.
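A scalar sketch of one hybrid cycle: Euler-integrate the continuous prediction ODEs between measurement times, then apply the discrete update. All model values are illustrative.

```python
def hybrid_step(x, P, z, F, H, Q, R, dt, substeps):
    # continuous-time prediction (K(t) = 0): dx/dt = F x, dP/dt = 2 F P + Q
    for _ in range(substeps):
        x += dt * F * x
        P += dt * (2.0 * F * P + Q)
    # discrete-time update, identical to the standard Kalman filter
    S = H * P * H + R
    K = P * H / S
    return x + K * (z - H * x), (1.0 - K * H) * P

x, P = hybrid_step(x=0.0, P=1.0, z=1.0, F=0.0, H=1.0,
                   Q=0.1, R=1.0, dt=0.01, substeps=10)
```

In practice a higher-order integrator (e.g. Runge–Kutta) would replace the Euler steps, but the predict-then-update structure is the same.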
Variants for the recovery of sparse signals[edit]
Recently, the traditional Kalman filter has been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Both works ^[33] and ^[34] utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.
See also[edit]
Further reading[edit]
• Einicke, G.A. (2012). Smoothing, Filtering and Prediction: Estimating the Past, Present and Future. Rijeka, Croatia: Intech. ISBN 978-953-307-752-9.
• Gelb, A. (1974). Applied Optimal Estimation. MIT Press.
• Kalman, R.E. (1960). "A new approach to linear filtering and prediction problems". Journal of Basic Engineering 82 (1): 35–45. doi:10.1115/1.3662552. Retrieved 2008-05-03.
• Kalman, R.E.; Bucy, R.S. (1961). New Results in Linear Filtering and Prediction Theory. Retrieved 2008-05-03.
• Harvey, A.C. (1990). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press.
• Roweis, S.; Ghahramani, Z. (1999). "A Unifying Review of Linear Gaussian Models". Neural Computation 11 (2): 305–345. doi:10.1162/089976699300016674. PMID 9950734.
• Simon, D. (2006). Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. Wiley-Interscience.
• Stengel, R.F. (1994). Optimal Control and Estimation. Dover Publications. ISBN 0-486-68200-5.
• Warwick, K. (1987). "Optimal observers for ARMA models". International Journal of Control 46 (5): 1493–1503. doi:10.1080/00207178708933989. Retrieved 2008-05-03.
• Bierman, G.J. (1977). "Factorization Methods for Discrete Sequential Estimation". Mathematics in Science and Engineering 128 (Mineola, N.Y.: Dover Publications). ISBN 978-0-486-44981-4.
• Bozic, S.M. (1994). Digital and Kalman filtering. Butterworth–Heinemann.
• Haykin, S. (2002). Adaptive Filter Theory. Prentice Hall.
• Liu, W.; Principe, J.C. and Haykin, S. (2010). Kernel Adaptive Filtering: A Comprehensive Introduction. John Wiley.
• Manolakis, D.G. (1999). Statistical and Adaptive signal processing. Artech House.
• Welch, Greg; Bishop, Gary (1997). "SCAAT". SCAAT: Incremental Tracking with Incomplete Information. ACM Press/Addison-Wesley Publishing Co. pp. 333–344. doi:10.1145/258734.258876. ISBN
• Jazwinski, Andrew H. (1970). Stochastic Processes and Filtering. Mathematics in Science and Engineering. New York: Academic Press. p. 376. ISBN 0-12-381550-9.
• Maybeck, Peter S. (1979). Stochastic Models, Estimation, and Control. Mathematics in Science and Engineering. 141-1. New York: Academic Press. p. 423. ISBN 0-12-480701-1.
• Moriya, N. (2011). Primer to Kalman Filtering: A Physicist Perspective. New York: Nova Science Publishers, Inc. ISBN 978-1-61668-311-5.
• Dunik, J.; Simandl M., Straka O. (2009). "Methods for estimating state and measurement noise covariance matrices: Aspects and comparisons". Proceedings of 15th IFAC Symposium on System
Identification (France): 372–377.
• Chui, Charles K.; Chen, Guanrong (2009). Kalman Filtering with Real-Time Applications. Springer Series in Information Sciences 17 (4th ed.). New York: Springer. p. 229. ISBN 978-3-540-87848-3.
• Spivey, Ben; Hedengren, J. D. and Edgar, T. F. (2010). "Constrained Nonlinear Estimation for Industrial Process Fouling". Industrial & Engineering Chemistry Research 49 (17): 7824–7831. doi:
• Thomas Kailath, Ali H. Sayed, and Babak Hassibi, Linear Estimation, Prentice–Hall, NJ, 2000, ISBN 978-0-13-022464-4.
• Ali H. Sayed, Adaptive Filters, Wiley, NJ, 2008, ISBN 978-0-470-25388-5.
External links[edit]
Market arrows
June 16, 2011
By Pat
Graphs like Figure 1 are reasonably common. But they are not reasonable.
Figure 1: A (log) price series with an explicit guide line. Some have the prices on a logarithmic scale, which is an improvement on the raw prices.
The problem with this sort of plot is that two particular data points are taken as special. These two points are essentially assumed to have no error. The plot then invites the observer to project
— under false pretenses — into the future.
There is also a substantial amount of self-censoring with these plots. I suspect you are very unlikely to see any plots that look like Figure 2.
Figure 2: Another log price series with explicit guide line.
Is there a name for this type of plot?
Appendix R
Though the plots are not useful, the technique to make them in R can be useful. The basic trick is to add a polygon to the existing plot.
The function that created the figures is pp.timelinefill. You can get it into your R session with the command:
I can saw a woman in two
But you won’t want to look in the box when I do
from “For my next trick I’ll need a volunteer” by Warren Zevon
Using math.h in OpenGL [Archive] - OpenGL Discussion and Help Forums
10-02-2001, 11:40 AM
Ok, the opposite and adjacent sides of a right triangle i have drawn are 1. I want to use tan to find one of the angles. It never works. I have tried tana() and tanh() and they dont work either. I
know the answer should be 45 degrees, what is wrong?
Tana() sounds like it is what I want. I call tanaf(1.0f) but it always returns 0.76 when it should be 45!
What is wrong?
The Plus Olympic calendar: Friday 3rd August
Usain Bolt is determined to become a legend this weekend, by running the 100m in 9.4 seconds. But what does mathematics have to say about this quest? What is the ultimate limit which no runner can
possibly surpass? How can Bolt improve his record significantly without improving his speed? What about the effects of wind assistance, timing accuracy, and altitude on sprint times? Find out about
all this and more in the Plus article No limits for Usain and in this video of a lecture given by John D. Barrow and featured on our sister site Maths and sport: Countdown to the Games.
The weekend will also be a great one for Olympic tennis so you should rehearse the scoring rules:
You win a game if you score 4 points before your opponent scores 3 points. Or, if you both score 3 points at some stage you win if you manage to score 2 points in a row after the 3-all stage before
your opponent does.
That's quite a mouthful and it turns out that the maths behind tennis can get tricky too. What is the secret to the perfect serve? Can you figure out the probability of winning a game when the
probability of winning a point is 0.6? What's the chance of a tennis match taking on epic proportions like the 11-hour battle between John Isner and Nicolas Mahut at Wimbledon in 2011? Why does
tennis use this comparatively complicated scoring system? And can you improve your chance of winning by choosing the right racket? Here are some answers to these questions, from Plus and from our
sister site Maths and sport: Countdown to the Games:
Spinning the perfect serve — A new mathematical analysis of how to hit a winning server shows that spin is the thing. Perhaps there's still time for Murray's coach to include some maths in his
preparations for the match today...
Any win for tennis? — Work out the probability of winning a game for a fixed probability of winning a point. This challenging activity is designed to be accessible to students of A-level maths and
anyone else who likes puzzling over probabilities.
Anyone for tennis (and tennis and tennis ...)? — What's the chance of a tennis match taking on epic proportions like the 11 hour battle between John Isner and Nicolas Mahut at Wimbledon in 2011?
Final score — Why are there so many different scoring systems in operation in sport? This video of a lecture given by John D. Barrow looks at how structuring matches into a series of sets affects the
relative roles of luck and skill in determining the winner of the contest. It also looks at issues surrounding scoring in table tennis and decathlon.
Making a racket: the science of tennis — While the players get most of the limelight, engineers, too, are working hard to produce the cutting-edge tennis rackets that guarantee record performances.
Over recent decades new materials have made tennis rackets ever bigger, lighter and more powerful. So what kind of science goes into designing new rackets?
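The game-winning question above has a neat computational answer; a short sketch, resolving the deuce state with the closed form p²/(1 − 2p(1 − p)):

```python
from functools import lru_cache

def p_game(p):
    """Probability of winning a tennis game when each point is won with prob. p."""
    q = 1.0 - p

    @lru_cache(maxsize=None)
    def win(a, b):                 # a = my points, b = opponent's points
        if a == 4:
            return 1.0             # reached 4 points before opponent's 3rd
        if b == 4:
            return 0.0
        if a == 3 and b == 3:      # deuce: must win two points in a row first
            return p * p / (1.0 - 2.0 * p * q)
        return p * win(a + 1, b) + q * win(a, b + 1)

    return win(0, 0)
```

For a point-winning probability of 0.6 this gives roughly 0.74: a modest edge per point compounds into a solid edge per game.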
Pairwise Reduction for the Direct, Parallel Solution of Sparse, Unsymmetric Sets of Linear Equations
December 1988 (vol. 37 no. 12)
pp. 1648-1654
A paradigm for concurrent computing is explored in which a group of autonomous, asynchronous processes shares a common memory space and cooperates to solve a single problem. The processes synchronize
with only a few others at a time; barrier synchronization is not permitted except at the beginning and end of the computation. The paradigm maps directly to a shared-memory multiprocessor with effi
[1] G. Alaghband, "Parallel pivoting combined with parallel reduction," ICASE Rep. 87-75, NASA Langley Res. Center, Hampton, VA, 1987.
[2] S. G. Abraham, "Reducing interprocessor communication in parallel architectures: System configuration and task assignment," CSRD Rep. 726, Center Supercomput. Res. Develop., Univ. Illinois, Urbana, IL, 1987.
[3] S. G. Abraham and T. A. Davis, "Blocking for parallel sparse linear system solvers," in Proc. 1988 Int. Conf. Parallel Processing. Univ. Park, PA: Penn. State Univ. Press, 1988, vol. 1, pp.
[4] W. Abu-Sufah and A. D. Malony, "Vector processing on the Alliant FX/8 multiprocessor," in Proc. 1986 Int. Conf. Parallel Processing. Univ. Park, PA: Penn. State Univ. Press, 1986, pp. 559-566.
[5] Alliant Computer Systems Corp., FX/Series Architecture Manual, Littleton, MA, 1986.
[6] P. E. Bjorstad, "A large scale, sparse, secondary storage, direct linear equation solver for structural analysis and its implementation on vector and parallel architectures," Parallel Comput., vol. 5, pp. 3-12, 1987.
[7] D. A. Calahan, "Parallel solution of sparse simultaneous linear equations," in Proc. 11th Allerton Conf. Circuits Syst. Theory, Univ. Illinois, Urbana, IL, 1973, pp. 729-735.
[8] G. Dahlquist and Å. Björck, Numerical Methods. Englewood Cliffs, NJ: Prentice-Hall, 1974.
[9] A. K. Dave and I. S. Duff, "Sparse matrix calculations on the CRAY-2," Parallel Comput., vol. 5, pp. 55-64, 1987.
[10] T. A. Davis, "PSolve: A concurrent algorithm for solving sparse systems of linear equations," CSRD Rep. 612, Center Supercomput. Res. Develop., Univ. Illinois, Urbana, IL, 1986.
[11] T. A. Davis and E. S. Davidson, "PSolve: A concurrent algorithm for solving sparse systems of linear equations," in Proc. 1987 Int. Conf. Parallel Processing. Univ. Park, PA: Penn. State Univ. Press, 1987, pp. 483-490.
[12] E. W. Dijkstra, "Hierarchical ordering of sequential processes," in Operating Systems Techniques, C. A. R. Hoare and R. H. Perrott, Eds. New York: Academic, 1972, pp. 72-93.
[13] I. S. Duff, R. Grimes, J. Lewis, and B. Poole, "Sparse matrix test problems," SIGNUM Newsletter, vol. 17, p. 22, 1982.
[14] I. S. Duff and J. K. Reid, "Some design features of a sparse matrix code," ACM Trans. Math. Software, vol. 5, pp. 18-35, 1979.
[15] I. S. Duff and J. K. Reid, "The multifrontal solution of unsymmetric sets of linear equations," SIAM J. Sci. Stat. Comput., vol. 5, pp. 633-641, 1984.
[16] I. S. Duff, "Parallel implementation of multifrontal schemes," Parallel Comput., vol. 3, pp. 193-204, 1986.
[17] S. C. Eisenstat, M. C. Gursky, M. H. Schultz, and A. H. Sherman, "The Yale sparse matrix package, II: The non-symmetric codes," Rep. 114, Dep. Comput. Sci., Yale Univ., 1977.
[18] S. C. Eisenstat, M. C. Gursky, M. H. Schultz, and A. H. Sherman, "Yale sparse matrix package, I: The symmetric codes," Int. J. Numer. Meth. Eng., vol. 18, pp. 1145-1151, 1982.
[19] A. George, M. T. Heath, J. W. H. Liu, and E. Ng, "Solution of sparse positive definite systems on a shared-memory multiprocessor," Int. J. Parallel Programming, vol. 15, no. 4, pp. 309-325, 1986.
[20] A. George, M. T. Heath, E. Ng, and J. W. H. Liu, "Symbolic Cholesky factorization on a local-memory multiprocessor," Parallel Comput., vol. 5, pp. 85-95, 1987.
[21] R. W. Hockney and C. R. Jesshope, Parallel Computers. Bristol, England: Adam Hilger, 1981.
[22] J. W. Huang and O. Wing, "Optimal parallel triangulation of a sparse matrix," IEEE Trans. Circuits Syst., vol. CAS-26, pp. 726-732, Sept. 1979.
[23] J. A. G. Jess and H. G. M. Kees, "A data structure for parallel L/U decomposition," IEEE Trans. Comput., vol. C-31, pp. 231-239, 1982.
[24] H. M. Markowitz, "The elimination form of the inverse and its application to linear programming," Management Sci., vol. 3, pp. 255-269, 1957.
[25] R. G. Melhem, "A modified frontal technique suitable for parallel systems," SIAM J. Sci. Stat. Comput., vol. 9, pp. 289-303, 1988.
[26] J. W. H. Liu, "Computational models and task scheduling for parallel sparse Cholesky factorization," Parallel Comput., vol. 3, pp. 327-342, 1986.
[27] F. J. Peters, "Parallel pivoting algorithms for sparse symmetric matrices," Parallel Comput., vol. 1, pp. 99-110, 1984.
[28] A. H. Sameh, "On some parallel algorithms on a ring of processors," Comput. Phys. Commun., vol. 37, pp. 159-166, 1985.
[29] D. C. Sorensen, "Analysis of pairwise pivoting in Gaussian elimination," IEEE Trans. Comput., vol. C-34, pp. 274-278, 1985.
[30] R. P. Tewarson, Sparse Matrices. New York: Academic, 1973.
[31] J. H. Wilkinson, "Error analysis of direct methods of matrix inversion," J. ACM, vol. 8, pp. 281-330, 1961.
[32] O. Wing and J. W. Huang, "A computation model of parallel solution of linear equations," IEEE Trans. Comput., vol. C-29, pp. 632-638, 1980.
T.A. Davis, E.S. Davidson, "Pairwise Reduction for the Direct, Parallel Solution of Sparse, Unsymmetric Sets of Linear Equations," IEEE Transactions on Computers, vol. 37, no. 12, pp. 1648-1654, Dec.
1988, doi:10.1109/12.9742
| {"url":"http://www.computer.org/csdl/trans/tc/1988/12/t1648-abs.html","timestamp":"2014-04-24T17:17:22Z","content_type":null,"content_length":"54643","record_id":"<urn:uuid:e2b2f891-223f-4535-9d70-edda89aa79c6>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Right Triangle Construction
February 8th 2010, 05:59 PM #1
Nov 2009
I know how to construct a right triangle when the hypotenuse & one side are given but how do I construct a right triangle whose hypotenuse is 5.4 cm & one of the acute angles is 30 degrees? What
are the steps?
Hi Ron,
If one acute angle is 30 degrees, then the other acute angle is (90-30) = 60 degrees. Construct those two angles at the ends of the hypotenuse and project the rays away from it until they meet at the right-angled corner.
Theorem: in a 30-60-90 degree triangle, the length of the cathetus opposite the 30 deg. angle is half the hypotenuse's length.
Of course, the other cathetus' length is then $\frac{\sqrt{3}}{2}\cdot hypotenuse$ , according to Mr. Pythagoras.
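As a quick sanity check of the theorem above (this snippet is an illustration, not part of the original thread), the two catheti for a 5.4 cm hypotenuse come out to 2.7 cm and about 4.677 cm, and they do satisfy Pythagoras:

```python
import math

hyp = 5.4
short = hyp / 2                 # cathetus opposite the 30-degree angle
long_ = hyp * math.sqrt(3) / 2  # cathetus opposite the 60-degree angle

print(round(short, 3), round(long_, 3))  # 2.7 4.677
assert math.isclose(short**2 + long_**2, hyp**2)
```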
| {"url":"http://mathhelpforum.com/algebra/127883-right-triangle-construction.html","timestamp":"2014-04-17T20:42:52Z","content_type":null,"content_length":"37164","record_id":"<urn:uuid:9a08e457-4d4f-4ebd-b521-c2de31142b3e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Content:
Geometry, Measurement
Fill a box with cubes, rows of cubes, or layers of cubes. The number of unit cubes needed to fill the entire box is known as the volume of the box. Can you determine a rule for finding the volume of
a box if you know its width, depth, and height?
Along the left side, click on a cube, a row of cubes, or a layer of cubes to fill the box. The
Remove Last
button takes away the last piece you placed in the box; the
button removes all of the pieces from the box.
By clicking on the flattened sides of the box, you can see what the box looks like when the sides are folded up.
Change the width, depth, and height to see boxes of various sizes.
of a box is equal to the number of unit cubes that will fit inside it.
• How many unit cubes are needed to fill a box that measures 3 × 5 × 7?
• What about a box that measures 5 × 7 × 3?
• What is the volume of a box that measures 7 × 3 × 5?
In general, how can you find the volume of a box if you know the width, depth, and height? | {"url":"http://illuminations.nctm.org/Activity.aspx?id=4095","timestamp":"2014-04-17T01:27:33Z","content_type":null,"content_length":"32783","record_id":"<urn:uuid:3bf339dd-68e8-4cfd-80a2-ee5740d20195>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
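The rule the questions above are driving at — multiply width, depth, and height, in any order — can be sketched in a few lines of Python (an illustration, not part of the activity itself):

```python
def box_volume(width, depth, height):
    # One layer holds width * depth unit cubes; stacking `height` layers
    # fills the box, so the cube count is width * depth * height.
    return width * depth * height

print(box_volume(3, 5, 7))  # 105
print(box_volume(5, 7, 3))  # 105 -- the order of the dimensions doesn't matter
print(box_volume(7, 3, 5))  # 105
```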
Desoto Algebra 2 Tutor
Find a Desoto Algebra 2 Tutor
...I have helped students replace their fear of the test with confidence. I am a scientist; math and physics have always come easily to me. From 2009 until last month, I taught full-time and part-time in Chapel Hill, NC, for elementary (K) through twelfth grade, and coached students with their homework.
13 Subjects: including algebra 2, calculus, geometry, algebra 1
If you are searching for a tutor who understands that every person has a different learning style and is adaptable enough to alter his teaching style to best fit you, then I am your man. I am a
recent UT Dallas Graduate, and a newly accepted medical student at UTHSCSA! I have been helping my stude...
15 Subjects: including algebra 2, chemistry, physics, biology
...My goal is to make sure you understand the rules. I am an accomplished writer. I have authored ten book chapters, over 100 scientific papers, numerous research grants, patent applications,
along with a multitude of project reports.
55 Subjects: including algebra 2, reading, chemistry, writing
...I gained considerable OTJ experience writing complex SQL queries for a large mainframe relational database. I do not have experience with Microsoft ACCESS. I can help with query strategies and
with syntax of most query statements.
15 Subjects: including algebra 2, chemistry, physics, calculus
...I have taught algebra I, algebra II, geometry, and 6th grade math. I have had student success on state-administered tests, including over 80% of my students passing the algebra I end-of-course
exam and over 96% passing the geometry end-of-course exam in 2012 (the first official year of those exa...
14 Subjects: including algebra 2, calculus, geometry, statistics | {"url":"http://www.purplemath.com/desoto_tx_algebra_2_tutors.php","timestamp":"2014-04-18T18:51:24Z","content_type":null,"content_length":"23821","record_id":"<urn:uuid:8671dd2e-6793-4e42-b144-c6dacbfa18e6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hardness and Algorithms for Rainbow Connectivity
when quoting this document, please refer to the following
http://drops.dagstuhl.de/opus/volltexte/2009/1811/ Chakraborty, Sourav
Fischer, Eldar
Matsliah, Arie
Yuster, Raphael
Hardness and Algorithms for Rainbow Connectivity
An edge-colored graph $G$ is {\em rainbow connected} if any two vertices are connected by a path whose edges have distinct colors. The {\em rainbow connectivity} of a connected graph $G$, denoted $rc
(G)$, is the smallest number of colors that are needed in order to make $G$ rainbow connected. In addition to being a natural combinatorial problem, the rainbow connectivity problem is motivated by
applications in cellular networks. In this paper we give the first proof that computing $rc(G)$ is NP-Hard. In fact, we prove that it is already NP-Complete to decide if $rc(G)=2$, and also that it
is NP-Complete to decide whether a given edge-colored (with an unbounded number of colors) graph is rainbow connected. On the positive side, we prove that for every $\epsilon >0$, a connected graph
with minimum degree at least $\epsilon n$ has bounded rainbow connectivity, where the bound depends only on $\epsilon$, and the corresponding coloring can be constructed in polynomial time.
Additional non-trivial upper bounds, as well as open problems and conjectures are also presented.
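To make the definition concrete, here is a brute-force check of rainbow connectedness for a small edge-colored graph. This is only an illustrative sketch of the definition (it is exponential-time in general and is not one of the paper's algorithms); the function name and example graph are mine:

```python
def is_rainbow_connected(n, colored_edges):
    """Brute-force check of the definition: every pair of vertices must be
    joined by some path whose edges all carry distinct colors.
    `colored_edges` maps frozenset({u, v}) -> color, for vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for e in colored_edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)

    def rainbow_path_exists(s, t):
        # DFS over (vertex, set-of-used-colors) states.
        stack, seen = [(s, frozenset())], set()
        while stack:
            v, used = stack.pop()
            if v == t:
                return True
            if (v, used) in seen:
                continue
            seen.add((v, used))
            for w in adj[v]:
                c = colored_edges[frozenset({v, w})]
                if c not in used:
                    stack.append((w, used | {c}))
        return False

    return all(rainbow_path_exists(s, t)
               for s in range(n) for t in range(s + 1, n))

# The 4-cycle 0-1-2-3-0, 2-colored alternately, is rainbow connected
# (consistent with rc(C4) = 2):
edges = {frozenset({0, 1}): "a", frozenset({1, 2}): "b",
         frozenset({2, 3}): "a", frozenset({3, 0}): "b"}
print(is_rainbow_connected(4, edges))  # True
```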
BibTeX - Entry
author = {Sourav Chakraborty and Eldar Fischer and Arie Matsliah and Raphael Yuster},
title = {{Hardness and Algorithms for Rainbow Connectivity}},
booktitle = {26th International Symposium on Theoretical Aspects of Computer Science},
pages = {243--254},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-09-5},
ISSN = {1868-8969},
year = {2009},
volume = {3},
editor = {Susanne Albers and Jean-Yves Marion},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2009/1811},
URN = {urn:nbn:de:0030-drops-18115},
doi = {http://dx.doi.org/10.4230/LIPIcs.STACS.2009.1811},
annote = {Keywords: }
Seminar: 26th International Symposium on Theoretical Aspects of Computer Science
Issue date: 2009
Date of publication: 2009 | {"url":"http://drops.dagstuhl.de/opus/volltexte/2009/1811/","timestamp":"2014-04-16T16:05:33Z","content_type":null,"content_length":"9232","record_id":"<urn:uuid:40aadcc2-d226-443d-a30e-02d9fb4496d4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sudbury Algebra 2 Tutor
Find a Sudbury Algebra 2 Tutor
...My tutoring work for the Lexington public school system over 14 years was run, for most of those years, by the Special Education department. As a result, I have worked with students with all sorts of special circumstances: dyslexia, ADD/ADHD, hearing loss, various forms of judgmental and function d...
34 Subjects: including algebra 2, reading, English, geometry
...Because of this wide range of grades, there are 3 levels of tests: Lower, for grades 5 and 6, Middle, for grades 7 and 8, and Upper, for grades 9 through 12. Because your child's results are
scored and compared only with others entering the same grade, your child should not worry about encounter...
33 Subjects: including algebra 2, chemistry, physics, calculus
...I'm that geeky math-loving girl, that was also a cheerleader, so I pride myself in being smart and fun!! I was an Actuarial Mathematics major at Worcester Polytechnic Institute (WPI), and
worked in the actuarial field for about 3.5 years after college. Since then I have been a nanny and a tutor ...
17 Subjects: including algebra 2, calculus, statistics, geometry
...Things need to make sense to me; I work hard to understand the big picture of everything, and because of that I am able to explain different mathematical and scientific concepts clearly to others. I currently have a Master's degree in Microbiology from Loyola University in Chicago. I also earned my Bachelor's degree in Biochemistry from Mount Holyoke
...I have years of experience teaching math as a classroom teacher, as a tutor, as a math club coach, and in coaching other teachers. I focus on helping my students to develop both improved
content knowledge as well as learn helpful problem-solving strategies. My interest is in helping my students...
33 Subjects: including algebra 2, reading, writing, English | {"url":"http://www.purplemath.com/sudbury_ma_algebra_2_tutors.php","timestamp":"2014-04-17T07:47:44Z","content_type":null,"content_length":"24003","record_id":"<urn:uuid:cc7adea8-6f61-4fd1-b5d7-d7f54cd7fe57>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
basic maths question
December 31st 2006, 05:24 PM #1
Dec 2006
basic maths question
We're doing some school holidays maths. Son is 8 yo.
I don't understand the technique for this question..
these 6 numbers can be arranged in 3 pairs so that each pair adds up to the same answer: 158, 235, 318, 426, 349, 266.
I've done it the hard way, but I'm sure there is a technique here.
I do not think there is an easy way other than guessing which pairs work and which do not.
This is for your 8-yr old son? If so, then let's think like we are 8 years old too, meaning we do not know hard Math yet. We mostly know addition/subtraction so far.
A pair is two.
So if the sum of the units digit of a pair ends up as the same for all 3 pairs, then we might get the same sum for the 3 pairs?
The six units digits are 8, 5, 8, 6, 9, 6.
Adding any two that might give the same last number, or the sum's units digit, might solve the problem.
6+8 = 14, and 5+9 = 14.
The units digit of both sums is same 4. Umm, maybe good.
235 +349 = 584
158 +426 = 584
318 +266 = 584
Three pairs, same sum. Good.
An exercise for sudoku puzzles.
I don't understand. If I don't understand, how can an 8-year-old?
What is this <sum of the units digit> ?
Another way, although I doubt if an 8-yr old knows the reasoning behind it yet.
There are 6 given numbers to be divided into 3 pairs. Each pair adds up to the same number.
So, add the 6 given numbers, and divide the total by 3 to get that "same number" sum of any pair.
158 +235 +318 +426 +349 +266 = 1752
1752/3 = 584
So any pair should add up to 584.
What is the number to add to 158 to get 584?
Why, subtract 158 from 584, of course.
So, 584 - 158 = 426. Hey!
Hence one pair is 158 and 426.
Go get the other two pairs.
I see.
Okay. In 235, the last number, the 5, is the units digit.
(the first number, the 2, is the hundreds digit; the 3 is the tens digit)
(Hundreds digit, tens digit, units digit -----for a number with 3 digits. May I assume your son understands a digit?)
So "sum of the units digits" of "235 +349" is 5+9. Which is 14. Then the units digit of this 14 is 4.
thank you.
we got this maths from a book for year 4 students. I find it's too advanced for an 8 yo.
I see again. I see he is not yet a Year-4 student.
Pushing the son too early?
Is he a gifted one, though?
Advice, if your son's got a "normal" Math-brain, and he is just in 2nd Grade yet, please avoid 4th-Grade Math for now for the kid. Drill him on his current Grade level only.
If he is gifted, hey, persevere on those 4th-Grade Math exercises!
he is going into year 4 from year 3 last year.
he is just a normal brain!
I don't think anyone has said this; forgive me if they have.
Isn't it a case of just being able to put the numbers in order? Put them in order, and then take the biggest and the smallest and pair them up. Then the next biggest and the next smallest...
There is no need to do any adding or even know what the totals are.
I don't think anyone has said this; forgive me if they have.
Isn't it a case of just being able to put the numbers in order? Put them in order, and then take the biggest and the smallest and pair them up. Then the next biggest and the next smallest...
There is no need to do any adding or even know what the totals are.
Umm, just asking, if you don't do any adding, how will you know the sums are the same?
Well they just have to be. If you're told that the numbers will pair up, then the smallest one has to go with the largest and so on. If the largest one went with anything other than the smallest,
then there wouldn't be a number to go with the smallest one to make it add up.
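Both observations from the thread — each pair must sum to the total divided by the number of pairs, and after sorting, the smallest must go with the largest — combine into a short sketch (the function name is mine, and it assumes an equal-sum pairing actually exists):

```python
def equal_sum_pairs(nums):
    # If the numbers split into equal-sum pairs at all, each pair must sum
    # to total / (number of pairs), and the smallest must pair with the largest.
    s = sorted(nums)
    target = sum(nums) // (len(nums) // 2)
    pairs = [(s[i], s[-1 - i]) for i in range(len(s) // 2)]
    assert all(a + b == target for a, b in pairs), "no equal-sum pairing"
    return pairs

print(equal_sum_pairs([158, 235, 318, 426, 349, 266]))
# [(158, 426), (235, 349), (266, 318)] -- every pair sums to 1752 / 3 = 584
```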
| {"url":"http://mathhelpforum.com/algebra/9406-basic-maths-question.html","timestamp":"2014-04-17T08:28:02Z","content_type":null,"content_length":"75137","record_id":"<urn:uuid:c0131dd6-6878-4767-aaa2-b51ddf45abb4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
La Canada Flintridge Math Tutor
...I could teach writing, reading, and speaking Chinese to both beginners and advanced learners. From past teaching experience, I have accumulated many useful teaching materials such as hand-outs
and homework sheets for my students. I have been using MATLAB in college and graduate school.
25 Subjects: including calculus, trigonometry, statistics, discrete math
...Graduated with Cum Laude honors. - University of Southern California, Bachelors of Science in Business Administration with an emphasis in Entrepreneurial Studies & Finance, conferred Fall 2004.
Presidential Scholarship recipient and Dean's List standing. I'm well versed in the subject of stati...
3 Subjects: including statistics, SPSS, ecology
I have many years of experience in Electrical Engineering, with a Master of Science degree in Electrical Engineering from the University of Manitoba in Winnipeg, Manitoba, Canada. Math is my area of greatest expertise. I taught at the university as a teaching assistant and have tutored math for many students.
11 Subjects: including calculus, geometry, precalculus, reading
First time tutor looking to learn as much as you! I graduated from Presbyterian College three years ago and spent the interim time teaching kids and adults alike how to juggle and use their
physical skills to improve their lives. I recently moved from the fast pace of New York City to the quiet suburbs here in South Jersey - and I couldn't be happier.
8 Subjects: including algebra 1, vocabulary, grammar, prealgebra
...One of my jobs at JPL was to teach Rocket Scientists how to perform complex procedures and use specialty software to operate spacecraft. I was praised by my supervisor and science team leads
for my excellence in this. During employee turnover on long space missions, such as Voyager and Cassini,...
7 Subjects: including algebra 1, algebra 2, calculus, geometry | {"url":"http://www.purplemath.com/la_canada_flintridge_ca_math_tutors.php","timestamp":"2014-04-18T19:13:13Z","content_type":null,"content_length":"24227","record_id":"<urn:uuid:1cb50b77-74c2-4768-96d5-10283702dfc7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
Excel NPV Function
The Excel NPV Function
Basic Description
The Excel NPV function calculates the Net Present Value of an investment, based on a supplied discount rate, and a series of future payments and income.
The format of the function is:
NPV( rate, value1, [value2], [value3], ... )
Where the arguments are as follows :
rate - The discount rate over one period
value1, [value2], ... - Numeric values representing payments and income, where:
- negative values are treated as payments;
- positive values are treated as income.
Note that :
• If the values are supplied individually, numbers, blank cells, logical values and text representations of numbers are interpreted as numeric values; Other text values and error values are ignored
• If the values are supplied as an array, all non-numbers in the array are ignored
Also note that in Excel 2007 you can provide up to 254 payment and income values to the NPV function, but in Excel 2003 you can only provide up to 29 values.
NPV Function Example
The spreadsheet on the right shows an example of the NPV function. The data used is shown in cells A1 - A7 of the spreadsheet and the NPV function is shown in cell B10.
This function gives the result $2,678.68
Note that:
• In this example, the initial investment of $10,000 (shown in cell A2) is made at the start of the first period. Therefore, this value is not included in the arguments to the NPV function.
Instead it is added on afterwards.
• If the initial investment were instead made one year (one period) into the investment, it would be supplied as the first value1 argument to the NPV function.
More examples of the Excel NPV function can be found on the Microsoft Office website | {"url":"http://www.excelfunctions.net/Excel-Npv-Function.html","timestamp":"2014-04-20T09:14:36Z","content_type":null,"content_length":"14276","record_id":"<urn:uuid:4c139e4e-b2e5-451a-9e2d-3f811a179e0e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00150-ip-10-147-4-33.ec2.internal.warc.gz"} |
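Excel's convention — value1 is discounted one full period, so an outlay made "now" must be added on outside the function — can be reproduced in a few lines. The rate and cash flows below are made-up illustration figures, not the hidden A1:A7 data from the spreadsheet example:

```python
def excel_npv(rate, values):
    """Excel-style NPV: value i (1-based) is discounted i full periods,
    i.e. NPV = sum of v_i / (1 + rate)**i."""
    return sum(v / (1 + rate) ** i for i, v in enumerate(values, start=1))

# Made-up illustration figures -- NOT the data from the example above:
rate = 0.10
initial_investment = -10000         # paid "now", at the start of period 1
future_income = [3000, 4200, 6800]  # received at the ends of periods 1-3

# Because the outlay occurs at the start, it is added on afterwards, undiscounted:
npv = excel_npv(rate, future_income) + initial_investment
print(round(npv, 2))  # 1307.29 for these illustration figures
```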
Applications of Combinatorial Matrix Theory to Laplacian Matrices of Graphs
• Consolidates the important papers on Laplacian matrices into one succinct source
• Includes enhanced proofs to reach a wider audience
• Gives clear examples to illustrate each theorem and calculations
• Contains several "excursion sections" and "application sections" which show interesting applications of the material presented
On the surface, matrix theory and graph theory seem like very different branches of mathematics. However, adjacency, Laplacian, and incidence matrices are commonly used to represent graphs, and many
properties of matrices can give us useful information about the structure of graphs.
Applications of Combinatorial Matrix Theory to Laplacian Matrices of Graphs is a compilation of many of the exciting results concerning Laplacian matrices developed since the mid 1970s by well-known
mathematicians such as Fallat, Fiedler, Grone, Kirkland, Merris, Mohar, Neumann, Shader, Sunder, and more. The text is complemented by many examples and detailed calculations, and sections followed
by exercises to aid the reader in gaining a deeper understanding of the material. Although some exercises are routine, others require a more in-depth analysis of the theorems and ask the reader to
prove those that go beyond what was presented in the section.
Matrix-graph theory is a fascinating subject that ties together two seemingly unrelated branches of mathematics. Because it makes use of both the combinatorial properties and the numerical properties
of a matrix, this area of mathematics is fertile ground for research at the undergraduate, graduate, and professional levels. This book can serve as exploratory literature for the undergraduate
student who is just learning how to do mathematical research, a useful "start-up" book for the graduate student beginning research in matrix-graph theory, and a convenient reference for the more
experienced researcher.
Table of Contents
Matrix Theory Preliminaries
Vector Norms, Matrix Norms, and the Spectral Radius of a Matrix
Location of Eigenvalues
Perron-Frobenius Theory
Doubly Stochastic Matrices
Generalized Inverses
Graph Theory Preliminaries
Introduction to Graphs
Operations of Graphs and Special Classes of Graphs
Connectivity of Graphs
Degree Sequences and Maximal Graphs
Planar Graphs and Graphs of Higher Genus
Introduction to Laplacian Matrices
Matrix Representations of Graphs
The Matrix Tree Theorem
The Continuous Version of the Laplacian
Graph Representations and Energy
Laplacian Matrices and Networks
The Spectra of Laplacian Matrices
The Spectra of Laplacian Matrices Under Certain Graph Operations
Upper Bounds on the Set of Laplacian Eigenvalues
The Distribution of Eigenvalues Less than One and Greater than One
The Grone-Merris Conjecture
Maximal (Threshold) Graphs and Integer Spectra
Graphs with Distinct Integer Spectra
The Algebraic Connectivity
Introduction to the Algebraic Connectivity of Graphs
The Algebraic Connectivity as a Function of Edge Weight
The Algebraic Connectivity with Regard to Distances and Diameters
The Algebraic Connectivity in Terms of Edge Density and the Isoperimetric Number
The Algebraic Connectivity of Planar Graphs
The Algebraic Connectivity as a Function of Genus k where k is Greater than 1
The Fiedler Vector and Bottleneck Matrices for Trees
The Characteristic Valuation of Vertices
Bottleneck Matrices for Trees
Excursion: Nonisomorphic Branches in Type I Trees
Perturbation Results Applied to Extremizing the Algebraic Connectivity of Trees
Application: Joining Two Trees by an Edge of Infinite Weight
The Characteristic Elements of a Tree
The Spectral Radius of Submatrices of Laplacian Matrices for Trees
Bottleneck Matrices for Graphs
Constructing Bottleneck Matrices for Graphs
Perron Components of Graphs
Minimizing the Algebraic Connectivity of Graphs with Fixed Girth
Maximizing the Algebraic Connectivity of Unicyclic Graphs with Fixed Girth
Application: The Algebraic Connectivity and the Number of Cut Vertices
The Spectral Radius of Submatrices of Laplacian Matrices for Graphs
The Group Inverse of the Laplacian Matrix
Constructing the Group Inverse for a Laplacian Matrix of a Weighted Tree
The Zenger Function as a Lower Bound on the Algebraic Connectivity
The Case of the Zenger Equalling the Algebraic Connectivity in Trees
Application: The Second Derivative of the Algebraic Connectivity as a Function of Edge Weight
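As a tiny illustration of the "Matrix Representations of Graphs" material (my own example, not taken from the book): for the path on three vertices, the Laplacian factors as L = BB^T with B an oriented incidence matrix, and the algebraic connectivity is the second-smallest Laplacian eigenvalue.

```python
# Oriented incidence matrix B of the path 1-2-3: one column per edge,
# +1 at the tail and -1 at the head (the orientation chosen does not affect L).
B = [[ 1,  0],
     [-1,  1],
     [ 0, -1]]

def times_transpose(B):
    n, m = len(B), len(B[0])
    return [[sum(B[i][k] * B[j][k] for k in range(m)) for j in range(n)]
            for i in range(n)]

L = times_transpose(B)  # L = B * B^T, the Laplacian of the path
print(L)                # [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

# The spectrum of L for this path is {0, 1, 3}; verify each eigenpair directly.
for lam, v in [(0, [1, 1, 1]), (1, [1, 0, -1]), (3, [1, -2, 1])]:
    assert apply(L, v) == [lam * x for x in v]

# The algebraic connectivity (second-smallest eigenvalue) is therefore 1.
```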
Editorial Reviews
… this book works well as a reference textbook for undergraduates. Indeed, it is a distillation of a number of key results involving, specifically, the Laplacian matrix associated with a graph (which
is sometimes called the ‘nodal admittance matrix’ by electrical engineers). … Molitierno’s book represents a well-written source of background on this growing field. The sources are some of the
seminal ones in the field, and the book is accessible to undergraduates.
—John T. Saccoman, MAA Reviews, October 2012
The book owes its textbook appeal to detailed proofs, a large number of fully elaborated examples and observations, and a handful of exercises, making beginning graduate students as well as advanced
undergraduates its primary audience. Still, it can serve as useful reference book for experienced researchers as well.
—Zentralblatt MATH | {"url":"http://www.crcpress.com/product/isbn/9781439863374","timestamp":"2014-04-17T21:42:50Z","content_type":null,"content_length":"98388","record_id":"<urn:uuid:6f8f9c4c-0050-4320-8270-ef2ff31e8df2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
Deer Park, TX Algebra Tutor
Find a Deer Park, TX Algebra Tutor
...If you are requesting more advanced math or science tutoring (anatomy & physiology, physics, algebra, etc.), I would appreciate a heads up sometime before the lesson, so I can refresh my
memory. I work best with younger kids and I am very patient. I've been babysitting and tutoring for years and taught all four of my younger siblings how to read.
25 Subjects: including algebra 1, reading, English, physics
...I have experience teaching algebra and geometry at the high school level. I also have experience as a tutor with a large private learning center. I've helped countless students of all ages
with math of all levels.
34 Subjects: including algebra 1, algebra 2, English, chemistry
...I am an excellent communicator with a desire to help students achieve their educational goals. I prefer using internet instructional tools when available, but I spend most of the time guiding students through the process of identifying specifically what is being asked and what are the import...
Biochemistry can be a tough subject to understand at times; I like being able to help people get a better understanding of some of its concepts through direct tutoring. I have a Ph.D. in
Biochemistry from Duke University, with specific areas of expertise in primarily DNA replication, transcription,...
10 Subjects: including algebra 1, algebra 2, geometry, general computer
My name is Leonardo and I am a junior at the University of Houston. I am majoring in electrical engineering with a minor in math; I am also bilingual (Spanish). I was in the top ten percent of my high school class, and after several years in college I feel confident tutoring any kind of math, writi...
20 Subjects: including algebra 2, calculus, elementary (k-6th), vocabulary
Related Deer Park, TX Tutors
Deer Park, TX Accounting Tutors
Deer Park, TX ACT Tutors
Deer Park, TX Algebra Tutors
Deer Park, TX Algebra 2 Tutors
Deer Park, TX Calculus Tutors
Deer Park, TX Geometry Tutors
Deer Park, TX Math Tutors
Deer Park, TX Prealgebra Tutors
Deer Park, TX Precalculus Tutors
Deer Park, TX SAT Tutors
Deer Park, TX SAT Math Tutors
Deer Park, TX Science Tutors
Deer Park, TX Statistics Tutors
Deer Park, TX Trigonometry Tutors | {"url":"http://www.purplemath.com/Deer_Park_TX_Algebra_tutors.php","timestamp":"2014-04-21T05:21:15Z","content_type":null,"content_length":"24052","record_id":"<urn:uuid:1b94bf5a-725a-4db8-9949-a4d9e3b75924>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding a Rule to Fit Given Data
Date: 11/14/2005 at 22:26:36
From: Rhonda
Subject: solve for j (what math process will you use:add,subtract,etc
Given that 1J1 = 2, 3J5 = 34, 6J9 = 117, and 10J14 = 296, conjecture a
value for 3J8. Justify your answer.
For 1J1, the obvious answer is addition but that doesn't fit the rest
of the problem. I tried to multiply and divide and square it and cube
it and I cannot come up with the answer.
Date: 11/15/2005 at 09:42:44
From: Doctor Peterson
Subject: Re: solve for j (what math process will you use:add,subtract,etc
Hi, Rhonda.
Without knowing the context of the problem (what you have been
learning about, other similar problems you have seen), I'll guess that
J represents an unknown binary operation that involves combining its
operands in some simple way using the basic operations.
Rather than look at one case, I would look at how changing the
operands affects the result. I'll start by making a table:
  a |  b | aJb
  1 |  1 |   2
  3 |  5 |  34
  6 |  9 | 117
 10 | 14 | 296
It would be nice if one operand were the same between two cases, so we
could see how changing one variable affected the result. But I notice
that going from 1,1 to 3,5, we've multiplied the two variables by 3
and 5 respectively, and multiplied the result by 17, which is close to
3*5. So I suspect that the product ab is involved in the definition of
J. Let's add a column to the table for ab:
a | b | ab | aJb
1 | 1 | 1 | 2
3 | 5 | 15 | 34
6 | 9 | 54 | 117
10 |14 |140 | 296
Hmmm... aJb seems to be close to twice ab; let's look at the
difference between the two:
a | b | 2ab | aJb | aJb-2ab
1 | 1 | 2 | 2 | 0
3 | 5 | 30 | 34 | 4
6 | 9 |108 | 117 | 9
10 |14 |280 | 296 | 16
Interesting! Can you take it from here, and think of a way to
calculate aJb from a and b?
Notice my strategy: I am trying to approach the answer step by step,
by making a guess and comparing the results with the goal, then
adjusting my guess to take the error into account.
If you need more help, please write back and show me how far you got.
- Doctor Peterson, The Math Forum
Date: 11/15/2005 at 22:00:19
From: Rhonda
Subject: solve for j (what math process will you use:add,subtract,etc
Ok, I have a little more of an understanding. I see that in the table
a increases by aJb - 2ab (by adding the 1 and 3, then the 3 and
6, then the 6 and 10). I also notice that aJb - 2ab increases by the
next square number (25 would come next in the table). After this I
have tried to do 3ab, look for square roots (ab times 2 plus square
root of 4). I still cannot find a value for aJb and especially can't
see how to fit 3J8 into the table.
Thank you so much!
Date: 11/15/2005 at 22:38:04
From: Doctor Peterson
Subject: Re: solve for j (what math process will you use:add,subtract,etc
Hi, Rhonda.
I'm not sure what you have in mind when you talk about how something
"increases"; you should be thinking of each row by itself, not of
connections between rows, because the values they've chosen for a and
b are meant only to be sample values, not a specific sequence. I can
see how the numbers tempt one to think that way, but you'll want to
avoid the temptation! We have to be able to take ANY two numbers a
and b, and calculate something called aJb.
What I had in mind was that aJb - 2ab is a square--the square of what?
Can you see some way to get that number from the other numbers in the
same row? Once you do, you can say that the formula for aJb is 2ab
plus that new expression.
Once you have that formula, you can use it to find 3J8.
- Doctor Peterson, The Math Forum
Date: 11/15/2005 at 22:49:58
From: Rhonda
Subject: solve for j (what math process will you use:add,subtract,etc
You are fabulous! I think I get it now. aJb is: 2ab + the square of
(b-a)? So 3J8 would be 2*3*8 + (8-3)^2 = 48 + 25 = 73?
Wow! Thank you so much!
Date: 11/15/2005 at 23:35:52
From: Doctor Peterson
Subject: Re: solve for j (what math process will you use:add,subtract,etc
Hi, Rhonda.
That's what I got:
aJb = 2ab + (b-a)^2
But now we can simplify that algebraically; it turns out to be
surprisingly simple! I'm sure there are other ways we could have come
across it more directly.
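The conjectured rule can be checked mechanically against all four given pairs (an editorial Python sketch, not part of the original exchange; the function name j is ours):

```python
# Check the conjecture aJb = 2ab + (b - a)^2 against the four given pairs.
# Note that 2ab + (b - a)^2 expands to a^2 + b^2, the "surprisingly simple"
# form the answer alludes to.
def j(a, b):
    return 2 * a * b + (b - a) ** 2

data = {(1, 1): 2, (3, 5): 34, (6, 9): 117, (10, 14): 296}
for (a, b), expected in data.items():
    assert j(a, b) == expected == a ** 2 + b ** 2

print(j(3, 8))  # -> 73, the conjectured value of 3J8
```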
It's worth noting, also, that the problem was worded just right: all
we can really do is to make a conjecture, based on the data we have;
the simplicity of the answer convinces us that it is what a teacher
would have come up with, but there are many more complicated formulas
that would also give the same results. What we're doing here is more
like science than math: we look at some data, work out what formula
might lie behind it, and then design an experiment to see if it's
right. Our experiment is to submit the conjecture that 3J8 = 73, and
see whether we get a good grade.
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/69833.html","timestamp":"2014-04-17T06:54:26Z","content_type":null,"content_length":"10149","record_id":"<urn:uuid:55c39009-b077-40eb-8884-fe7616b339e3>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
5:3 fall sugar water mix
08-28-2012, 07:08 PM
Re: 5:3 fall sugar water mix
I give up. Even with 5:3 ratio mine is still crystallizing in my top feeder. I'm going with a 1:1 ratio, 4# sugar to 2 quarts (or more) of water, hoping to melt all the sugar crystals in the
feeder. I used my hive tool to break up the crystals into small pieces hoping the warm, thin sugar water would/will melt the crystals.
Why not just add a little more water instead of going back to 1:1?
08-28-2012, 07:44 PM
Re: 5:3 fall sugar water mix
Ok you math guys, I have about 7 gallons of 1:1 left over from the dearth we had mid-summer. How much sugar is needed to make it 5:3? About 1/2 pound of sugar per gallon of 1:1, close enough? I'm
remembering that 1 gallon of water plus 8 pounds of sugar makes 1-1/2 gallons of syrup. Also, I added Honey B Healthy when I made the 1:1; do you all think it will harm the syrup to heat it just
a touch to dissolve the extra sugar?
08-28-2012, 08:18 PM
Re: 5:3 fall sugar water mix
08-28-2012, 08:55 PM
Re: 5:3 fall sugar water mix
Ok you math guys, I have about 7 gallons of 1:1 left over from the dearth we had mid-summer. How much sugar is needed to make it 5:3? About 1/2 pound of sugar per gallon of 1:1, close enough? I'm
remembering that 1 gallon of water plus 8 pounds of sugar makes 1-1/2 gallons of syrup. Also, I added Honey B Healthy when I made the 1:1; do you all think it will harm the syrup to heat it just
a touch to dissolve the extra sugar?
This'd better not give me a headache! 8# sugar to 4 quarts of water is 1:1, so: you have 7 gallons made from a 1:1 mix (8# sugar to 4 quarts of water) that you claim produces 1-1/2 gallons of syrup.
Therefore, dividing 7 by 1-1/2 gives what you began with: 7/1 X 2/3 = 14/3 = almost 5 groups of 8# of sugar = 40# sugar and 20 quarts of water originally.
40# of sugar at a 5:3 ratio would require 12 quarts of water, not 20; however, you want to know how much more sugar than the 40 pounds would be needed to match the 20 quarts of water; you've
already used 40#. You would be exact by adding another 30# of sugar, making 70# of sugar to 21 quarts of water, or very, very close with a total of 69 pounds (another 29#) of sugar for the 20
quarts of water. So add 29-30 pounds of sugar, and you'll have your 5:3 ratio. You owe me a bottle of good gin.
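The batch arithmetic above can be sanity-checked in a few lines of Python (an editorial sketch; it assumes the thread's convention that 8 lb of sugar to 4 quarts of water, i.e. 2 lb per quart, counts as 1:1):

```python
from fractions import Fraction as F

# Thread convention (an assumption): "1:1" = 8 lb sugar per 4 qt water,
# i.e. 2 lb per quart, so an s:w mix calls for 2*s/w lb of sugar per quart.
def lb_per_qt(s, w):
    return F(2 * s, w)

# 7 gallons of syrup at 1-1/2 gallons per batch:
batches = F(7) / F(3, 2)
print(batches)  # 14/3, i.e. "almost 5 groups" as in the post

# Rounding to 5 batches gives the post's starting point:
sugar_lb, water_qt = 5 * 8, 5 * 4   # 40 lb sugar, 20 qt water

# The post's exact 5:3 figures check out:
assert F(70, 21) == lb_per_qt(5, 3)   # 70 lb sugar to 21 qt water
assert F(40, 12) == lb_per_qt(5, 3)   # 40 lb sugar pairs with 12 qt water
```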
08-28-2012, 08:58 PM
Ben Franklin
Re: 5:3 fall sugar water mix
This'd better not give me a headache! 8# sugar to 4 quarts of water is 1:1 so: You have 7 gallon made from a 1:1 mix (8# sugar to 4 quarts of water that you claim produces 1 1/2 gallons of syrup.
Therefore, 7 divided by 1 1/2 gives what you began with 7/1 X 2/3 = 14/3 = almost 5 groups of 8# of sugar = 40 # sugar and 20 quarts of water originally.
40# of sugar for a 5:3 ratio would require 12 quarts of water, not 20; however, you want to know how much more sugar than the 40 pounds would be needed to match the 20 quarts of water; you've
already used 40#. You would be exact by adding another 30# of sugar making 70# of sugar to 21 quarts of water or very, very close with a total of 69 pounds (another 29#) of sugar for the 20
gallons of water. So add 29-30 pounds of sugar, and you'll have your 5:3 ratio. You owe me a bottle of good gin.
WOW Is this a riddle contest?????
08-29-2012, 12:27 PM
Re: 5:3 fall sugar water mix
08-30-2012, 10:34 PM
Re: 5:3 fall sugar water mix
Took me a while to get my head around your formula. At first it didn't sound right, but you nailed it.
About that gin... not one to imbibe, but maybe we can work something out.
08-31-2012, 02:47 PM
Re: 5:3 fall sugar water mix | {"url":"http://www.beesource.com/forums/printthread.php?t=273464&pp=20&page=3","timestamp":"2014-04-20T19:44:34Z","content_type":null,"content_length":"15416","record_id":"<urn:uuid:f0da1c31-9ada-42fc-b388-8f69f09d9687>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. <philosophy, logic> A branch of philosophy and mathematics that deals with the formal principles, methods and criteria of validity of inference, reasoning and knowledge.
Logic is concerned with what is true and how we can know whether something is true. This involves the formalisation of logical arguments and proofs in terms of symbols representing propositions and
logical connectives. The meanings of these logical connectives are expressed by a set of rules which are assumed to be self-evident.
Boolean algebra deals with the basic operations of truth values: AND, OR, NOT and combinations thereof. Predicate logic extends this with existential and universal quantifiers and symbols standing
for predicates which may depend on variables. The rules of natural deduction describe how we may proceed from valid premises to valid conclusions, where the premises and conclusions are expressions
in predicate logic.
Symbolic logic uses a meta-language concerned with truth, which may or may not have a corresponding expression in the world of objects called existence. In symbolic logic, arguments and proofs are
made in terms of symbols representing propositions and logical connectives. The meanings of these begin with a set of rules or primitives which are assumed to be self-evident. Fortunately, even from
vague primitives, functions can be defined with precise meaning.
Boolean logic deals with the basic operations of truth values: AND, OR, NOT and combinations thereof. Predicate logic extends this with existential quantifiers and universal quantifiers which
introduce bound variables ranging over finite sets; the predicate itself takes on only the values true and false. Deduction describes how we may proceed from valid premises to valid conclusions,
where these are expressions in predicate logic.
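A small, editorial Python illustration of the connectives and finite-set quantifiers just described (not part of the original dictionary entry):

```python
# Boolean connectives on truth values.
p, q = True, False
conjunction = p and q        # AND -> False
disjunction = p or q         # OR  -> True
negation    = not p          # NOT -> False

# Quantifiers over a finite set, as in predicate logic: the predicate
# (here, "x is even") takes on only the values True and False.
S = {1, 2, 3, 4}
exists_even = any(x % 2 == 0 for x in S)   # existential quantifier -> True
forall_even = all(x % 2 == 0 for x in S)   # universal quantifier   -> False
```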
Carnap used the phrase "rational reconstruction" to describe the logical analysis of thought. Thus logic is less concerned with how thought does proceed, which is considered the realm of psychology,
and more with how it should proceed to discover truth. It is the touchstone of the results of thinking, but neither its regulator nor a motive for its practice.
See also Boolean logic, fuzzy logic, logic programming, first-order logic, logic bomb, combinatory logic, higher-order logic, intuitionistic logic, equational logic, modal logic, linear logic,
2. <electronics> Boolean logic circuits.
See also arithmetic and logic unit, asynchronous logic, TTL.
Last updated: 1995-03-17
Try this search on Wikipedia, OneLook, Google
Nearby terms: {log} « logarithmus dualis « LogC « logic » logical » logical address » Logical Block Addressing
Copyright Denis Howe 1985 | {"url":"http://foldoc.org/logic","timestamp":"2014-04-16T22:00:07Z","content_type":null,"content_length":"8404","record_id":"<urn:uuid:f59b0670-58c1-4ef2-bee3-dd05d9bcee7d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
La Canada Flintridge Math Tutor
...I could teach writing, reading, and speaking Chinese to both beginners and advanced learners. From past teaching experience, I have accumulated many useful teaching materials such as hand-outs
and homework sheets for my students. I have been using MATLAB in college and graduate school.
25 Subjects: including calculus, trigonometry, statistics, discrete math
...Graduated with Cum Laude honors. - University of Southern California, Bachelors of Science in Business Administration with an emphasis in Entrepreneurial Studies & Finance, conferred Fall 2004.
Presidential Scholarship recipient and Dean's List standing. I'm well versed in the subject of stati...
3 Subjects: including statistics, SPSS, ecology
I have many years of experience in Electrical Engineering, with a Master of Science degree in Electrical Engineering from the University of Manitoba, Winnipeg, Manitoba, Canada. Math is the area of my
greatest expertise. I have taught at the university as a teaching assistant and have tutored math to many students.
11 Subjects: including calculus, geometry, precalculus, reading
First time tutor looking to learn as much as you! I graduated from Presbyterian College three years ago and spent the interim time teaching kids and adults alike how to juggle and use their
physical skills to improve their lives. I recently moved from the fast pace of New York City to the quiet suburbs here in South Jersey - and I couldn't be happier.
8 Subjects: including algebra 1, vocabulary, grammar, prealgebra
...One of my jobs at JPL was to teach Rocket Scientists how to perform complex procedures and use specialty software to operate spacecraft. I was praised by my supervisor and science team leads
for my excellence in this. During employee turnover on long space missions, such as Voyager and Cassini,...
7 Subjects: including algebra 1, algebra 2, calculus, geometry | {"url":"http://www.purplemath.com/la_canada_flintridge_ca_math_tutors.php","timestamp":"2014-04-18T19:13:13Z","content_type":null,"content_length":"24227","record_id":"<urn:uuid:1cb50b77-74c2-4768-96d5-10283702dfc7>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
Comments on A Neighborhood of Infinity: Perturbation confusion confusion
(comment feed; blog author: Dan Piponi)

Barak A. Pearlmutter (2012-11-20): To be fair, that "impossible to compute derivative" proof relies on assumptions under which it is also impossible to compute, say, 1/x, or to test if x is positive. These are interesting results, but I don't think they actually imply that it is impossible to compute derivative functions automatically, in the sense we desire.

sigfpe (2011-07-26): @Anonymous I don't know a canonical source of the theorem, but here's one I found through web searching: http://eccc.hpi-web.de/resources/pdf/ica.pdf . This book goes into quite a bit of detail: http://www.amazon.com/Computable-Analysis-Introduction-Theoretical-Computer/dp/3540668179/ . One way to think of it is that differentiation is itself not uniformly continuous. You can make arbitrarily small perturbations to a function and wildly change its derivative.

Anonymous (2011-07-26): Dan, can you point me to a proof that there is no way to implement differentiation in a way that is also a function, or perhaps name the theorem in question? Thanks in advance!

sigfpe (2011-07-01): Barak, if by glib you mean that more could be said on the subject, then of course you are correct. There is no computable function for differentiation of type Num a => (a -> a) -> (a -> a), and this ultimately reflects the fact that classically, differentiation is not uniformly continuous, and constructively, there is no differentiation function at all. A mathematical proof that there is no way to implement differentiation so that it is also a function (in the mathematical sense) seems like a good justification for using lifting to me.

Barak A. Pearlmutter (2011-07-01): It seems a bit glib to say that the type of diff is "Num a => (D a -> D a) -> (a -> a)" and use that to justify the needed lifting etc. Because the type of diff is, mathematically speaking, "Num a => (a -> a) -> (a -> a)". The dual numbers are an implementation technique being inappropriately exposed in the signature of the API. This is the root cause of the modularity and nesting problems that make AD so brittle in most current systems.

sigfpe (2011-05-01): @Oleksandr You did a good job of convincing me :-) More generally, we can also build jet bundles. It's also fun to look at Lie algebras built this way from Lie groups: http://blog.sigfpe.com/2008/04/infinitesimal-rotations-and-lie.html

sigfpe (2011-05-01): @Anonymous No, it's not just about literals.

Oleksandr Manzyuk (2011-05-01): Dan, thank you for your reply. In fact, I knew the answer to my question; I only wanted to emphasize that the type system of Haskell does not rule out perturbation confusions -- you still need to carefully insert lifts in appropriate places. I must admit that I haven't written any substantial program in Haskell that uses AD, so it may be that in practice lifts don't present a real problem. I guess it is time to write such a program. By the way, I happen to be a PhD student of Barak, and one of the things I've been thinking about recently is the typology of AD. There are a few interesting ideas floating around. For example, dual numbers are really the tangent bundle over the real line. What if we try to extend the tangent bundle construction to other types? The tangent bundle is a functor T from the category of smooth manifolds to itself. Furthermore, from the point of view of synthetic differential geometry, it is a representable functor represented by the "set" D of nilsquare elements. In particular, it enjoys some nice properties. Obviously, it commutes with products, so one can define T (a, b) = (T a, T b) for types. More interestingly, it follows from the representability that T (a -> b) is isomorphic to a -> T b. Furthermore, the representability of T implies that it admits the structure of a monad, of which lift is the unit. This structure can also be derived directly, without appealing to synthetic differential geometry. I can go on. Ultimately I would like to have these ideas embodied in the form of a type system that would allow us to reason about AD. This dream is still under construction...

Edward Kmett (2011-05-01): CC Shan gave a pretty good overview of the issue and presented (even if he did not invent) the approach used to resolve it that I currently use, which is to use universal quantification to keep you from mixing up your infinitesimals, here: http://conway.rutgers.edu/~ccshan/wiki/blog/posts/Differentiation/ . In http://hackage.haskell.org/package/ad, I make that quantified infinitesimal do double duty as the differentiation mode. (That said, with hindsight, I would like to have figured out a way to quantify over the mode separately, to enable fewer things to have to dispatch with the dictionary.)

Anonymous (2011-04-30): Is the problem Haskell's overloading of numeric literals? I.e., would explicitly annotating them always fix things?

sigfpe (2011-04-30): @Oleksandr Just inserting lifts anywhere to make the expression typecheck isn't good practice. It's easy to write code that typechecks but isn't correct. The easiest rule I can suggest for now is that you need to place it around any real-valued variable involved in the computation that the lambda captures from its environment. In this case, 'lift x'. It'd be nice if the type system could spot when we need this, but I don't know if it can. I've never had any difficulty applying this rule, but now I'm concerned that there may be situations where it's hard to apply it. I can't think of [...]

Oleksandr Manzyuk (2011-04-30): I get a slightly different error message for example1:

  *Main> d (\x -> x*(d (\y -> x+y) 1)) 1

  <interactive>:1:12:
      Occurs check: cannot construct the infinite type: a = D a
      Expected type: D a
      Inferred type: a
      In the second argument of `(*)', namely `(d (\ y -> x + y) 1)'
      In the expression: x * (d (\ y -> x + y) 1)

And now something curious happens: based solely on this error message, my initial, arguably naive guess would be to insert lift around the second argument of (*), like this:

  *Main> d (\x -> x * lift (d (\y -> x+y) 1)) 1
  2

This even typechecks, but produces the wrong answer. How do you know where to insert lift? The type system does not help you here.

sigfpe (2011-04-30): Chris, re: "the Haskell code really does give an incorrect answer." That's no incorrect answer. You've asked to evaluate the derivative at a weird value, not the integer 1. But I agree that it could be misread as 1, especially by someone not familiar with the Num typeclass. So you've convinced me that there are indeed dangers here. I've modified my position with an update.

sigfpe (2011-04-30): If you write some Haskell code that requires lifts (that includes code like that using monad transformers or using fmaps with functors) but deliberately leave out the lifts and then at the end hunt around at random to find places where you can insert lifts to keep the type checker happy, then you can expect buggy code. S&P exhibit the function called constant_one with type signature Num a => D a -> a. The type signature should make you suspicious. In fact constant_one (D 10 1), say, is not 1. You don't fix the code by inserting a lift "at the point of type violation". I'm not sure what to call that kind of debugging. When I debug code I think about the semantics of what I'm doing. Note that you can write an analogous version of should_be_one for the symbolic differentiator too.

Chris Smith (2011-04-30): Anonymous, I wrote what I think is a very accessible introduction some 4 or so years ago: http://cdsmith.wordpress.com/2007/11/29/some-playing-with-derivatives/

Chris Smith (2011-04-30): Here's an example (by Barak Pearlmutter, which he provided me some time ago) where the Haskell code really does give an incorrect answer:

  d (\x -> (d (x*) 2)) 1

This gives an incorrect answer when you keep the rank 1 type assigned by type inference to d. However, if you write the rank 2 type that d should logically have:

  dd :: Num t => (forall a. Num a => a -> a) -> t -> t

then it fails with an error as well.

Anonymous (2011-04-30): Where can I find a nice accessible introduction to AD? It intrigues me.

blog (http://blog.plover.com/) (2011-04-30): I was also puzzled by the problem claimed in that paper. But then at the end of the paper S&P presented Haskell code that they claimed would demonstrate the problem. I didn't try out this code. Does their presented code actually demonstrate a problem? Or does it fail with a type error such as the ones you showed? Or something else? | {"url":"http://blog.sigfpe.com/feeds/393797559188947149/comments/default","timestamp":"2014-04-18T13:07:57Z","content_type":null,"content_length":"38349","record_id":"<urn:uuid:43088715-945c-44c7-acda-a93f3280adf2>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
CUTWP - Global Buckling Calculator
CUTWP by Andrew Sarawit*
Global Buckling Analysis of Thin-walled Members
The global buckling analysis (flexural-torsional, lateral-torsional, etc) of thin-walled members can be an involved task. CUTWP provides the thin-walled cross-section properties necessary in such
analysis and provides classic stability solutions as well. After all, Pcr2 = ((Pe1+Pe3)-sqrt((Pe1+Pe3)^2-4*B*Pe1*Pe3))/(2*B); is not a calculation you want to do by hand every day. The December 2006
release of CUTWP provides corrections to b1 and b2, which are used in lateral-torsional buckling calculations, and CUTWP has been modified to read CUFSM input files.
Global buckling, such as flexural-torsional buckling in columns or lateral-torsional buckling in beams can sometimes be an involved calculation for the engineer. First, the section properties can be
onerous to calculate, and second, the elastic stability itself requires the solution of a cubic equation. For thin-walled members the preceding is all the more true, as the unsymmetric nature of many
sections requires that the more complicated global buckling modes such as flexural-torsional buckling always be considered. Andrew Sarawit wrote a small piece of code along with an interface for
making these section property calculations and global buckling calculations, and this code is made available in open source form for your use here. The program CUTWP was written in Matlab, and
includes a full graphical interface.
What does the CUTWP interface look like?
CUTWP uses a single page interface, and reports the section properties as well as the buckling mode shapes for the three global modes of the cross-section. A screen shot of CUTWP in action is given
below. CUTWP has been modified to read CUFSM input files.
What is the difference between CUTWP and CUFSM?
CUTWP only calculates global buckling properties. The classical formulas that you find in Timoshenko's Theory of Elastic Stability (or more correctly the work of Pekoz from the late 1960's) are used
for generating the solution. CUFSM can calculate local, and distortional buckling modes in addition to global buckling. CUFSM is closer to a general purpose finite element type of software, as
opposed to CUTWP which is closer to analytical solutions, i.e., calculate section properties and then solve the standard beam theory differential equations. As of version 3.0 of CUFSM, the reported
section properties in CUFSM use the same base code as CUTWP.
Can CUTWP and CUFSM work together?
CUTWP was modified in November of 2005 to be able to read CUFSM files as input. This allows a CUFSM user to perform traditional global buckling solutions without recourse to hand formulas. In
particular, approximate solutions when Kx, Ky, and Kt are not equal can be handled readily in CUTWP -- and the formula involved are the same as those typically used in civil engineering design
specifications. Further integration of the two programs is being considered.
What's underneath the hood / how does CUTWP work?
Some incomplete snippets of code are provided to give you a sense of the nature of the CUTWP calculation:
% compute the flexural buckling and torsional buckling
Pe1 = pi^2*E*I1/KL1^2;
Pe2 = pi^2*E*I2/KL2^2;
Pe3 = (G*J+pi^2*E*Cw/KL3^2)/rob^2;
Pcr1 = Pe2;
B = 1-(a1/rob)^2;
Pcr2 = ((Pe1+Pe3)-sqrt((Pe1+Pe3)^2-4*B*Pe1*Pe3))/(2*B);
Pcr3 = ((Pe1+Pe3)+sqrt((Pe1+Pe3)^2-4*B*Pe1*Pe3))/(2*B);
"..." indicates intermediate code left out - the code for a traditional flexural-torsional buckling calculation is provided above, just to indicate to the user what manner of calculations are
performed in CUTWP. The mode shapes are also generated and provided.
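To make the snippet concrete, here is a hedged Python transcription of the same closed-form solution. The section properties below are invented for illustration only (they are not CUTWP output or any real section); the variable names mirror the Matlab snippet above.

```python
import math

# Invented example inputs (NOT from CUTWP); consistent kip/inch units assumed.
E, G = 29500.0, 11300.0    # elastic and shear moduli, ksi
I1, I2 = 10.0, 2.0         # principal moments of inertia, in^4
J, Cw = 0.005, 15.0        # torsion constant (in^4) and warping constant (in^6)
KL1 = KL2 = KL3 = 100.0    # effective lengths, in
rob = 3.0                  # polar radius of gyration about the shear center, in
a1 = 1.5                   # shear-center offset along principal axis 1, in

# Euler flexural buckling loads about each principal axis, plus torsional buckling
Pe1 = math.pi ** 2 * E * I1 / KL1 ** 2
Pe2 = math.pi ** 2 * E * I2 / KL2 ** 2
Pe3 = (G * J + math.pi ** 2 * E * Cw / KL3 ** 2) / rob ** 2

# Flexural-torsional interaction: Pcr2 and Pcr3 are the two roots of
# B*P^2 - (Pe1 + Pe3)*P + Pe1*Pe3 = 0
B = 1 - (a1 / rob) ** 2
disc = math.sqrt((Pe1 + Pe3) ** 2 - 4 * B * Pe1 * Pe3)
Pcr2 = ((Pe1 + Pe3) - disc) / (2 * B)
Pcr3 = ((Pe1 + Pe3) + disc) / (2 * B)

# Governing elastic critical load: the smaller of the pure flexural mode about
# axis 2 and the flexural-torsional mode
Pcr = min(Pe2, Pcr2)
```

Note that Pcr2 always falls below both Pe1 and Pe3 when B < 1, which is why the flexural-torsional mode can govern even when each pure mode looks safe in isolation.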
Matlab open source version of CUTWP (requires the user have Matlab)
The matlab files necessary for running CUTWP are provided here.
download matlab
Installation instructions are as follows. Click on download above. Unzip the files to a directory of your choosing. In Matlab change your working directory to the same directory as where you unzipped
the files. At the command line, type "cutwp". The program will initiate. (download November 2005 version)
Standalone version of CUTWP (runs on Windows machines)
The libraries and executable files necessary for running CUTWP are provided here.
download standalone for PC
Installation instructions are as follows. Click on download above. Save the exe file and double-click on cutwp_pkg.exe; this will expand all the files and install a Matlab runtime engine. After that,
run (double-click) cutwp.exe and the program will initiate.
How do I reference CUTWP? CUTWP is open source?
CUTWP is open source, Academic Free License v 1.2. Please provide a reference to the author (Andrew Sarawit) and note the version you are using. For example: Sarawit, A. (2006). "CUTWP Thin-walled
section properties" December 2006 update <www.ce.jhu.edu/bschafer/cutwp> and add the date you referenced this web page
*Andrew Sarawit, Ph.D. is the developer of CUTWP. He is currently employed at Simpson, Gumpertz & Heger Inc. and may be reached at atsarawit@sgh.com. Ben Schafer performed the modification to CUTWP
to allow it to read CUFSM input files and made corrections to b1 and b2 in December 2006. Also, Ben Schafer maintains this web site, page, and all the commentary above. Send comments, questions, etc.
to schafer@jhu.edu. | {"url":"http://www.ce.jhu.edu/bschafer/cutwp/index.htm","timestamp":"2014-04-20T21:04:34Z","content_type":null,"content_length":"16064","record_id":"<urn:uuid:91a03027-8336-47a7-a62d-af3c3f4ebfc2>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Washington Statistics Tutor
Find a Washington Statistics Tutor
...These days, I have a full-time job where I send upwards of 75-100 emails per day using Microsoft Outlook. I also manage the calendars for three very busy attorneys. With a Bachelor of Arts in
Political Science and Japanese, I feel I have a solid grasp of social studies, both from a social science perspective, but also a comparative cultural focus.
33 Subjects: including statistics, reading, English, writing
...I have been tutoring independently and through universities for 10 years, and I am experienced in math, writing, Political Science, English, Spanish, French, and test preparation. I am
certified to teach all types of writing, and I have professionally taught college courses in Political Science and Statistics. I have worked with all levels of students from elementary to college.
46 Subjects: including statistics, English, Spanish, algebra 1
...While there I volunteered with Helenski Espana, a human rights group. We taught lessons to school-aged children (in Spanish) on human rights and their basic rights as a citizen of Spain. I
spent my last year of undergraduate as an ESL (English as a Second Language) tutor for a pregnancy center that targeted Spanish-speaking women.
17 Subjects: including statistics, Spanish, writing, physics
...While my particular expertise is AP and elementary statistics, I have previously tutored students from algebra I part I through BC calculus/calc 2. My most successful students have improved
their overall grade in their class ~50% (i.e., their grade is 50 points higher than it was previously over...
21 Subjects: including statistics, Spanish, physics, reading
...I can effectively instruct all the math-related aspects of the GMAT. I am a former high school math teacher with well over 10 years of full time teaching & tutoring experience. I can also
assist with some chemistry and physics.
28 Subjects: including statistics, chemistry, calculus, physics | {"url":"http://www.purplemath.com/washington_navy_yard_dc_statistics_tutors.php","timestamp":"2014-04-18T05:56:27Z","content_type":null,"content_length":"24210","record_id":"<urn:uuid:79033308-524d-4b5b-97cf-717f3e4fbbd2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
IACR News
12:17 [Pub][ePrint] Practical Secure Logging: Seekable Sequential Key Generators, by Giorgia Azzurra Marson and Bertram Poettering
In computer forensics, log files are indispensable resources that
support auditors in identifying and understanding system threats and security breaches. If such logs are recorded locally, i.e., stored on the monitored machine itself, the problem of log
authentication arises: if a system intrusion takes place, the intruder might be able to manipulate the log entries and cover her traces. Mechanisms that cryptographically protect collected log
messages from manipulation should ideally have two properties: they should be *forward-secure* (the adversary gets no advantage from learning current keys when aiming at forging past log entries),
and they should be *seekable* (the auditor can verify the integrity of log entries in any order or access pattern, at virtually no computational cost).
We propose a new cryptographic primitive, a *seekable sequential key generator* (SSKG), that combines these two properties and has direct application in secure logging. We rigorously formalize the
required security properties and give a provably-secure construction based on the integer factorization problem. We further optimize the scheme in various ways, preparing it for real-world
deployment. As a byproduct, we develop the notion of a *shortcut one-way permutation* (SCP), which might be of independent interest.
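For intuition only (this is not the paper's construction): the simplest forward-secure sequential key generator is a hash chain, where each epoch's key is derived one-way from the previous one. It is forward-secure but *not* seekable - reaching epoch i from the seed costs i hash evaluations, which is exactly the cost the SSKG's factorization-based construction avoids. A minimal Python sketch, with made-up "evolve"/"mac" labels for domain separation:

```python
import hashlib

def evolve(key: bytes) -> bytes:
    """One-way step to the next epoch's chaining key.
    The caller must erase the old key for forward security to hold."""
    return hashlib.sha256(b"evolve" + key).digest()

def log_key(key: bytes) -> bytes:
    """Per-epoch MAC key, domain-separated from the chaining key."""
    return hashlib.sha256(b"mac" + key).digest()

seed = b"\x00" * 32          # placeholder seed
k, keys = seed, []
for _ in range(5):
    keys.append(log_key(k))  # authenticates this epoch's log entries
    k = evolve(k)            # old k is discarded; past epochs stay safe
```

An intruder who captures the current chaining key learns nothing about earlier epochs' MAC keys, but an auditor verifying entry 1000 must iterate evolve 1000 times from the seed; closing that seekability gap at essentially no cost is the paper's contribution.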
Our work is highly relevant in practice. Indeed, our SSKG implementation has become part of the logging service of the systemd system manager, a core component of many modern commercial Linux-based
operating systems. | {"url":"https://www.iacr.org/news/index.php?p=detail&id=2539","timestamp":"2014-04-18T05:49:34Z","content_type":null,"content_length":"22985","record_id":"<urn:uuid:1023ae02-5399-4601-bbe4-59b8fd1c6393>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topic: DYNO TESTING (Read 2281 times)
I have BIG cement power poles near me on a back road "closed pro course"
If you see this big white race striped Rhinoceros comin at ya, ya might want to scooch over
Here's a nice write-up:
Dyno Correction Factor and Relative Horsepower
So what's all this correction factor stuff anyway??
The horsepower and torque available from a normally aspirated internal combustion engine are dependent upon the density of the air... higher density means more oxygen molecules and more power...
lower density means less oxygen and less power.
The relative horsepower, and the dyno correction factor, allow mathematical calculation of the effects of air density on the wide-open-throttle horsepower and torque. The dyno correction factor is
simply the mathematical reciprocal of the relative horsepower value.
Originally, all of the major US auto manufacturers were in or around Detroit, Michigan, and the dyno readings taken in Detroit were considered to be the standard. However, as the auto industry spread
both across the country and around the globe, the auto manufacturers needed a way to correlate the horsepower/torque data taken at those "non-standard" locations with the data taken at the "standard"
location. Therefore, the SAE created J1349 in order to convert (or "correct") the dyno data taken, for example, in California or in Tokyo to be comparable to data taken at standard conditions in Detroit.
What's it good for?
One common use of the dyno correction factor is to standardize the horsepower and torque readings, so that the effects of the ambient temperature and pressure are removed from the readings. By using
the dyno correction factor, power and torque readings can be directly compared to the readings taken on some other day, or even taken at some other altitude.
That is, the corrected readings are the same as the result that you would get by taking the car (or engine) to a certain temperature controlled, humidity controlled, pressure controlled dyno shop
where they measure "standard" power, based on the carefully controlled temperature, humidity and pressure.
If you take your car to the dyno on a cold day at low altitude, it will make a lot of power. And if you take exactly the same car back to the same dyno on a hot day, it will make less power. But if
you take the exact same car to the "standard" dyno (where the temperature, humidity and pressure are all carefully controlled) on those different days, it will always make exactly the same power.
Sometimes you may want to know how much power you are really making on that specific day due to the temperature, humidity and pressure on that day; in that case, you should look at the uncorrected
power readings.
But when you want to see how much more power you have solely due to the new headers, or the new cam, then you will find that the corrected power is more useful, since it removes the effects of the
temperature, humidity and atmospheric pressure and just shows you how much more (or less) power you have than in your previous tests.
There is no "right" answer... it's simply a matter of how you want to use the information.
If you want to know whether you are going to burn up the tranny with too much power on a cool, humid day, then go to the dyno and look at uncorrected power to see exactly how much power you have
under these conditions.
But if you want to compare the effects due to modifications, or you want to compare several different cars at different times, then the corrected readings of the "standard" dyno will be more useful.
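The correction described above is a one-line formula. Here is a commonly quoted form of the SAE J1349 factor as a sketch, treating 990 mbar of dry-air pressure and 25 C as the standard condition; check your dyno's documentation, since variants of this formula exist:

```python
import math

def sae_correction(dry_pressure_mbar: float, temp_c: float) -> float:
    """SAE J1349-style atmospheric correction factor (commonly quoted form).
    Standard conditions (990 mbar dry air, 25 C) give exactly 1.0."""
    return 1.18 * (990.0 / dry_pressure_mbar) * math.sqrt((temp_c + 273.0) / 298.0) - 0.18

# cool, dense day: observed power gets corrected DOWN toward the standard day
print(round(sae_correction(1010.0, 10.0), 3))  # 0.947
# hot, thin day: observed power gets corrected UP
print(round(sae_correction(960.0, 35.0), 3))   # 1.057
```

Multiply the uncorrected dyno reading by this factor to get the standard-day number.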
Alright lunch is over back to Dyno 101.........
I was looking at dyno sheets on Raider performance.com. That number on the lower right of the sheets - I believe that is the efficiency factor given for loss. Is that correct? So I am getting
a .95 reading of my power, SAE uncorrected, tire probably slipping by the looks of the trq lines. I gotta figure out that bumpity line in the trq spin up.
eric ray
I have a 300mm fat tire kit, do you believe this will be a factor in my test???
I have a 300mm fat tire kit, do you believe this will be a factor in my test???
I would say it depends on the contact patch width.
Believe it or not, the contact patch will be the same if the rear tire pressure is the same. The weight of the bike is divided over the same number of square inches regardless of the tire size. The
patch may change shape, but area x pressure = force.
The diameter of the 300 is larger than a 240, so the torque arm will be a little longer, and that means a little less force will be seen by the dyno drum. That will drop your numbers a little bit, but
the shape is what you need to focus on and not the actual numbers.
Here is the free software, and here is a file to play with from my car.
The dyno guy should be able to email you the file. It's stored on his computer. I usually take a jump drive with me and ask for them all.
« Last Edit: Dec 04, 2010, 07:52:56 AM by prostkr »
Cool thanks...........
Belive it or not, the contact patch will be the same if the rear tire pressure is the same. The weight of the bike is divided over the same number of square inches regardless of the tire size.
The patch may change shape, but area x pressure = force.
The diameter of the 300 is larger than a 240 so the torque arm will be a little longer and that means a little less force will be seen by the Dino drum. that will drop your numbers a little bit,
but the shape is what you need to focus on and not the actual numbers.
Correct me if my theory is incorrect, but I would think a larger tire and rim results in greater rotational weight which will use up horsepower, resulting in lower numbers than the same bike with a
stock rear tire.
I kinda like the low number now. When I race someone next time and beat them, I can say I have only 88hp and I am hauling 1000lbs, go figure that one out.
I have seen a calc on tire size for Drag racers. I will look for it. But you are spinning a gyro effect, so it should roll even better once it gets rolling; it has more weight to slow down too.
Makes sense.
I've been riding a long time, and I wouldn't care about knowing the HP numbers of a bike that only delivers 100 horses; now if I had a bike that is around the 200HP mark, then I would be into it.
All I know is that the Raider is a beast in the low range & the mid range is good, and that's all I need to know.
It is a beast. I have been searching around YouTube and sites like HDforums; they like to say "these are SAE numbers, folks," but they fail to say SAE CORRECTED, which is like STD numbers. LOL... I would
have 100+hp and 120+trq too on SAE Corrected or STD.
I am gonna talk to my shop again in a few weeks, after Christmas, and plan a build for power with him.
If you see this big white race striped Rhinoceros comin at ya, ya might want to scooch over | {"url":"http://www.roadstarraider.com/index.php?topic=8878.30","timestamp":"2014-04-19T12:15:35Z","content_type":null,"content_length":"82552","record_id":"<urn:uuid:95283bfa-5988-4d1e-b02d-439a55ff4536>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Cohen was right
Ali Enayat ali.enayat at gmail.com
Tue Sep 13 14:00:24 EDT 2011
The following two examples justify Cohen's position challenged by
Monore Eskew's recent postings.
In particular, the first ones addresses Eskew's comment that he sees
no philosophical difference between "completed R" (set of real
numbers) and "completed \omega_1." (set of countable ordinal), while
the second one shows the fundamental difference between "completed R"
and "completed alephs of all orders".
Example 1:
Let N be a model of ZFC in which the continuum is aleph_2; Cohen
showed us how to build N assuming Con(ZF).
Let M be H(aleph_2) as computed within N, i.e., M is the collection of
sets that are *hereditarily* of cardinality at most aleph_1, as
viewed in N.
Then we have (1)-(3) below:
(1) All of the axioms of ZFC with the exception of the power set axiom
hold in M;
(2) The collection of real numbers DO NOT form a set in M;
(3) The collection of countable ordinals DO form a set in M (and they
are the last aleph in M).
So in M, "completed R" does not exist, but "completed omega_1" exists;
hence illustrating Cohen's claim.
Example 2:
Assuming Con(ZF + there exists an inaccessible cardinal), there is a
model N* of ZFC in which the continuum is a regular limit cardinal
(i.e., a weakly inaccessible cardinal). This is a consequence of
Solovay's classical modification of Cohen's argument in his "The
continuum can be anything it ought to be" paper, in which he
demonstrated that the continuum can be arranged to be any prescribed
aleph of uncountable cofinality in a cofinality-preserving generic
extension of the universe (Easton, in turn, generalized Solovay's
theorem, but that's a different story).
In such a model N*, if we define M* as H(continuum), then we have:
(1*) All of the axioms of ZFC with the exception of the power set
axiom hold in M*;
(2*) The collection of real numbers DO NOT form a set in M*;
(3*) There is no last aleph in M*.
Ali Enayat
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2011-September/015747.html","timestamp":"2014-04-18T06:16:01Z","content_type":null,"content_length":"4518","record_id":"<urn:uuid:3a6a41f8-8cb7-4f71-9a1c-2061839ea1fa>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kirt's Cogitations
These original Kirt's Cogitations™ may be reproduced (no more than 5, please) provided proper credit is given to me, Kirt Blattenberger.
Please click here to return to the Table of Contents.
Cog·i·ta·tion [koj-i-tey'-shun] – noun: Concerted thought or
reflection; meditation; contemplation.
Kirt [kert] – proper noun: RF Cafe webmaster.
Riddle Me This, Riddler
Riddle me this, Riddler: When is a search engine not a search engine? Ans: When it is a calculator. Batman might have asked just that question after learning of the amazing calculator and units conversion facility
that is built into the Google search engine. As an avid Google user, I have noticed occasionally that I would do a search for some numerical or units related topic and the result would include a
simple, unexpected calculation with an answer at the top. Since it happened again recently, I did a little investigation and discovered that indeed there is a very extensive calculator built into Google.
Open your favorite browser, go to google.com,
and type in "10 ohms * 5 milliamps" and watch the result: "(10 ohms) * 5 milliamperes = 0.05 volts" Neat, non? Now, type in "10 ohms * 5 milliamps in millivolts" for a result of "(10 ohms) * 5
milliamperes = 50 millivolts." Neat again. Now for an inane example of how it will present in any (valid) format. Do "10 ohms * 5 milliamps in milliohms picoamperes" to yield "(10 ohms) * 5
milliamperes = 5.0 × 10^13 milliohms picoamperes."
Of course, the calculator is not limited to electrical calculations. With built-in units like stones, cubits, grains, sidereal years, baker’s dozen, and scores, there is a good chance the Google
calculator will calculate and/or convert just about anything you need. Anyone who has taken a college physics course has been challenged to do the old "furlong per fortnight" conversion when solving
a speed/velocity problem. Your $100 HP or Casio calculator might not have the units built in, but let us give Google a try. Do "c in furlongs per fortnight," and voila, Google gives you, "the speed
of light = 1.8026175 × 10^12 furlongs per fortnight."
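That furlongs-per-fortnight figure is easy to verify by hand (1 furlong = 220 yards = 201.168 m, 1 fortnight = 14 days):

```python
c_m_per_s = 299_792_458        # speed of light, m/s
furlong_m = 201.168            # 1 furlong = 220 yards
fortnight_s = 14 * 24 * 3600   # 14 days in seconds

c_ffn = c_m_per_s / furlong_m * fortnight_s
print(f"{c_ffn:.7e}")          # 1.8026175e+12, matching Google's answer
```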
Did I mention the built-in physical constants? Yup, as in the last example, Google knows that "c" is for the speed of light. It knows that: "the speed of light = 299 792 458 m / s," when typing in
just the letter "c." Want Boltzmann’s constant? Type in "k" to get "Boltzmann constant = 1.3806503 × 10^-23 m^2 kg s^-2 K^-1." Need the elementary charge of an electron? Type "electron charge"
to get "elementary charge = 1.60217646 × 10^-19 coulombs." "eV" returns, "1 electron volt = 1.60217646 × 10^-19 joules." Want that answer in watt*seconds? No problem, just type "eV in watt
seconds" to get "1 electron volt = 1.60217646 × 10^-19 watt seconds." Of course, the units are equivalent (1 joule = 1 watt*sec) so the number is the same, but you get the picture. A couple more
to amaze you: "epsilon_0" returns "electric constant = 8.85418782 × 10^-12 m^-3 kg^-1 s^4 A^2." Type "G" for "gravitational constant = 6.67300 × 10^-11 m^3 kg^-1 s^-2." You gotta love it.
But wait, there's more. Google calculator can convert between numerical bases, too. Easy example: "0b100000 in octal" yields "0b100000 = 0o40." "0b100000 in hex" yields "0b100000 = 0x20." How about
this for you: "CLXII in decimal" converts from Roman numerals to decimal, "CLXII = 162." If you would like that answer in binary, then here it is, "CLXII = 0b10100010." By the way, it also does the
mundane calculations like trigonometry functions, factorials, roots and powers, logarithms, modulo, etc. Even complex math is no sweat "(1i + 1) * (2i + 3)" gets you "((1 * i) + 1) * ((2 * i) + 3) =
1 + 5 i."
So, the next time you need a quick, easy utility to perform a calculation and/or units conversion, just fire up Google. As with so many other realms, the engineers there have managed to seize an
opportunity and improve upon it. The Google calculator out-features the majority of the online and stand-alone versions out there. How much better is it? Maybe "1 googol = 1.0 × 10^100" times better?
installing Dura-Ace 7800 Bottom Bracket
07-31-08, 04:44 PM #1
Senior Member
Join Date
Sep 2006
Southern Ca
0 Post(s)
0 Thread(s)
installing Dura-Ace 7800 Bottom Bracket
Got a few questions:
1. Should I put Loctite, anti-seize, or grease on the two adapter threads?
2. Manual says I should tighten with my FC32 tool to 305-435 pounds. Do I need to get some sort of torque wrench adapter and somehow attach it to my FC32 tool? Or can I tighten without the aid of
a torque wrench?
3. Grease or anti-seize on the two crank arm bolts?
Last edited by OCRider2000; 07-31-08 at 04:51 PM.
1. I would think that the bearing cups would have some sort of locking compound preapplied. If yours do, I wouldn't apply anything else. If not, I'd use grease or antiseize unless they creaked,
and only then try Loctite.
2. With a wrench style tool like the FC32, the best you can do is guess at the torque by hand. Figure out the distance from the centerline of the bearing cup cutout to where you'll be applying
the force and then divide the torque spec by that number (the torque spec should be in in. lbs. so measure in inches). That will give you the force you need to apply at that distance to achieve
the proper torque.
Just FYI, assuming you gave the correct numbers, the torque will be 305-435 in. * lbs. (25-36 ft. lbs.). Torque is always expressed as a distance times a force (or vice versa). Think of what you
are doing when you apply a torque with a wrench and it'll make sense. Say you have an 8 inch distance from the centerline of the bearing cup to where you are applying the force. You'd need to
apply 38-54 lbs. of force to the wrench.
3. Either should be fine.
Cool, thanks.
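The arithmetic in that answer is easy to script. A tiny sketch using the 8-inch lever arm from the example:

```python
def wrench_force(torque_in_lbs: float, lever_in: float) -> float:
    """Force (lbs) to apply at lever_in inches from the cup centerline
    to produce the given torque (in inch-pounds)."""
    return torque_in_lbs / lever_in

lever = 8.0                        # inches from cup centerline to your hand
print(wrench_force(305.0, lever))  # 38.125 lbs -- low end of the spec
print(wrench_force(435.0, lever))  # 54.375 lbs -- high end
```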
07-31-08, 08:02 PM #2
Senior Member
Join Date
May 2004
Wilmington, DE
My Bikes
2003 Specialized Hardrock, 2004 LOOK KG386i, 2005 Iron Horse Warrior Expert, 2009 Pedal Force CX1
1 Post(s)
0 Thread(s)
08-01-08, 08:00 AM #3
[Numpy-discussion] Matlab page on scipy wiki
Sebastian Haase haase at msg.ucsf.edu
Thu Mar 2 20:18:03 CST 2006
I noted on
matlab a(1:5,:)
numpy a[0:4] or a[0:4,:]
the first five rows of a
I think this is wrong!
in numpy it would be: a[:5] (or a[0:5]) for the first five elements
To the best of my knowledge (I have never used Matlab myself!) this is
one of the biggest points of confusion for Matlab users !!
WHY DOES a[4:6] NOT INCLUDE THE ELEMENTS 4,5 *AND* 6 ???
The only explanation I have is that it's
a) like a C/C++ for-loop (for(i=4;i<6;i++) --> note the '<', not '<=')
b) it then also always is true that "last" minus "first" is equal to
the number of elements (example: 6-4 = 2)
c) it turns out from experience that this convention WILL save you lots
of '-1' and '+1' in your code (actually almost all of them)
[if I see a "+1" in a Matlab person's numpy code I can almost always
guess that he/she made a mistake!]
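NumPy inherits Python's own half-open slicing convention, so points a)-c) can be demonstrated with a plain list (numpy arrays behave identically):

```python
a = list(range(10))              # [0, 1, ..., 9]

# a[start:stop] includes start, EXCLUDES stop -- a half-open interval
assert a[0:5] == [0, 1, 2, 3, 4]     # the first five elements
assert a[4:6] == [4, 5]              # elements 4 and 5, NOT 6

# b) "last" minus "first" equals the number of elements
assert len(a[4:6]) == 6 - 4

# c) adjacent slices tile the sequence with no +1/-1 bookkeeping
assert a[:3] + a[3:7] + a[7:] == a
```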
Maybe this paragraph could be added to the wiki ...
Thanks for this wiki page - I think it looks great
- Sebastian Haase
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-March/006743.html","timestamp":"2014-04-20T05:44:14Z","content_type":null,"content_length":"3616","record_id":"<urn:uuid:68d1e390-d682-44dc-838a-bf052e2bfb67>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
Techniques for Factoring Large Polynomial
I guess the minus sign in the first fraction is a typo? For the factorization, you need a plus there.
I know it's possible to use long division because the polynomial is rational, but what if I've only been given the numerator? How can I test quickly that it can be decomposed?
Well, it has degree > 2, so it has to be composite - every real polynomial of degree greater than 2 is reducible over the reals.
To make numbers smaller, I would substitute 2t=s:
8(s^4 + 2s^3 + 6s^2 + 2s + 5).
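A quick numeric check of this quartic (a Python sketch; nothing here needs symbolic algebra) shows it does split over the reals:

```python
def p(s):
    """The quartic from the substitution above (the factor 8 dropped)."""
    return s**4 + 2*s**3 + 6*s**2 + 2*s + 5

# +i and -i are roots, so (s^2 + 1) divides p ...
assert abs(p(1j)) < 1e-9 and abs(p(-1j)) < 1e-9

# ... and dividing it out gives p = (s^2 + 1)(s^2 + 2s + 5):
for s in (0, 1, -2, 3.5):
    assert abs((s**2 + 1) * (s**2 + 2*s + 5) - p(s)) < 1e-9
```

The second factor s^2 + 2s + 5 = (s + 1)^2 + 4 has no real roots, so the real factorization stops there.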
You can test imaginary numbers together with real numbers; it will need more time, but it is possible.
You can even see ±i as a solution here. | {"url":"http://www.physicsforums.com/showthread.php?t=719555","timestamp":"2014-04-16T07:41:02Z","content_type":null,"content_length":"63631","record_id":"<urn:uuid:b017a7e8-7bc5-4e14-9c0a-dc85be4f9ef7>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
Passyunk, PA Geometry Tutor
Find a Passyunk, PA Geometry Tutor
...I possess clean FBI/criminal history and Child Abuse clearances. I am able to tutor at flexible times and locations. I am able to provide references, documentation, etc. upon request.
58 Subjects: including geometry, reading, GRE, biology
...When people think of chemistry, they get caught up in numbers and complex science, but when the relationships between materials are shown, it all fits together like a puzzle. When it "clicks,"
it's so incredibly useful in daily life. This the primary technique of how I get my students to realize the cool things about chemistry and how to remember them.
14 Subjects: including geometry, chemistry, precalculus, algebra 2
...My personal challenge for each lesson is making sure to close the students’ “concept gap”: often, students do not have trouble with the process of problem solving, but with understanding what the
problem is. Every session is a conversation about subject fundamentals, sample problems, and translating ...
9 Subjects: including geometry, calculus, physics, algebra 1
...For several years, I have been working with students on the verbal section of the MCAT. In lessons, we talk about how to approach the most challenging passages, how to better understand what
the questions are asking, how to feel confident about answer choices and eliminate any that are meant to ...
47 Subjects: including geometry, English, chemistry, reading
I am certified as a math teacher in Pennsylvania and spent ten years teaching math courses for grades 7-12 in the Philadelphia area. I enjoy tutoring students one-on-one, and watching them become
stronger math students. I like to help them build their confidence and problem solving ability as well as their skills.I taught Algebra to 8th and 9th grade students for over 5 years.
3 Subjects: including geometry, algebra 1, prealgebra
Related Passyunk, PA Tutors
Passyunk, PA Accounting Tutors
Passyunk, PA ACT Tutors
Passyunk, PA Algebra Tutors
Passyunk, PA Algebra 2 Tutors
Passyunk, PA Calculus Tutors
Passyunk, PA Geometry Tutors
Passyunk, PA Math Tutors
Passyunk, PA Prealgebra Tutors
Passyunk, PA Precalculus Tutors
Passyunk, PA SAT Tutors
Passyunk, PA SAT Math Tutors
Passyunk, PA Science Tutors
Passyunk, PA Statistics Tutors
Passyunk, PA Trigonometry Tutors
Nearby Cities With geometry Tutor
Almonesson geometry Tutors
Bala, PA geometry Tutors
Billingsport, NJ geometry Tutors
Carroll Park, PA geometry Tutors
Eastwick, PA geometry Tutors
Hilltop, NJ geometry Tutors
Lester, PA geometry Tutors
Middle City East, PA geometry Tutors
Middle City West, PA geometry Tutors
Oakview, PA geometry Tutors
Overbrook Hills, PA geometry Tutors
Penn Ctr, PA geometry Tutors
South Camden, NJ geometry Tutors
Verga, NJ geometry Tutors
West Collingswood Heights, NJ geometry Tutors | {"url":"http://www.purplemath.com/Passyunk_PA_Geometry_tutors.php","timestamp":"2014-04-16T08:03:16Z","content_type":null,"content_length":"24259","record_id":"<urn:uuid:a5d58f47-9e1c-43fb-8ea5-77fbcc34a770>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lévy-Flight Krill Herd Algorithm
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 682073, 14 pages
Research Article
Lévy-Flight Krill Herd Algorithm
^1Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, Jilin 130033, China
^2University of Chinese Academy of Sciences, Beijing 100039, China
^3Department of Civil Engineering, University of Akron, Akron, OH 443253905, USA
^4Department of Civil and Environmental Engineering, Engineering Building, Michigan State University, East Lansing, MI 48824, USA
^5School of Computer Science and Information Technology, Northeast Normal University, Changchun 130117, China
Received 3 November 2012; Accepted 20 December 2012
Academic Editor: Siamak Talatahari
Copyright © 2013 Gaige Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
To improve the performance of the krill herd (KH) algorithm, in this paper, a Lévy-flight krill herd (LKH) algorithm is proposed for solving optimization tasks within limited computing time. The
improvement includes the addition of a new local Lévy-flight (LLF) operator during the process when updating krill in order to improve its efficiency and reliability coping with global numerical
optimization problems. The LLF operator encourages the exploitation and makes the krill individuals search the space carefully at the end of the search. The elitism scheme is also applied to keep the
best krill during the process when updating the krill. Fourteen standard benchmark functions are used to verify the effects of these improvements and it is illustrated that, in most cases, the
performance of this novel metaheuristic LKH method is superior to, or at least highly competitive with, the standard KH and other population-based optimization methods. Especially, this new method
can accelerate the global convergence speed to the true global optimum while preserving the main feature of the basic KH.
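For readers unfamiliar with Lévy flights: the operator relies on heavy-tailed, Lévy-distributed step lengths, which are commonly simulated with Mantegna's algorithm. The sketch below is that standard formulation, not necessarily the exact operator used in this paper:

```python
import math
import random

def levy_step(beta: float = 1.5) -> float:
    """Draw one Levy-distributed step length via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)    # numerator ~ N(0, sigma_u^2)
    v = random.gauss(0.0, 1.0)        # denominator ~ N(0, 1)
    return u / abs(v) ** (1 / beta)   # mostly small steps, occasional long jumps

# A "local" Levy-flight move scales the step down, so most moves refine the
# current solution while rare long jumps help escape local optima.
position, scale = 0.0, 0.01
for _ in range(100):
    position += scale * levy_step()
```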
1. Introduction
In today's competitive world, human beings attempt to extract the maximum output or profit from a limited amount of usable resources. In the case of engineering optimization, such as
design optimization of tall steel buildings [1], optimum design of gravity retaining walls [2], water, geotechnical and transport engineering [3], and structural optimization and design [4, 5],
engineers would attempt to design structures that satisfy all design requirements with the minimum possible cost. Most real-world engineering optimization problems could be converted into general
global optimization problems. Therefore, the study of global optimization is of vital importance for the engineering optimization. In this issue, many biological intelligent techniques [6] as
optimization tools have been developed and applied to solve engineering optimization problems for engineers. A general classification way for these techniques is considering the nature of the
techniques, and optimization techniques can be classified into two main groups: classical methods and modern intelligent algorithms. Classical methods such as hill climbing follow deterministic moves and
will generate the same set of solutions if the iterations start with the same initial starting point. On the other hand, modern intelligent algorithms often generate different solutions even with the
same initial value. However, in general, the final solutions, though slightly different, converge to the same optimal values within a given accuracy. The emergence of metaheuristic optimization algorithms, drawing on artificial intelligence and mathematical theory, has opened a new avenue for function optimization. Recently, nature-inspired metaheuristic algorithms have proven powerful and efficient at solving modern nonlinear numerical global optimization problems. To some extent, all metaheuristic algorithms attempt to balance diversification/exploration/randomization (global search) against intensification/exploitation (local search) [7, 8].
Inspired by nature, these strong metaheuristic algorithms have been proposed to solve NP-hard tasks, such as UCAV path planning [9, 10], test-sheet composition [11], and parameter estimation [12].
These kinds of metaheuristic methods operate on a population of solutions and often find optimal or suboptimal solutions. During the 1960s and 1970s, computer scientists studied the possibility of
formulating evolution as an optimization method and eventually this generated a subset of gradient free methods, namely, genetic algorithms (GAs) [13, 14]. In the last two decades, a huge number of
techniques were developed on function optimization, such as bat algorithm (BA) [15, 16], differential evolution (DE) [17, 18], genetic programming (GP) [19], harmony search (HS) [20, 21], particle
swarm optimization (PSO) [22–24], cuckoo search (CS) [25, 26], and, more recently, the krill herd (KH) algorithm [27] that is based on imitating the krill herding behavior in nature.
First proposed by Gandomi and Alavi in 2012 and inspired by the herding behavior of krill individuals, the KH algorithm is a novel swarm intelligence method for optimizing possibly nondifferentiable and nonlinear complex functions in continuous space [27]. In KH, the time-dependent position of a krill individual is determined by three main components: (i) movement induced by other individuals, (ii) foraging motion, and (iii) random physical diffusion. One notable advantage of the KH algorithm is that derivative information is unnecessary, because it uses a random search instead of the gradient search used in classical methods. Moreover, compared with other population-based metaheuristic methods, KH needs few control variables, in principle only a single parameter Δt (the time interval) to tune, which makes it easy to implement, more robust, and well suited to parallel computation.
KH is effective and powerful at exploration, but at times it may become trapped in local optima and thus fail to carry out a global search well. Because the search in KH depends entirely on random moves, there is no guarantee of fast convergence. To improve KH on optimization problems, a method has been proposed [28] that introduces a more focused mutation strategy into KH to increase population diversity.
On the other hand, many researchers have concentrated on the theory and applications of statistical techniques, especially the Lévy distribution, and great advances have recently been made in many fields. One of these is the application of Lévy flights in optimization methods. Previously, Lévy flights have been combined with metaheuristic optimization methods such as the firefly algorithm [29], cuckoo search [30], the krill herd algorithm [31], and particle swarm optimization [32].
In this paper, an effective Lévy-flight KH (LKH) method is proposed in order to accelerate convergence, thus making the approach more feasible for a wider range of real-world engineering applications while keeping the desirable characteristics of the original KH. In LKH, first of all, the standard KH algorithm is applied to shrink the search space and select a good candidate solution set. Then, for more precise modeling of krill behavior, a local Lévy-flight (LLF) operator is added to the algorithm. This operator intensively exploits the limited promising area to obtain better solutions, so as to improve efficiency and reliability on global numerical optimization problems. The proposed method is evaluated on fourteen standard benchmark functions that have often been used to verify optimization methods on continuous problems. Experimental results show that LKH performs more efficiently and effectively than the basic KH, ABC, ACO, BA, CS, DE, ES, GA, HS, PBIL, and PSO.
The structure of this paper is organized as follows. Section 2 gives a description of basic KH algorithm and Lévy flight in brief. Our proposed LKH method is described in detail in Section 3.
Subsequently, our method is evaluated through fourteen benchmark functions in Section 4. In addition, the LKH is also compared with ABC, ACO, BA, CS, DE, ES, GA, HS, KH, PBIL, and PSO in that
section. Finally, Section 5 involves the conclusion and proposals for future work.
2. Preliminary
This section briefly provides background on the krill herd algorithm and Lévy flights.
2.1. Krill Herd Algorithm
Krill herd (KH) [27] is a recent metaheuristic optimization method [4] for solving optimization tasks, based on simulating the herding of krill swarms in response to particular biological and environmental processes. The time-dependent position of an individual krill in 2D space is determined by three main actions: (i) movement induced by other krill individuals, (ii) foraging action, and (iii) random diffusion.
The KH algorithm adopts the following Lagrangian model in a d-dimensional decision space:

dX_i/dt = N_i + F_i + D_i,  (1)

where N_i, F_i, and D_i are the motion induced by other krill individuals, the foraging motion, and the physical diffusion of the ith krill individual, respectively.

For the movement induced by other krill individuals, the direction of motion, alpha_i, is approximately evaluated from the target effect (target swarm density), a local effect (local swarm density), and a repulsive effect (repulsive swarm density). For a krill individual, this movement can be defined as

N_i^new = N^max alpha_i + omega_n N_i^old,  (2)

where N^max is the maximum induced speed, omega_n in [0, 1] is the inertia weight of the induced motion, and N_i^old is the last induced motion.

The foraging motion is estimated from two main components: the food location and the prior knowledge about the food location. For the ith krill individual, this motion can be approximately formulated as

F_i = V_f beta_i + omega_f F_i^old,  (3)

where beta_i = beta_i^food + beta_i^best, V_f is the foraging speed, omega_f in [0, 1] is the inertia weight of the foraging motion, and F_i^old is the last foraging motion.

The random diffusion of the krill individuals is essentially a random process. It can be described in terms of a maximum diffusion speed and a random directional vector:

D_i = D^max delta,  (4)

where D^max is the maximum diffusion speed and delta is a random directional vector whose entries are random values in [-1, 1].

Based on the three motions above, using different parameters of the motion over time, the position vector of a krill individual over the interval t to t + Δt is given by

X_i(t + Δt) = X_i(t) + Δt dX_i/dt.  (5)

It should be noted that Δt is one of the most important parameters and should be fine-tuned for the specific real-world engineering optimization problem, since it acts as a scale factor of the speed vector. More details about the three main motions and the KH algorithm can be found in [27].
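The three motions and the position update above can be sketched in Python. This is a minimal illustration only: the simplified induction terms, which steer each krill toward the current best individual instead of computing the full neighbour/food density terms of [27], are our assumption.

```python
import numpy as np

def kh_update(X, N_old, F_old, params, rng):
    """One hedged sketch of the KH position update X(t+dt) = X(t) + dt*(N + F + D).

    X: (NP, d) krill positions; N_old, F_old: previous induced/foraging motions.
    The alpha_i / beta_i terms are simplified here: each krill is pulled toward
    the krill stored in row 0 (assumed to be the best), not the full swarm-density
    terms of the original paper.
    """
    Nmax, Vf, Dmax, w_n, w_f, dt = (params[k] for k in
                                    ("Nmax", "Vf", "Dmax", "w_n", "w_f", "dt"))
    best = X[0]                               # assumption: row 0 holds the best krill
    direction = best - X                      # simplified alpha_i / beta_i direction
    N = Nmax * direction + w_n * N_old        # motion induced by other krill, Eq. (2)
    F = Vf * direction + w_f * F_old          # foraging motion, Eq. (3)
    D = Dmax * rng.uniform(-1, 1, X.shape)    # random physical diffusion, Eq. (4)
    return X + dt * (N + F + D), N, F         # position update, Eq. (5)
```

In a full implementation the inertia terms N and F returned here would be fed back in as N_old and F_old on the next iteration.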
2.2. Lévy Flights
Usually, animals hunt for food in a random or quasi-random manner; that is, they forage along a path from one location to another at random, and the direction chosen can be described by a mathematical model [33]. One remarkable model of this kind is the Lévy flight.

Lévy flights are a class of random walks in which the step lengths are distributed according to a Lévy distribution. More recently, Lévy flights have been applied to improve and optimize search. In the case of CS, the random-walk step of a cuckoo is determined by a Lévy flight [34]:

x_i^(t+1) = x_i^t + alpha ⊕ Levy(lambda),  (6)

where alpha > 0 is the step-size scaling factor, which should be related to the scale of the problem of interest. The random walk via Lévy flight is more efficient at exploring the search space because its step length is much longer in the long run. Some of the new solutions should be generated by a Lévy walk around the best solution obtained so far; this will speed up the local search.
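A common way to simulate such heavy-tailed steps in practice is Mantegna's algorithm. The sketch below is illustrative: the exponent beta = 1.5 is a typical default, not a value stated in this paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(d, beta=1.5, rng=None):
    """Draw one d-dimensional Levy-flight step via Mantegna's algorithm.

    beta is the tail exponent of the Levy-stable distribution (illustrative
    default 1.5); larger beta gives lighter tails and shorter jumps.
    """
    rng = rng or np.random.default_rng()
    # Mantegna's scaling for the numerator's standard deviation
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, d)
    v = rng.normal(0, 1, d)
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed step; occasional long jumps

# A CS-style random walk around the best solution, in the spirit of Eq. (6):
# x_new = x_best + alpha * levy_step(d)
```

Most steps are small (local search), but the heavy tail occasionally produces a long jump, which is exactly the exploration/exploitation mix the text describes.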
3. Our Approach: LKH
In general, the standard KH algorithm is adept at exploring the search space and locating the promising region of the global optimum, but it is relatively poor at exploiting solutions. In order to improve the exploitation of KH, a new Lévy-flight-based local search, called the local Lévy-flight (LLF) operator, is introduced to form a novel Lévy-flight krill herd (LKH) algorithm. In LKH, to begin with, the standard KH algorithm with its high convergence speed is used to shrink the search region to a more promising area. Then, the LLF operator with its good exploitation ability
is applied to exploit the limited area intensively to get better solutions. In this way, the strong exploration abilities of the original KH and the exploitation abilities of the LLF operator can be
fully extracted. The difference between LKH and KH is that the LLF operator is used to perform local search and fine-tune the original KH generating a new solution for each krill instead of random
walks originally used in KH. In fact, by the structure of LKH, the original KH focuses on exploration/diversification at the beginning of the search to avoid being trapped in local optima in a multimodal landscape, while the LLF operator later encourages exploitation/intensification and makes the krill individuals search the space carefully at the end of the search. Therefore, our proposed LKH method can fully exploit the merits of different search techniques, overcome the weak exploitation of KH, and resolve the conflict between exploration and exploitation effectively. The detailed explanation of our method is as follows.
To start with, standard KH algorithm utilizes three main actions to search the promising area in the solution space and use these actions to guide the generation of the candidate solutions for the
next generation. It has been demonstrated that [27] KH performs well in both convergence speed and final accuracy on unimodal problems and many simple multimodal problems. Therefore, in LKH, we
employ the merit of the fast convergence of KH to implement global search. In addition, KH is able to shrink the search region towards the promising area within a few generations. However, sometimes
KH’s performance on complex multimodal problems is unsatisfying; accordingly, another search technique with good exploitation ability is crucial to exploit the limited area carefully to get optimal solutions.
To improve the exploitation ability of the KH algorithm, genetic reproduction mechanisms have been incorporated into the standard KH algorithm. Gandomi and Alavi have proved that the KH II (KH with
crossover operator only) performs the best among serials of KH methods [27]. In our present work, we use a more focused local search technique, local Lévy-flight (LLF) operator, in the local search
part of the LKH algorithm, which can increase diversity of the population in an attempt to avoid premature convergence and exploit a small region in the later run phase to refine the final solutions.
The main step of LLF operator used in the LKH algorithm is presented in Algorithm 1.
Here, Maxgen is the maximum number of generations, d is the number of decision variables, NP is the size of the parent population, and A is the max Lévy-flight step size. When generating an offspring, one random integer between 1 and d is drawn from an exponential distribution (exprnd returns an array of random numbers chosen from the exponential distribution with a given mean parameter), and, similarly, another random integer between 1 and d is drawn from a uniform distribution; these select which variables of a solution are perturbed. Finally, rand is a random real number in the interval (0, 1) drawn from a uniform distribution.
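Based on the description above, the LLF operator might be sketched as follows. The exponential mean, the Cauchy draw standing in for the Lévy step, and the linear decay schedule are all illustrative assumptions, not the exact choices of Algorithm 1.

```python
import numpy as np

def llf_operator(x, A, gen, max_gen, rng):
    """Hedged sketch of the local Levy-flight (LLF) operator.

    A few randomly chosen variables of krill x are perturbed by heavy-tailed
    jumps whose magnitude shrinks as the run proceeds, so the operator refines
    solutions more and more locally toward the end of the search.
    A is the max Levy-flight step size; gen/max_gen drive the decay.
    """
    d = len(x)
    child = x.copy()
    # number of variables to perturb, drawn from an exponential distribution
    # (mean parameter here is an assumption)
    k = min(d, 1 + int(rng.exponential(scale=d * (1 - gen / max_gen))))
    idx = rng.integers(0, d, size=k)              # uniformly chosen variable indices
    step = A * (1 - gen / max_gen)                # step size decays over generations
    child[idx] += step * rng.standard_cauchy(k)   # heavy-tailed (Levy-like) jumps
    return child
```

A greedy selection between `child` and `x` would then decide whether the offspring replaces its parent, as is usual for local-search operators of this kind.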
In addition, another important improvement is the addition of elitism strategy into the LKH. Clearly, KH has some fundamental elitism. However, it can be further improved. As with other
population-based optimization algorithms, we combine some sort of elitism so as to store the optimal solutions in the population. Here, we use a more centralized elitism on the best solutions, which
can stop the best solutions from being ruined by three motions and LLF operator in LKH. In the main cycle of the LKH, to start with, the KEEP best solutions are retained in a variable KEEPKRILL.
Generally speaking, the KEEP worst solutions are replaced by the KEEP best solutions at the end of every iteration. This elitism strategy guarantees that the population never degrades to one with worse fitness than before. Note that we use an elitism strategy to preserve the krill with the best fitness during the LKH process, so even if the three motions and the LLF operator corrupt the corresponding krill, we have retained it and can restore it to its previous good status if needed.
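The elitism step described above can be sketched as follows (minimisation assumed; `keep_pop` and `keep_fit` are hypothetical names for the KEEP best krill and their fitness values saved at the start of the iteration):

```python
import numpy as np

def apply_elitism(pop, fitness, keep_pop, keep_fit):
    """Sketch of the elitism step: the KEEP best solutions saved before the
    three motions and the LLF operator overwrite the KEEP worst solutions
    at the end of the iteration, so the population never degrades."""
    worst = np.argsort(fitness)[-len(keep_fit):]  # indices of the KEEP worst krill
    pop[worst] = keep_pop
    fitness[worst] = keep_fit
    return pop, fitness
```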
Based on the above analyses, the main steps of Lévy-flight krill herd method can be simply presented in Algorithm 2.
4. Simulation Experiments
In this section, the performance of our proposed method LKH is tested to global numerical optimization through a series of experiments implemented in benchmark functions.
To allow an unprejudiced comparison of CPU time, all the experiments were carried out on a PC with a 2.0 GHz Pentium IV processor, 512 MB of RAM, and a 160 GB hard drive. Our implementation was written in MATLAB R2012b (8.0) running under Windows XP SP3. No commercial KH or other optimization tools were used in our simulation experiments.
Well-defined problem sets are useful for testing the performance of the optimization methods proposed in this paper. Benchmark functions based on numerical functions can serve as objective functions for such tests. In the present study, fourteen different benchmark functions are applied to test our proposed metaheuristic LKH method. The formulations of these benchmark functions are given in Table 1 and their properties are presented in Table 2. More details on all the benchmark functions can be found in [35, 36]. We must point out that, in [35], Yao et al. used 23 benchmarks to test optimization algorithms. However, on the remaining low-dimensional benchmark functions (such as those with dimension 4 or 6), all the methods perform almost identically [37], because those benchmarks are too simple to reveal the performance differences among methods. Therefore, in our present work, only fourteen high-dimensional complex benchmarks are applied to verify our proposed LKH algorithm.
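For concreteness, three of the benchmarks named later in this section (F01 Ackley, F07 Rastrigin, F13 Sphere) can be written as follows; these are the standard textbook definitions, which we assume match Table 1.

```python
import numpy as np

def ackley(x):
    """F01 Ackley (multimodal; global minimum 0 at the origin)."""
    d = len(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def rastrigin(x):
    """F07 Rastrigin (multimodal; global minimum 0 at the origin)."""
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def sphere(x):
    """F13 Sphere (unimodal; global minimum 0 at the origin)."""
    return np.sum(x**2)
```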
4.1. General Performance of LKH
In order to explore the merits of LKH, in this section, we compared its performance on global numeric optimization problems with eleven population-based optimization methods, which are ABC, ACO, BA,
CS, DE, ES, GA, HS, KH, PBIL, and PSO. ABC (artificial bee colony) [38] is an intelligent optimization algorithm based on the smart behavior of honey bee swarm. ACO (ant colony optimization) [39] is
a swarm intelligence algorithm for solving optimization problems which is based on the pheromone deposition of ants. BA (bat algorithm) [16] is a new powerful metaheuristic optimization method
inspired by the echolocation behavior of bats with varying pulse rates of emission and loudness. CS (cuckoo search) [40] is a metaheuristic optimization algorithm inspired by the obligate brood
parasitism of some cuckoo species by laying their eggs in the nests of other host birds. DE (differential evolution) [17] is a simple but excellent optimization method that uses the difference
between two solutions to probabilistically adapt a third solution. An ES (evolutionary strategy) [41] is an algorithm that generally distributes equal importance to mutation and recombination and
that allows two or more parents to reproduce an offspring. A GA (genetic algorithm) [13] is a search heuristic that mimics the process of natural evolution. HS (harmony search) [20] is a new
metaheuristic approach inspired by the improvisation process of musicians. PBIL (probability-based incremental learning) [42] is a type of genetic algorithm where the genotype of an entire
population (probability vector) is evolved rather than individual members. PSO (particle swarm optimization) [22] is also a swarm intelligence algorithm which is based on the swarm behavior of fish
and bird schooling in nature. In addition, it should be noted that, in [27], Gandomi and Alavi showed that, among all the compared algorithms, KH II (KH with the crossover operator) performed the best, which confirms the robustness of the KH algorithm. Therefore, in our work, we use KH II as the standard KH algorithm.
In our experiments, we will use the same parameters for KH and LKH that are the foraging speed , the maximum diffusion speed , the maximum induced speed , and max Lévy-flight step size (only for
LKH). For ACO, DE, ES, GA, PBIL, and PSO, we set the same parameters as [36, 43]. For ABC, the number of colony size (employed bees and onlooker bees) , the number of food sources , and maximum
search times (a food source which could not be improved through “limit” trials is abandoned by its employed bee). For BA, we set loudness , pulse rate , and scaling factor ; for CS, a discovery rate
. For HS, we set harmony memory accepting rate and pitch adjusting rate .
We set population size NP = 50 and maximum generation Maxgen = 50 for each method. We ran 100 Monte Carlo simulations of each method on each benchmark function to get representative performances.
Tables 3 and 4 illustrate the results of the simulations. Table 3 shows the average minima found by each method, averaged over 100 Monte Carlo runs. Table 4 shows the absolute best minima found by
each method over 100 Monte Carlo runs. That is to say, Table 3 shows the average performance of each method, while Table 4 shows the best performance of each method. The best value achieved for each
test problem is marked in bold. Note that the normalizations in the tables are based on different scales, so values cannot be compared between the two tables. Each of the functions in this study has 20 independent variables (i.e., d = 20).
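The evaluation protocol above (the average minimum over 100 Monte Carlo runs for Table 3, and the absolute best minimum for Table 4) can be sketched as follows; the `optimizer(objective, rng)` interface returning a scalar minimum is an assumption for illustration, not the paper's code.

```python
import numpy as np

def monte_carlo_stats(optimizer, objective, runs=100, seed=0):
    """Run an optimizer `runs` times on a benchmark and report the
    average minimum (Table 3 style) and the absolute best minimum
    (Table 4 style)."""
    rng = np.random.default_rng(seed)
    minima = np.array([optimizer(objective, rng) for _ in range(runs)])
    return minima.mean(), minima.min()
```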
From Table 3, we see that, on average, LKH is the most effective at finding objective function minimum on twelve of the fourteen benchmarks (F01–F08, F10, and F12–F14). ABC and GA are the second most
effective, performing the best on the benchmarks F11 and F09 when multiple runs are made, respectively. Table 4 shows that LKH performs the best on twelve of the fourteen benchmarks which are
F01–F04, F06–F08, and F10–F14. ACO and GA are the second most effective, performing the best on the benchmarks F05 and F09 when multiple runs are made, respectively.
Moreover, we also compared the computational requirements of the twelve optimization methods. We collected the average computational time of the optimization methods as applied to the 14 benchmarks considered in
this section. The results are given in Table 3. From Table 3, PBIL was the quickest optimization method, and LKH was the eleventh fastest of the twelve algorithms. This is because the evaluation
of step size by Lévy flight is too time consuming. However, we must point out that in the vast majority of real-world engineering applications, it is the fitness function evaluation that is by far
the most expensive part of a population-based optimization algorithm.
In addition, in order to further prove the superiority of the proposed LKH method, convergence plots of ABC, ACO, BA, CS, DE, ES, GA, HS, KH, LKH, PBIL, and PSO are illustrated in Figures 1–14 which
mean the process of optimization. The values shown in Figures 1–14 are the average objective function optimum obtained from 100 Monte Carlo simulations, which are the true objective function value,
not normalized. Most importantly, note that the best global solutions of the benchmarks (F04, F05, F11, and F14) are illustrated in the form of the semilogarithmic convergence plots. KH is short for
KH II in the legends of the figures.
Figure 1 shows the results obtained for the twelve methods when the F01 Ackley function is applied. From Figure 1, clearly, we can draw the conclusion that LKH is significantly superior to all the
other algorithms during the process of optimization. For other algorithms, although slower, KH II eventually finds the global minimum close to LKH, while ABC, ACO, BA, CS, DE, ES, GA, HS, PBIL, and
PSO fail to find the global minimum within the limited number of generations. Here, all the algorithms start from almost the same point; however, LKH outperforms them with a fast and stable convergence rate.
Figure 2 illustrates the optimization results for F02 Fletcher-Powell function. In this multimodal benchmark problem, it is clear that LKH outperforms all other methods during the whole progress of
optimization. The other algorithms do not manage to succeed on this benchmark function within the maximum number of generations. In the end, ABC and KH II converge to values that are significantly inferior to that of LKH.
Figure 3 shows the optimization results for the F03 Griewank function. From Figure 3, we can see that there is little difference between the performance of LKH and KH II. However, from Table 3 and Figure 3, we can conclude that LKH performs better than KH II on this multimodal function. Looking carefully at Figure 3, ACO converges quickly toward the known minimum at first, but as the procedure proceeds LKH gets closer and closer to the minimum, while ACO becomes premature and is trapped in a local minimum.
Figure 4 shows the results for F04 Penalty #1 function. From Figure 4, clearly, LKH outperforms all other methods during the whole progress of optimization in this multimodal function. Eventually, KH
II performs the second best at finding the global minimum. Although slower later, DE performs the third best at finding the global minimum.
Figure 5 shows the performance achieved for the F05 Penalty #2 function. For this multimodal function, similar to the F04 Penalty #1 function shown in Figure 4, LKH is significantly superior to all
the other algorithms during the process of optimization. Here, KH II shows a stable convergence rate in the whole optimization process and eventually it performs the second best at finding the global
minimum that is significantly superior to the other algorithms.
Figure 6 shows the results achieved for the twelve methods on the F06 Quartic (with noise) function. For this case, there is little difference among the performance of DE, GA, KH II, and LKH. From Table 3 and Figure 6, we can conclude that LKH performs the best on this multimodal function. KH II, DE, and GA also perform well, ranking 2, 3, and 4, respectively. Looking carefully at Figure 6, PSO converges quickly toward the known minimum at first; as the procedure proceeds, LKH gets closer and closer to the minimum, while PSO becomes premature and is trapped in a local minimum.
Figure 7 shows the optimization results for the F07 Rastrigin function. In this multimodal benchmark problem, it is obvious that LKH outperforms all other methods during the whole progress of
optimization. For the other algorithms, the figure shows that there is little difference between the performance of ABC and KH II. From Table 3 and Figure 7, we can conclude that KH II performs slightly
better than ABC in this multimodal function. In addition, other algorithms do not manage to succeed in this benchmark function within the maximum number of generations.
Figure 8 shows the results for F08 Rosenbrock function. From Figure 8, we can conclude that LKH performs the best in this unimodal function. In addition, KH II, DE, and ACO perform very well and have
ranks of 2, 3, and 4, respectively. Looking carefully at Figure 8, PSO converges quickly toward the known minimum at first; however, it is outperformed by LKH after 10 generations. For
other algorithms, they do not manage to succeed in this benchmark function within the maximum number of generations.
Figure 9 shows the equivalent results for the F09 Schwefel 2.26 function. From Figure 9, clearly, GA is significantly superior to other algorithms including LKH during the process of optimization,
while ACO and ABC perform the second and the third best in this multimodal benchmark function, respectively. Unfortunately, LKH only performs the fourth in this multimodal benchmark function.
Figure 10 shows the results for the F10 Schwefel 1.2 function. For this case, LKH, CS, KH II, and ACO perform the best, ranking 1, 2, 3, and 4, respectively. Looking carefully at Figure 10, LKH has the fastest and most stable convergence rate at finding the global minimum and significantly outperforms all other approaches.
Figure 11 shows the results for F11 Schwefel 2.22 function. From Figure 11, similar to the F09 Schwefel 2.26 function as shown in Figure 9, it is clear that ABC is significantly superior to other
algorithms including LKH during the process of optimization. For other algorithms, DE and KH II perform very well and have ranks of 2 and 3, respectively. Unfortunately, LKH only performs the tenth
best in this unimodal benchmark function among the twelve methods.
Figure 12 shows the results for F12 Schwefel 2.21 function. Very clearly, LKH has the fastest convergence rate at finding the global minimum and significantly outperforms all other methods. For other
algorithms, KH II and ACO that are only inferior to LKH perform very well and have ranks of 2 and 3, respectively.
Figure 13 shows the results for F13 Sphere function. From Figure 13, LKH shows the fastest convergence rate at finding the global minimum and significantly outperforms all other methods. In addition,
KH II, DE, and ACO perform very well and have ranks of 2, 3, and 4, respectively.
Figure 14 shows the results for F14 Step function. Clearly, LKH shows the fastest convergence rate at finding the global minimum and significantly outperforms all other approaches. Though slow, KH II
performs the second best at finding the global minimum that is only inferior to the LKH.
From the above analyses of Figures 1–14, we can conclude that our proposed hybrid metaheuristic LKH algorithm significantly outperforms the other eleven algorithms. In general, KH II
is only inferior to LKH and performs the second best among twelve methods. ABC, ACO, DE, and GA perform the third best only inferior to the LKH and KH II; ABC and GA especially perform better than
LKH on benchmark functions F11 and F09, respectively. Furthermore, the illustration of benchmarks F04, F05, F06, F08, and F10 shows that PSO has a faster convergence rate initially, while later, it
converges slower and slower to the true objective function value.
4.2. Discussion
For all of the standard benchmark functions considered in this section, the LKH method has been demonstrated to perform better than, or at least highly competitively with, the standard KH and eleven other acclaimed state-of-the-art population-based methods. The advantages of LKH include its simplicity, ease of implementation, and the few parameters it requires tuning. The work here shows the LKH to be robust, powerful, and effective over all types of benchmark functions.
Benchmark evaluation is a good way to test the performance of metaheuristic methods, but it is not flawless and has some limitations. First, we did not painstakingly tune the optimization methods in this section; in general, different parameter settings might lead to significant differences in their performance. Second, real-world optimization problems may bear little relationship to benchmark functions. Third, benchmark tests may reach entirely different conclusions if the grading criteria or problem setup changes. In our present work, we looked at the mean and best values obtained with a given population size and after a given number of iterations; we might reach different conclusions if, for example, we changed the population size, looked at how large a population is needed to reach a certain function value, or changed the number of iterations. Despite these caveats, the benchmark results presented here are promising for LKH and show that this novel method may be capable of finding a niche among the plethora of population-based optimization methods.
Note that running time is a bottleneck for the implementation of many population-based optimization algorithms. If an algorithm converges too slowly, it becomes impractical, since it would take too long to find an optimal or suboptimal solution. LKH does not seem to require an unreasonable amount of computational time; of the twelve optimization methods compared in this paper, LKH was the eleventh fastest. How to speed up LKH’s convergence is worthy of further study.
In our study, 14 benchmark functions have been applied to evaluate the performance of the LKH method; we will test the proposed method on more optimization problems, such as the high-dimensional (d ≥ 20) CEC 2010 test suite [44] and real-world engineering problems. Moreover, we will compare LKH with other optimization algorithms. In addition, we only consider unconstrained function optimization in this study. Our future work includes adding other techniques to LKH for constrained optimization problems, such as the constrained real-parameter optimization CEC 2010 test suite.
5. Conclusion and Future Work
Due to the limited performance of KH on complex problems, LLF operator has been introduced into the standard KH to develop a novel Lévy-flight krill herd (LKH) algorithm for optimization problems. In
LKH, at first, original KH algorithm is applied to shrink the search region to a more promising area. Thereafter, LLF operator is implemented as a critical complement to perform the local search to
exploit the limited area intensively to get better solutions. In principle, KH takes full advantage of the three motions in the population and has experimentally demonstrated very good performance on
the multimodal problems. In a rugged region of the fitness landscape, KH may fail to proceed to better solutions [27]. In that case, the LLF operator is adaptively launched to re-boost the search. LKH attempts to combine the merits of KH and Lévy flights in order to avoid all krill becoming trapped in inferior local optima. LKH enables the krill to learn from more diverse exemplars as they are updated each generation, and also to form new krill that search a larger space. With both techniques combined, LKH can balance exploration and exploitation and effectively
solve complex multimodal problems.
Furthermore, this new method can speed up the global convergence rate without losing the strong robustness of the basic KH. From the analysis of the experimental results, we can see that the Lévy-flight KH clearly improves the reliability of finding the global optimum and also enhances the quality of the solutions. Based on the results of the twelve methods on the test problems, we can conclude that LKH significantly improves the performance of KH on most multimodal and unimodal problems. In addition, LKH is simple and easy to implement.
In the field of numerical optimization, considerable issues deserve further study, and more efficient optimization methods should be developed based on the analysis of the specific engineering problem. Our future work will focus on two issues. On the one hand, we will apply the proposed LKH method to real-world civil engineering optimization problems [46], for which LKH promises to be well suited. On the other hand, we will develop new metaheuristic methods to solve optimization problems more efficiently and effectively.
Acknowledgments
This work was supported by the State Key Laboratory of Laser Interaction with Material Research Fund under Grant no. SKLLIM0902-01 and Key Research Technology of Electric-discharge Nonchain Pulsed DF Laser under Grant no. LXJJ-11-Q80.
References
1. S. Gholizadeh and F. Fattahi, “Design optimization of tall steel buildings by a modified particle swarm algorithm,” The Structural Design of Tall and Special Buildings. In press.
2. S. Talatahari, R. Sheikholeslami, M. Shadfaran, and M. Pourbaba, “Optimum design of gravity retaining walls using charged system search algorithm,” Mathematical Problems in Engineering, vol. 2012, Article ID 301628, 10 pages, 2012.
3. X. S. Yang, A. H. Gandomi, S. Talatahari, and A. H. Alavi, Metaheuristics in Water, Geotechnical and Transport Engineering, Elsevier, Waltham, Mass, USA, 2013.
4. A. H. Gandomi, X. S. Yang, S. Talatahari, and A. H. Alavi, Metaheuristic Applications in Structures and Infrastructures, Elsevier, Waltham, Mass, USA, 2013.
5. S. Gholizadeh and A. Barzegar, “Shape optimization of structures for frequency constraints by sequential harmony search algorithm,” Engineering Optimization. In press.
6. S. Chen, Y. Zheng, C. Cattani, and W. Wang, “Modeling of biological intelligence for SCM system optimization,” Computational and Mathematical Methods in Medicine, vol. 2012, Article ID 769702, 10 pages, 2012.
7. X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Frome, UK, 2nd edition, 2010.
8. X. S. Yang, Engineering Optimization: An Introduction with Metaheuristic Applications, Wiley & Sons, NJ, USA, 2010.
9. G. Wang, L. Guo, H. Duan, L. Liu, H. Wang, and M. Shao, “Path planning for uninhabited combat aerial vehicle using hybrid meta-heuristic DE/BBO algorithm,” Advanced Science, Engineering and Medicine, vol. 4, no. 6, pp. 550–564, 2012.
10. G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, “A bat algorithm with mutation for UCAV path planning,” The Scientific World Journal, vol. 2012, Article ID 418946, 15 pages, 2012.
11. H. Duan, W. Zhao, G. Wang, and X. Feng, “Test-sheet composition using analytic hierarchy process and hybrid metaheuristic algorithm TS/BBO,” Mathematical Problems in Engineering, vol. 2012, Article ID 712752, 22 pages, 2012.
12. W.-H. Ho and A. L.-F. Chan, “Hybrid Taguchi-differential evolution algorithm for parameter estimation of differential equation models with application to HIV dynamics,” Mathematical Problems in Engineering, vol. 2011, Article ID 514756, 14 pages, 2011.
13. D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, New York, NY, USA, 1998.
14. M. Shahsavar, A. A. Najafi, and S. T. A. Niaki, “Statistical design of genetic algorithms for combinatorial optimization problems,” Mathematical Problems in Engineering, vol. 2011, Article ID 872415, 17 pages, 2011.
15. G. Wang and L. Guo, “A novel hybrid bat algorithm with harmony search for global numerical optimization,” Journal of Applied Mathematics. In press.
16. X. S. Yang and A. H. Gandomi, “Bat algorithm: a novel approach for global engineering optimization,” Engineering Computations, vol. 29, no. 5, pp. 464–483, 2012.
17. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
18. A. H. Gandomi, X.-S. Yang, S. Talatahari, and S. Deb, “Coupled eagle strategy and differential evolution for unconstrained and constrained global optimization,” Computers & Mathematics with Applications, vol. 63, no. 1, pp. 191–200, 2012.
19. A. H. Gandomi and A. H. Alavi, “Multi-stage genetic programming: a new strategy to nonlinear system modeling,” Information Sciences, vol. 181, no. 23, pp. 5227–5239, 2011.
20. Z. W. Geem, J. H. Kim, and G. V. Loganathan, “A new heuristic optimization algorithm: harmony search,” Simulation, vol. 76, no. 2, pp. 60–68, 2001.
21. G. Wang and L. Guo, “Hybridizing harmony search with biogeography based optimization for global numerical optimization,” Journal of Computational and Theoretical Nanoscience. In press.
22. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
23. S. Talatahari, M. Kheirollahi, C. Farahmandpour, and A. H. Gandomi, “A multi-stage particle swarm for optimum design of truss structures,” Neural Computing & Applications. In press.
24. A. H. Gandomi, G. J. Yun, X.-S. Yang, and S. Talatahari, “Chaos-enhanced accelerated particle swarm optimization,” Communications in Nonlinear Science and Numerical Simulation, vol. 18, no. 2, pp. 327–340, 2013.
25. A. H. Gandomi, X.-S. Yang, and A. H. Alavi, “Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems,” Engineering with Computers, vol. 29, no. 1, pp. 1–19, 2013.
26. G. Wang, L. Guo, H. Duan, L. Liu, H. Wang, and W. Jianbo, “A hybrid meta-heuristic DE/CS algorithm for UCAV path planning,” Journal of Information and Computational Science, vol. 9, no. 16, pp. 1–8, 2012.
27. A. H. Gandomi and A. H. Alavi, “Krill Herd: a new bio-inspired optimization algorithm,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 12, pp. 4831–4845, 2012.
28. G. Wang, L. Guo, H. Duan, H. Wang, L. Liu, and J. Li, “Incorporating mutation scheme into krill herd algorithm for global numerical optimization,” Neural Computing and Applications. In press.
29. G. Wang, L. Guo, H. Duan, H. Wang, and L. Liu, “A new improved firefly algorithm for global numerical optimization,” Journal of Computational and Theoretical Nanoscience. In press.
30. G. Wang, L. Guo, H. Duan, H. Wang, L. Liu, and M. Shao, “A hybrid meta-heuristic DE/CS algorithm for UCAV three-dimension path planning,” The Scientific World Journal, vol. 2012, Article ID 583973, 11 pages, 2012.
31. G. Wang, L. Guo, A. H. Gandomi et al., “A new improved krill herd algorithm for global numerical optimization,” Neurocomputing. In press.
32. S. Yang and J. Lee, “Multi-basin particle swarm intelligence method for optimal calibration of parametric Lévy models,” Expert Systems with Applications, vol. 39, no. 1, pp. 482–493, 2012.
33. P. Barthelemy, J. Bertolotti, and D. S. Wiersma, “A Lévy flight for light,” Nature, vol. 453, no. 7194, pp. 495–498, 2008.
34. A. Natarajan, S. Subramanian, and K. Premalatha, “A comparative study of cuckoo search and bat algorithm for Bloom filter optimisation in spam filtering,” International Journal of Bio-Inspired Computation, vol. 4, no. 2, pp. 89–99, 2012.
35. X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
36. D. Simon, “Biogeography-based optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, pp. 702–713, 2008.
37. X. Li, J. Wang, J. Zhou, and M. Yin, “A perturb biogeography based optimization with mutation for global numerical optimization,” Applied Mathematics and Computation, vol. 218, no. 2, pp. 598–609, 2011.
38. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
39. M. Dorigo and T. Stutzle, Ant Colony Optimization, MIT Press, Cambridge, Mass, USA, 2004.
40. X. S. Yang and S. Deb, “Engineering optimisation by cuckoo search,” International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
41. H.-G. Beyer, The Theory of Evolution Strategies, Springer, Berlin, Germany, 2001.
42. B. Shumeet, “Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning,” Tech. Rep. CMU-CS-94-163, Carnegie Mellon University, Pittsburgh, Pa, USA, 1994.
43. G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, “Dynamic deployment of wireless sensor networks by biogeography based optimization algorithm,” Journal of Sensor and Actuator Networks, vol. 1, no. 2, pp. 86–96, 2012.
44. K. Tang, X. Li, P. N. Suganthan, Z. Yang, and T. Weise, “Benchmark functions for the CEC'2010 special session and competition on large scale global optimization,” Nature Inspired Computation and Applications Laboratory, USTC, Hefei, China, 2010.
45. R. Mallipeddi and P. Suganthan, “Problem definitions and evaluation criteria for the CEC 2010 Competition on Constrained Real-Parameter Optimization,” Nanyang Technological University, Singapore, 2010.
46. P. Lu, S. Chen, and Y. Zheng, “Artificial intelligence in civil engineering,” Mathematical Problems in Engineering, vol. 2012, Article ID 145974, 22 pages, 2012.