Math Help
October 30th 2009, 10:18 AM #1
Apr 2009
tan A
Let L1 and L2 be two lines in the plane, with equations y = m1x + c1 and y = m2x + c2 respectively. Suppose that they intersect at an acute angle A. Show that
tan A = (m1-m2)/(1+m1m2) <-- modulus of this
The lines and the modulus suggest that I should use the formula for cos A with the dot product, etc., but I am not sure what the dot product will be.
Any hints?
Hello Aquafina
This is a very similar question to this one. Use the same method as I have done there, and the result follows immediately. (Don't try to use dot product!)
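In outline, the slope-angle route being referred to (sketched here for convenience, not quoted from the linked thread): if $L_1$ and $L_2$ make angles $\theta_1$ and $\theta_2$ with the positive $x$-axis, then $m_1 = \tan\theta_1$ and $m_2 = \tan\theta_2$, and the acute angle between the lines satisfies $\tan A = \left|\tan(\theta_1 - \theta_2)\right| = \left|\frac{\tan\theta_1 - \tan\theta_2}{1 + \tan\theta_1\tan\theta_2}\right| = \left|\frac{m_1 - m_2}{1 + m_1 m_2}\right|$, the modulus being taken because $A$ is acute.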
Last edited by Grandad; October 31st 2009 at 01:40 AM. Reason: Fix typo
October 30th 2009, 01:04 PM #2
Show that if m and n are relatively prime and a and b are any integer,
February 7th 2013, 12:14 PM #1
Feb 2013
New York
Show that if m and n are relatively prime and a and b are any integer,
Show that if m and n are relatively prime and a and b are any integers,
then there is an integer x that simultaneously satisfies
the two congruences x ≡ a (mod m) and x ≡ b (mod n).
Re: Show that if m and n are relatively prime and a and b are any integer,
This is the Chinese remainder theorem.
Re: Show that if m and n are relatively prime and a and b are any integer,
thank you so much. my prof and the textbook didn't cover this topic
Re: Show that if m and n are relatively prime and a and b are any integer,
Here's a very simple constructive proof:
Form the sequence a+mi, 0 <= i < n. If two of these were the same mod n, then a+mi = a+mj (mod n), so m(i-j) = 0 (mod n). Since n and m are relatively prime, n divides i-j; and because 0 <= i, j < n, this forces i = j. So the sequence forms a complete set of remainders mod n; b mod n must be one of these. Actually, there's more: the x is unique mod mn, but I won't bother with a proof.
Since the above proof is constructive, for "small" m and n, you can rapidly find the x. Example: suppose a positive integer is 7 mod 25 and 1 mod 4. Its sequence is 7, 7+25 = 32, 7+2*25 = 57, and 57 is 1 mod 4. So the last two digits of the integer are 57.
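The same constructive search is easy to script; a short Python sketch (the function name and the final check are additions for illustration, not from the thread):

def crt_search(a, m, b, n):
    # Scan a + m*i for 0 <= i < n until the value is congruent to b mod n.
    # Assumes gcd(m, n) == 1, as in the problem statement.
    for i in range(n):
        x = a + m * i
        if x % n == b % n:
            return x % (m * n)
    raise ValueError("m and n must be relatively prime")

print(crt_search(7, 25, 1, 4))  # prints 57, matching the worked example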
February 7th 2013, 12:30 PM #2
MHF Contributor
Oct 2009
February 7th 2013, 12:49 PM #3
Feb 2013
New York
February 7th 2013, 06:52 PM #4
Super Member
Dec 2012
Athens, OH, USA
Searle's Bar for a good heat conductor
Thermal Conductivity of a good conductor.
Measure the thermal conductivity of Copper using the Searle's bar method.
This experiment uses steam heating. Be careful to avoid touching the hot surfaces of the steam generator, tubing and the Searle's bar apparatus. Make sure that the steam outlet tube from the
apparatus goes to a sink.
Constant-head apparatus, measuring cylinder, stop watch, Searle's apparatus, steam generator, four thermometers T[1], T[2], T[3], T[4], Vernier callipers.
T[1] and T[2] measure the temperature at points on the bar, T[3] and T[4] measure the temperature of water entering and leaving the spiral C.
1. Adjust the constant-head device to give a steady flow of water through the coiled tube.
2. Pass steam from the steam generator through the steam chest. Wait until the thermometers have reached a steady state (i.e. no significant increase or reduction of temperature for 10 minutes).
3. Measure T[1], T[2], T[3] and T[4].
4. Measure the rate of water flow through the spiral by measuring the amount of water (m) collected in the measuring cylinder in a given time (t). Collect approximately 1 litre.
5. Using Vernier callipers, measure the diameter of the bar D and the distance d between the thermometers T[1] and T[2].
Assuming no loss of heat along the bar, it can be shown that Q = k A (dT/dx) t, where:
Q is the heat supplied to the bar in time t,
A is the cross-sectional area of the bar,
dT is the difference in temperature between two points in the bar dx apart,
k is the coefficient of thermal conductivity of the bar.
The heat Q warms up a mass m (in kilograms) of water from temperature T[4] to T[3] according to the formula Q = m c (T[3] - T[4]), where c is the specific heat capacity of water (c = 4190 J kg^-1 K^-1).
Calculate k and the error in k - see below.
Quote your final result for the thermal conductivity as k ± Δk with appropriate units.
Error Calculation
1. There is an error in assuming that no heat is lost along the bar, but no correction has been made for this, although this will obviously affect the values of T[2] and T[1].
2. The absolute error in each of the temperature differences comes from the reading errors of the two thermometers involved.
3. Errors in m arise from errors in determining the mass of water collected.
4. Errors in the time t depend on the accuracy of the stop-watch.
5. Errors in measuring with the Vernier calliper are at least 0.05 mm, but may be bigger (estimate how precisely you can measure D and d).
6. The fractional error in k is given by: Δk/k = Δm/m + Δt/t + Δ(T[3]-T[4])/(T[3]-T[4]) + Δ(T[1]-T[2])/(T[1]-T[2]) + 2ΔD/D + Δd/d.
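As a rough illustration, the whole reduction can be scripted; every number below is an invented example reading, and the assumed reading errors are equally arbitrary.

import math

m, t = 1.00, 600.0            # kg of water collected in t seconds (example values)
T1, T2 = 70.0, 50.0           # bar temperatures in deg C (T[1] taken as the hotter point)
T3, T4 = 26.0, 20.0           # water out / water in, deg C
D, d = 0.025, 0.10            # bar diameter and T[1]-T[2] spacing, metres
c = 4190.0                    # J kg^-1 K^-1

A = math.pi * D**2 / 4        # cross-sectional area of the bar
Q = m * c * (T3 - T4)         # heat carried away by the water in time t
k = Q * d / (A * (T1 - T2) * t)   # thermal conductivity, W m^-1 K^-1

# crude fractional-error propagation with assumed reading errors
dm, dt, dT, dD, dd = 0.01, 1.0, 0.5, 5e-5, 5e-4
frac = dm/m + dt/t + 2*dT/(T3 - T4) + 2*dT/(T1 - T2) + 2*dD/D + dd/d
print(k, "+/-", k * frac, "W m^-1 K^-1")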
© Mark Davison, 1997.
A formalization of floating-point numeric base conversion
- Proceedings of PLDI ’90 , 1990
"... Converting decimal scientific notation into binary floating point is nontrivial, but this conversion can be performed with the best possible accuracy without sacrificing efficiency. 1. ..."
Cited by 25 (0 self)
Converting decimal scientific notation into binary floating point is nontrivial, but this conversion can be performed with the best possible accuracy without sacrificing efficiency. 1.
- NUMERICAL ANALYSIS MANUSCRIPT 90-10, AT&T BELL LABORATORIES , 1990
"... This note discusses the main issues in performing correctly rounded decimal-to-binary and binary-to-decimal conversions. It reviews recent work by Clinger and by Steele and White on these
conversions and describes some efficiency enhancements. Computational experience with several kinds of arithmeti ..."
Cited by 22 (3 self)
This note discusses the main issues in performing correctly rounded decimal-to-binary and binary-to-decimal conversions. It reviews recent work by Clinger and by Steele and White on these conversions
and describes some efficiency enhancements. Computational experience with several kinds of arithmetic suggests that the average computational cost for correct rounding can be small for typical
conversions. Source for conversion routines that support this claim is available from netlib.
- IEEE Transactions on Computers , 1973
"... For scientific computations on a digital computer the set of real numbers is usually approximated by a finite set F of “floating-point ” numbers. We compare the numerical accuracy possible with
different choices of F having approximately the same range and requiring the same word length. In particul ..."
Cited by 8 (3 self)
For scientific computations on a digital computer the set of real numbers is usually approximated by a finite set F of “floating-point ” numbers. We compare the numerical accuracy possible with
different choices of F having approximately the same range and requiring the same word length. In particular, we compare different choices of base (or radix) in the usual floating-point systems. The
emphasis is on the choice of F, not on the details of the number representation or the arithmetic, but both rounded and truncated arithmetic are considered. Theoretical results are given, and some
simulations of typical floating point-computations (forming sums, solving systems of linear equations, finding eigenvalues) are described. If the leading fraction bit of a normalized base 2 number is
not stored explicitly (saving a bit), and the criterion is to minimise the mean square roundoff error, then base 2 is best. If unnormalized numbers are allowed, so the first bit must be stored
explicitly, then base 4 (or sometimes base 8) is the best of the usual systems. Index Terms: Base, floating-point arithmetic, radix, representation error, rms error, rounding error, simulation.
, 1998
"... 1 2. INTRODUCTION 1 2.1. Portability and Purity 2 2.2. Goals of Borneo 3 2.3. Brief Description of an IEEE 754 Machine 3 2.4. Language Features for Floating Point Computation 6 3. FUTURE WORK 9
3.1. Incorporating Java 1.1 Features 9 3.2. Unicode Support 10 3.3. Flush to Zero 10 3.4. Variable Trappin ..."
Cited by 1 (0 self)
1 2. INTRODUCTION 1 2.1. Portability and Purity 2 2.2. Goals of Borneo 3 2.3. Brief Description of an IEEE 754 Machine 3 2.4. Language Features for Floating Point Computation 6 3. FUTURE WORK 9 3.1.
Incorporating Java 1.1 Features 9 3.2. Unicode Support 10 3.3. Flush to Zero 10 3.4. Variable Trapping Status 10 3.5. Parametric Polymorphism 10 4. CONCLUSION 10 5. ACKNOWLEDGMENTS 11 6. BORNEO
LANGUAGE SPECIFICATION 13 6.1. indigenous 13 6.2. Floating Point Literals 16 6.3. Float, Double, and Indigenous classes 17 6.4. New Numeric Types 18 6.5. Floating Point System Properties 20 + This
material is based upon work supported under a National Science Foundation Graduate Fellowship. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the
author(s) and do not necessarily reflect the views of the National Science Foundation. ii 6.6. Fused mac 21 6.7. Rounding Modes 21 6.8. Floating Point Exception Handling 31 6.9. Operator Overloading
51 6.10...
"... 1 Introduction A real number x is usually approximated in a digital computer by an element fl(x) of a finite set F of "floating-point " numbers. We regard the elements of F as exactly
representable real numbers, and take fl(x) as the floating-point number closest to x. The definition of "closest" ..."
1 Introduction A real number x is usually approximated in a digital computer by an element fl(x) of a finite set F of "floating-point " numbers. We regard the elements of F as exactly
representable real numbers, and take fl(x) as the floating-point number closest to x. The definition of "closest", rules for breaking ties, and the possibility of truncating instead of
rounding are discussed later. We restrict our attention to binary computers in which floating-point numbers are represented in a word (or multiple word) of fixed length w bits, using some convenient
(possibly redundant) code. Usually F is a set of numbers of the form
"... † Formerly known as Teak. In the future will be known as Kalimantan. ..."
Math Help
February 4th 2009, 09:47 AM #1
Feb 2009
1st one: differentiate y=(x^3/2)+(48/x)
2nd one: The fixed point A has coordinates (8, -6, 5) and the variable point P has coordinates (t, t, 2t).
a. Show that AP^2=6t^2-24t+125
b. Hence find the value of t for which the distance AP is least.
c. Determine this least distance.
thank u
$y = \frac{x^3}{2} + 48 x^{-1}$
$y' = \frac{3x^2}{2} - 48 x^{-2}$
For the second
$AP^2 = (t - 8)^2+(t+6)^2+(2t-5)^2$ expand this.
For the second part, take the derivative and set this to zero. I'm sure you can do the rest.
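For reference, carrying the second problem through (these closing steps are added here, not part of the original reply): expanding gives $AP^2 = 6t^2 - 24t + 125$, so $\frac{d}{dt}\left(AP^2\right) = 12t - 24 = 0$ at $t = 2$; the least value of $AP^2$ is $6(4) - 48 + 125 = 101$, and the least distance is $AP = \sqrt{101}$.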
February 4th 2009, 09:58 AM #2
February 4th 2009, 10:03 AM #3
Feb 2009
Strongly consistent code-based identification and order estimation for constrained finite-state model classes
Results 1 - 10 of 17
- IEEE Trans. Inform. Theory , 2002
"... Abstract—An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed
through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite- ..."
Cited by 170 (3 self)
Abstract—An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through
a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and
a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood
(ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes. These processes generalize HMPs. New algorithms were
developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related
topics are reviewed in this paper. Index Terms—Baum–Petrie algorithm, entropy ergodic theorems, finite-state channels, hidden Markov models, identifiability, Kalman filter, maximum-likelihood (ML)
estimation, order estimation, recursive parameter estimation, switching autoregressive processes, Ziv inequality. I.
"... The Bayesian Information Criterion (BIC) estimates the order of a Markov chain (with finite alphabet A) from observation of a sample path x_1, x_2, ..., x_n, as that value k = k* that minimizes the sum of the negative logarithm of the k-th order maximum likelihood and the penalty term |A|^k ..."
Cited by 55 (3 self)
The Bayesian Information Criterion (BIC) estimates the order of a Markov chain (with finite alphabet A) from observation of a sample path x_1, x_2, ..., x_n, as that value k = k* that minimizes the sum of the negative logarithm of the k-th order maximum likelihood and the penalty term |A|^k(|A|-1)/2 · log n. We show that k* equals the correct order of the chain, eventually almost surely as n → ∞, thereby strengthening earlier consistency results that assumed an a priori bound on the order. A key tool is a strong ratio-typicality result for Markov sample paths. We also show that the Bayesian estimator or minimum description length estimator, of which the BIC estimator is an approximation, fails to be consistent for the uniformly distributed i.i.d. process. AMS 1991 subject classification: Primary 62F12, 62M05; Secondary 62F13, 60J10. Key words and phrases: Bayesian Information Criterion, order estimation, ratio-typicality, Markov chains. 1 Supported in part by a joint N...
"... Abstract—In this work a method for statistical analysis of time series is proposed, which is used to obtain solutions to some classical problems of mathematical statistics under the only
assumption that the process generating the data is stationary ergodic. Namely, three problems are considered: goo ..."
Cited by 12 (12 self)
Abstract—In this work a method for statistical analysis of time series is proposed, which is used to obtain solutions to some classical problems of mathematical statistics under the only assumption
that the process generating the data is stationary ergodic. Namely, three problems are considered: goodness-of-fit (or identity) testing, process classification, and the change point problem. For
each of the problems a test is constructed that is asymptotically accurate for the case when the data is generated by stationary ergodic processes. The tests are based on empirical estimates of
distributional distance. Index Terms—Non-parametric hypothesis testing, stationary ergodic processes, goodness-of-fit test, process classification, change point problem. I.
- In Proceedings of Information Theory Workshop (2008), 1998
"... We propose a method for statistical analysis of time series, that allows us to obtain solutions to some classical problems of mathematical statistics under the only assumption that the process
generating the data is stationary ergodic. Namely, we consider three problems: goodness-of-fit (or identity ..."
Cited by 11 (11 self)
We propose a method for statistical analysis of time series, that allows us to obtain solutions to some classical problems of mathematical statistics under the only assumption that the process
generating the data is stationary ergodic. Namely, we consider three problems: goodness-of-fit (or identity) testing, process classification, and the change point problem. For each of the problems we
construct a test that is asymptotically accurate for the case when the data is generated by stationary ergodic processes. The tests are based on empirical estimates of distributional distance.
- In ITW : 291– 295
"... processes ..."
"... Abstract. This paper deals with order identification for Markov chains with Markov regime (MCMR) in the context of finite alphabets. We define the joint order of a MCMR process in terms of the
number k of states of the hidden Markov chain and the memory m of the conditional Markov chain. We study th ..."
Cited by 4 (3 self)
Abstract. This paper deals with order identification for Markov chains with Markov regime (MCMR) in the context of finite alphabets. We define the joint order of a MCMR process in terms of the number
k of states of the hidden Markov chain and the memory m of the conditional Markov chain. We study the properties of penalized maximum likelihood estimators for the unknown order (k, m) of an observed
MCMR process, relying on information theoretic arguments. The novelty of our work relies in the joint estimation of two structural parameters. Furthermore, the different models in competition are not
nested. In an asymptotic framework, we prove that a penalized maximum likelihood estimator is strongly consistent without prior bounds on k and m. We complement our theoretical work with a simulation
study of its behaviour. We also study numerically the behaviour of the BIC criterion. A theoretical proof of its consistency seems to us presently out of reach for MCMR, as such a result does not yet
exist in the simpler case where m = 0 (that is, for hidden Markov models). Résumé: This work deals with identifying the order of a Markov chain with Markov regime (MCMR) over a finite alphabet. The order of an MCMR is defined as the pair (k, m), where k is the number of states of the hidden chain and m is the memory of the conditional Markov chain. We study penalized maximum likelihood estimators using techniques drawn from
"... Abstract — We consider the discrete universal filtering problem, where the components of a discrete signal emitted by an unknown source and corrupted by a known DMC are to be causally estimated.
We derive a family of filters which we show to be universally asymptotically optimal in the sense of achi ..."
Cited by 3 (2 self)
Abstract — We consider the discrete universal filtering problem, where the components of a discrete signal emitted by an unknown source and corrupted by a known DMC are to be causally estimated. We
derive a family of filters which we show to be universally asymptotically optimal in the sense of achieving the optimum filtering performance when the clean signal is stationary, ergodic, and
satisfies an additional mild positivity condition. Our schemes are based on approximating the noisy signal by a hidden Markov process (HMP) via maximumlikelihood (ML) estimation, followed by use of
the well-known forward recursions for HMP state estimation. We show that as the data length increases, and as the number of states in the HMP approximation increases, our family of filters attain the
performance of the optimal distribution-dependent filter. I.
- IEEE Trans. Inform. Theory
"... We consider the problem of joint universal variable-rate lossy coding and identification for parametric classes of stationary β-mixing sources with general (Polish) alphabets. Compression
performance is measured in terms of Lagrangians, while identification performance is measured by the variational ..."
Cited by 1 (1 self)
We consider the problem of joint universal variable-rate lossy coding and identification for parametric classes of stationary β-mixing sources with general (Polish) alphabets. Compression performance
is measured in terms of Lagrangians, while identification performance is measured by the variational distance between the true source and the estimated source. Provided that the sources are mixing at
a sufficiently fast rate and satisfy certain smoothness and Vapnik–Chervonenkis learnability conditions, it is shown that, for bounded metric distortions, there exist universal schemes for joint
lossy compression and identification whose Lagrangian redundancies converge to zero as √(Vn log n/n) as the block length n tends to infinity, where Vn is the Vapnik–Chervonenkis dimension of a certain class of decision regions defined by the n-dimensional marginal distributions of the sources; furthermore, for each n, the decoder can identify the n-dimensional marginal of the active source up to a ball of radius O(√(Vn log n/n)) in variational distance, eventually with probability one. The results are supplemented by several examples of parametric sources satisfying the regularity conditions.
Keywords: Learning, minimum-distance density estimation, two-stage codes, universal vector quantization, Vapnik– Chervonenkis dimension. I.
, 908
"... We show that large-scale typicality of Markov sample paths implies that the likelihood ratio statistic satisfies a law of iterated logarithm uniformly to the same scale. As a consequence, the
penalized likelihood Markov order estimator is strongly consistent for penalties growing as slowly as log lo ..."
Cited by 1 (0 self)
We show that large-scale typicality of Markov sample paths implies that the likelihood ratio statistic satisfies a law of iterated logarithm uniformly to the same scale. As a consequence, the
penalized likelihood Markov order estimator is strongly consistent for penalties growing as slowly as log log n when an upper bound is imposed on the order which may grow as rapidly as log n. Our
method of proof, using techniques from empirical process theory, does not rely on the explicit expression for the maximum likelihood estimator in the Markov case and could therefore be applicable in
other settings.
"... Key words and phrases: Consistency; hidden Markov model; minimum-distance method; number of components. ..."
Key words and phrases: Consistency; hidden Markov model; minimum-distance method; number of components.
Linear Programming - Math Central
Natasha Glydon
Consider this scenario: your school is planning to make toques and mitts to sell at the winter festival as a fundraiser. The school’s sewing classes divide into two groups – one group can make
toques, the other group knows how to make mitts. The sewing teachers are also willing to help out. Considering the number of people available and time constraints due to classes, only 150 toques and
120 pairs of mitts can be made each week. Enough material is delivered to the school every Monday morning to make a total of 200 items per week. Because the material is being donated by community
members, each toque sold makes a profit of $2 and each pair of mitts sold makes a profit of $5.
In order to make the most money from the fundraiser, how many of each item should be made each week? It is important to understand that profit (the amount of money made from the fundraiser) is equal
to the revenue (the total amount of money made) minus the costs: Profit = Revenue - Cost. Because the students are donating their time and the community is donating the material, the cost of making
the toques and mitts is zero. So in this case, profit ≡ revenue.
If the quantity you want to optimize (here, profit) and the constraint conditions (more on them later) are linear, then the problem can be solved using a special organization called linear
programming. Linear programming enables industries and companies to find optimal solutions to economic decisions. Generally, this means maximizing profits and minimizing costs. Linear programming is
most commonly seen in operations research because it provides a “best” solution, while considering all the constraints of the situation. Constraints are limitations, and may suggest, for example, how
much of a certain item can be made or in how much time.
Creating equations, or inequalities, and graphing them can help solve simple linear programming problems, like the one above. We can assign variables to represent the information in the above problem:
x = the number of toques made weekly
y = the number of pairs of mitts made weekly
Then, we can write linear inequalities based on the constraints from the problem.
x ≤ 150 and y ≤ 120: the students can only make up to 150 toques and up to 120 pairs of mitts each week. This is one restriction.
x + y ≤ 200: the total number of mitts and toques made each week cannot exceed 200. This is the material restriction.
We may also want to consider that x ≥ 0 and y ≥ 0. This means that we cannot make -3 toques.
Our final equation comes from the goal of the problem. We want to maximize the total profit from the toques and mitts. This can be represented by $2x + $5y = P, where P is the total profit, since
there are no costs in production. If the school sells x toques, then they make $2x from the sales of toques. If the school sells y mitts, then they make $5y from the sales of mitts.
In some applications, the linear equations are very complex with numerous constraints and there are too many variables to work out manually, so they have special computers and software to perform the
calculations efficiently. Sometimes, linear programming problems can be solved using matrices or by using an elimination or substitution method, which are common strategies for solving systems of
linear equations.
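For a problem as small as the toque-and-mitt example, a few lines of code will also do the job. The sketch below uses SciPy's linprog routine (which minimizes, so the profit coefficients are negated); the numbers are simply the constraints written above.

from scipy.optimize import linprog

# Maximize P = 2x + 5y, i.e. minimize -2x - 5y
res = linprog(
    c=[-2, -5],
    A_ub=[[1, 0],    # x      <= 150  (toque capacity)
          [0, 1],    #     y  <= 120  (mitt capacity)
          [1, 1]],   # x + y  <= 200  (material)
    b_ub=[150, 120, 200],
    bounds=[(0, None), (0, None)],   # x, y >= 0
)
print(res.x, -res.fun)   # [80. 120.] and 760.0, matching the answer found below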
Using the equations and inequations generated above, we can graph these, to find a feasible region. Our feasible region is the convex polygon that satisfies all of the constraints. In this situation,
one of the vertices of this polygon will provide the optimal choice, so we must first consider all of the corner points of the polygon and find which pair of coordinates makes us the most money. From
our toque and mitt example, we can produce the following graph:
We can see that our feasible region (the green area) has vertices of (0, 120), (150, 0),
(150, 50), and (80, 120). By substituting these values for x and y in our revenue equation, we can find the optimal solution.
R = 2x + 5y
R = 2(80) + 5(120)
R = $760
After considering all of the options, we can conclude that this is our maximum revenue. Therefore, the sewing students (and teachers) must make 80 toques and 120 pairs of mitts each week in order to
make the most money. We can check that these solutions satisfy all of our restrictions:
80 + 120 ≤ 200. This is true. We know that we will have enough material to make 80 toques and 120 pairs of mitts each week. We can also see that our values for x and y are less than 150 and 120,
respectively. So, not only is our solution possible, but it is the best combination to optimize profits for the school. This is a fairly simple problem, but it is easy to see how this type of
organization can be useful and very practical in the industrial world.
The airline industry uses linear programming to optimize profits and minimize expenses in their business. Initially, airlines charged the same price for any seat on the aircraft. In order to make
money, they decided to charge different fares for different seats and promoted different prices depending on how early you bought your ticket. This required some linear programming. Airlines needed
to consider how many people would be willing to pay a higher price for a ticket if they were able to book their flight at the last minute and have substantial flexibility in their schedule and flight
times. The airline also needed to know how many people would only purchase a low price ticket, without an in-flight meal. Through linear programming, airlines were able to find the optimal breakdown
of how many tickets to sell at which price, including various prices in between.
Airlines also need to consider plane routes, pilot schedules, direct and in-direct flights, and layovers. There are certain standards that require pilots to sleep for so many hours and to have so
many days rest before flying. Airlines want to maximize the amount of time that their pilots are in the air, as well. Pilots have certain specializations, as not all pilots are able to fly the same
planes, so this also becomes a factor. The most controllable factor an airline has is its pilot’s salary, so it is important that airlines use their optimization teams to keep this expense as low as
possible. Because all of these constraints must be considered when making economic decisions about the airline, linear programming becomes a crucial job.
The Manufacturing Industry
Many other industries rely on linear programming to enhance the economy of their business. These include:
□ The military
□ Capital budgeting
□ Designing diets
□ Conservation of resources
□ Economic growth prediction
□ Transportation systems (busses, trains, etc.)
□ Strategic games (e.g. chess)
□ Factory manufacturing
All of these industries rely on the intricate mathematics of linear programming. Even farmers use linear programming to increase the revenue of their operations, like what to grow, how much of it,
and what to use it for. Amusement parks use linear programming to make decisions about queue lines. Linear programming is an important part of operations research and continues to make the world more
economically efficient.
North Bergen Precalculus Tutor
Find a North Bergen Precalculus Tutor
...Prior to that, I taught my robotics teams to create 3D-animations for competitions using Maya and 3D-Studio Max. I'm also proficient with GNU 3D programs such as Blender. I've been teaching
math including linear algebra for 10 years.
83 Subjects: including precalculus, chemistry, algebra 1, statistics
...I then earned a Masters of Arts in Teaching from Bard College in '07. I've been tutoring for 8+ years, with students between the ages of 6 and 66, with a focus on the high school student and
the high school curriculum. I have also been an adjunct professor at the College of New Rochelle, Rosa Parks Campus.
26 Subjects: including precalculus, calculus, physics, GRE
...Trigonometry is important in itself, but is also used in upper level math such as calculus; hence it is important to learn it the first time around. I have been involved with math in some way
since 1970. I have a PhD in the subject.
19 Subjects: including precalculus, reading, writing, calculus
...Please feel free to contact me via WyzAnt and I will gladly assist you with your tutorial needs. Thank you. ---DanteI am a Columbia University alumni with extensive experience (over 20 years)of
professional tutoring in mathematics and science. Although I have specialized in math and science at ...
22 Subjects: including precalculus, Spanish, chemistry, physics
...I have helped many students as a private tutor in both mathematics and Spanish. My areas of knowledge range from Spanish grammar, phonetics, syntax, Spanish conversation and Business Spanish.
As a math tutor my areas of knowledge range from Algebra, trigonometry, geometry, Differential Calculus...
9 Subjects: including precalculus, Spanish, calculus, geometry
The Big Picture » Defining Risk Versus Uncertainty
Longtime readers will recall that I find the Uncertainty meme to be mostly silly (see this ^[1] and this ^[2]). The foolishness continues to come up amongst allegedly serious people. I find many of
these folks (mostly) devoid of original thought, choosing instead to repeat things pundits of questionable insight have previously said (PoQI™ is a registered trademark of TBP).
I mention this because Michael Mauboussin, who is an original thinker with outstanding insight, discussed this very subject not too long ago. On Bloomberg TV ^[3], he had a broad discussion of skill and luck in sports, business and investing. Almost as an aside, Mauboussin made some extremely insightful commentary about “What is Risk,” and how that compares to our peeve, “What is Uncertainty.”
He nailed the distinction in a way I found simply fascinating. Consider the distinctions Mauboussin makes:
Risk: We don’t know what is going to happen next, but we do know what the distribution looks like.
Uncertainty: We don’t know what is going to happen next, and we do not know what the possible distribution looks like.
In other words, his view is that the future is always unknown — but that does not make it “uncertain.” Rather, he takes the analysis a step further, quantifying this in the language of statistical distributions.
A statistical approach perfectly clarifies the falsity of the uncertainty meme.
When we don’t know what any future outcome will be, but we understand the probability distribution — think of dice or a multiple choice exam — we have risk, but we do NOT have uncertainty. We never know what the roll of the dice will be, but we do know it’s one of six choices.
Is that uncertainty? The answer is of course not — it is an unknown outcome with well-defined possibilities. We may not know precisely which outcome will occur in advance, but we do know it’s either 1, 2, 3, 4, 5, or 6. Call that risk or an unknown future, but do not call that uncertainty.
I am pushing against a usage that conflates “Uncertainty” with “Unknown.” Since the future is, by definition, always “unknown,” then what purpose does it serve to say there is Uncertainty? By that
definition, there is always uncertainty. As currently heard in the MSM, this renders the word utterly meaningless.
Consider alternatively what is the true definition of Uncertainty: That occurs when we have no idea of what the possible outcome might be. The probability distribution is unknown (or so extremely
large as to functionally be the same as unknown).
The so-called fiscal cliff is a perfect example — we know what the possible outcomes are, and we have a very good idea what their impact will be.
Hopefully this clarifies the silly meme that seems to conflate “Uncertainty” and “Unknown” as the same things…
Kiss Your Assets Goodbye When Certainty Reigns ^[1] (Bloomberg, November 9, 2010)
There’s nothing new about uncertainty ^[2] (Washington Post, July 7 2012)
Mauboussin on ‘The Success Equation’ ^[3] (November 20th, 2012)
Math major Liem Nguyen wins Celebration of Scholarship Undergraduate School Award
Parity of K-regular Partition Functions.
A partition of a positive integer n is a non-increasing sequence of positive integers whose sum is n. A k-regular partition of a positive integer n is a partition of n whose parts are not divisible by k, and we denote by b_k(n) the number of k-regular partitions of n. We are interested in the parity of these functions, in particular the exact criteria for when b_k(n) is even. In this presentation, we will give such criteria for b_7(n) and b_13(n), and prove that these functions satisfy Ramanujan type congruences modulo 2.
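For readers who want to experiment with the parity of these functions, here is a small dynamic-programming sketch (the function name and printed range are arbitrary choices, not part of the talk):

def b_k(n, k):
    # Number of k-regular partitions of n, i.e. partitions whose parts
    # are not divisible by k.
    counts = [1] + [0] * n            # counts[m] = partitions of m built so far
    for part in range(1, n + 1):
        if part % k == 0:
            continue                  # skip parts divisible by k
        for m in range(part, n + 1):
            counts[m] += counts[m - part]
    return counts[n]

print([b_k(n, 7) % 2 for n in range(1, 16)])   # parity of b_7(n) for small n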
Cardiff By The Sea Algebra 2 Tutor
...I have developed many hands-on projects that bring the content of Algebra 2 to life, including modeling activities for studying quadratic, polynomial, exponential and logarithmic functions.
Learning Algebra 2 content inherently requires a solid foundation of Algebra 1. Algebra, of course, is often abstract and confusing.
15 Subjects: including algebra 2, physics, geometry, GRE
...Calculus is an exciting subject because it represents a leap into advanced math and therefore it is perfectly normal for someone to seek out extra tutoring in this subject. Please feel free to
contact me for more information about calculus tutoring. Chemistry is sometimes referred to as the fun...
22 Subjects: including algebra 2, chemistry, organic chemistry, algebra 1
...I have worked with students who speak English as a second language and students with learning disabilities such as ADHD. My teaching style is all about helping my students understand the
material from the inside out through discussion and critical thinking strategies, so that when I'm not there ...
24 Subjects: including algebra 2, English, reading, calculus
...I used to hold events where I would cook healthy meals for up to fifty people every other week. I have trained cooking experience in Santa Cruz where I worked full time in a kitchen preparing a
wide variety of foods. I have spent much of my time at UCSD helping other UCSD students with their st...
10 Subjects: including algebra 2, calculus, geometry, algebra 1
...I have nearly fifteen years experience in teaching, coaching, and personal development. I am very motivated and thoroughly committed to seeing young people succeed. I look forward to being of
service or answering any questions you may have.
6 Subjects: including algebra 2, calculus, algebra 1, geometry
probability and statistics
October 20th 2008, 10:48 AM #1
Oct 2008
houston, texas
probability and statistics
Consider the birthdays of the students in a class of size r. assume that the year consists of 365 days.
(a) How many different ordered samples of birthdays are possible (r in sample), allowing repetitions (with replacement)?
(b) The same as part (a), except requiring that all the students have different birthdays (without replacement)?
(c) If we can assume that each ordered outcome in part (a) has the same probability, what is the probability that no two students have the same birthday?
(d) For what value of r is the probability in part (c) about equal to 1/2? Is this number surprisingly small?
hint: use a calculator or computer to find r.
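Following the hint, a few lines of Python will do it (this snippet is an added illustration, not part of the original thread):

def p_all_different(r, days=365):
    p = 1.0
    for i in range(r):
        p *= (days - i) / days       # probability the (i+1)-th birthday is new
    return p

r = 1
while p_all_different(r) > 0.5:
    r += 1
print(r, p_all_different(r))         # 23 and about 0.493 -- surprisingly small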
Charles Sanders Peirce (1839 - 1914)
Science Quotes by Charles Sanders Peirce (4 quotes)
Every work of science great enough to be well remembered for a few generations affords some exemplification of the defective state of the art of reasoning of the time when it was written; and each
chief step in science has been a lesson in logic.
— Charles Sanders Peirce
It is a common observation that a science first begins to be exact when it is quantitatively treated. What are called the exact sciences are no others than the mathematical ones.
— Charles Sanders Peirce
Science has hitherto been proceeding without the guidance of any rational theory of logic, and has certainly made good progress. It is like a computer who is pursuing some method of arithmetical
approximation. Even if he occasionally makes mistakes in his ciphering, yet if the process is a good one they will rectify themselves. But then he would approximate much more rapidly if he did not
commit these errors; and in my opinion, the time has come when science ought to be provided with a logic. My theory satisfies me; I can see no flaw in it. According to that theory universality,
necessity, exactitude, in the absolute sense of these words, are unattainable by us, and do not exist in nature. There is an ideal law to which nature approximates; but to express it would require an
endless series of modifications, like the decimals expressing surd. Only when you have asked a question in so crude a shape that continuity is not involved, is a perfectly true answer attainable.
— Charles Sanders Peirce
The rudest numerical scales, such as that by which the mineralogists distinguish different degrees of hardness, are found useful. The mere counting of pistils and stamens sufficed to bring botany out
of total chaos into some kind of form. It is not, however, so much from counting as from measuring, not so much from the conception of number as from that of continuous quantity, that the advantage
of mathematical treatment comes. Number, after all, only serves to pin us down to a precision in our thoughts which, however beneficial, can seldom lead to lofty conceptions, and frequently descend
to pettiness.
— Charles Sanders Peirce
Circuit Theory/Lab4.5.1
From Wikibooks, open books for an open world
Example: find the Thevenin equivalent of this circuit, treating R7 as the load.
• Simulate the circuit, displaying load voltage and current as the load is swept through a range of resistance values
• Simulate the thevenin equivalent circuit and again sweep the load voltage and current through a range of resistance values
Finding Thevenin Voltage
Open the load (resistor R7), and find the voltage across its terminals.
R[5] and R[6] are dangling and can be removed.
V[th] = V[A] - V[B]
V[A] = 2-V[R1] = 2 - (2-5)*2.2/(2.2+4.7) .. voltage divider
V[A] = 2.9565
V[B] = 5-V[R3] = 5 - 5*6.8/(6.8+6.8) .. voltage divider
V[B] = 2.5 volts
V[th] = 2.9565 - 2.5 = 0.4565 volts
Can check with this simulation.
Finding Thevenin Resistance
Remove the load, zero the sources.
Redraw up and down so the parallel/serial relationships between the resistors are obvious.
• $R_{th} = 1 + \frac{1}{\frac{1}{2.2} + \frac{1}{4.7}} + \frac{1}{\frac{1}{6.8} + \frac{1}{6.8}} + 1.5$
• $R_{th} = 7.3986$
Finding Norton Current
I[N] = V[th]/R[th] = 0.4565/7.3986 = 0.0617 amp
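A few lines of Python reproduce these numbers and the load sweep discussed below (an added illustration using plain arithmetic, not a circuit simulator):

def parallel(a, b):
    return 1 / (1 / a + 1 / b)

V_A = 2 - (2 - 5) * 2.2 / (2.2 + 4.7)          # voltage divider at node A
V_B = 5 - 5 * 6.8 / (6.8 + 6.8)                # voltage divider at node B
V_th = V_A - V_B                               # about 0.4565 V
R_th = 1 + parallel(2.2, 4.7) + parallel(6.8, 6.8) + 1.5   # about 7.3986 ohms
I_N = V_th / R_th                              # about 0.0617 A

for R_load in (0.1, 1, 7.4, 20, 100):          # sweep the load resistance
    I = V_th / (R_th + R_load)
    print(R_load, I, I * R_load)               # load current and load voltage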
Simulating the original circuit
In the simulation, can see the computed Norton's current when the load is 0 ohms.
Can see the computed Thevenin voltage when the load is around 20 ohms which approximates an open.
Comparing with the Thevenin Equivalent
In this simulation, can see the same values, except this time the load voltage is relative to ground, so don’t have to look at a drop or differences between two voltages as with the original circuit.
Kew Gardens Algebra 2 Tutor
Find a Kew Gardens Algebra 2 Tutor
...I have competed at the Marshall Chess Club in Manhattan, and was the captain of my high school's chess team. I currently volunteer as a chess instructor at PS 39 in Brooklyn. I excel at
opening, middle, and end-game study, and during my lessons I use a mix of students' games, chess puzzles, online videos and computer analysis to keep students engaged.
13 Subjects: including algebra 2, Spanish, geometry, English
...I am also willing to meet with the teacher or with the parents and the school to help advocate for the student if needed. We now know a number of strategies that assist the ADD/ADHD student in
the classroom, such as a seating position that helps to limit distractions or directions presented in b...
39 Subjects: including algebra 2, reading, geometry, English
...I have extensive experience in teaching and tutoring. While I was in undergraduate school in China, I assisted the English teacher to help freshmen in Engineer School pass the National English
Test level 4. My responsibility included leading English evening class twice a week, as well as checking students’ homework and answering questions after class.
20 Subjects: including algebra 2, calculus, prealgebra, precalculus
I am a retired middle school English teacher of 25 years. I have taught and tutored English, Reading, Writing, ESL, Math and Social Studies. My teacher exam (L.A.S.T.) score was 294/300 with 2
perfect essay scores.
35 Subjects: including algebra 2, reading, English, algebra 1
...My ability to assist you primarily comes from five (5) points: 1. Knowledge of Content: I graduated and worked as an engineer for several years before transitioning to education, then to entertainment. I have taken the classes and the tests, so I know what to expect. 2.
15 Subjects: including algebra 2, physics, geometry, accounting
Trigonometric Equation
December 5th 2008, 10:10 AM #1
Aug 2007
Trigonometric Equation
Question please everyone, I'm a bit stumped on this problem. For it I have to find the general solutions.
The problem is:
2cos^2(x) = 1 + sin(x)
I'm just stumped as to how to solve the problem, do I use a power reducing formula or?
Thanks everyone!
Hi WolfMV,
Note that $\sin^2 x+\cos^2 x=1$ and so $\cos^2 x=1-\sin^2 x$ thus the equation $2\cos^2 x=1+\sin x$ becomes $2-2\sin^2 x=1+\sin x$ . Rearranging gives $2\sin^2 x+\sin x-1=0$ and we can see that $
(2\sin x-1)(\sin x +1)=0$ .
Hope this helps.
Use the trig identity $\cos{(2x)} = 1 - 2\sin^2{(x)}$
A quadratic equation will be formed which you can solve to find x.
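For reference, finishing from the factorised form above (a step added here, not part of either reply): $(2\sin x - 1)(\sin x + 1) = 0$ gives $\sin x = \tfrac{1}{2}$ or $\sin x = -1$, so the general solutions are $x = \frac{\pi}{6} + 2k\pi$, $x = \frac{5\pi}{6} + 2k\pi$, or $x = \frac{3\pi}{2} + 2k\pi$ for any integer $k$.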
December 5th 2008, 10:19 AM #2
Aug 2007
December 5th 2008, 10:19 AM #3
Dec 2008
Auckland, New Zealand
4.4 The Determinant and the Inverse of a Matrix
The inverse of a square matrix $M$ is a matrix, denoted $M^{-1}$, with the property that $M^{-1}M = MM^{-1} = I$. Here $I$ is the identity matrix of the same size as $M$, having 1's on the diagonal and 0's elsewhere.
In terms of transformations, $M^{-1}$ undoes the transformation produced by $M$, and so the combination $M^{-1}M$ represents the transformation that changes nothing.
The condition $MM^{-1} = I$ can be written as
$1 = \sum_j m_{ij} M^{-1}_{ji}$
$0 = \sum_j m_{kj} M^{-1}_{ji}$
when $k$ and $i$ are different, and these conditions completely determine the matrix $M^{-1}$ given $M$, when $M$ has an inverse.
These equations have the same form as the two conditions (A) and (B) of section 4.3 except that $\det M$ is on the left-hand side in (A) instead of 1, and $(-1)^{i+j} M_{ij}$ appears in (A) and (B) instead of $M^{-1}_{ji}$ here.
We can therefore divide both sides of (A) and (B) by $\det M$, and deduce
$M^{-1}_{ji} = \frac{(-1)^{i+j} M_{ij}}{\det M}$
Remember that here $M_{ij}$ is the determinant of the matrix obtained by omitting the i-th row and j-th column of $M$; the elements of $M$ are the $m_{ij}$, while $M^{-1}_{ji}$ here represents the element of the inverse matrix to $M$ in the j-th row and i-th column.
We can phrase this in words as: the inverse of a matrix $M$ is the matrix of its cofactors, with rows and columns interchanged, divided by its determinant.
4.7 Compute the inverse of the matrix in Exercise 4.4 using this formula. Check the product $M^{-1}M$ to be sure your result is correct.
4.8 Set up a spreadsheet that computes the inverse of any three by three matrix with non-zero determinant, using this formula.
(Hint: by copying the first two rows into a fourth and fifth row and the first two columns into a fourth and fifth column, you can make one entry and copy to get all of the $(-1)^{i+j} M_{ij}$ at once. Then all that is left is rearranging to swap indices and dividing by the determinant (which is the dot product of any row of $M$ with the corresponding cofactors).)
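The same computation is easy to script as well; a short NumPy sketch of the cofactor formula follows (the example matrix is arbitrary, and this is illustrative rather than an efficient way to invert a matrix):

import numpy as np

def cofactor_inverse(M):
    # Inverse via the formula above: (M^-1)_{ji} = (-1)^(i+j) M_{ij} / det(M)
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    cof = np.empty_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / np.linalg.det(M)    # transposing swaps the indices

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
print(cofactor_inverse(A) @ np.array(A))   # approximately the identity matrix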
On the Fourier Spectra of the Infinite Families of Quadratic APN Functions
Bracken, Carl and Zha, Zhengbang (2009) On the Fourier Spectra of the Infinite Families of Quadratic APN Functions. Advances in Mathematics of Communications , 3 (3). pp. 219-226. ISSN 1930-5346
It is well known that a quadratic function defined on a finite field of odd degree is almost bent (AB) if and only if it is almost perfect nonlinear (APN). For the even degree case there is no
apparent relationship between the values in the Fourier spectrum of a function and the APN property. In this article we compute the Fourier spectrum of the new quadranomial family of APN functions.
With this result, all known infinite families of APN functions now have their Fourier spectra and hence their nonlinearities computed.
Item Type: Article
Keywords: Fourier spectrum; APN function; nonlinearity; Infinite Families;
Subjects: Science & Engineering > Mathematics
Item ID: 2695
Identification Number: 10.3934/amc.2009.3.219
Depositing User: IR Editor
Date Deposited: 06 Sep 2011 14:44
Journal or Publication Title: Advances in Mathematics of Communications
Publisher: American Institute of Mathematical Sciences (AIMS)
Refereed: No
Quadratic Formula Tutors
Littleton, CO 80122
Patient and enthusiastic biology and chemistry tutor
...I help students analyze the question to understand what information is given, what specifically is being asked, and what steps are needed to answer the question. I go over step-by-step
instructions, explaining the theory behind each step, so the student will be...
Offering 7 subjects including algebra 2
Global alignment of protein–protein interaction networks by graph matching methods
Bioinformatics. Jun 15, 2009; 25(12): i259–i267.
Motivation: Aligning protein–protein interaction (PPI) networks of different species has drawn a considerable interest recently. This problem is important to investigate evolutionary conserved
pathways or protein complexes across species, and to help in the identification of functional orthologs through the detection of conserved interactions. It is, however, a difficult combinatorial
problem, for which only heuristic methods have been proposed so far.
Results: We reformulate the PPI alignment as a graph matching problem, and investigate how state-of-the-art graph matching algorithms can be used for that purpose. We differentiate between two
alignment problems, depending on whether strict constraints on protein matches are given, based on sequence similarity, or whether the goal is instead to find an optimal compromise between sequence
similarity and interaction conservation in the alignment. We propose new methods for both cases, and assess their performance on the alignment of the yeast and fly PPI networks. The new methods
consistently outperform state-of-the-art algorithms, retrieving in particular 78% more conserved interactions than IsoRank for a given level of sequence similarity.
Availability: All data and codes are freely and publicly available upon request.
Contact: jean-philippe.vert/at/mines-paristech.fr
Protein–protein interactions (PPIs) play a central role in most biological processes. Recent years have witnessed impressive progresses towards the elucidation of large-scale PPI networks in various
organisms, thanks in particular to the development of high-throughput experimental techniques such as yeast two-hybrid (Fields and Song, 1989) or co-immunoprecipitation followed by mass spectrometry
(Aebersold and Mann, 2003). As the amount of PPI network data increases, computational methods to analyze and compare them are also being developed at a fast pace. In particular, comparative PPI
network analysis across species has already provided insightful views of similarities and differences between species at the systemic level (Sharan et al., 2005; Suthram et al., 2005) and helped in
the identification of functional orthologs (Bandyopadhyay et al., 2006).
Comparing PPI networks usually involves some form of network alignment, i.e. the identification of pairs of homologous proteins from two different organisms, such that PPIs are conserved between
matched pairs. The rationale behind this notion is that a protein and its functional orthologs are likely to interact with proteins in their respective network that are themselves functional
orthologs. Hence, while direct sequence homology alone is often not sufficient to identify functional orthologs within paralogous families (Sjölander, 2004), the use of PPI information can help in
the disambiguation of functional orthologs within clusters of homologous sequences, such as those produced by the Inparanoid algorithm (Remm et al., 2001). This approach has been investigated in
particular by (Bandyopadhyay et al., 2006). Conversely, network alignment can also be a valuable approach to validate PPI conserved across multiple species and detect evolutionary conserved pathways
or protein complexes (Kelley et al., 2003; Sharan et al., 2005).
Several methods have been proposed to perform local network alignment (LNA) of PPI networks, i.e. to identify subsets of matching pairs of proteins with conserved subgraphs of interactions. These
methods include PathBLAST (Kelley et al., 2003, 2004) and NetworkBLAST (Sharan et al., 2005), which adapt the ideas of the BLAST algorithm to the search for local alignments between graphs, the
method of Koyutürk et al. (2006), inspired by biological models of deletion and duplication, Graemlin (Flannick et al., 2006), which uses networks of modules to infer the alignment, or the Bayesian
approach of Berg and Lässig (2006). Less attention has been paid to the problem of global network alignment (GNA), i.e. the search for a global correspondence between most or all vertices of two
networks that again matches similar proteins and leads to conserved interactions. Notable exceptions include the Markov random field (MRF)-based method of Bandyopadhyay et al. (2006) and the IsoRank
algorithm (Singh et al., 2008), which formulates the problem as an eigenvalue problem.
While LNA procedures can detect multiple, unrelated matched regions between networks, and can in particular match a given protein of a network to several proteins of the other network in different
local matchings, GNA seeks the best consistent matching across all nodes simultaneously. This can be a desirable property for many applications, such as functional ortholog identification. On the
other hand, from a computational point of view, GNA is arguably more difficult than LNA since it must find a solution among all possible global matchings. In fact, as we explain below, it is natural
to reformulate GNA as weighted graph matching problem, a problem for which no polynomial time algorithm is known. Solving the general GNA problem therefore must involve some sort of approximate or
heuristic method, such as IsoRank.
Following this line of thought, we propose here to formulate explicitly GNA as a graph matching problem, and investigate the use of modern state-of-the-art exact and approximate methods to solve it.
While no exact solution of the graph matching optimization problem can be found in general, we show that in certain cases, if ‘enough constraints’ are put on the possible protein associations, and if
the PPI networks are ‘not too dense’ (these notions being rigorously defined in Section 3.2), then an exact solution can be found efficiently by a new message passing (MP) algorithm. Interestingly,
this case arises in particular in the functional ortholog detection problem between yeast and fly investigated by Bandyopadhyay et al. (2006), where matching pairs are constrained to belong to
clusters of proteins produced by the Inparanoid algorithm and the PPI networks of both species are not too dense. On these data, we are therefore able to find a matching that conserves more
interactions than the solutions found by MRF (Bandyopadhyay et al., 2006) as well as a version of IsoRank adapted to this situation (Singh et al., 2008), and we are in fact certain that our solution
is optimal in the sense that it produces the largest possible number of conserved interactions. Interestingly, the resulting alignment retrieves 13% more HomoloGene pairs than the alignments of MRF
and 5% more than that of IsoRank, suggesting that maximizing the number of conserved interactions indeed improves functional orthology disambiguation. When the GNA is more complex, e.g. matched pairs
are not limited to belong to the same Inparanoid clusters, or the PPI networks have more edges, then our MP algorithm cannot be used and the optimal matching cannot be found in reasonable time
anymore. In that case, we propose to use a recent state-of-the-art approximate method for graph matching (Zaslavskiy et al., 2008b), which tracks a path of solutions for a family of relaxed
problems, as well as a new, faster and more direct gradient-based method, which bears similarities with the IsoRank method. Like IsoRank, these methods have a free parameter to balance the trade-off
between matching similar proteins, on the one hand, and producing an alignment with many conserved interactions, on the other hand. We test them on the global unconstrained alignment of the fly and
yeast networks, and show that for a given level of mean sequence similarity between matched proteins, our new method retrieves 78% more conserved interactions than IsoRank.
In this section, we set the notations and formalize two variants of the GNA problems. We represent a PPI network describing the interactions among N proteins of an organism as an undirected simple
graph G=(V[G], E[G]), where V[G]=(v[1],…, v[N]) is a finite set of N vertices representing the N proteins, and E[G] ⊆ V[G]×V[G] is the set of edges representing the pairs of interacting proteins. Each
such graph (or network) can equivalently be represented by a symmetric N×N adjacency matrix A[G] where [A[G]][ij]=[A[G]][ji]=1 if protein v[i] interacts with protein v[j] and 0 otherwise.
Given two graphs G and H representing the PPI networks of two species, the GNA problem is, roughly speaking, to find a correspondence between the vertices of G and the vertices of H that matches
similar proteins and enforces as much as possible the conservation of interactions between matched pairs in the two graphs. To formalize this, let us assume that G and H have the same number N of
vertices, and that we are looking for a bijection between the vertices of G and the vertices of H. Although this may sound at first sight a strong assumption, given that PPI networks usually do not
have the same size, and that we may not want to match all proteins of each network, both limitations can be addressed by adding dummy nodes (with no connection) to each graph in order to ensure that
they finally have the same size. In a complete matching of such graphs with dummy nodes, matching a protein to a dummy node simply means that in the GNA the protein is not matched. G and H being
assumed to have the same number of vertices, a matching of their vertices is now simply a permutation π of {1,…, N}, which associates the i-th vertex of H with the π(i)-th vertex of G. Equivalently,
the permutation π can be represented by a N×N permutation matrix P, i.e. a binary matrix whose (i, j)-th entry is equal to 1 if and only if π(i)=j (i.e. when the i-th vertex of H is matched to the j
-th vertex of G). We denote by 𝒫 = {P ∈ {0,1}^(N×N) : P1[N]=1[N], P^T1[N]=1[N]} the set of permutation matrices, where 1[N] is the N-dimensional vector whose entries are all equal to 1.
The number of interactions conserved by a permutation π is the number of pairs (i,j) that are connected in H, and such that their corresponding vertices π(i) and π(j) are also connected in G. Let us
denote the number of such interactions conserved by the permutation encoded in the permutation matrix P by J(P). In order to express J(P), we can observe that if we apply the permutation encoded by P
to the vertices of H, we obtain a new graph isomorphic to H which we denote by P(H). It is easy to see that the adjacency matrix of the permuted graph, A[P(H)], is simply obtained from A[H] by the
equality A[P(H)]=PA[H]P^T (Umeyama, 1988). As a result, J(P) is simply obtained as half the number of entries that are simultaneously equal to 1 in both binary matrices A[G] and PA[H]P^T (each
conserved interaction results in two identical entries, by symmetry of the adjacency matrices). Hence we obtain the following expression for J(P):
J(P) = (1/2) ∑[i,j] [A[G]][ij] [P A[H] P^T][ij] = (1/2) tr( A[G] P A[H] P^T ).    (1)
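For illustration, a minimal NumPy sketch of this computation (the toy matrices and matching below are chosen arbitrarily):

import numpy as np

def conserved_interactions(A_G, A_H, perm):
    # perm[i] = j means the i-th vertex of H is matched to the j-th vertex of G
    N = len(perm)
    P = np.zeros((N, N), dtype=int)
    P[np.arange(N), perm] = 1              # permutation matrix P
    A_PH = P @ A_H @ P.T                   # adjacency matrix of the permuted graph P(H)
    return int(np.sum(A_G * A_PH) // 2)    # each conserved edge counts twice by symmetry

A_G = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
A_H = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
print(conserved_interactions(A_G, A_H, [1, 0, 2]))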
Besides the number of conserved interactions, a good GNA should match proteins with similar sequences. We consider here two possible formulations of this objective.
• Constrained GNA. Here, we assume that a pre-processing of the protein sequences has produced a set of candidate matched pairs, and we simply wish to disambiguate the matching using PPI information, if some proteins have several candidate matchings. This is, for example, the formulation proposed by Bandyopadhyay et al. (2006), where a first clustering of all protein sequences is performed to define a collection of protein clusters with the Inparanoid algorithm, and the pairs matched between the yeast and fly proteomes are constrained to belong to the same cluster. Such constraints can be directly encoded as constraints over the permutation matrix P, by imposing P[ij]=0 if the i-th vertex of the first graph and the j-th vertex of the second graph are not allowed to match. We are then looking for a solution in the set of matrices 𝒫[C] = {P ∈ 𝒫 : P[ij]=0 for every forbidden pair (i,j)}, and it is then natural to look for the permutation compatible with the constraints with the largest number of conserved interactions, i.e. to solve:
max[P ∈ 𝒫[C]] J(P).    (2)
• Balanced GNA. An interesting property of constrained GNA is that, by reducing the search space to 𝒫[C], it can result in a tractable optimization problem (as shown for example in Section 3.2). On the other hand, in some cases one may want to accept matchings between less similar vertices if this leads to an important increase in the number of conserved interactions. In other words, one would like to be able to automatically balance the matching of similar vertices with the conservation of interactions, as advocated by Singh et al. (2008) and implemented by IsoRank. This can be formalized by assuming that a matrix C of similarities between vertices is given (e.g. derived from pairwise sequence similarity scores), and by trying to maximize the total similarity between matched pairs. With C[ij] denoting the similarity between the i-th vertex of H and the j-th vertex of G, the total similarity between pairs matched by a permutation matrix P is simply
S(P) = ∑[i,j] C[ij] P[ij] = tr( C^T P ).    (3)
In order to find a balance between matching similar pairs [large S(P)] and having many conserved interactions [large J(P)], we propose to consider the following optimization problem:
max[P ∈ 𝒫] λ J(P) + (1−λ) S(P),    (4)
where λ ∈ [0,1] controls the trade-off between the two terms: λ=1 amounts to maximizing J(P) only, i.e. to finding a good topological matching of the graphs independently of the similarity between matched pairs, while λ=0 amounts to focusing only on the similarity between proteins and finding a matching which maximizes the mean sequence similarity, without using PPI information.
When λ>0, the balanced GNA problem (4) is equivalent to a general graph matching problem, discussed in Section 3.1, which is known to be computationally intractable in general. The constrained GNA (2
) can be seen as a particular case of the balanced GNA, by taking the similarity function equal to 0 between two vertices allowed to match and −∞ for two vertices not allowed to match. Indeed, in
that case (4) is equivalent to maximizing J(P) over the set of matrices P for which S(P) is finite, that is exactly the set 𝒫[C] of (2). While indeed general graph matching methods to solve (4) can be
applied to solve (2), we show in the next section that in some cases there exists a simple polynomial time algorithm to solve (2) directly even for large non-sparse graphs.
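For illustration, this reduction amounts to building a similarity matrix with value 0 on allowed pairs and −∞ elsewhere; a minimal sketch with made-up allowed pairs:

import numpy as np

def constrained_similarity(allowed_pairs, N):
    # encode a constrained GNA as a balanced GNA: 0 on allowed matches, -inf elsewhere
    C = np.full((N, N), -np.inf)
    for (i, j) in allowed_pairs:
        C[i, j] = 0.0
    return C

# e.g. vertices 0 and 1 of one graph may only match vertices 0 and 1 of the other
C = constrained_similarity({(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}, 3)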
3 METHODS
In this section, we present methods to solve both the constrained GNA problem (2) and the balanced GNA problem (4). Since any algorithm to solve the balanced GNA problem can also solve the
constrained GNA, as explained in the previous section, we start by describing methods to solve the balanced GNA problem.
3.1 Algorithms for the balanced GNA problem
The balanced GNA problem (4) is a general graph matching problem, which is known to be a difficult combinatorial problem. While some methods based on incomplete enumeration may be applied to search
for an exact optimal solution in the case of small or sparse graphs, only approximate algorithms that usually find non-optimal solutions but are more scalable can be used for large non-sparse graph
matching. Many such approximate algorithms have been proposed, see e.g. the review of Conte et al. (2004). They include in particular spectral methods (Caelli and Kosinov, 2004; Singh et al., 2008;
Umeyama, 1988), or methods based on a relaxation of the optimization problem (4) (Almohamad and Duffuaa, 1993; Gold and Rangarajan, 1996). They differ mainly on their scalability, and on the accuracy
of the solution found. For example, a comparison of several such methods was carried out recently (Zaslavskiy et al., 2008b, 2008c).
Based on these observations, we propose here to apply state-of-the-art graph matching methods to balanced GNA for PPI networks. In particular, we focus on the PATH algorithm (Zaslavskiy et al., 2008b), which was shown to provide state-of-the-art performance on various graph matching benchmarks. We also propose a new and simpler gradient ascent method, similar in spirit to the graduated assignment
(GA) algorithm (Gold and Rangarajan, 1996). As a benchmark, we consider the IsoRank method, which can be thought of as a particular spectral method for graph alignment, and which is currently the
method of choice for balanced GNA of PPI networks. We now briefly describe these methods.
• PATH method. The PATH algorithm is based on two relaxations of (4), one concave and one convex, over the set of doubly stochastic matrices (Zaslavskiy et al., 2008b). The method starts by solving the convex relaxation, and then iteratively solves a linear combination of the convex and concave relaxations by gradually increasing the weight of the concave relaxation and following the path of solutions thus created. It finishes when the solution reaches a corner of the set of doubly stochastic matrices, i.e. when the solution is a permutation matrix in 𝒫.
• GA method. We propose a new, simple gradient method based on a relaxation of (4) over the set of doubly stochastic matrices. Although the function to be maximized is not concave [because of the term J(P)], we simply start from an initial solution and iteratively choose a new permutation matrix in the direction of the gradient of the objective function. This approach may be relevant if we can start from a 'good' initial solution, i.e., if we solve a constrained GNA (2) where the constraints are strong enough. The gradient of S(P) in (3) is equal to C; the gradient of J(P) in (1) at a matrix P is equal to A[G] P A[H]. Hence we propose to iteratively update the permutation matrix following the rule
P ← argmax[P′ ∈ 𝒫] tr[ ( λ A[G] P A[H] + (1−λ) C )^T P′ ],
i.e. at each iteration we solve the linear assignment problem defined by the current gradient, which can be found efficiently by the Hungarian algorithm (Kuhn, 1955).
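For illustration, a minimal sketch of this update using SciPy's Hungarian-algorithm implementation (linear_sum_assignment); the identity initialization and the fixed iteration cap are arbitrary choices made here:

import numpy as np
from scipy.optimize import linear_sum_assignment

def ga_step(A_G, A_H, C, P, lam):
    # gradient of lam*J(P) + (1-lam)*S(P) at the current permutation P
    grad = lam * (A_G @ P @ A_H) + (1.0 - lam) * C
    row, col = linear_sum_assignment(-grad)   # Hungarian algorithm, maximizing <grad, P>
    P_new = np.zeros_like(P)
    P_new[row, col] = 1
    return P_new

def gradient_ascent_matching(A_G, A_H, C, lam, n_iter=50):
    P = np.eye(A_G.shape[0], dtype=float)     # initial matching, e.g. derived from constraints
    for _ in range(n_iter):
        P_next = ga_step(A_G, A_H, C, P, lam)
        if np.array_equal(P_next, P):          # stop at a fixed point
            break
        P = P_next
    return P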
• IsoRank method. The idea of the IsoRank algorithm is to use the following recursive formula (Singh et al., 2008):
R[ij] = ∑[u ∈ N(i)] ∑[v ∈ N(j)] R[uv] / ( |N(u)| |N(v)| ),    (5)
where N(i) denotes the set of neighbors of vertex i, V[G] denotes the set of vertices of graph G, and element R[ij] represents the similarity between vertex i of graph G and vertex j of graph H. In the case of PPI networks, it represents the 'likelihood' that proteins i and j are functional orthologs. The recursive formula says that the more i and j have similar neighbors, the greater is the similarity measure between i and j. To estimate R, Singh et al. (2008) propose to use the power method to iteratively update R, viewed as a vector over all vertex pairs, according to:
R ← A R / ‖A R‖,    (6)
where A is the |V[G]||V[H]| × |V[G]||V[H]| matrix defined as:
A[(i,j),(u,v)] = 1 / ( |N(u)| |N(v)| ) if (i,u) ∈ E[G] and (j,v) ∈ E[H], and 0 otherwise.
To take into account the information on protein sequence similarities encoded by matrix C, the following modification of (6) is used:
R ← λ A R + (1−λ) C,    (7)
where λ has the same interpretation as in (4).
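For illustration, a simplified sketch of this power iteration using column-normalized adjacency matrices, so that the double sum over neighbors in (5) becomes two matrix products; this is a re-implementation sketch, not the reference IsoRank code, and the normalization choices are assumptions made here:

import numpy as np

def isorank_scores(A_G, A_H, C, lam, n_iter=30):
    degG = np.maximum(A_G.sum(axis=0), 1)
    degH = np.maximum(A_H.sum(axis=0), 1)
    WG = A_G / degG[None, :]                  # WG[i,u] = A_G[i,u] / |N(u)|
    WH = A_H / degH[None, :]                  # WH[j,v] = A_H[j,v] / |N(v)|
    Cn = C / C.sum() if C.sum() > 0 else C
    R = np.full((A_G.shape[0], A_H.shape[0]), 1.0 / (A_G.shape[0] * A_H.shape[0]))
    for _ in range(n_iter):
        R = lam * (WG @ R @ WH.T) + (1.0 - lam) * Cn   # network spreading + sequence term
        R /= np.abs(R).sum()                            # keep the iterate normalized
    return R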
3.2 Algorithms for the constrained GNA problem
As explained in Section 2, all methods for solving the balanced GNA problem (4) can also be used to solve the constrained GNA problem (2), by using a particular similarity function to enforce the
constraints. Hence a first series of methods to solve (2) are the constrained version of IsoRank, GA and PATH, described in the previous section. In addition to these three methods, we consider two
additional approaches specifically dedicated to the constrained GNA problem: the MRF method of Bandyopadhyay et al. (2006), and a new method based on MP which we propose to find the global optimum of
(2) when the graphs are not too dense.
• MRF method. To solve ambiguous assignments in Inparanoid clusters with more than two proteins, Bandyopadhyay et al. (2006) propose to use the information on protein interactions, by choosing the assignments that maximize the number of conserved interactions between the two species. For that purpose they use the following probabilistic model. They associate a binary variable z[ij] to each possible protein ortholog pair (f[i], y[j]) (here f[i] and y[j] denote fly and yeast proteins from the same Inparanoid cluster), where z[ij]=1 means that f[i] and y[j] are functional orthologs. Two variables z[ij] and z[kl] are connected if at least one pair of proteins (f[i], f[k]) or (y[j], y[l]) is connected in its PPI network, and the other one has a common neighbor (or is also connected). Let N(ij) denote the set of indices connected to (i,j). Then the probability law of z[ij], conditioned on the connected variables in N(ij), is modeled by a parametric form, with two parameters α and β, that depends on the number of these neighboring variables equal to one. The interpretation of this model is that z[ij] has more chances to be equal to one when the number of neighbors equal to one is large. When there are only two proteins in a cluster, then by definition z[ij]=1. If f[i] and y[j] are from different clusters then, also by definition, z[ij]=0. The parameters α and β are estimated on the basis of training data, then Gibbs sampling is performed to infer the values of the unknown variables z[ij] on the test set. We refer to Bandyopadhyay et al. (2006) for more details on this method.
• MP method for exact optimization. Although intractable in general, we now show that the constrained GNA problem (2) can be solved exactly and efficiently in some cases, and propose a new, efficient algorithm based on MP for that purpose. More precisely, we consider the situation where the set of proteins has been clustered into a finite set of L clusters c[1],…, c[L], which form a partition of the proteins, and where only proteins within the same cluster can be matched. This situation, illustrated in Figure 1, represents for example the problem investigated by Bandyopadhyay et al. (2006), where proteins of two organisms are first clustered by the Inparanoid algorithm, and functional orthologs are searched within clusters. Let us now consider the L clusters as vertices of a graph, and connect two clusters if they contain proteins of both organisms that interact in their respective PPI networks. For example, in Figure 1, two clusters are connected because a protein of the first organism and a protein of the second organism in one cluster interact, in their respective networks, with proteins belonging to the other cluster. The reason why we introduce this graph of clusters is that it allows us to decompose the choice of a global matching P into local matchings within each cluster, the dependency between the local choices being described by the edges of the graph. For example, if a cluster is isolated, then the choice of the matching within this cluster has no influence over the total number of conserved interactions apart from interactions within this cluster. In other words, the local matching within an isolated cluster can be optimized independently from the others. On the other hand, if a cluster is connected to other clusters, then changing the matching within this cluster can affect the total number of interactions between proteins of different clusters, and the matchings between connected clusters must be chosen synchronously to optimize the total number of conserved interactions.
Inparanoid cluster network. Two clusters are connected if there exists at least one pair of proteins in one cluster, and one pair of proteins in the other cluster, which may produce a conserved interaction.
More formally, if we denote the permutation P restricted to the L clusters by P[1],…, P[L], then an important property is that the total number of interactions conserved by P decomposes as:
J(P) = ∑[i=1..L] J[1](P[i]) + ∑[i~j] J[2](P[i], P[j]),    (9)
where J[1](P[i]) denotes the number of conserved interactions within c[i], J[2](P[i], P[j]) denotes the number of conserved interactions between c[i] and c[j], and i~j means that c[i] is connected to c[j].
While maximizing (9) remains a challenging optimization problem in general, it may be optimized efficiently if the graph of clusters has a particular structure, e.g. if many nodes are isolated or if
it contains no loop. For example, Figure 2a shows the graph of clusters for the problem of fly/yeast protein alignment investigated by Bandyopadhyay et al. (2006). Interestingly, this graph has no
loop. In this case, we can maximize (9) by a particular MP algorithm (Jordan, 2001). The idea of the MP algorithm is similar to the Viterbi algorithm (Viterbi, 1973) widely used to optimize functions
over linear graphs, such as finding the most likely set of hidden states in a hidden Markov model (Durbin et al., 1998). Here we describe how to apply MP on a graph without loop to optimize (9).
First, we note that each of the permutations involving proteins within a connected component of the graph can be optimized independently from each other, so we just consider a single connected component without loop, i.e. a tree. We pick an arbitrary cluster as the root of the tree; every cluster c[i] except the root then has a unique parent cluster, namely, the connected cluster in the direction of the root. The clusters connected to a cluster c that are not its parent are called its children and are denoted ch(c). To each node c of the tree we associate a vector u[c] indexed by Π[c], where Π[c] is the set of possible local matchings within c, i.e., the set of possible P[c]'s. The MP algorithm to solve (9) is then a recursive algorithm, which starts from the leaves up to the root in a first phase (the 'forward' step) to find the optimal value of the functional, and then downwards from the root to the leaves (the 'backward' step) to find the solution which achieves the optimal value. The forward step at node c solves, for any P[c] ∈ Π[c]:
u[c](P[c]) = J[1](P[c]) + ∑[c′ ∈ ch(c)] max[P[c′] ∈ Π[c′]] { J[2](P[c], P[c′]) + u[c′](P[c′]) }.    (10)
At the end of the forward step, the maximum value of the vector u at the root is equal to the maximal value of J(P), and the local permutation which achieves this maximum is the optimal local permutation at the root. In the backward step, the optimal local matchings of the children of a cluster are obtained by recovering the local permutations P[c′] which achieved the optimal value in (10) for the optimal permutation of the parent cluster.
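For illustration, a schematic sketch of the two passes on a tree of clusters; the enumeration of local matchings and the two scoring functions (playing the roles of J[1] and J[2]) are left as abstract inputs:

def mp_forward(children, root, matchings, local_score, pair_score):
    # forward pass: u[c][P_c] = best score of the subtree rooted at cluster c
    # local matchings must be hashable objects (e.g. tuples of matched pairs)
    u, best_child = {}, {}
    def visit(c):
        for child in children.get(c, []):
            visit(child)
        u[c], best_child[c] = {}, {}
        for Pc in matchings[c]:
            score, best_child[c][Pc] = local_score(c, Pc), {}
            for child in children.get(c, []):
                best = max(matchings[child],
                           key=lambda P2: pair_score(c, Pc, child, P2) + u[child][P2])
                best_child[c][Pc][child] = best
                score += pair_score(c, Pc, child, best) + u[child][best]
            u[c][Pc] = score
    visit(root)
    return u, best_child

def mp_backward(children, root, u, best_child):
    # backward pass: recover the optimal local matching of every cluster
    assignment, stack = {root: max(u[root], key=u[root].get)}, [root]
    while stack:
        c = stack.pop()
        for child in children.get(c, []):
            assignment[child] = best_child[c][assignment[c]][child]
            stack.append(child)
    return assignment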
Inparanoid cluster networks. (a) The case of the benchmark data used in Bandyopadhyay et al. (2006). (b) The case of generalized interactions (1–4), see text.
We note that it is also possible to use the MP algorithm on graphs that are not trees, but which have a small tree-width value (Jordan, 2001). Roughly speaking, this means that when the graph of clusters is not a tree, we may transform it into a tree by grouping clusters together; if the size of these cluster groups is not too large, then the exact optimization may still be feasible.
4 DATA
In order to compare the performance of the different graph matching methods, we performed several experiments aiming at aligning the PPI networks of the yeast Saccharomyces cerevisiae and of the fly
Drosophila melanogaster, as already investigated by Bandyopadhyay et al. (2006) and Singh et al. (2008). We downloaded all necessary data from the Supplementary Material of Bandyopadhyay et al. (2006
) (http://www.cellcircuits.org/Bandyopadhyay2006). The yeast PPI network contains 4389 proteins and 14 319 pairwise interactions, while the fly network contains 7038 proteins and 20 720 interactions.
In addition, we also retrieved the set of Inparanoid clusters used by Bandyopadhyay et al. (2006), consisting of 2244 clusters covering 2834 yeast proteins and 3881 fly proteins. The majority of these clusters (1552) contain only two proteins (one from fly, one from yeast), while the remaining 692 clusters contain at least two proteins from the same species and one from the other species. Those
692 clusters are called ambiguous in Bandyopadhyay et al. (2006), since they do not allow to associate a single protein from the fly to a single protein from the yeast as functional orthologs.
5 RESULTS
We wish to investigate two different questions: (i) compare the ability of the different methods to find alignment with many conserved interactions, and (ii) assess whether conserving more
interactions really helps in retrieving more functional orthologs. While the first question can be answered without ambiguity by counting the number of conserved interactions found by the different
methods in different settings, the second one, as we will see, remains difficult to answer due to the lack of large-scale and curated ground truth.
We performed three sets of experiments, in order to compare the different methods in different settings and to test different formulations of the GNA problem. In the first set of experiments, we
reproduce the problem studied by Bandyopadhyay et al. (2006), where the goal is to disambiguate functional orthologs within Inparanoid clusters using PPI information. This is a particular instance of
the constrained GNA problem which turns out to be amenable to exact optimization by the MP method. In the second set of experiments, we generalize the benchmark problem of Bandyopadhyay et al. (2006)
by adding second-order interactions between proteins in order to account for possible noise in the interaction data or protein duplications. In that case, we are again confronted with a constrained
GNA problem, but the increased number of interactions makes its exact minimization intractable and only approximate methods for constrained GNA can be applied. Finally, in a third set of experiments,
we discard the knowledge of Inparanoid clusters and directly search a global alignment which balances the similarity between aligned proteins and the number of conserved interactions. This is then an
instance of the balanced GNA problem. In all cases, we assess the number of conserved interactions captured by the different methods, as an indicator of how well they solve the GNA problem.
Furthermore, since the final objective of PPI network alignment is to match functional orthologs, we assess for each method how many matched pairs are present in the HomoloGene database, a set of
curated functional orthologous pairs based on the comparison of the protein as well as the DNA sequence which we consider here as a ‘gold standard’ for disambiguation purpose.
5.1 Disambiguation of functional orthologs within Inparanoid clusters
The goal of this experiment is to use PPI GNA to select functional orthologs between the yeast and the fly for proteins with several homologs. More precisely, all proteins sequences are first
clustered into groups by the Inparanoid algorithm (Brein et al., 2005), and only proteins from the same cluster can be considered as protein functional orthologs. Then each GNA algorithm tries to
find an association of protein functional orthologs which maximizes the total number of conserved interactions. In other words, we try to solve the constrained GNA (2), where the constraints are
provided by the Inparanoid clusters. A priori, the most natural definition of ‘conserved interaction’ for the alignment (f[1]−y[1]) and (f[2]−y[2]) (where f[1] and f[2] are fly's proteins, and y[1]
and y[2] are yeast's proteins) is the following:
1. f[1] interacts with f[2], and y[1] interacts with y[2] in their respective PPI networks.
However, this strict notion of conserved interaction leads to a very small number of potentially conserved interactions. To have more potential interactions, Bandyopadhyay et al. (2006) generalized
this definition by adding the following two cases, which additionally allow to account for possible duplication or fusion events in the two proteomes:
2. f[1] interacts with f[2] in the fly PPI network, and y[1] has a common neighbor with y[2] in the yeast PPI networks;
3. f[1] has a common neighbor with f[2] in the fly PPI network, and y[1] interacts with y[2] in the yeast PPI networks.
To be able to compare the results of different algorithms, we use this exact definition of conserved interactions (Cases 1–3). Figure 2a presents the network of Inparanoid clusters (as explained in
Figure 1) used in Bandyopadhyay et al. (2006), where only non-isolated ambiguous clusters are shown. As can be easily seen, this network which contains 121 ambiguous clusters has no loop, which
implies that we can use the MP method to find the optimal alignment with the largest number of conserved interactions. Although we know how to solve the problem exactly in this case with the MP
method, it is instructive to compare also the results of the different approximate algorithms for constrained GNA, namely, MRF and the constrained versions of IsoRank, GA and PATH. To construct the
alignment made by the MRF method (Bandyopadhyay et al., 2006), we downloaded the result file (http://www.cellcircuits.org/Bandyopadhyay2006/data/Bandyopadhyay_results.xls) with probabilities for all
possible protein association, and we extracted the one-to-one alignment by taking the most probable pairs. The results of the PATH, GA and IsoRank algorithms were obtained with the GraphM package
(Zaslavskiy et al., 2008a).
Table 1 presents the results of all algorithms on this benchmark, in terms of conserved interactions, number of HomoloGene pairs and running time. We know that the MP algorithm produces the maximal
possible value (238 in this case), and an interesting observation is that the GA and the PATH algorithms reach this maximum, while the MRF (233) and the IsoRank (228) algorithms do not. All methods
are comparable in terms of CPU time, except for MRF which is one order of magnitude slower on this dataset. Although the differences in number are slight, with only 2% more conserved interactions for
MP/GA/PATH than for MRF, and 4% more than for IsoRank, this nevertheless confirms that even on this relatively easy optimization problem neither MRF nor IsoRank finds the optimal solution, which can
be found by other methods at no additional computational cost.
Performance of the different methods for constrained GNA on the benchmark of Bandyopadhyay et al. (2006)
Figure 3a and b show some examples where the MRF assignment and the assignment made by the MP, PATH and GA algorithms are different, and illustrate how these differences influence the total number of
conserved interactions. For instance, in the Inparanoid cluster 1113, the MRF algorithm associates the fly protein skpA to the yeast protein skp1, while the MP algorithm prefers the assignment skpF to skp1. In the latter case, we lose one conserved interaction with the pair ago-cdc4, but we gain two new conserved interactions with (vha36 and vm28) and (ef2b and eft2). In another example, shown in Figure 3b, the MP algorithm proposes a different association for the yeast protein act1 in the 94th Inparanoid cluster. This assignment results in two lost and three gained conserved interactions. From a biological point of view, the assignment of the fly protein act87e to act1 proposed by the MRF algorithm seems to be worse than the assignment (act5c and act1) proposed by the MP algorithm. Indeed, although proteins act5c and act87e are very similar (being both from the actin family), it is known that act1 and act5c participate together in the INO80 protein complex (which exhibits chromatin remodeling activity and 3′ to 5′ DNA helicase activity), while act87e does not.
Illustration of difference between MRF and MP alignment. Each box represents an Inparanoid cluster, white unfilled boxes represent clusters where MP and MRF assignments are the same. Red solid lines
represent interactions conserved by MP alignment and ...
In order to assess more systematically and quantitatively whether differences in the number of conserved interactions lead to significant differences in the number of correctly assigned functional orthologous pairs, we counted how many pairs in each alignment are reported as functional orthologs in the HomoloGene database, considered here as a 'gold standard'. As shown in Table 1, the number
of HomoloGene pairs in each alignment also differs between the different methods, ranging from 36 for MRF to 39 for IsoRank and 41 for MP/GA/PATH. Interestingly, we observe that the methods MP, GA
and PATH, which retrieve the largest number of conserved interactions, also result in the largest number of HomoloGene pairs (41), which represents a relative increase of 13% compared to MRF (36), and of
5% compared to IsoRank. To illustrate the differences between the methods, Table 2 lists the HomoloGene pairs found by MRF and not MP/GA/PATH, and vice versa. Interestingly, a new method for PPI
network alignment was published recently (Yosef et al., 2008), which detects 37 HomoloGene orthologs on the same set of proteins. This puts it between MRF and IsoRank according to this criterion.
HomoloGene orthologs found by the MP method and not by MRF and vice versa
The validity of taking HomoloGene as a ‘gold standard’ for assessing the number of correctly assigned homologous pairs remains, however, subject to discussion. Indeed, although HomoloGene clusters
are defined using a variety of evidences, they are mainly driven by sequence similarity. To illustrate this, we assessed the performance of a simple alignment method that matches pairs within an
ambiguous cluster by maximizing the total sequence similarity over matched pairs. This method does not use any PPI information for the matching. The resulting alignment has only 184 conserved interactions, which is, not surprisingly, much worse than all methods which take PPI into account. However, the resulting matched pairs contain 43 HomoloGene pairs, which is more than all methods taking PPI into account. This shows that the number of HomoloGene pairs as an indicator should be taken with caution, since it favors methods which focus on matching proteins based on sequence similarity.
5.2 Disambiguation of Inparanoid clusters with second-order interactions
The idea of Bandyopadhyay et al. (2006), to expand the natural notion of conserved interaction (Case 1) to Cases 2 and 3, aims to take into account second-order interactions, that is, cases where two proteins that do not interact directly with each other have a common neighbor. Another natural generalization of the notion of conserved interaction is then the following case:
4. f[1] has a common neighbor with f[2], and y[1] has a common neighbor with y[2], in their respective PPI networks.
Adding interactions according to this rule makes the problem computationally more difficult, since ambiguous clusters become more connected. Indeed, while we were able to solve the original problem
exactly with the MP algorithm, the network of Inparanoid clusters when Cases 1–4 are included takes the form presented in Figure 2b. Contrary to the previous network (Cases 1–3 in Figure 2a), the new
network has loops and is not amenable to exact optimization with the MP procedure. Only approximate algorithms can be applied in this case.
In order to compare all methods (except MP) in this new setting, we re-implemented the MRF algorithm with the new data. The estimated values of the model parameters [see details in Bandyopadhyay et
al. (2006)] are α=0.51 and β=−6.87. We used the same training and test data as those used in Bandyopadhyay et al. (2006) to estimate them. Then, we estimated the probabilities of being protein
orthologs for potential pairs of proteins by Gibbs sampling, and obtained a one-to-one alignment based on the most probable associations.
Table 3 shows the results obtained by the different graph matching algorithms. Although we do not know the maximum number of interactions that can be conserved in this case, we observe again that
PATH and GA find solutions with 3–4% more interactions conserved than MRF and IsoRank. There is no clear difference in the number of HomoloGene pairs between the different methods, and the addition
of second-order interactions has no obvious effect on this indicator either: it leads to a gain of three pairs for MRF, but to a loss of one pair for IsoRank and PATH, and to no change for GA.
Performance of the different methods for constrained GNA on the benchmark of (Bandyopadhyay et al., 2006) with second-order interactions added
5.3 Global PPI network alignment by balancing sequence and interaction conservation
In this last series of experiments, we consider the problem proposed by Singh et al. (2008), for which IsoRank reflects the state-of-the-art: find a global PPI alignment by balancing the sequence
similarity in matched pairs with the total number of conserved interactions, allowing in particular matches between proteins in different Inparanoid clusters if they allow an increased number of
conserved interactions. For this application, we can only compare the three methods for balanced GNA, namely, IsoRank, GA and PATH. The trade-off between matching proteins with similar sequences and
matching with a lot of conserved interactions is controlled by the parameter λ in (4) and (7). The greater the λ, the more attention we pay to the sequence similarity and the less to the number of
conserved interactions. For each method, by varying λ, we therefore obtain a family of alignments with different compromise found between the number of conserved interactions J(P) (4) and the summary
sequence similarity score S(P) (4).
Figure 4 shows the different trade-offs that are found by the different methods. For a given level of average sequence similarity, we wish to have the largest possible number of conserved pairs. We
observe that over the whole range of average sequence similarity, the GA algorithm clearly outperforms PATH, which itself outperforms IsoRank. For example, for the trade-off parameter choice advocated
by (Singh et al., 2008) for IsoRank (λ=0.6), IsoRank finds an alignment with 566 conserved interactions, corresponding to an average sequence similarity score in the matched pairs of 15.26. At this
level of average sequence similarity, PATH and GA find alignments with, respectively, 678 and 1006 interactions, which corresponds to relative improvements of, respectively, 20 and 78%.
Algorithm performance comparison. Number of conserved interaction J(P) versus sequence similarity S(P).
Again, there is still only limited objective evidence that optimizing the number of conserved interactions leads to better matching in terms of functional orthology detection. As an attempt to test
this fact, we first counted, for each alignment, the number of HomoloGene pairs in the alignment. However, we observed that, for each method, this number increases monotonically when more weight is
given to sequence similarity as opposed to interaction conservation. This again highlights the limitation of this criterion, which is optimized by construction when sequences are optimally matched in
terms of similarity. We then attempted to compare the different alignments in terms of mean similarity between Gene Ontology (GO) annotations of matched pairs. In order to compare GO annotations of
two proteins, we tested the method presented by Singh et al. (2008) to compute the functional coherence of a pair. However, we were not able to observe any clear difference between the methods, or
between the different parameter choice for each individual method. The maximum mean functional coherence over the choice of the trade-off parameter is 0.519, 0.509 and 0.522 for IsoRank, GA and PATH,
respectively. However, the fluctuations of this score when the parameters change are so large that these maximum values are not significantly different. This is due to the fact that the number of
annotated proteins remains limited, and that they are rarely annotated with such precision that it is possible to clearly differentiate true functional orthologs from spurious ones (Bandyopadhyay et
al., 2006). For example, when we estimate the functional score of a given alignment, there is rarely >15–20% of pairs with GO annotations.
We presented two general formulations for the GNA problem. The constrained GNA formulation corresponds to a situation where we have a strong a priori about which pairs can be matched. In the balanced
GNA problem, we replace the binary constraints on which pairs are allowed by a more global objective function that balances the matching of similar proteins with the conservation of interactions,
with a parameter to smoothly control the trade-off between these two contradictory goals. While MRF and IsoRank are popular methods for these two formulations, we proposed in this article new methods
which lead to significantly better alignments, when we assess the quality of an alignment in terms of how many conserved interactions are retrieved. In particular, the MP method, when it is applicable,
finds the optimal solution of a constrained GNA problem, and the GA method provides consistently good results in both cases. The question of which formulation is the best for a given application and
dataset, between the constrained and balanced GNA, remains largely open and worth further systematic investigations. Regarding the relative performance of the different methods in terms of how many
conserved interactions they find, we observed that the MP/GA/PATH methods outperform MRF and IsoRank in both situations. This is not so surprising given that, once the problem is explicitly stated as
a graph matching problem, it makes sense to use methods borrowing ideas and techniques from state-of-the-art graph matching approaches. The impressive performance of GA compared to PATH in the
balanced GNA experiment (Fig. 4) is more surprising, given the good performance of PATH on a number of other benchmarks (Zaslavskiy et al., 2008c). We believe that this weakness of PATH is due to the
large difference in the number of nodes between the two networks. Indeed, the resulting large number of dummy nodes that must be added generates singularities in the convex relaxation used by the PATH algorithm.
The GNA problems we studied have several extensions. First, it may be interesting to consider alignment of weighted PPI networks with weights representing, for instance, experimental evidence of
interaction existence. Interestingly, the PATH, GA and IsoRank algorithm can be applied directly to a weighted network, by just replacing the binary graph adjacency matrix by a real-valued matrix.
Another relevant extension is the alignment of multiple PPI networks, corresponding to more than two species, via pairwise comparisons as it was presented by Singh et al. (2008). Finally, it may be
relevant in some cases to match one protein of one species with several proteins of the other species, to account for possible duplications or fusion events. An interesting property of the PATH
algorithm is the fact that it estimates a permutation matrix by first solving a relaxed problem. The solution of the relaxed problem is a doubly stochastic matrix whose entries can be interpreted as probabilities for proteins to be functional orthologs (Zaslavskiy et al., 2008c). Therefore, in order to allow many-to-many assignments of proteins, we could use the solution of the convex relaxation directly, instead of the final permutation matrix.
Finally, although progress in graph alignment algorithms can be monitored by objective quantitative measures such as the number of conserved interactions, their biological relevance remains
difficult to assess. In particular, for the detection of functional orthologs, it is apparent that current GO annotations or curated databases of functional orthologs are either biased by
construction (e.g. HomoloGene), or not precise enough and too scarce for systematic evaluation (e.g. GO annotations). We believe we are reaching a point where more experimental validations are
needed. On the other hand, there are many other possible applications for efficient graph matching algorithms scaling to large biological networks, such as phylogenetic comparison of sets of
networks, detection of new conserved pathways or curation of PPI data. We expect the methods proposed in this article to have a direct impact in these applications.
Conflict of Interest: none declared.
^1Technically, we add dummy nodes in each cluster to obtain the same number of proteins of each species in each cluster.
• Aebersold R, Mann M. Mass spectrometry-based proteomics. Nature. 2003;422:198–207. [PubMed]
• Almohamad H, Duffuaa S. A linear programming approach for the weighted graph matching problem. IEEE Trans. Inform. Theor. 1993;15:522–525.
• Bandyopadhyay S, et al. Systematic identification of functional orthologs based on protein network comparison. Genome Res. 2006;16:428–435. [PMC free article] [PubMed]
• Berg J, Lässig M. Cross-species analysis of biological networks by bayesian alignment. Proc. Natl Acad. Sci. USA. 2006;103:10967–10972. [PMC free article] [PubMed]
• Brein K, et al. Inparanoid: a comprehensive database of eukaryotic orthologs. Nucleic Acids Res. 2005;33:D476–D480. [PMC free article] [PubMed]
• Caelli T, Kosinov S. An eigenspace projection clustering method for inexact graph matching. IEEE Trans. Pattern Anal. Mach. Intell. 2004;26:515–519. [PubMed]
• Conte D, et al. Thirty years of graph matching in pattern recognition. Intern. J. Pattern Recognit. Artif. Intell. 2004;18:265–298.
• Durbin R, et al. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. NY: Cambridge University Press; 1998.
• Fields S, Song O. A novel genetic system to detect protein-protein interactions. Nature. 1989;340:245–246. [PubMed]
• Flannick J, et al. Graemlin: general and robust alignment of multiple large interaction networks. Genome Res. 2006;16:1169–1181. [PMC free article] [PubMed]
• Gold S, Rangarajan A. A graduated assignment algorithm for graph matching. IEEE Trans. Pattern Anal. Mach. Intell. 1996;18:377–388.
• Jordan M. Learning in Graphical Models. Cambridge: The MIT Press; 2001.
• Kelley B, et al. Conserved pathways within bacteria and yeast as revealed by global protein network alignment. Proc. Natl Acad. Sci. USA. 2003;100:11394–11399. [PMC free article] [PubMed]
• Kelley B, et al. PathBLAST: a tool for alignment of protein interaction networks. Nucleic Acids Res. 2004;32:W83–W88. [PMC free article] [PubMed]
• Koyutürk M, et al. Pairwise alignment of protein interaction networks. J. Comput. Biol. 2006;13:182–199. [PubMed]
• Kuhn HW. The Hungarian method for the assignment problem. Nav. Res. 1955;2:83–97.
• Remm M, et al. Automatic clustering of orthologs and in-paralogs from pairwise species comparisons. J. Mol. Biol. 2001;314:1041–1052. [PubMed]
• Sharan R, et al. Conserved patterns of protein interaction in multiple species. Proc. Natl Acad. Sci. USA. 2005;102:1974–1979. [PMC free article] [PubMed]
• Singh R, et al. Global alignment of multiple protein interaction networks with application to functional orthology detection. Proc. Natl Acad. Sci. USA. 2008;105:12763–12768. [PMC free article] [
• Sjölander K. Phylogenomic inference of protein molecular function: advances and challenges. Bioinformatics. 2004;20:170–179. [PubMed]
• Suthram S, et al. The plasmodium protein network diverges from those of other eukaryotes. Nature. 2005;438:108–112. [PMC free article] [PubMed]
• Umeyama S. An eigendecomposition approach to weighted graph matching problems. IEEE Trans. Pattern Anal. Mach. Intell. 1988;10:695–703.
• Viterbi A. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inform. Theor. 1973;13:260–269.
• Yosef N, et al. Improved network-based identification of protein orthologs. Bioinformatics. 2008;24:i200–i206. [PubMed]
• Zaslavskiy M, et al. GRAPHM: graph matching package. 2008a Available at http://cbio.ensmp.fr/graphm (last accessed date March 2009)
• Zaslavskiy M, et al. A path following algorithm for graph matching. In: Elmoataz A, editor. Image and Signal Processing, Proceedings of the 3rd International Conference, ICISP 2008. Vol. 5099.
Berlin/Heidelberg: Springer; 2008b. pp. 329–337. LNCS.
• Zaslavskiy M, et al. Technical Report 00232851, HAL. Mines ParisTech; 2008c. A path following algorithm for the graph matching problem.
Articles from Bioinformatics are provided here courtesy of Oxford University Press
Convert rods to feet - Conversion of Measurement Units
›› Convert rod [international] to foot
›› More information from the unit converter
How many rods in 1 foot? The answer is 0.0606060606061.
We assume you are converting between rod [international] and foot.
You can view more details on each measurement unit:
rods or feet
The SI base unit for length is the metre.
1 metre is equal to 0.198838781516 rods, or 3.28083989501 feet.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between rods and feet.
Type in your own numbers in the form to convert the units!
›› Definition: Rod
A rod is a unit of length, equal to 11 cubits, 5.0292 metres or 16.5 feet. A rod is the same length as a perch and a pole. The lengths of the perch (one rod) and chain (four rods) were
standardized in 1607 by Edmund Gunter.
The length is equal to the standardized length of the ox goad used by medieval English ploughmen; fields were measured in acres which were one chain (four rods) by one furlong (in the United Kingdom,
ten chains).
›› Definition: Foot
A foot (plural: feet) is a non-SI unit of distance or length, measuring around a third of a metre. There are twelve inches in one foot and three feet in one yard.
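If you prefer to do the conversion in code rather than with the form above, the fixed factor of 16.5 feet per rod gives a one-line conversion in either direction (a small illustrative snippet, not part of this site's converter):

FEET_PER_ROD = 16.5            # 1 rod (international) = 16.5 feet = 5.0292 m

def rods_to_feet(rods):
    return rods * FEET_PER_ROD

def feet_to_rods(feet):
    return feet / FEET_PER_ROD

print(feet_to_rods(1))    # ~0.0606 rods, matching the value above
print(rods_to_feet(4))    # one chain = 4 rods = 66 feet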
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
Pairs of Disjoint $q$-element Subsets Far from Each Other
Let $n$ and $q$ be given integers and $X$ a finite set with $n$ elements. The following theorem is proved for $n>n_0(q)$. The family of all $q$-element subsets of $X$ can be partitioned into disjoint
pairs (except possibly one if $n\choose q$ is odd), so that $|A_1\cap A_2|+|B_1\cap B_2|\leq q$, $|A_1\cap B_2|+|B_1\cap A_2| \leq q$ holds for any two such pairs $\{ A_1,B_1\} $ and $\{ A_2,B_2\} $.
This is a sharpening of a theorem in [2]. It is also shown that this is a coding type problem, and several problems of similar nature are posed.
Programming the Cell Processor
The Problem
To illustrate the peculiarities of Cell programming, we use the Breadth-First Search (BFS) on a graph. Despite its simplicity, this algorithm is important because it is a building block of many
applications in computer graphics, artificial intelligence, astrophysics, national security, genomics, robotics, and the like.
Listing One is a minimal BFS implementation in C. Variable G contains the graph in the form of an array of adjacency lists. G[i].length tells how many neighbors the i-th vertex has, which are in G
[i].neighbors[0], G[i].neighbors[1], and so on. The vertex from which the visit starts is in variable root. A BFS visit proceeds in levels: First, the root is visited, then its neighbors, then its
neighbors' neighbors, and so on. At any time, queue Q contains the vertices to visit in the current level. The algorithm scans every vertex in Q, fetches its neighbors, and adds each neighbor to the
list of vertices to visit in the next level, Qnext. To prevent being caught in loops, the algorithm avoids visiting those vertices that have been visited before. To do so, it maintains a marked array
of Boolean variables. Neighbors are added to Qnext only when they are not already marked, then they get marked. At the end of each level, Q and Qnext swap, and Qnext is emptied.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
/* ... */
/* the graph */
vertex_t * G;
/* number of vertices in the graph */
unsigned card_V;
/* root vertex (where the visit starts) */
unsigned root;
void parse_input( int argc, char** argv );
int main(int argc, char ** argv)
{
  unsigned *Q, *Q_next, *marked;
  unsigned Q_size=0, Q_next_size=0;
  unsigned level = 0;
  parse_input(argc, argv);
  Q      = (unsigned *) calloc(card_V, sizeof(unsigned));
  Q_next = (unsigned *) calloc(card_V, sizeof(unsigned));
  marked = (unsigned *) calloc(card_V, sizeof(unsigned));
  /* the visit starts at the root, which is marked immediately */
  Q[0] = root;
  Q_size = 1;
  marked[root] = TRUE;
  while (Q_size != 0)
  {
    /* scanning all vertices in queue Q */
    unsigned Q_index;
    for ( Q_index=0; Q_index<Q_size; Q_index++ )
    {
      const unsigned vertex = Q[Q_index];
      const unsigned length = G[vertex].length;
      /* scanning each neighbor of each vertex */
      unsigned i;
      for ( i=0; i<length; i++)
      {
        const unsigned neighbor = G[vertex].neighbors[i];
        if( !marked[neighbor] ) {
          /* mark the neighbor */
          marked[neighbor] = TRUE;
          /* enqueue it to Q_next */
          Q_next[Q_next_size++] = neighbor;
        }
      }
    }
    /* swap Q and Q_next, empty Q_next, and move to the next level */
    unsigned * swap_tmp;
    swap_tmp = Q;
    Q = Q_next;
    Q_next = swap_tmp;
    Q_size = Q_next_size;
    Q_next_size = 0;
    level++;
  }
  return 0;
}
On a Pentium 4 HT running at 3.4 GHz, this algorithm is able to check 24-million edges per second. On the Cell, at the end of our optimization, we achieved a performance of 538-million edges per
second. This is an impressive result, but came at the price of an explosion in code complexity. While the algorithm in Listing One fits in 60 lines of source code, our final algorithm on the Cell
measures 1200 lines of code.
Let's Get Parallel
The first step in adapting programs to a multicore architecture is making it parallel. The basic idea is to split the loop for (Q_index=0; Q_index<Q_size; Q_index++)... among different SPEs. A naive approach would then access the shared marked array under the protection of a synchronization mechanism such as a lock. Locks work fine in cache-coherent shared-memory machines with uniform memory and limited threads, but scale poorly on multicore
systems. Instead, we partition both the vertex space and the marked array evenly among the SPEs. Each SPE explores only the vertices it owns, and forwards the others to their respective owners.
Function which_owner() returns the owner of a given vertex identifier.
Rather than synchronizing at a fine grain, we adopt a Bulk-Synchronous Parallel (BSP) approach. In BSP, an algorithm is split in steps, and all the cores execute the same step at the same time. After
each step, there is a barrier; see barrier() in Listing Two (available at http://www.ddj.com/code/). At a barrier, whoever finishes first waits for all the others to complete before proceeding to the
next step. The BSP approach is very common in the parallel programming community because it greatly simplifies the control flow and the communication protocols, at the expense of a negligible
performance penalty.
Listing Two is a pseudo-C rendition of the algorithm in BSP form. The code is executed by each of the SPEs, numbered from 0 to Nspe-1. Each SPE examines the neighbor lists of each of the vertices in
its Q, encountering neighbors that belong to different SPEs. It dispatches them, putting those owned by the i-th SPE in queue Qout[i]. Then, an all-to-all exchange takes place, which routes the
vertices to their respective owners. Each Qout[i] is sent to the i-th SPE, which receives it into Qin[s], where s is the identifier of the sender SPE. Next, each SPE examines the incoming adjacency
lists, and marks and adds vertices to its private Qnext as before. The entire algorithm is complete when all the SPEs find their Qs empty. This is done via a reduce_all operation, which performs a
distributed addition of all the variables Q_size among all the SPEs. | {"url":"http://www.drdobbs.com/parallel/programming-the-cell-processor/197801624?pgno=3","timestamp":"2014-04-18T03:06:30Z","content_type":null,"content_length":"97215","record_id":"<urn:uuid:66e23940-1a64-4412-b75d-eed84b403fe1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
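Listing Two itself is not reproduced here, but its overall shape follows directly from this description. The fragment below is only an illustrative sketch of one BSP iteration as executed on each SPE; the helper routines (which_owner(), all_to_all(), barrier(), reduce_all()) and every signature are assumptions based on the names used above, not the article's actual code.

    /* one BSP iteration, executed by every SPE; s ranges over SPE identifiers */
    while ( reduce_all(Q_size) != 0 )          /* stop when all SPEs report empty queues */
    {
        unsigned Q_index, i, s;

        /* dispatch: route every neighbor to the SPE that owns it */
        for ( Q_index = 0; Q_index < Q_size; Q_index++ )
        {
            const unsigned vertex = Q[Q_index];
            for ( i = 0; i < G[vertex].length; i++ )
            {
                const unsigned neighbor = G[vertex].neighbors[i];
                const unsigned owner = which_owner(neighbor);
                Qout[owner][Qout_size[owner]++] = neighbor;
            }
        }
        barrier();

        /* all-to-all exchange: Qout[i] of each sender becomes Qin[sender] on SPE i */
        all_to_all(Qout, Qout_size, Qin, Qin_size);
        barrier();

        /* mark and enqueue only the vertices this SPE owns */
        Q_next_size = 0;
        for ( s = 0; s < Nspe; s++ )
            for ( i = 0; i < Qin_size[s]; i++ )
            {
                const unsigned v = Qin[s][i];
                if ( !marked[v] ) {
                    marked[v] = TRUE;
                    Q_next[Q_next_size++] = v;
                }
            }

        /* swap Q and Q_next, clear the outgoing queues for the next step */
        { unsigned *tmp = Q; Q = Q_next; Q_next = tmp; }
        Q_size = Q_next_size;
        for ( s = 0; s < Nspe; s++ )
            Qout_size[s] = 0;
        barrier();
    }

Each barrier() closes one BSP phase, so all SPEs move through the dispatch, exchange, and marking phases in lockstep, exactly as the bulk-synchronous model requires.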
BMC Med Genomics. 2011; 4: 81.
Estimates of array and pool-construction variance for planning efficient DNA-pooling genome wide association studies
Until recently, genome-wide association studies (GWAS) have been restricted to research groups with the budget necessary to genotype hundreds, if not thousands, of samples. Replacing individual
genotyping with genotyping of DNA pools in Phase I of a GWAS has proven successful, and dramatically altered the financial feasibility of this approach. When conducting a pool-based GWAS, how well
SNP allele frequency is estimated from a DNA pool will influence a study's power to detect associations. Here we address how to control the variance in allele frequency estimation when DNAs are
pooled, and how to plan and conduct the most efficient well-powered pool-based GWAS.
By examining the variation in allele frequency estimation on SNP arrays between and within DNA pools we determine how array variance [var(e[array])] and pool-construction variance [var(e
[construction])] contribute to the total variance of allele frequency estimation. This information is useful in deciding whether replicate arrays or replicate pools are most useful in reducing
variance. Our analysis is based on 27 DNA pools ranging in size from 74 to 446 individual samples, genotyped on a collective total of 128 Illumina beadarrays: 24 1M-Single, 32 1M-Duo, and 72
For all three Illumina SNP array types our estimates of var(e[array]) were similar, between 3-4 × 10^-4 for normalized data. Var(e[construction]) accounted for between 20-40% of pooling variance
across 27 pools in normalized data.
We conclude that relative to var(e[array]), var(e[construction]) is of less importance in reducing the variance in allele frequency estimation from DNA pools; however, our data suggests that on
average it may be more important than previously thought. We have prepared a simple online tool, PoolingPlanner (available at http://www.kchew.ca/PoolingPlanner/), which calculates the effective
sample size (ESS) of a DNA pool given a range of replicate array values. ESS can be used in a power calculator to perform pool-adjusted calculations. This allows one to quickly calculate the loss of
power associated with a pooling experiment to make an informed decision on whether a pool-based GWAS is worth pursuing.
Genome-wide association studies (GWAS) have been used to examine over 200 diseases and traits, and identified over 4000 single nucleotide polymorphisms (SNPs) associated with these traits, as listed
in the Catalog of Published Genome-Wide Association Studies [1]. In many cases, GWAS have revealed previously unsuspected molecular mechanisms of disease, highlighting the value of this
hypothesis-free approach [reviewed in [2,3]]. Unfortunately, GWAS are very costly due to the price of genotyping thousands of individual DNA samples on high-density SNP arrays. Consequently, GWAS
have only been feasible for research groups with the necessary budget, studying well-funded diseases or traits. A simple strategy to drastically reduce cost is to replace individual genotyping in
Phase I of a GWAS with genotyping of DNA pools. DNA pools yield estimated allele frequencies rather than observed genotypes; hence, this step has been called allelotyping [4]. Several studies have
provided proof of principle for the pooling strategy, using it to re-discover known disease-variant associations of moderate to large effect size for a fraction of the cost of conventional GWAS [5,4
]. To date, more than twenty pool-based GWAS have been published, many reporting genome-wide significant associations for diseases and traits such as follicular lymphoma, otosclerosis, multiple
sclerosis, Alzheimer's disease, melanoma, psoriasis, and skin colour [6-12]. Depending on the number of samples being pooled, the cost reduction in Phase I can easily reach 100 fold. Consider, if a
SNP array costs $250 and there are 2000 cases and 2000 controls to genotype, a million dollars is required for Phase I individual genotyping alone. Conversely, the pool-based experiment using 12
replicate arrays on two pools (case and control) would be $6000, or 0.6% of the cost. Simply put, a pooling GWAS is feasible for most grant budgets, while an individual genotyping GWAS is not. The
criticism of pool-based GWAS is that they have reduced power relative to conventional GWAS because of errors introduced by estimating allele frequency from DNA pools rather than individual genotyping
data. While it is true that pool-based GWAS forfeit some power, these losses can be estimated, are often less than expected, and may not change the associations discovered. Although array costs will
continue to drop and conventional GWAS will become more feasible, the potential savings associated with the pooling approach will scale in proportion, leaving more funds for subsequent replication,
fine-mapping, and sequencing of associated genomic regions. For diseases or traits with unknown biology or genetic involvement, a pooling GWAS represents an economical way to test for associations
with moderate odds ratios. In addition, work using DNA extracted from pooled whole blood suggests that a large time-savings (50-100 fold) may also be possible, presenting the possibility of an
incredibly fast (<1 month) and economical experiment [5]. For a comprehensive introduction and review of DNA pooling readers are directed to Sham et al. 2002 and Pearson et al. 2007 [13,4], and for a
set of best practices for any GWAS to Pearson & Manolio, 2008 [14].
We know that in the process of estimating allele frequencies from DNA pools we introduce error, and these must be taken into consideration to plan an adequately powered experiment or to appropriately
calculate association statistics [15,16]. With respect to doing this, the most important consideration is the pooling variance [17]; the variance in the errors arising from estimating allele
frequency from a DNA pool. Pooling variance is the sum of many sources of variation, including in particular, array variance and pool construction variance. Array variance can be attributed to those
errors arising from estimating allele frequency from a DNA pool on an SNP array [17,18]. Pool construction variance can be attributed to those errors arising from the physical creation of a DNA pool.
As pooling variance increases, the ability of a pool-based GWAS to detect odds ratios similar to those detectable by conventional GWAS decreases. In this report we assume pooling variance is the sum
of array variance and pool-construction variance and attempt to determine which makes the greater contribution to the pooling variance. This is relevant to determining how best to design a pool-based
GWAS and how to allocate resources, for example, replicate arrays can be used to reduce array variance and/or pools can be constructed in replicate to control pool construction variance.
Here we partition and estimate variance components using the approach described by MacGregor [17], which examines variation in allele frequency measurements between and within DNA pools. Briefly,
within-pool variation is that observed between two arrays used to allelotype the same DNA pool (i.e. replicate arrays), and is an estimate of array variance. Between-pool variation is that observed
between two arrays used to allelotype two different DNA pools, and is an estimate of pooling variance. Estimates of array variance and pooling variance are used to calculate pool construction
variance by subtraction [17]. Using this approach in an analysis of two DNA pools allelotyped on twelve Affymetrix Genechip HindIII arrays (6 arrays per pool) MacGregor [17] found that approximately
87.5% of pooling variation could be attributed to the arrays, leaving 12.5% to pool-construction [17]. It was noted, however, that more data sets would be necessary to determine the variability in
these estimates. Here we inspect 27 DNA pools allelotyped on a total of 128 Illumina arrays, including the Human1M Single (1M-Single), Human1M Duo (1M-Duo), and HumanHap660 Quad (660-Quad) arrays,
allowing us to better address the question of what values array variance and pool-construction variance are likely to take. In addition, we perform our analysis on normalized array data and raw array
data to examine how normalization affects pooling variance estimates.
In the first part of this study we establish values for array variance and pool-construction variance. In the second part, we use these estimates to calculate the effective sample size (ESS) of a DNA
pool (where ESS is the equivalent number of samples that would need to be individually genotyped to give a similar result) [19]. We also present a simple online tool, PoolingPlanner, which uses our
empirical variance estimates as default values to calculate the effective sample size (ESS) of a DNA pool given a range of replicate array values (available at http://www.kchew.ca/PoolingPlanner/).
PoolingPlanner also accepts user-supplied values for variance estimates. ESS can then be used in one of the available power calculators, such as CaTS [20], or Quanto [21], to perform pool-adjusted
power calculations [4]. PoolingPlanner is intended to help researchers quickly calculate the loss of power associated with a particular pooling experiment, which is a first step in making an informed
decision on whether a pool-based GWAS is worth pursuing.
Our analysis is based on 27 DNA pools ranging in size from 74 to 446 individual samples. These were allelotyped on a collective total of 128 Illumina beadarrays: 24 1M-Single, 32 1M-Duo, and 72
660-Quad. Our dataset comprises four batches of genotyping (details given in Additional File 1, Table S1), which correspond to four ongoing pool-based GWAS that have not yet been published. Each of
these studies was approved by the joint Clinical Research Ethics Board of the British Columbia Cancer Agency and the University of British Columbia. All subjects gave written informed consent.
Genomic DNA was extracted from peripheral venous blood collected between 2001 and 2008 by different laboratories using different methods. DNA samples were diluted to 50-100 ng/uL and then quantified
in duplicate by fluorometry using PicoGreen™(Molecular Probes, Eugene, OR, US). Pools were constructed by combining 200 ng of each sample DNA by manual pipetting. Pools were assayed (allelotyped) at
the Centre for Applied Genomics at Sick Children's Hospital in Toronto.
SNP allele frequency in DNA pools was estimated using Illumina's beadarrays, where on average each SNP is estimated by 16-18 "bead" observations per array (oligonucleotide probes are designed to
assay a SNP and are attached to beads, where individual beads are coated with one probe type and interrogate one site in the genome) [22]. Equation 1 was used in the calculation of each SNP allele frequency, where G[i] and R[i] are the green and red fluorescence intensity for the ith bead assaying a given SNP. The two colours correspond to the two alleles of the SNP, and n is the number of beads assaying
a given SNP, typically 16-18. Illumina beadarrays are manufactured such that there are multiple strips on each array [22], and our preliminary analysis revealed that unique groups of SNPs are
consistently on only a subset of strips. From our previous experience, and that of others [18], it was known that the average relative intensity of the red and green channels could differ
dramatically between strips and between arrays. To prevent these manufacturing and/or assaying properties from biasing allele frequency estimation, a simple normalization was performed. Each array
was normalized on a strip-by-strip basis by adjusting the red channel intensity to give a mean strip-wide allele frequency estimate of 0.5 [18]. To examine the effect of this normalization on the
variance terms estimated, the analyses presented in this paper are performed on both normalized and raw Illumina array data.
Statistical Analysis
Our purpose is to calculate empirical estimates of pooling variance and array variance, and then to estimate pool construction variance by subtraction. Pooling variance and array variance are both
estimated by calculating allele frequency differences across two paired (by SNP, for all SNPs on the array) arrays [17]. The two arrays used in the comparison will dictate whether an estimate of
array or pooling variance is generated. For example, to calculate array variance, let allele frequency estimates on arrays x used to allelotype DNA pool a be:
$\tilde{p}_{ax} = p_a + e_{array,x}$
where $p_a$ is the true allele frequency for those samples in DNA pool a, and $e_{array,x}$ is the error associated with estimating the allele frequency from a DNA pool [15]. Then, the variance of the allele frequency difference on two replicate arrays (x = 1, 2) is [17]:
$var(\tilde{p}_{a1} - \tilde{p}_{a2}) = 2\,var(e_{array})$
This yields an estimate of array variance:
$var(e_{array}) = \tfrac{1}{2}\,var(\tilde{p}_{a1} - \tilde{p}_{a2})$
where $var(\tilde{p}_{a1} - \tilde{p}_{a2})$ is calculated as the average of the squared allele frequency differences for all SNPs, i (i = 1...n), on arrays 1 and 2:
$var(\tilde{p}_{a1} - \tilde{p}_{a2}) = \frac{1}{n}\sum_{i=1}^{n}(\tilde{p}_{a1,i} - \tilde{p}_{a2,i})^{2}$
Var(e[array]) is assumed constant for all SNPs. If more than two replicate arrays are used to allelotype a given DNA pool, multiple array comparisons are possible, and the best estimate of var(e
[array]) is the average of all possible pairings [17].
If arrays 1 and 2 interrogate two different DNA pools, an estimate of pooling variance can be obtained. When two DNA pools (a, b) are constructed from identical samples (i.e. replicate pool construction), the variance of the allele frequency difference between the two arrays is:
$var(\tilde{p}_{a1} - \tilde{p}_{b2}) = 2\,var(e_{array}) + 2\,var(e_{construction})$
where var(e[construction]) is the variance in the pool construction errors, which are assumed to be constant for all SNPs. Thus, an estimate of pooling variance, var(e[pooling-1]), is [17]:
$var(e_{pooling\text{-}1}) = var(e_{array}) + var(e_{construction}) = \tfrac{1}{2}\,var(\tilde{p}_{a1} - \tilde{p}_{b2})$
where "pooling-1" is used to indicate that this estimate of pooling variance is based on the comparison of arrays that allelotype two replicate DNA pools. As before, if more than two replicate arrays are used to allelotype a given DNA pool, multiple array comparisons are possible, and the best estimate of var(e[pooling-1]) is the average of all possible pairings [17].
When DNA pools a and b are constructed from non-identical samples (e.g., a case and a control pool), an alternative estimate of pooling variance is var(e[pooling-2]) [15,17]:
$var(e_{pooling\text{-}2}) = \tfrac{1}{2}\,var(\tilde{p}_{a1} - \tilde{p}_{b2})$
Here $var(\tilde{p}_{a1} - \tilde{p}_{b2})$ is calculated as the average of the squared allele frequency difference minus a random binomial sampling variance term, $\tilde{V}_{a1,b2}$, for all SNPs, i (i = 1...n), on arrays 1 and 2:
$var(\tilde{p}_{a1} - \tilde{p}_{b2}) = \frac{1}{n}\sum_{i=1}^{n}\left[(\tilde{p}_{a1,i} - \tilde{p}_{b2,i})^{2} - \tilde{V}_{a1,b2,i}\right]$
$\tilde{V}_{a1,b2}$ is calculated using the usual equation for binomial sampling variance:
$\tilde{V}_{a1,b2,i} = \tilde{p}_{i}(1-\tilde{p}_{i})\left(\frac{1}{2N_a} + \frac{1}{2N_b}\right)$
where $\tilde{p}_{i}$ is the estimated allele frequency of SNP i and $N_a$ and $N_b$ are the numbers of individuals in pools a and b.
The random binomial sampling variance term accounts for the additional component of variation arising from the comparison of non-identical pools. It is assumed that the two DNA pools are constructed
from samples drawn from the same population, and although in fact it is often a case and control being compared (where we specifically look for differences in allele frequency), for most SNPs on an
array this is a valid assumption [15].
Figure 1 visually summarizes the three types of pair-wise array comparisons used in this report, including the sources of error in each comparison. When comparing arrays used to allelotype
the same DNA pool (henceforth referred to as 'Type A' comparisons), the variation observed can only arise due to the arrays, giving an estimate of array variance. When comparing arrays used to
allelotype replicate DNA pools (henceforth referred to as 'Type B' comparisons), the variation observed is due to the arrays and pool-construction, giving a direct estimate of pooling variance.
Pool-construction variance is then calculated by subtracting the array variance (Type A) from the pooling variance (Type B). If replicate DNA pools have not been constructed, as is the case for many
of the pools in our data set, we are still able to estimate the pooling variance by comparing non-identical pools (henceforth referred to as 'Type C' comparison) and account for the additional
binomial sampling variance term that arises in this case. Pool-construction variance is then calculated by subtracting Type A values from Type C values.
Overview of the pair-wise array comparison's performed in this study. Step 1 depicts the construction of three DNA pools. The first two pools (orange and red) are constructed using the same DNA
samples and are pool-construction replicates. The third pool ...
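In code, the Type A and Type C estimators above reduce to simple averages over SNPs. The fragment below is an illustrative sketch only (written in C for concreteness); the function names, the array layout, and the choice to evaluate the binomial term at the per-SNP mean of the two estimates are illustrative assumptions, not taken from the study's actual analysis scripts.

    /* p1, p2: allele frequency estimates for the same n SNPs on two arrays */
    double half_mean_sq_diff(const double *p1, const double *p2, unsigned n)
    {
        double sum = 0.0;
        unsigned i;
        for (i = 0; i < n; i++) {
            double d = p1[i] - p2[i];
            sum += d * d;
        }
        return 0.5 * (sum / n);
    }

    /* Type A: two replicate arrays on the same pool -> estimate of var(e[array]) */
    double var_e_array(const double *pa1, const double *pa2, unsigned n)
    {
        return half_mean_sq_diff(pa1, pa2, n);
    }

    /* Type C: arrays on two non-identical pools of Na and Nb individuals.
       The binomial sampling term p(1-p)/2N of each pool is subtracted per SNP
       before halving, so only array and construction error remain. */
    double var_e_pooling2(const double *pa1, const double *pb2, unsigned n,
                          unsigned Na, unsigned Nb)
    {
        double sum = 0.0;
        unsigned i;
        for (i = 0; i < n; i++) {
            double d  = pa1[i] - pb2[i];
            double pm = 0.5 * (pa1[i] + pb2[i]);   /* per-SNP mean of the two estimates */
            double V  = pm * (1.0 - pm) * (1.0/(2.0*Na) + 1.0/(2.0*Nb));
            sum += d * d - V;
        }
        return 0.5 * (sum / n);
    }

A Type B comparison uses the same arithmetic as var_e_array() applied to arrays from two replicate pools, and var(e[construction]) then follows by subtracting the Type A value from the Type B or Type C value.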
A number of assumptions are made in this analysis. We assume that the array variance is comparable across the DNA pools in an experiment, and that the average array variance is the best estimate. For
arrays with larger than average array variance, perhaps caused by greater variation in PCR amplification steps and/or measurement of allele frequency (detection of red and green fluorescence), array variance will be underestimated; for arrays with smaller than average array variance it will be overestimated. It is known that SNPs with smaller minor allele frequencies are estimated with a greater margin of error, i.e. var(e[array]) is not constant for all SNPs. For SNPs with a small minor allele frequency, the average array variance will underestimate the array variance. We also assume that the pooling
variance is constant across all SNPs, and that unequal amplification and/or hybridization of alleles (A or B) will have a negligible effect on results. Because our analysis is based upon contrasting
array data from two DNA pools, the effects of unequal hybridization should largely cancel out [15,18].
PoolingPlanner Theory
In choosing to conduct a pool-based GWAS, one accepts a loss in power relative to a conventional GWAS. How much power is lost can be expressed in terms of the effective sample size (N*) resulting
from pooling N individuals [4]. PoolingPlanner uses an estimate of var(e[pooling]) to calculate the effective sample size of a DNA pool. N* and var(e[pooling]) are related through two expressions for
relative sample size (RSS) [defined in 19]:
$RSS = N^{*}/N$
$RSS = V_s/(V_s + var(e_{pooling}))$
In one, the RSS of a DNA pool is expressed as the ratio of effective sample size to the actual sample size (N). In two, it is expressed as the fraction of the total variance, (V[s] + var(e[pooling])), explained by the binomial sampling variance, V[s]. V[s] is calculated as p(1-p)/2N, where p is the average minor allele frequency on the array, and N is the number of individuals
contributing to the DNA pool. If DNA pools have been constructed in replicate we let var(e[pooling])= var(e[pooling-1]), otherwise we let var(e[pooling])= var(e[pooling-2]). The two equations for RSS
can then be equated and solved for N*. It is worth noting that because our calculation of RSS relies on our empirical estimates of var(e[pooling]) (Equation 2), estimates which are based on
contrasting allele frequencies in two DNA pools, the effects of unequal hybridization, which would typically thwart a direct comparison of a pooling-based and conventional genotyping experiment,
cancels out (15, 18).
Replicate arrays can be used to reduce var(e[pooling]) by a factor of 1/k, where k is the number of replicate arrays [4]. In making var(e[pooling]) smaller the RSS and N* become larger. Effective
sample size can then be used with one of the available power calculators, for example CaTS [20] or Quanto [21] to perform pool-adjusted power calculations [4]. PoolingPlanner is intended to help
first time users plan a DNA pooling experiment, and our empirical estimates of array variance and pool construction variance are supplied as the default setting for the program for this reason. Users
with their own estimates of variances can provide these to the program as well. PoolingPlanner is available at http://www.kchew.ca/PoolingPlanner/.
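As a concrete illustration of the calculation described above, the relationship between pooling variance, replicate arrays, and effective sample size can be written in a few lines. This sketch is not PoolingPlanner's source code; the function name and argument list are illustrative only.

    /* Effective sample size N* for a pool of N individuals, average minor allele
       frequency p, pooling variance var_pooling, and k replicate arrays. */
    double effective_sample_size(double N, double p, double var_pooling, unsigned k)
    {
        double Vs  = p * (1.0 - p) / (2.0 * N);   /* binomial sampling variance */
        double Ve  = var_pooling / (double) k;    /* k replicate arrays reduce var(e[pooling]) by 1/k */
        double RSS = Vs / (Vs + Ve);              /* relative sample size */
        return N * RSS;                           /* N* = N x RSS */
    }

With N = 300, p = 0.29, k = 6, and var(e[pooling]) = 3.3 × 10^-4 / 0.7 (i.e., a 7:3 array-to-construction ratio), this returns approximately 244, matching the worked example given in the PoolingPlanner Example section below.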
In our analyses we encountered beads with negative intensity values in the red, green, or both channels. The number of negative beads varied by strip and typically affected 1-10% beads, a pattern
consistently seen across all arrays. This can occur due to local background intensity removal at the point of image processing [23]. These beads were removed from our variance calculations.
Furthermore, beads with zero in both the red and green channels were considered failed beads and also dropped from our analysis. There were typically fewer than 100 of these per strip. Finally, SNPs
having fewer than four bead observations were excluded. The rationale for this was that SNPs having fewer than four bead observations would have poorly estimated allele frequency.
Array Variance or var(e[array]): Type A comparisons
We estimate array variance by comparing replicate arrays, Type A comparison in Figure 1, for three types of Illumina beadarrays, the 1M-Single, the 1M-Duo, and the 660-Quad. The results for normalized and raw data are given in Table 1, and box plots in Figure 2 provide a visual summary of the estimates. Clearly normalization dramatically reduces the range of observed
array variance estimates for all array types. As well, normalization reduced the mean array variance estimate approximately 2.5-fold for the 1M-Duo arrays and approximately 8-fold for the 1M-Single
and 660-Quad arrays. For normalized data most estimates of array variance, regardless of array type, fell between 2.5 × 10^-4 and 5.0 × 10^-4.
Estimates of array variance, var(e[array]), for three Illumina array types for normalized and raw data.
Box plots of array variance for three Illumina array types. Box plots of var(e[array(x,y)]) for Illumina 1M-Duo, 1M-Single, and 660-Quad arrays for normalized and raw data. The 1M-Duo arrays were
genotyped in two batches and are plotted stratified by batch ...
For the 1M-Single arrays 12 DNA pools were allelotyped using 24 arrays (2 arrays per pool), yielding 12 estimates of array variance, the mean of which was 3.8 × 10^-4 (normalized) and 2.9 × 10^-3
(raw data), see Table 1. For the 1M-Duo array 8 DNA pools were analyzed on 32 arrays (4 arrays per pool), yielding 48 estimates of var(e[array]). Three of these estimates, each from pair-wise array comparisons involving the same array, were extreme outliers in both the normalized and raw dataset (see Figure 3). This array was determined faulty (see discussion) and removed from further analysis. For the remaining 45 estimates the mean var(e[array]) was 3.2 × 10^-4 (normalized) and 9.0 × 10^-4 (raw data), see Table 1. Unlike the data for the 1M-Single arrays, the
1M-Duo array data spanned two batches of genotyping, carried out at two different times. To look for batch effects the 1M-Duo data was also analyzed stratified by batch. The mean array variance was
significantly different between batches for normalized data but not raw data (based on non-overlapping confidence intervals constructed assuming a normal distribution). Batch 1 (18 var(e[array])) and
batch 2 (27 var(e[array])) had mean estimates of array variance of 4.2 × 10^-4 and 2.6 × 10^-4, respectively. For the 660-Quad arrays, 7 pools were assayed using 72 arrays (6 or 12 arrays per pool),
and mean array variance was 3.3 × 10^-4 for normalized data, and 2.7 × 10^-3 for raw data, see Table 1.
Box plots of array variance for Illumina 1M-Duo arrays highlighting extreme outliers. Box plots of var(e[array]) estimates (n = 48) for the 1M-Duo arrays (Batch 1 and 2 combined) highlighting the
three extreme outlier estimates in both normalized and raw ...
Pooling Variance or var(e[pooling]): Type B and C comparisons
We estimate pool-construction variance for 27 DNA pools, discussed in order by Illumina array type. Six pools were allelotyped on the 1M-Single array, and for each, pools were constructed in
replicate and allelotyped by two arrays. This allowed us to calculate and compare pooling variance and pool-construction variance estimates as calculated using Type B and Type C comparison values.
Figure 4 summarizes the var(e[pooling]) and var(e[construction]) estimates for those pools on the 1M-Single array. For normalized data var(e[pooling-1]) ranged from 3.2 × 10^-4 to 5.5 × 10^-4
and averaged 4.0 × 10^-4. In comparison var(e[pooling-2]) ranged from 3.5 × 10^-4 to 7.0 × 10^-4 and averaged 4.8 × 10^-4. Var(e[construction-1]) ranged from 0 to 6.7 × 10^-5 and had a mean of 2.9 ×
10^-5 (where negative values have been set to zero). Thus, for these pools var(e[construction-1]) accounts for between 0 and 20%, or an average 7.5% of the pooling variance when using Type B derived
values (see Additional File 2, Table S2 for all values). Var(e[construction-2]) ranged from 0 to 3.2 × 10^-4 and averaged 1.0 × 10^-4; thus, pool-construction variance accounted for between zero and
46%, or an average 20% of the pooling variance using Type C derived values (Additional File 2, Table S2). There does not appear to be any correlation between pool size and pool-construction variance,
see Figure 4.
Decomposition of pooling variance for Illumina 1M-Single arrays. Stacked barplots showing the normalized pooling variance estimates, and the breakdown into array and pool-construction variance for
pools allelotyped on the Illumina 1M-Single array. ...
Using raw data, estimates of var(e[pooling-1]) were approximately 8-fold higher than the normalized data. Estimates of var(e[construction-1]) tended to be higher as well, averaging ~20% of the
pooling variance. Var(e[pooling-2]) estimates followed the same pattern, larger estimates of pooling variance and pool-construction variance (data not shown).
Pools allelotyped on the 1M-Duo and 660-Quad arrays were not constructed twice; hence, for these we estimated pool-construction variance based on Type C comparisons only. Seven DNA pools were
allelotyped on the 660-Quad array, two using six replicate arrays (396 estimates of var(e[pooling-2]) each), and five using twelve replicate arrays (720 estimates of var(e[pooling-2]) per pool). Figure 5 summarizes the var(e[pooling-2]) and var(e[construction-2]) estimates for these pools (normalized data). Var(e[pooling-2]) estimates ranged from 4.3 × 10^-4 to 5.7 × 10^-4, and
averaged 5.1 × 10^-4; meanwhile, the var(e[construction-2]) estimates ranged from 1.0 × 10^-4 (23%) to 2.4 × 10^-4 (42%) and averaged 1.9 × 10^-4 (35%). These estimates of pooling variance are very
similar to those seen for pools on the 1M-Single array; however, the estimates of pool-construction variance are higher (see Additional File 3, Table S3 for all values). For the raw data var(e
[pooling-2]) estimates ranged from 2.6 × 10^-3 to 2.9 × 10^-3, and averaged 2.7 × 10^-3; meanwhile, the matched var(e[construction-2]) estimates ranged from 0 to 2.6 × 10^-4 (9%) and averaged 1.9 ×
10^-4 (2%).
Decomposition of pooling variance for Illumina 660-Quad arrays. Stacked barplots showing the normalized pooling variance estimates, and the breakdown into array and pool-construction variance for
pools allelotyped on the Illumina 660-Quad array. All ...
1M-Duo arrays were analyzed separately by batch using batch-specific estimates of array variance for normalized data. The 1M-Duo batch 1 data contained three DNA pools, each allelotyped by four replicate arrays; therefore, each var(e[pooling-2]) estimate is the average of 32 pair-wise array comparisons. Figure 6 summarizes var(e[pooling-2]) and var(e[construction-2]) estimates for
these pools (normalized data). Var(e[pooling-2]) was estimated at 5.6 × 10^-4, 6.0 × 10^-4 and 6.1 × 10^-4. The matched var(e[construction-2]) estimates were 1.5 × 10^-4, 1.8 × 10^-4, and 1.9 × 10^-4
, or 26%, 31%, and 32% of the pooling variance for pools sized 122, 246, and 121 (see Additional File 3, Table S3 for values). These values reflect those seen for pools on 660-Quad and 1M-Single
arrays. In comparison, the 1M-Duo batch 2 data deviated dramatically. This batch contained 5 pools, each also allelotyped by four replicate arrays. For these, var(e[pooling-2]) ranged from 1.8 × 10^-3 to 3.7 × 10^-3 and averaged 2.6 × 10^-3, with var(e[construction-2]) estimates ranging from 7.9 × 10^-4 (43%) to 2.7 × 10^-3 (72%) (see Additional File 3, Table S3). For these pools the estimates of
pooling variance are nearly 2-3 fold higher than those of batch 1 but the array variance remained low at 2.4 × 10^-4, leading to high estimates of pool-construction variance (see discussion). For raw
data, batches 1 and 2 were analyzed combined using all possible array comparisons and var(e[array]) = 9.0 × 10^-4. Estimates of var(e[pooling-2]) ranged from 2.2 × 10^-3 to 5.4 × 10^-3 and averaged 3.4 × 10^-3. Var(e[construction-2]) estimates averaged 51% of the calculated var(e[pooling-2]).
Decomposition of pooling variance for Illumina 1M-Duo arrays. Stacked barplots showing the normalized pooling variance estimates, and the breakdown into array and pool-construction variance for
pools allelotyped on the Illumina 1M-Duo array. All estimates ...
PoolingPlanner Example
To demonstrate how to use PoolingPlanner we consider a hypothetical scenario. A researcher has a collection of samples including 300 cases and 1000 controls and wants to conduct a pool-based GWAS.
The researcher needs to decide how many arrays to use, and wants to construct power curves that take into consideration the power loss concomitant with this cost-efficient strategy. They plan on
using Illumina's 660-Quad array and normalizing their data. PoolingPlanner is used to calculate the effective sample size of each DNA pool using four input values: 1) var(e[array]), 2) var(e
[construction]), 3) pool size, and 4) allele frequency. Figure 7A shows the PoolingPlanner input panel for the case pool; Figure 7B the input panel for the control pool. PoolingPlanner will supply the var(e[array]) value as calculated based on our 660-Quad normalized data, 3.3 × 10^-4, see Table 2. Alternatively, the user may specify a custom value. In this example we assume var(e[construction]) is 30% of the pooling variance, chosen to reflect values we observed. Var(e[construction]) is entered into PoolingPlanner by specifying "Array:Construction Ratio = 7:3", as seen in Figure 7A and 7B. An exact value for var(e[construction]) can also be entered (30% of 3.3 × 10^-4 would be 9.9 × 10^-5). For allele frequency, by default
PoolingPlanner uses HapMap CEU data (release 27) to set p to the average minor allele frequency (MAF) on the 1M-Single, 1M-Duo, or 660-Quad Illumina array. For the 1M-Single and 1M-Duo arrays p =
0.21 (>95% of SNPs had available HapMap data), and for the 660-Quad array p = 0.29 (87% of SNPs had available HapMap data). Estimates of p based on our pooled array data were similar (see Additional
File 4, Table S4). In this example the average MAF is set to 0.29, but the user can enter any value between 0 and 0.5. Once these values are entered the program calculates the relative and effective
sample size of each DNA pool for a range of replicate array values, and provides a corresponding table of values as seen in Figure 7A and 7B. A plot of relative sample size versus number of replicate arrays is also automatically generated. For a DNA pool containing 300 individuals (blue line in Figure 7C), an RSS of 80% is achieved with 6 arrays (N* is 244) while an RSS of 90% requires 13 arrays (N* is 271). In contrast, for a pool of 1000 individuals (red line in Figure 7C), an RSS of 80% is achieved with 19 arrays (N* is 806). This plot makes it
easy to see at what point additional replicate arrays begin to yield diminishing returns in terms of increasing the effective sample size of a DNA pool.
PoolingPlanner. (A) Control input and output panel for the case pool. (B) Control input and output panel for the control pool. (C) Corresponding plot of relative sample size versus the number of
replicate arrays used in allelotyping the case (blue line) ...
Impact of replicate arrays on effective sample size (N*) and minimum detectable odds ratio (MDOR) in pooling-GWAS.
To perform pooling-adjusted power calculations, a pool's effective sample size, output by PoolingPlanner, is entered into a power calculator. We have used Quanto [21] for this example. Assuming an
unmatched case-control design testing for gene-only effects using a log-additive model, where the incidence of the case phenotype is 0.02%, and the risk allele frequency (p[risk]) is 29% (and in
complete linkage disequilibrium with a SNP on the array), the power curves corresponding to a pooling experiment where 3, 6, 12, or 24 Illumina 660-Quad replicate arrays are used per pool are given in Figure 8. The power curve for individual genotyping is also plotted for reference. Table 2 accompanies Figure 8 and gives the minimum detectable odds ratio (MDOR) at 80% power for each curve when p[risk] is 0.29, and for comparison, when p[risk] is 0.1. Assuming individual genotyping, the MDOR at 80% power would be 1.32 when p[risk] is 0.29. Using 24 arrays per pool this value rises incrementally to 1.33. Using 12, 6, or 3 arrays per pool, the MDORs further increase to 1.35, 1.38, and 1.44, respectively. Only when 3 arrays are used per pool does the MDOR
dramatically differ between pooling and individual genotyping. Marginal improvements in MDOR should be considered in light of increasing experimental cost, and the percent cost of a pooling GWAS
relative to a conventional GWAS is given in Table 2 to highlight this difference. If arrays cost $250, the ability to detect an odds ratio of 1.38 with 80% power would cost $3,000 (6 arrays per pool), while the ability to detect an odds ratio of 1.33 would be $325,000 (individual genotyping). In many cases, particularly for phenotypes suggestive of moderate to large odds ratios, this difference in detectable odds ratios will not change the overall outcome of the association study. In a pooling GWAS, as in conventional GWAS, for rarer risk alleles we have less power to detect associations; see the MDOR in Table 2 when p[risk] is 0.1. We note that as p[risk] gets smaller, the difference in the MDOR for a pooling versus individual genotyping experiment becomes more noticeable. For example, when 6 replicate arrays are used per pool and p[risk] is 0.29, the MDOR differs by 0.06 from individual genotyping, but this difference becomes 0.09 when p[risk] is 0.1. It is also worth noting in Table 2 that using the same number of replicate arrays on different sized DNA pools gives very different RSS values. Contrary to what might be expected, the maximally
powered pool-based experiment occurs when arrays are equally distributed amongst pools, regardless of differences in pool size and RSS, assuming the pool-construction variance is constant (see
Additional File 5, Table S5 & Additional File 6, Figure S1). By conducting an analysis such as this a user can decide what power is forfeited by conducting a pool-based GWAS, and decide whether the
approach makes practical sense in their situation.
Example use of PoolingPlanner. Power curves for a theoretical pooling experiment with 300 cases and 1000 controls where 24, 12, 6, or 3 Illumina 660-Quad replicate arrays are used to allelotype the
DNA pools. The equivalent individual genotyping experiment ...
In the first part of this study we set out to establish a range of experimentally observed values for array variance on Illumina's SNP-genotyping beadarrays. At the same time, we wanted to establish
a range of values for pool construction variance. In the second part, we used these estimates to calculate the effective sample size of a DNA pool given a range of replicate array values, and provide
an online tool to allow readers to do the same.
At the time of our analysis we were aware of only one report that estimated array variance (var(e[array])= 1.1 × 10^-4 ) for an Illumina HumanHap300 beadarray [18]. Illumina has since released higher
density arrays (>1 million SNPs per array), and we wanted to determine if increased SNP density negatively impacted array variance. Overall, we found this was not the case. All of the Illumina array
types examined here (660-Quad, 1M-Single, 1M-Duo) had very similar var(e[array]) estimates, centering around 3 × 10^-4 for our normalized data, which is largely in keeping with the HumanHap300 result
[18]. We expect this result would extend to the HumanOmni1-Quad array, although it was not analyzed here. We found that the normalization procedure we used reduced the array variance between
2-8-fold, and a newly reported normalization algorithm suggests that array variance can be reduced even further [24]. Reduced array variance should mean more precise estimates of allele frequency,
which should further minimize the loss of power associated with using the DNA pooling strategy.
The Illumina arrays analyzed here yielded var(e[array]) estimates ~10-fold smaller than those of the Affymetrix HindIII 50K arrays (var(e[array])= 1.26 × 10^-3) analyzed by MacGregor [17]. A similar
result was noted when Affymetrix arrays were compared to Illumina HumanHap300 arrays [18]. In part, this may be explained by differences in the manufacturing of the arrays. MacGregor et al. [18]
report that pooling errors appear to be highly related to number of probes used to estimate SNP allele frequency. While 10 probe pairs are assigned to each SNP on the Affymetrix HindIII 50K arrays [
18], on average 16-18 beads are used on the Illumina arrays. Further, on Illumina arrays beads are randomly dispersed on a slide [22], while on Affymetrix arrays probes are fixed in a given location,
making the latter more susceptible to location-specific technical errors. As the array variance gets smaller (i.e. when using Illumina arrays), we expect the pool-construction variance to account for
a greater proportion of the pooling variance.
Our estimates of var(e[construction]) spanned 27 DNA pools, ranging in size from 74 to 446 individual samples, allowing us to sample a range of possible pool construction variances. First, in
contrast to a previous report [25], we did not observe a relationship between pool size and pool-construction variance. We did, however, observe batch effects. For the 1M-Duo arrays, which were
processed in two batches on different dates, we observed very different estimates of pooling variance and pool-construction variance (see Figure 6). Most of our estimates of
pool-construction variance were based on values from Type C comparisons, and for these var(e[construction]) usually fell between 20 and 40% of the pooling variance. When calculations were based on
the comparison of replicate DNA pools (Type B comparisons, 1M-Single arrays only) our estimates were smaller, on average 7.5% of the pooling variance. There are several possible reasons for this. The
adjustment for binomial sampling variance may not fully account for the variance arising from sampling, leaving variance that is then attributed to pool-construction in the Type C comparisons. As
well, some estimates of pool-construction variance were negative, and these were set to zero, which would lead to overestimation of pool-construction variance. We conclude that relative to var(e
[array]), var(e[construction]) is of less importance; however, our results suggest pool construction may account for more of the pooling variance than previously estimated [17]. MacGregor [17]
attributed 12.5% of the pooling variance to pool-construction when using Affymetrix HindIII 50K arrays. On average we attribute 30% of pooling variance to pool construction when using Illumina
arrays. This difference is what might be expected given the smaller var(e[array]) for Illumina arrays. Further reductions in array variance, for example, through improved normalization of array data,
have the potential to further shift the proportion of an experiment's pooling variance that is attributed to pool-construction errors.
With respect to the design of pool-based experiments when using Illumina arrays, our partitioning of the pooling variance still suggests [17] that constructing fewer (large) pools while using more
replicate arrays (i.e. target array variance), is the most effective way to reduce pooling variance and conduct the most efficient pool-based GWAS. Further, for an equivalent pool-based experiment
using Affymetrix arrays in place of Illumina arrays, more array replicates will be needed (~10-fold more). As the proportion of array variance to pool construction variance approaches 50:50,
strategies to reduce pool construction variance become more important.
For one of our experiments, 1M-Duo Batch 2, we observed unusually high estimates of pool-construction variance and low estimates of array variance (see Figure 6). In this experiment, pool
replicates were allelotyped on the same physical array (which holds two samples). Subsequently, we noticed that the array variance for replicates on the same chip were much smaller than the variance
for replicates on different chips. Overall, this led to the array variance being underestimated relative to the pooling variance, leaving more variance to be accounted for by pool construction. In
addition, the between-chip variance for these arrays was much higher than observed in the 1M-Duo Batch 1 dataset, which led to large estimates of pooling and pool-construction variance overall.
Ultimately, this was traced back to unusually high red channel intensity on some arrays, despite normalization, which biased allele frequency estimates array-wide. Clearly this will influence any
downstream association analysis, so in this case, our analysis of variance served to flag a serious problem in the array data. It also highlighted the need to randomize DNA pool replicates among
arrays that carry more than one sample, and to randomize by location on the array, particularly in the case of the 660-Quad and HumanOmni1-Quad arrays, which carry four samples.
The differences between 1M-Duo Batch 1 and 2 data were significant for normalized data, but not raw data. On one hand, it may be that greater noise associated with the raw data prevented differences
in array variance and pool construction variance from being significant. On the other, it is possible that the normalization procedure itself exacerbated technical artifacts only present on some
arrays, leading to the observed differences in normalized data. This can occur if technical artefacts violate the assumptions of the normalization [26].
We have provided empirical estimates of var(e[array]) and var(e[construction]) for a range of DNA pool sizes. We have also presented PoolingPlanner, a simple program to help translate these variances
into their effect on sample size, information that can then be used in a power calculator to conduct pool-adjusted calculations. PoolingPlanner may be helpful in quickly assessing theoretical best and
worst-case scenarios for a DNA pooling GWAS. With this information the user can then make a more informed decision about how to carry out their pooling experiment to optimally balance cost with loss
of power.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
MAE performed all statistical analysis and drafted the manuscript. KC developed and implemented the online tool PoolingPlanner. MR and ABW participated in study design, coordination, and manuscript
drafting. All of the authors have read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
Acknowledgements and Funding
We thank Dr. John Spinelli, Senior Biostatistician, for very useful discussion and critical advice during the preparation of this manuscript.
This work was supported in part by OvCaRe, through the BC Cancer Foundation [NSA10112 to A.B-W.]; and Canadian Institutes for Health Research [BMA-63184, IG1-93476 to A.B-W]. A.B-W. is a Senior
Scholar of the Michael Smith Foundation for Health Research [CI-SSH-00947(06-1)]. M.E. was supported by studentships from Natural Sciences and Engineering Research Council of Canada and the
University of British Columbia [17G44444].
• Hindorff LA, Sethupathy P, Junkins HA, Ramos EM, Mehta JP, Collins FS, Manolio TA. Potential etiologic and functional implications of genome-wide association loci for human diseases and traits.
Proc Natl Acad Sci USA. 2009;106(23):9362–9367. [PMC free article] [PubMed]
• Hirschhorn JN. Genomewide association studies--illuminating biologic pathways. N Engl J Med. 2009;360(17):1699–1701. [PubMed]
• McCarthy MI, Abecasis GR, Cardon LR, Goldstein DB, Little J, Ioannidis JP, Hirschhorn JN. Genome-wide association studies for complex traits: consensus, uncertainty and challenges. Nat Rev Genet.
2008;9(5):356–369. [PubMed]
• Pearson JV, Huentelman MJ, Halperin RF, Tembe WD, Melquist S, Homer N, Brun M, Szelinger S, Coon KD, Zismann VL, Webster JA, Beach T, Sando SB, Aasly JO, Heun R, Jessen F, Kolsch H, Tsolaki M,
Daniilidou M, Reiman EM, Papassotiropoulos A, Hutton ML, Stephan DA, Craig DW. Identification of the genetic basis for complex disorders by use of pooling-based genomewide
single-nucleotide-polymorphism association studies. Am J Hum Genet. 2007;80(1):126–139. [PMC free article] [PubMed]
• Craig JE, Hewitt AW, McMellon AE, Henders AK, Ma L, Wallace L, Sharma S, Burdon KP, Visscher PM, Montgomery GW, MacGregor S. Rapid inexpensive genome-wide association using pooled whole blood.
Genome Res. 2009;19(11):2075–2080. [PMC free article] [PubMed]
• Skibola CF, Bracci PM, Halperin E, Conde L, Craig DW, Agana L, Iyadurai K, Becker N, Brooks-Wilson A, Curry JD, Spinelli JJ, Holly EA, Riby J, Zhang L, Nieters A, Smith MT, Brown KM. Genetic
variants at 6p21.33 are associated with susceptibility to follicular lymphoma. Nat Genet. 2009;41(8):873–875. [PMC free article] [PubMed]
• Schrauwen I, Ealy M, Huentelman MJ, Thys M, Homer N, Vanderstraeten K, Fransen E, Corneveaux JJ, Craig DW, Claustres M, Cremers CW, Dhooge I, Van de Heyning P, Vincent R, Offeciers E, Smith RJ,
Van Camp G. A genome-wide analysis identifies genetic variants in the RELN gene associated with otosclerosis. Am J Hum Genet. 2009;84(3):328–338. [PMC free article] [PubMed]
• Comabella M, Craig DW, Camina-Tato M, Morcillo C, Lopez C, Navarro A, Rio J, BiomarkerMS Study Group, Montalban X, Martin R. Identification of a novel risk locus for multiple sclerosis at 13q31.3
by a pooled genome-wide scan of 500,000 single nucleotide polymorphisms. PLoS One. 2008;3(10):e3490. [PMC free article] [PubMed]
• Abraham R, Moskvina V, Sims R, Hollingworth P, Morgan A, Georgieva L, Dowzell K, Cichon S, Hillmer AM, O'Donovan MC, Williams J, Owen MJ, Kirov G. A genome-wide association study for late-onset
Alzheimer's disease using DNA pooling. BMC Med Genomics. 2008;1:44. [PMC free article] [PubMed]
• Brown KM, Macgregor S, Montgomery GW, Craig DW, Zhao ZZ, Iyadurai K, Henders AK, Homer N, Campbell MJ, Stark M, Thomas S, Schmid H, Holland EA, Gillanders EM, Duffy DL, Maskiell JA, Jetann J,
Ferguson M, Stephan DA, Cust AE, Whiteman D, Green A, Olsson H, Puig S, Ghiorzo P, Hansson J, Demenais F, Goldstein AM, Gruis NA, Elder DE, Bishop JN, Kefford RF, Giles GG, Armstrong BK, Aitken
JF, Hopper JL, Martin NG, Trent JM, Mann GJ, Hayward NK. Common sequence variants on 20q11.22 confer melanoma susceptibility. Nat Genet. 2008;40(7):838–840. [PMC free article] [PubMed]
• Capon F, Bijlmakers MJ, Wolf N, Quaranta M, Huffmeier U, Allen M, Timms K, Abkevich V, Gutin A, Smith R, Warren RB, Young HS, Worthington J, Burden AD, Griffiths CE, Hayday A, Nestle FO, Reis A,
Lanchbury J, Barker JN, Trembath RC. Identification of ZNF313/RNF114 as a novel psoriasis susceptibility gene. Hum Mol Genet. 2008;17(13):1938–1945. [PMC free article] [PubMed]
• Stokowski RP, Pant PV, Dadd T, Fereday A, Hinds DA, Jarman C, Filsell W, Ginger RS, Green MR, van der Ouderaa FJ, Cox DR. A genomewide association study of skin pigmentation in a South Asian
population. Am J Hum Genet. 2007;81(6):1119–1132. [PMC free article] [PubMed]
• Sham P, Bader JS, Craig I, O'Donovan M, Owen M. DNA Pooling: a tool for large-scale association studies. Nat Rev Genet. 2002;3(11):862–87. [PubMed]
• Pearson TA, Manolio TA. How to interpret a genome-wide association study. JAMA. 2008;299(11):1335–1344. [PubMed]
• Macgregor S, Visscher PM, Montgomery G. Analysis of pooled DNA samples on high density arrays without prior knowledge of differential hybridization rates. Nucleic Acids Res. 2006;34(7):e55. [PMC
free article] [PubMed]
• Visscher PM, Le Hellard S. Simple method to analyze SNP-based association studies using DNA pools. Genet Epidemiol. 2003;24(4):291–296. [PubMed]
• Macgregor S. Most pooling variation in array-based DNA pooling is attributable to array error rather than pool construction error. Eur J Hum Genet. 2007;15(4):501–504. [PubMed]
• Macgregor S, Zhao ZZ, Henders A, Nicholas MG, Montgomery GW, Visscher PM. Highly cost-efficient genome-wide association studies using DNA pools and dense SNP arrays. Nucleic Acids Res. 2008;36
(6):e35. [PMC free article] [PubMed]
• Barratt BJ, Payne F, Rance HE, Nutland S, Todd JA, Clayton DG. Identification of the sources of error in allele frequency estimations from pooled DNA indicates an optimal experimental design. Ann
Hum Genet. 2002;66(Pt 5-6):393–405. [PubMed]
• Skol AD, Scott LJ, Abecasis GR, Boehnke M. Joint analysis is more efficient than replication-based analysis for two-stage genome-wide association studies. Nat Genet. 2006;38(2):209–213. [PubMed]
• Gene × Environment, Gene × Gene Interaction Home page. http://hydra.usc.edu/gxe/
• Steemers FJ, Gunderson KL. Whole genome genotyping technologies on the BeadArray platform. Biotechnol J. 2007;2(1):41–49. [PubMed]
• Kuhn K, Baker SC, Chudin E, Lieu MH, Oeser S, Bennett H, Rigault P, Barker D, McDaniel TK, Chee MS. A novel, high-performance random array platform for quantitative gene expression profiling.
Genome Res. 2004;14(11):2347–2356. [PMC free article] [PubMed]
• Bostrom MA, Lu L, Chou J, Hicks PJ, Xu J, Langefeld CD, Bowden DW, Freedman BI. Candidate genes for non-diabetic ESRD in African Americans: a genome-wide association study using pooled DNA. Hum
Genet. 2010;128(2):195–204. [PMC free article] [PubMed]
• Jawaid A, Sham P. Impact and quantification of the sources of error in DNA pooling designs. Ann Hum Genet. 2009;73(1):118–24. [PubMed]
• Leek JT, Scharpf RB, Bravo HC, Simcha D, Langmead B, Johnson WE, Geman D, Baggerly K, Irizarry R. Tackling the widespread and critical impact of batch effects in high-throughput data. Nat Rev
Genet. 2010;11(10):733–739. [PMC free article] [PubMed]
does 0<(unsigned short)0x8000 hold?
On 27/03/2012 19:32, Tim Rentsch wrote:
> Francois Grieu <(E-Mail Removed)> writes:
>> I just tested my code for portability problems on systems
>> where int is 16-bit, and found one such system where
>> #include <stdio.h>
>> int main(void)
>> {
>> printf("%d\n", 0<(unsigned short)0x8000 );
>> return 0;
>> }
>> outputs 0. If I change 0< to 0u<, the output is 1.
>> Is that a bug in this compiler, w.r.t. C99?
> Yes. There is a simple argument that it is:
> 1. The value 0x8000 is always representable as an unsigned short.
> 2. The rules for 'integer promotions' always preserve value.
> 3. The rules for 'usual arithmetic conversions' preserve value
> for all operand values that are non-negative. (A negative
> operand value might become non-negative due to UAC, but
> all non-negative values don't change under UAC.)
> Since 0 < 0x8000 must hold, 0 < (unsigned short) 0x8000 must hold,
> because ultimately the same mathematical comparison is done.
Nice, reusable argument. Thanks.
>> As an aside, 6.5.8p2 puzzles me; [snip]
>> As an aside of the aside, I do not understand the third option in
>> the constraint, and it has been removed in N1570.
> The removal is not a change but just indicative of a change in
> nomenclature. In C99, there are incomplete types and object
> types. In N1570, there are incomplete object types and complete
> object types, which together make up object types. So in N1570,
> both complete and incomplete types are included under the now
> more general 'object type' term, and hence the third option in
> C99 is folded into the second option in N1570.
Again, thanks for the synthetic explanation.
Francois Grieu | {"url":"http://www.velocityreviews.com/forums/t944792-p2-does-0-unsigned-short-0x8000-hold.html","timestamp":"2014-04-16T22:27:26Z","content_type":null,"content_length":"41343","record_id":"<urn:uuid:240e699c-7e80-49ab-b97c-f6bfd056850a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00367-ip-10-147-4-33.ec2.internal.warc.gz"} |
Not so very long ago, the descriptions used to identify all the various shapes which women encompass left the fruit bowl and migrated to the mathematical shapes department. 767 more words
SMART Tab
If you google The Golden Mean you will also find the Golden Ratio, the Golden Angle, Devine Proportion, and Sacred Geometry. It can be found in photography, art, nature, the proportions of the human
body, architecture. 223 more words
Here's a couple of functions for calculating confidence intervals for proportions.
Firstly I give you the Simple Asymptotic Method:
simpasym <- function(n, p, z=1.96, cc=TRUE){
  # n: sample size, p: observed proportion, z: normal quantile,
  # cc: apply a continuity correction of 0.5/n
  out <- list()
  if(cc){
    out$lb <- p - z*sqrt((p*(1-p))/n) - 0.5/n
    out$ub <- p + z*sqrt((p*(1-p))/n) + 0.5/n
  } else {
    out$lb <- p - z*sqrt((p*(1-p))/n)
    out$ub <- p + z*sqrt((p*(1-p))/n)
  }
  out
}
259 more words
Today in geometry, I had an empty 2.5 gallon container of water. I raised the question “How long did it take for me to drain the container?” 199 more words
Like the show or not, who doesn’t want to have a little bit of Carrie Bradshaw’s style in their life? We can all have her best, most fashionable accessory– confidence. 345 more words | {"url":"http://en.wordpress.com/tag/proportions/","timestamp":"2014-04-16T19:28:27Z","content_type":null,"content_length":"78649","record_id":"<urn:uuid:eb6b0f84-d8e5-41b1-ae6b-e60d32b8982c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
ASA 124th Meeting New Orleans 1992 October
5pPA3. A T matrix for scattering from a doubly infinite fluid--solid interface with doubly periodic surface roughness.
Judy Smith
Garner C. Bishop
Naval Undersea Warfare Ctr. Div., Newport, RI 02841-5047
The T-matrix formalism is used to calculate scattering of a pressure wave from a doubly infinite fluid--solid interface with doubly periodic surface roughness. The Helmholtz--Kirchhoff integral
equations are used to represent the scattered pressure field in the fluid and the displacement field in the solid. The boundary conditions are applied and a system of four coupled integral equations
is obtained. The incident field, the surface fields, and scattered pressure field in the fluid and displacement field in the solid, are represented by infinite series of Floquet plane waves. This
process discretizes the integral equations and transforms them into a system of four coupled doubly infinite linear equations. The extended boundary condition is applied and the T matrix that relates
the spectral amplitudes of the incident field to the spectral amplitudes of the scattered fields is constructed. An exact analytic solution and numerical results are obtained for scattering from a
doubly periodic rough surface constructed by superposing two sinusoids each of which depends on a single but different orthogonal coordinate. | {"url":"http://www.auditory.org/asamtgs/asa92nwo/5pPA/5pPA3.html","timestamp":"2014-04-19T10:51:20Z","content_type":null,"content_length":"1825","record_id":"<urn:uuid:ae6ba9ee-8abd-4b3f-8829-bcb27f24a256>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
And if you have a shape of known cross section, the integral becomes much simpler.
Like this paraboloid, for example. The equation is z = -(x² + y²) + 10. The cross section at height z is a circle of radius √(10 − z), since x² + y² = 10 − z. The circumference is then 2π√(10 − z).
You then integrate the circumference multiplied by the differential of z, dz.
In coming up with this example, I've realized a deficiency in my instruction that I'm sure will be filled at a later date, but I want to know now. | {"url":"http://www.mathisfunforum.com/post.php?tid=2847&qid=27962","timestamp":"2014-04-20T16:54:30Z","content_type":null,"content_length":"22845","record_id":"<urn:uuid:15528067-ba46-4ff7-88ed-827f11d51140>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
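A hedged note on that deficiency, assuming the quantity intended is the lateral surface area and taking the region from z = 0 up to the vertex at z = 10: integrating the circumference against dz gives
∫ from 0 to 10 of 2π√(10 − z) dz = (4π/3)·10^(3/2),
but a genuine surface-area element needs the slant length ds = √(1 + (dr/dz)²) dz rather than dz. With r = √(10 − z) this works out to
A = ∫ from 0 to 10 of 2π√(10 − z + 1/4) dz = (4π/3)·[(41/4)^(3/2) − (1/4)^(3/2)].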
Matlab-Like Tools for HPC
From people who build a simple two-node cluster all the way up to people who have access to very large systems, one of the most common questions about high-performance computing (HPC) is: “What
applications can I run on an HPC system?” One of the most popular applications is Matlab, which a large number of people use in their everyday work and research – either Matlab or Matlab-like tools.
For example, a fairly recent blog posting from Harvard University’s Faculty of Arts and Sciences, Research Computing Group showed that the second most popular Environment Module was Matlab. People
are using Matlab for a variety of tasks that range from the humanities, to science, to engineering, to games, and more. Some researchers use it for parameter sweeps by launching 25,000 or more
individual Matlab runs at the same time. Needless to say, Matlab is used very heavily at a number of places, so it is a very good candidate for running on an HPC system.
I don’t want to take anything away from MathWorks, the creator of Matlab, because their product is a wonderful application, but for a number of reasons, Matlab might not be the answer for some people
(e.g., they either can’t afford Matlab or can’t afford 25,000 licenses, they just want to try a few Matlab features, or they want or need access to the source code). This brings up the category of
tools that are typically called “Matlab-like”; that is, they try to emulate the concept of Matlab and make the syntax basically compatible so moving back and forth is relatively easy. When people ask
what tools or applications they can try on their shiny new cluster, I tend to recommend one of these Matlab-like tools, even though they aren’t strictly parallel right out of the box (so to speak).
In this article, I want to talk about a few of these tools so you can get an idea of what’s available in the open source world for Matlab-like tools. I won’t be looking at other numerical tools that
have a syntax different from Matlab, such as R or Scipy; rather, I’ll be covering tools that are trying to be like Matlab.
I’ll be briefly covering Scilab, GNU Octave, and FreeMat. These tools try to be as close as possible to Matlab syntax so that Matlab code will transfer over easily, with the possible exception of
Simulink and GUI Matlab code. They have varying degrees of success with Matlab compatibility, but all are inherently serial applications.
Serial in this case means that the vast majority of the code is executed on a single core, although some of the programs have the ability to do a small amount of parallel execution. To get them to
run code in parallel usually requires some add-ons, such as MPI, and rewriting the code. This approach allows you to start multiple instances of the tool on different nodes and have them communicate
over a network so that code can be executed in parallel.
I won’t be comparing or contrasting the tools; rather, I’ll briefly present them with some pointers on how to install and use the tool, and I’ll leave the final determination of which tool is
“better” for your case up to you.
Scilab is one of the oldest Matlab-like tools. It was started in 1990 in France, and in May 2003, a Scilab Consortium was formed to better promote the tool. In June 2012, the Consortium created
Scilab Enterprises, which provides a comprehensive set of services around Scilab. Currently, it also develops and maintains the software. Scilab is released under a GPL-compatible license called CeCILL.
Prepackaged versions of Scilab exist for Linux (32-bit and 64-bit); Mac OS X; and Windows XP, Vista, and Windows 7, along with, of course, the source code. These packages include all of Scilab, including Xcos, a tool along the lines of Simulink from MathWorks. Scilab is the only open source Matlab-like tool to include something akin to Simulink. Scilab also comes
with both 2D and 3D visualization, extensive optimization capability, statistics, control system design and analysis, signal processing, and the ability to create GUIs by writing code in Scilab. You
can also interface Fortran, C, C++, Java, or .NET code to Scilab.
Installing Scilab on Linux is easy with either one of the two precompiled binaries: 32- or 64-bit. I downloaded the 64-bit binary (a tar.gz file), and untarred it into /opt. This produces a
subdirectory /opt/scilab-5.4.0 (which was the latest version as I wrote this). To run Scilab, I just used the scilab launcher in that directory's bin/ subdirectory, which brought up the Scilab GUI tool. The main window is shown in Figure 1.
The console in the middle of the figure accepts commands; the remainder of the window is a file browser on the left, a variable browser at top right, and a command history on the bottom right. It
also has a very nice built-in text editor called “SciNotes” (Figure 2), which can be used to write code.
Scilab’s innovative Variable Browser allows you to edit variable values, including those in matrices, using something like a spreadsheet tool. When you first bring up the editor, it displays a list
of the variables in the current workspace (Figure 3).
When you double-click on a variable, you call up the variable editor to edit the values. For example, double-clicking on variable A brought up the spreadsheet-like view shown in Figure 4.
At this point, I can edit any value for any entry of A.
A “Modules” capability adds extra functionality to Scilab. Much like the “toolboxes,” of Matlab, Scilab keeps modules at a website called ATOMS (AuTomatic mOdules Management for Scilab). One of the
most critical modules for HPC is probably sciGPGPU, which provides GPU computing capabilities. Using sciGPGPU within Scilab is relatively straightforward, but you need to know something about GPUs
and CUDA or OpenCL to use it effectively. Listing 1 shows a code snippet taken from the main sciGPGPU site that illustrates how to use the cuBLAS library. (Note that you can also use the cuFFT
library, but sample code for it is not shown.)
Listing 1: Sample Scilab code for GPUs using sciGPGPU
// Init host data (CPU)
A = rand(1000,1000);
B = rand(1000,1000);
C = rand(1000,1000);
// Set host data on the Device (GPU)
dA = gpuSetData(A);
dC = gpuSetData(C);
d1 = gpuMult(A,B);
d2 = gpuMult(dA,dC);
d3 = gpuMult(d1,d2);
result = gpuGetData(d3); // Get result on host
// Free device memory
dA = gpuFree(dA);
dC = gpuFree(dC);
d1 = gpuFree(d1);
d2 = gpuFree(d2);
d3 = gpuFree(d3);
Scilab has a vibrant community, and one excellent place to go to learn more or to help get started is the Scilab wiki, which has a very good section on migrating from Matlab to Scilab. At this site,
an extensive PDF discusses differences between Matlab and Scilab and how to change your Matlab code, if it needs to be changed, to run on Scilab.
An additional excellent Scilab resource is a PowerPoint presentation by Johnny Heikell of 504 slides (at last count) that introduces Scilab and how to use it. He also shows how to convert Matlab
files to Scilab files.
Keep in mind that the downloadable Scilab binaries are built to be as fast as possible, yet still be transportable. Because performance is extremely important in HPC, you might want to build Scilab
yourself. This would allow you to include Intel’s MKL library, to get the fastest possible BLAS and FFT operations for Intel processors, or ACML (AMD Core Math Library), which is used to tune AMD
processors. Be sure to read all of the details on building Scilab at the wiki site; the GUI portion of Scilab requires Java.
GNU Octave
The GNU Octave project was conceived by John W. Eaton at the University of Wisconsin-Madison as a companion to a chemical reactor course he taught. Serious design of Octave, as it was first called,
began in 1992, with the first alpha release on January 4, 1993, and the 1.0 release on February 17, 1994. In 1997, Octave became GNU Octave (starting with version 2.0.6). From the beginning, it was
published under the GNU GPL license – initially, the GNU GPLv2 license but later switched to the GNU GPLv3 license.
For the rest of this article, I will refer to GNU Octave as just Octave. Like Scilab and Matlab, Octave is a high-level interactive language for numerical computations. Its language is very similar
to, but slightly different from, Matlab. It comes with a large number of functions and packages and uses Gnuplot for plotting and visualization.
Octave is popular and widely used, perhaps partly because it is part of GNU, so it is commonly built for Linux distributions. However, I also think it is widely used because the basic syntax is close
to Matlab, and it is open-source. Some differences between Octave and Matlab are explained in the Octave wiki, a FAQ on porting, a table of key differences, and a Wikibook.
A huge number of additional toolkits (same concept as a Matlab toolbox) for Octave are available at Octave-Forge. Although there are far too many to be listed here, a few notable ones include:
• Benchmark (about 2 years old but still possibly useful)
• Control
• Data smoothing
• Database
• Financial
• Fuzzy-Logic-toolkit
• Image (processing images)
• IO (I/O in external formats)
• Linear algebra (additional linear algebra computations)
• Multicore (about 2 years old, but intended for parallel processing functions)
• nnet (Neural networks)
• Optim (optimization)
• Signal (signal processing)
• Specfun (special functions)
• Statistics
• Symbolic (symbolic computations)
One thing you do need to note about Octave is that files from Matlab Central’s File Exchange cannot be used in Octave, as explained in the Octave FAQ.
Octave is easy to install because your favorite distribution probably has it available. In my case, I use Scientific Linux 6.2 (Listing 2).
Listing 2: Abbreviated installation output of Octave on SL6.2 system
[root@test1 laytonjb]# yum install octave
Dependencies Resolved
Package Arch Version Repository Size
octave x86_64 6:3.4.3-1.el6 epel 9.1 M
Installing for dependencies:
GraphicsMagick x86_64 1.3.17-1.el6 epel 2.2 M
GraphicsMagick-c++ x86_64 1.3.17-1.el6 epel 103 k
blas x86_64 3.2.1-4.el6 sl 320 k
environment-modules x86_64 3.2.7b-6.el6 sl 95 k
fftw x86_64 3.2.2-14.el6 atrpms 1.6 M
fltk x86_64 1.1.10-1.el6 atrpms 375 k
glpk x86_64 4.40-1.1.el6 sl 358 k
hdf5-mpich2 x86_64 1.8.5.patch1-7.el6 epel 1.4 M
mpich2 x86_64 1.2.1-2.3.el6 sl 3.7 M
qhull x86_64 2010.1-1.el6 atrpms 346 k
qrupdate x86_64 1.1.2-1.el6 epel 79 k
suitesparse x86_64 3.4.0-2.el6 epel 782 k
texinfo x86_64 4.13a-8.el6 sl 667 k
Transaction Summary
Install 14 Package(s)
Total download size: 21 M
Installed size: 81 M
Is this ok [y/N]: y
octave.x86_64 6:3.4.3-1.el6
Dependency Installed:
GraphicsMagick.x86_64 0:1.3.17-1.el6 GraphicsMagick-c++.x86_64 0:1.3.17-1.el6
blas.x86_64 0:3.2.1-4.el6 environment-modules.x86_64 0:3.2.7b-6.el6
fftw.x86_64 0:3.2.2-14.el6 fltk.x86_64 0:1.1.10-1.el6
glpk.x86_64 0:4.40-1.1.el6 hdf5-mpich2.x86_64 0:1.8.5.patch1-7.el6
mpich2.x86_64 0:1.2.1-2.3.el6 qhull.x86_64 0:2010.1-1.el6
qrupdate.x86_64 0:1.1.2-1.el6 suitesparse.x86_64 0:3.4.0-2.el6
texinfo.x86_64 0:4.13a-8.el6
After installing Octave, I had one small problem to solve. The HDF5 libraries couldn’t be found, so I added a line to my .bashrc file so the library was in LD_LIBRARY_PATH:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib64/mpich2/lib/"
To run Octave, I simply enter octave at the command prompt.
Right now, Octave is a command-line-driven tool without a standard GUI. Several attempts have been made at a GUI, but none have been successful enough to be included with Octave. You can read more
about it here in the Octave FAQ, but for the time being, Octave is a command-line tool.
Figure 5 below shows a console window on my system with some Octave commands.
Octave can also use gnuplot to plot results and for visualization. Figure 6 below is an example of a 3D plot from an Introduction to GNU Octave website that shows the commands used to create a plot
(Figure 7).
Octave creates a new window with the resulting plot, as shown in Figure 7.
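The plotting commands themselves appear only in the screenshot. As a representative stand-in (the classic "sombrero" mesh from the Octave documentation, not necessarily the exact commands behind Figure 7), a sequence along these lines produces a similar 3D plot:

tx = ty = linspace (-8, 8, 41)';
[xx, yy] = meshgrid (tx, ty);
r = sqrt (xx .^ 2 + yy .^ 2) + eps;   % eps avoids division by zero at the origin
tz = sin (r) ./ r;
mesh (tx, ty, tz);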
A number of sites have introductions and examples of Octave, and a good place to start is the Octave wiki or a slightly dated Introduction to Octave PDF, which is nevertheless still a valuable
resource for help getting started with Octave.
Recently, an effort has been made to create a JIT (Just In Time) compiler for Octave. It is a work in progress and not quite ready for production work, but you can read about the goals and possibly
experiment with it. Be warned that work on the JIT has not progressed for a few months, but I’m hoping it doesn’t become another dead Octave project.
As with Scilab, the downloadable binaries for Octave that come with your distribution are likely to be the least common denominator in terms of performance, but building Octave is fairly easy. Intel
provides a set of instructions on how to build Octave using MKL, and a blog post tells you how to build Octave with ACML for AMD processors (it’s for Ubuntu, but the principles are the same). To make
things a little more generic, you can also use OpenBLAS to build Octave.
Some efforts have been made to run some Octave functions on GPUs. However, adding GPU capability to Octave is not likely to happen anytime soon. To be honest, I don’t completely understand the
issues, but it involves licensing because the GNU GPLv3 license is not compatible with the licenses of various GPU tools and languages (CUDA in particular). Hopefully, this will be resolved in
the future, but in my opinion, it really hurts Octave’s applicability in HPC.
A more recent development effort for a Matlab-like tool is called FreeMat. The intention is to develop an interactive numerical environment that is similar to both Matlab and IDL. FreeMat has
prebuilt binaries for Windows, Mac OS X, and Linux and is released under the GPL license (I think GPLv2).
FreeMat follows the same lines as Scilab and Octave, and the language is fairly close to Matlab’s language. The FreeMat FAQ has a short section on the differences between FreeMat and Matlab that
should help you take Matlab code and run it with FreeMat.
I tried installing an FC14 (Fedora Core 14) version of FreeMat 4.x on my Scientific Linux 6.2 system using rpm to install it and yum to help resolve dependencies, but I received errors that I could
not resolve, and it failed, so I tested FreeMat on a Windows 7 system.
Figure 8 shows the FreeMat console that comes up when started.
The window looks similar to Scilab and, to some degree, Matlab. A console appears on the right, and the stacked windows on the left are the file browser, history, variable list, and debug windows.
The figure shows that the simple AX=B works just the same as in Matlab, Scilab, and Octave.
FreeMat can also do some reasonable graphics. Figure 9 shows the console for a simple 3D plot example taken from the FreeMat help site, and Figure 10 shows the plot.
The FreeMat site has a good introduction to the software, and you can find a FreeMat Primer on the FLOSS for Science website. A good introduction to FreeMat is combined with a discussion of basic
numerical methods, as well. The PDF is incomplete by a few pages, but it does get you started with FreeMat.
A few tools that are somewhat Matlab-like – some still surviving and some defunct – include RLaB, RLaB+, JMathlab, and O-Matrix (commercial). A whole host of other tools exist if you want to stray
from Matlab compatibility even further.
Going Parallel
Matlab and Matlab-like tools are extremely useful in HPC even though they are serial applications. As I mentioned earlier in this article, Matlab and Matlab-like tools can be used for tasks such as
parameter sweeps by running something like 25,000 simultaneous instances of the application. However, in other situations, you might want to run the underlying functions in parallel.
For example, you might want to perform a large FFT or a large SVD (singular value decomposition) as quickly as possible by running the application using all of the cores in the node, or even by running
the computations across several distributed nodes.
Several parallel processing options for Scilab are summarized in the Scilab parallel computing documentation. The first option is to use the inherent multicore capabilities in the functions used in
Scilab. For example, certain libraries perform the linear algebra computations in Scilab, and these libraries could perform the computations using all of the cores in the system. For instance, Intel's
MKL library can use all of the cores for performing matrix multiplications or other functions. Typically this is done using OpenMP, but not necessarily. However, these computations are limited to
intrinsic functions, so you can’t parallelize Scilab code such as a for loop.
Scilab also has the capability of running more explicit parallel applications on multicore systems (i.e., cores on the same node). A function called parallel_run allows parallel calls to a function.
This allows you to parallelize function calls on the system – but remember that the execution is on a single node (but with four-socket AMD systems, you can get 64 cores on a single system).
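A rough sketch of the pattern follows; the exact parallel_run calling sequence (in particular whether the function is passed by name as a string, and how multi-column arguments are laid out) varies between Scilab versions, so treat the details below as assumptions and check the built-in help before relying on them:

// function to be evaluated once per input, potentially on several cores
function y = f(x)
    y = x .* x;
endfunction

// hypothetical call: apply f to the inputs 1..10 in parallel
res = parallel_run(1:10, "f");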
For parallel distributed applications on Scilab, you can also use PVM (Parallel Virtual Machine). PVM is a rather old approach to parallel programming and has given way to MPI (Message Passing
Interface) for the most part, but it is still used in some areas. A good blog post discusses how to use PVM within Scilab (but it is two years old by now). A git repository holds some early code
developed by Scilab Enterprises to create MPI capability for Scilab.
In a manner similar to Scilab, Octave can also use numerical libraries that have been parallelized to run on a single node, such as Intel’s MKL or something similar, perhaps using OpenMP. You just
have to build Octave yourself and use the appropriate libraries.
Octave also has a parallel toolbox to use for running applications on a cluster or a distributed system, and with the parcellfun command, you can execute parallel function calls on the same node.
This is very similar to Scilab’s parallel_run command.
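A minimal sketch of the idea is shown below; the package name and the exact parcellfun signature are assumptions to verify against your own Octave-Forge installation:

pkg load parallel                     % load the Octave-Forge package that provides parcellfun
xs = num2cell (1:8);                  % arguments go in a cell array, as with cellfun
ys = parcellfun (4, @(x) x^2, xs);    % evaluate with up to four local processes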
The openmpi_ext toolbox uses MPI to allow Octave instances on different nodes to communicate and share data. It requires the use of Open MPI, but if you have experience in HPC, it isn’t difficult to
build and install.
Parallel coding in FreeMat is a little more difficult. Evidently, early versions of FreeMat could use MPI for parallel coding; however, it appears this work has not been continued in the current
versions of FreeMat.
One interesting FreeMat feature is the use of threads within the language. FreeMat-threads can communicate with each other through the use of global variables. Although I have not tested this
feature, it appears to be in the current versions.
In this article, I briefly reviewed three Matlab-like tools: Scilab, Octave, and FreeMat. All three have their pluses and minuses that can be debated, but in my opinion, which one you chose
ultimately depends on your requirements. If you need a comparison of these tools, check out this University of Maryland technical report.
If you are searching for a general-purpose numerical tool for HPC, one of these tools is a good candidate. If you are willing to stray further from Matlab compatibility, other candidates could work as well, but that is the subject of another article and likely another series of debates. In the meantime, give one of these applications a whirl – I think you'll like what you see.
Transposing of formulae
October 26th 2009, 10:09 AM
Transposing of formulae
Hi there, thanks in advance for any help
Make A the subject of the formula:
Q = C.A square root (2g.h / (1-A^2/x^2))
I have got this far but unsure what to do with the fraction of a fraction... the (x^2) bit.
Q^2 / C^2.A^2 = (2g.h / (1-A^2/x^2))
Thank you again for any assitance
October 26th 2009, 12:07 PM
Hi there, thanks in advance for any help
Make A the subject of the formula:
Q = C.A square root (2g.h / (1-A^2/x^2))
I have got this far but unsure what to do with the fraction of a fraction... the (x^2) bit.
Q^2 / C^2.A^2 = (2g.h / (1-A^2/x^2))
Thank you again for any assitance
I would first isolate that square root:
$\frac{Q}{CA}= \sqrt{\frac{2gh}{1- \frac{A^2}{x^2}}}$
and then square both sides.
$\frac{Q^2}{C^2A^2}= \frac{2gh}{1- \frac{A^2}{x^2}}$
Multiply both numerator and denominator, on the right, by $x^2$
$\frac{Q^2}{C^2A^2}= \frac{2ghx^2}{x^2- A^2}$
Get rid of the fractions by multiplying both sides by the denominators, $C^2A^2$ and $x^2- A^2$
$Q^2(x^2- A^2)= 2ghx^2C^2A^2$
$Q^2x^2- Q^2A^2= 2ghx^2C^2A^2$
Add $Q^2A^2$ to both sides
$Q^2x^2= 2ghx^2C^2A^2+ Q^2A^2$
Factor $A^2$ out of the right side
$Q^2x^2= (2ghx^2C^2+ Q^2)A^2$
Divide both sides by $2ghx^2C^2+ Q^2$
$\frac{Q^2x^2}{2ghx^2C^2+ Q^2}= A^2$
Finally, take the square root of both sides
$A= \sqrt{\frac{Q^2x^2}{2ghx^2C^2+ Q^2}}$
October 26th 2009, 02:00 PM
Thanks alot, great help, i just needed a worked example, now I can do them! | {"url":"http://mathhelpforum.com/algebra/110625-transposing-formulae-print.html","timestamp":"2014-04-19T09:47:14Z","content_type":null,"content_length":"8149","record_id":"<urn:uuid:b22354da-258f-454e-a51b-4e28463bfc5f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lockers Maths Puzzle
Lockers Maths Puzzle
What is the answer to below puzzle? I am unable to solve this.
"A high school has a strange principal. On the first day, he has his students perform an odd opening day ceremony:
There are one thousand lockers and one thousand students in the school. The principal asks the first student to go to every locker and open it. Then he has the second student go to every second
locker and close it. The third goes to every third locker and, if it is closed, he opens it, and if it is open, he closes it. The fourth student does this to every fourth locker, and so on. After the
process is completed with the thousandth student, how many lockers are open?"
Re: Lockers Maths Puzzle
Hi miansons;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Lockers Maths Puzzle
hi bobbym,
How are you today?
This seems to be a general rule for any number of lockers. But why? hhhmmmm
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Lockers Maths Puzzle
Arrh! I've just worked out why.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Lockers Maths Puzzle
Hi Bob;
I am still chugging along.
But why?
Seek and ye shall find.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Lockers Maths Puzzle
I have.
Sorry I double posted.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Lockers Maths Puzzle
The bulb is lit! Very good!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Lockers Maths Puzzle
Hi guys
The reason is that only lockers whose numbers have an odd number of divisors will become open. Showing that only square numbers satisfy this condition is a piece of pie chart: divisors pair up as d and n/d, and such a pair collapses to a single divisor only when n is a perfect square.
Hi Bob
Is this what you meant?
Last edited by anonimnystefy (2012-11-27 00:11:25)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Real Member
Re: Lockers Maths Puzzle
Hi miansons,
Last edited by phrontister (2012-11-27 09:04:08)
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: Lockers Maths Puzzle
Showing that only square numbers satisfy this condition is a piece of pie chart.
Piece of pie chart? That does not sound delicious at all.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Lockers Maths Puzzle
Who said it was?
I have to ask you something in the random chatter thread later...
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Lockers Maths Puzzle
Easy as pie chart!
Look at the time between post 3 and post 4.
Impressive speed huh?
If only the OP was as quick to respond, sob, sob.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Lockers Maths Puzzle
Impressive, indeed. But, why haven't you posted your idea?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Lockers Maths Puzzle
Thank you. It is a new policy of mine. If the OP remains silent then I keep my answer to myself.
I've been sitting on the 45 degree plane answer for 24 hours.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Lockers Maths Puzzle
I can see why you would do that... But, do you really think that there will be an improvement in the response of OPs?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Lockers Maths Puzzle
I have to ask you something in the random chatter thread later...
It better not be my age.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Lockers Maths Puzzle
Stefy wrote:
But, do you really think that there will be an improvement in the response of OPs?
No it probably won't. But it will help to stop me getting annoyed when I've spent time on a problem and typing it up and then [blank]
bobbym wrote:
It better not be my age.
But we know you'll never tell that. btw. I'm still working on Phrontister's puzzle.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Lockers Maths Puzzle
Phrontister is brilliant but standing upside down for his entire 28 years has affected his judgement. He believes that he saw me in 1904 at the fair.
His puzzle is going to place me at about 350 years old.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Lockers Maths Puzzle
Have you solved his puzzle then ?
I know from what I have done so far, that it will not make you more than 100.
So he couldn't have seen you in 1904.
1913 I could believe.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Lockers Maths Puzzle
No, I have never solved that problem.
1913 is not correct. Phrontister is a good friend and a cousin but he is unable to see the contradiction between my age and my appearance at the fair.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Real Member
Re: Lockers Maths Puzzle
bobbym wrote:
He believes that he saw me in 1904 at the fair.
No! No! No! Closure is closure!!
phrontister wrote:
I think that's it, Bobby!
I was probably so swept along with the whole "I've gotta know how old Bobby is and what he looks like" thing here on MIF that my imagination took hold. No other explanation makes sense...so,
closure at last!
bobbym wrote:
I believe you are probably imagining the whole thing. I mean some people want to believe so bad that they have seen me that they just think they did.
Btw, I improved my C1 formula in my post #9 by removing a duplication, making it more succinct. I'm glad I spotted this now before the OP did.
Hi, Bob. So someone is actually working on my puzzle!
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=241760","timestamp":"2014-04-17T01:06:05Z","content_type":null,"content_length":"37456","record_id":"<urn:uuid:ada4aaca-4d82-431a-80cc-15a0a5868bf1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
College Algebra Tutors
Cambridge, MA 02139
Polymath tutoring math, science, and writing
...I began using Microsoft Excel in the early 1990s as a graduate student at MIT, and since then I've used the software for a wide variety of personal, professional, and academic projects in science,
engineering, publishing, administration, finance, and management...
Offering 10+ subjects including algebra 2 | {"url":"http://www.wyzant.com/geo_Revere_MA_College_Algebra_tutors.aspx?d=20&pagesize=5&pagenum=2","timestamp":"2014-04-24T04:25:36Z","content_type":null,"content_length":"59972","record_id":"<urn:uuid:ecb5de41-d75a-4b6a-acf5-51313ce7be33>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Russell, IL Algebra 2 Tutor
Find a Russell, IL Algebra 2 Tutor
...The key to learning geometry is for the student to solve problems by finding the information that is implied in the problem. This involves applying the knowledge that we already know such as
what are supplementary angles. I have taught middle school math for 3 years.
12 Subjects: including algebra 2, calculus, algebra 1, geometry
...As an undergraduate, I worked with my school, Towson University, tutoring peers in various entry-level to advanced biology and chemistry topics. In all of these experiences, I worked mostly
one on one (but up to groups of five) which allowed me to conform my teaching methods to the student. Fin...
26 Subjects: including algebra 2, chemistry, geometry, algebra 1
...Geometry uses many properties and theorem to solve problems for angles, lines, triangles, and many more figures. I have taught pre-algebra to many students in the past. This course starts out
with basic properties of operations such as associative, distributive, and many more.
11 Subjects: including algebra 2, calculus, geometry, trigonometry
...I have had many physics courses and much related subject matter in my under-graduate engineering coursework and graduate work in applied mathematics. I bring a diverse background to the
tutoring sessions. I thoroughly enjoy tutoring ACT Math due to the diversity of subject matter.
18 Subjects: including algebra 2, physics, calculus, geometry
...I can help students with Chemistry and Math (pre-calculus, differential equations). I have worked as a volunteer tutor and have helped people working towards their GEDs. I have worked as a
graduate assistant when I was working towards my Masters degree in Chemistry at DePaul University and I use...
13 Subjects: including algebra 2, English, reading, chemistry
Related Russell, IL Tutors
Russell, IL Accounting Tutors
Russell, IL ACT Tutors
Russell, IL Algebra Tutors
Russell, IL Algebra 2 Tutors
Russell, IL Calculus Tutors
Russell, IL Geometry Tutors
Russell, IL Math Tutors
Russell, IL Prealgebra Tutors
Russell, IL Precalculus Tutors
Russell, IL SAT Tutors
Russell, IL SAT Math Tutors
Russell, IL Science Tutors
Russell, IL Statistics Tutors
Russell, IL Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Benet Lake algebra 2 Tutors
Franksville algebra 2 Tutors
Indian Creek, IL algebra 2 Tutors
Ingleside, IL algebra 2 Tutors
Kansasville algebra 2 Tutors
Lindenhurst, IL algebra 2 Tutors
Paddock Lake, WI algebra 2 Tutors
Round Lake Heights, IL algebra 2 Tutors
Somers, WI algebra 2 Tutors
Sturtevant algebra 2 Tutors
Third Lake, IL algebra 2 Tutors
Tower Lakes, IL algebra 2 Tutors
Trevor algebra 2 Tutors
Union Grove, WI algebra 2 Tutors
Woodworth, WI algebra 2 Tutors | {"url":"http://www.purplemath.com/Russell_IL_Algebra_2_tutors.php","timestamp":"2014-04-21T04:54:07Z","content_type":null,"content_length":"24110","record_id":"<urn:uuid:e8437abf-c48b-4340-a0c1-1ed62fc7b09d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why we've got the cosmological constant all wrong
Effective field theory incorrectly predicts the value of the cosmological constant, Λ, as well as the value of an analogous term in an analogous gravity model in the form of a BEC. BECs are correctly
described only by quantum models, and a quantum theory of gravity may be required to correctly predict Λ. Image credit: Finazzi, et al. ©2012 American Physical Society
(PhysOrg.com) -- Some scientists call the cosmological constant the "worst prediction of physics." And when today's theories give an estimated value that is about 120 orders of magnitude larger than the measured value, it's hard to argue with that title. In a new study, a team of physicists has taken a different view of the cosmological constant, Λ, which drives the accelerated expansion of the universe. While the cosmological constant is usually interpreted as a vacuum energy, here the physicists provide evidence to support the possibility that the mysterious force instead emerges from a microscopic quantum theory of gravity, which is currently beyond physicists' reach.
The scientists, Stefano Finazzi, currently of the University of Trento in Povo-Trento, Italy; Stefano Liberati at SISSA, INFN in Trieste, Italy; and Lorenzo Sindoni from the Albert Einstein Institute
in Golm, Germany, have published their study in a recent issue of Physical Review Letters.
The authors are far from the first who are dissatisfied with the cosmological constant. Previously, other scientists have suggested that the huge discrepancy between the observed and estimated values
is due to the use of semi-classical effective field theory (EFT) calculations for estimating a quantity that can be computed only using a full quantum theory of gravity. Although no one can show what
value a quantum theory of gravity would give without having such a theory, physicists have shown that EFT calculations fail at estimating similar values in analogue gravity models.
Here, the physicists consider an analogue gravity model in the form of a Bose-Einstein condensate (BEC), a group of atoms that behave as a single quantum system when cooled to temperatures near
absolute zero. While a BEC may seem to have nothing in common with the expanding universe, the physicists showed in a previous paper that a BEC can be described by the same Poisson equation that
describes nonrelativistic (Newtonian) gravity. This framework includes a term that is analogous to the cosmological constant; this term describes the part of a BEC's ground-state energy that corresponds to the condensate's quantum depletion.
Since BECs are accurately described by other (quantum) equations, the physicists decided to test how well EFT calculations could compute the BEC's analogous cosmological constant term. They found
that EFT calculations do not give the correct result. The finding confirms the earlier studies that showed that EFT calculations produce an incorrect result when used to compute the ground-state
energy of other analogue gravity models.
"We have shown how conceptually subtle could be the computation of the cosmological constant, by considering an analogue gravity model," Finazzi told PhysOrg.com. "This simple example shows that the knowledge of the microscopic structure of spacetime might be an essential guide for a correct interpretation of the nature of the cosmological constant, and hence for a correct estimate of it. We then reinterpret the large discrepancy between the naive computation and the observed value as a basic misunderstanding on this point. Interestingly, this reasoning might also be a guide to the selection of the correct quantum gravity theory."
As the physicists explain, the BEC model described by Poisson equations is too simple to completely describe the complex features of the universe's accelerating expansion. However, the failure of the EFT framework to describe BECs' analogue cosmological constant supports the possibility that the EFT framework also fails at describing the cosmological constant.
The details have further implications. For one thing, the results suggest that there may be no a priori reason to describe the cosmological constant as vacuum energy. Instead, the cosmological
constant may emerge from the underlying quantum theory of gravity describing spacetime. As the physicists explain, a quantum theory of gravity differs from various modified theories of gravity that
have been proposed recently in that a quantum theory describes spacetime at the most fundamental level.
"In a modified gravity theory, one is just postulating a different gravitational dynamics that might show accelerated expansion also for a universe filled with standard matter (i.e., without the so-called dark energy component)," Liberati said. "We instead consider the case where a gravitational dynamics is emergent from a microscopic quantum theory, i.e., a theory describing the fundamental constituents, whatever they are, of our spacetime. From such a theory one would be able to derive a theory of gravity (general relativity or any form of modified gravity) in some appropriate limit (possibly similar in nature to the hydrodynamic limit of a gas of interacting atoms). Our point is that it is only throughout this derivation/emergence of the gravitational dynamics that in the end one can determine what is the gravitating energy of the vacuum. We have proven this explicitly in our toy model where it is clearly shown that the use of the macroscopic constituents (and corresponding energy scales) of the emergent physics might lead to a completely wrong estimate."
"We can try to explain this issue with a simple analogy," he said. "Water is made by molecules. At a microscopic level molecular dynamics is properly described by quantum mechanics. However, no one would use quantum mechanics to describe a flowing river, but rather one would use fluid mechanics laws. Of course, fluid dynamics must be compatible with quantum mechanics, i.e., it must be possible to derive it from the microscopic quantum theory of molecules. Finally, the choice of the most appropriate equations for the description of any phenomenon depends on the scale at which one observes the physical system. We hence can say that the microscopic quantum theory of gravity corresponds in the analogy to the quantum mechanics of molecules, a theory of gravity corresponds to fluid mechanics, and the evolution of the universe to the flow of the river."
Continuing the analogy, Liberati adds that there might be a quantity in macroscopic fluid dynamics that cannot be calculated using macroscopic parameters alone. Instead, a microscopic model is
necessary to calculate the correct value.
"We argue that, in the case of the calculation of the cosmological constant, this is exactly what happens: the reason of the worst prediction of theoretical physics might ultimately be due to the attempt to compute a quantity that is sensitive to the microphysics only in terms of macroscopic quantities," he said.
In the future, the physicists hope to further investigate how the BEC analogue model of gravity could possibly lead to the development of a quantum theory of gravity, since many proposed theories of
gravity have features in common with the new model.
"We believe that this model can help to change the way how people usually think about the cosmological constant," Sindoni said. "In recent years, the idea that spacetime is a form of condensate is gaining momentum. Of course, to be able to get to theories as close as possible to general relativity, the microscopic models have to be considerably more complex than BECs. However, it can be conjectured that spacetime is the final outcome of a phase transition for a large number of suitable microscopic constituents, and that the determination of the resulting macroscopic dynamics might be essentially the same, at the conceptual level, of the determination of the dynamics of a BEC from the knowledge of effective molecular or atomic dynamics, near a phase transition. The translation of the language and ideas of BECs to quantum gravity models might be a key in the understanding of the physical content of the latter."
Sindoni adds that the cosmological constant will provide a vital test of any proposed quantum theory of gravity.
"We think that the comparison of the observational value of the cosmological constant against its theoretical value, predicted by any theory of quantum gravity, can be a very good (if not the unique) test to validate such theories," he said.
More information: Stefano Finazzi, et al. Cosmological Constant: A Lesson from Bose-Einstein Condensates. PRL 108, 071101 (2012). DOI: 10.1103/PhysRevLett.108.071101
2.7 / 5 (14) Mar 05, 2012
"In recent years, the idea that spacetime is a form of condensate is gaining momentum."
Isn't that ether again..?
1 / 5 (29) Mar 05, 2012
Scientists have so much trouble explaining the expansion because there is no expansion. The universe is simply becoming gradually more opaque to light. This is due to solar particles as the billions
and billions of stars burn their fuel. The stars emit far more solar wind than is required to explain this. It is likely that the visible universe is shrinking.
The rate of increase is density to create this illusion is a factor of 75 parts per 30,856,775,810,000,000,000 each second. That works out to 23652/308567758100000 per year. The density would have
doubled over 13.04 billion years if this rate were not changing. The rate is however increasing as old stars are burning faster than new stars are forming. The rate of expansion appears to be
At the current rate, the sun will appear to be 0.712 miles farther away in 100 years.(twice its current distance in 13 billion years).
2.3 / 5 (3) Mar 05, 2012
40 years since I studied physics, but the author made his point understandable.
3.9 / 5 (13) Mar 05, 2012
Scientists have so much trouble explaining the expansion because there is no expansion. The universe is simply becoming gradually more opaque to light. This is due to solar particles as the
billions and billions of stars burn their fuel. The stars emit far more solar wind than is required to explain this. It is likely that the visible universe is shrinking.
The rate of increase is density to create this illusion is a factor of 75 parts per 30,856,775,810,000,000,000 each second. That works out to 23652/308567758100000 per year. The density would
have doubled over 13.04 billion years if this rate were not changing. The rate is however increasing as old stars are burning faster than new stars are forming. The rate of expansion appears to
be increasing.
At the current rate, the sun will appear to be 0.712 miles farther away in 100 years.(twice its current distance in 13 billion years).
So i'm assuming this is taking into account Edwin Hubble didn't exist.
4.5 / 5 (16) Mar 05, 2012
You havent heard of significant figures then or exponents ?
Where did those figures come from, can u show your working ?
Where does red shift place in your idea re those figures ?
What solar particles are you on about, other than protons (hydrogen) ?
What do you mean by "..more solar wind than is required.." - for what ?
At the beginning of the para you say there is no expansion then you say the expansion is increasing - can you explain this contradiction ?
Was the sun closer by ~0.712 miles 100 years ago ?
As the sun has no specific surface then how can one confirm aspects of your idea over such short comparative distances without knowing where the sun's center happens to be at the time ?
2.1 / 5 (9) Mar 05, 2012
From Article:
While the cosmological constant is usually interpreted as a vacuum energy, here the physicists provide evidence to support the possibility that the mysterious force instead emerges from a
microscopic quantum theory of gravity, which is currently beyond physicists reach.
The gravity the physicists' are talking about is in the ultramicroscopic realm which is definitely way too small for us to see or use. It is good to see physicists are turning away from Einstein
Gravity (gravitational waves) and are turning towards something that has great potential.
3 / 5 (2) Mar 05, 2012
From Article:
The details have further implications. For one thing, the results suggest that there may be no a priori reason to describe the cosmological constant as vacuum energy. Instead, the cosmological
constant may emerge from the underlying quantum theory of gravity describing spacetime. As the physicists explain, a quantum theory of gravity differs from various modified theories of gravity
that have been proposed recently in that a quantum theory describes spacetime at the most fundamental level.
Yes, spacetime fabric (dark energy) does contain gravity (the good stuff) although almost all, if not all, particles contain a certain degree of gravity.
However, it can be conjectured that spacetime is the final outcome of a phase transition for a large number of suitable microscopic constituents.
That statement is like frosting on a big juicy cake.
5 / 5 (8) Mar 05, 2012
"In recent years, the idea that spacetime is a form of condensate is gaining momentum."
Isn't that ether again..?
In the other hand there's no valid quantum gravity theory to date.
Waiting for you "water ripples"
1 / 5 (13) Mar 05, 2012
The cosmological constant controversy is easy to understand with water surface space-time analogy of AWT based on dense aether model. At the water surface can observe the reality with transverse
waves in two perspectives: one deals with much larger objects, than the wavelength of CMBR noise (and the size of humans), you're always inside of observable reality, so we can call it an intrinsic
perspective. This perspective corresponds the quantum mechanics.
The second perspective is used for observations of small object, which are always observed from outside. This perspective is therefore called extrinsic and it essentially corresponds the perspective
of quantum mechanics. These perspectives cannot be mixed, they're separated with hundreds of extradimensions each other. Nevertheless it means, when you observe the water surface from intrinsic
perspective, it appears like nondispersive environment, whereas from intrinsic perspective we are confronted with highly dispersive underwater.
4 / 5 (8) Mar 05, 2012
It is good to see physicists are turning away from Einstein Gravity (gravitational waves) and are turning towards something that has great potential.
Where did they say they're turning away from 'Einstein Gravity'?
1.1 / 5 (10) Mar 05, 2012
The cosmological constant is related to the perceived density of environment, which affects the speed, in which the energy waves are spreading in it. From intrinsic perspective the water surface
would appear like sparse, nondispersive environment, whereas from extrinsic perspective it will appear like dense dispersive environment full of Brownian noise. The energy density ratio essentially
corresponds the cube root of the ratio of speed of waves at the water surface and underwater.
At the case of vacuum, which is behaving like extremely dense environment this ratio is more than one hundred orders of magnitude, which leads to so-called http://en.wikiped...strophe. This is how
the deep difference in predictions of vacuum density calculated from quantum field and general relativity theories is called. The string theory uses a renormalization approach, so its prediction of
cosmological constant differs from reality in forty orders of magnitude.
1 / 5 (3) Mar 05, 2012
Where did they say they're turning away from 'Einstein Gravity'?
Einstein Gravity works on a universal scale. Quantum gravity is in the ultramicroscopic realm.
1 / 5 (14) Mar 05, 2012
..but the author made his point understandable...
The river analogy is the step in the right i.e. "aetheric" direction - nevertheless the AWT handles the water surface analogy of space-time in even much more straightforward way. In dense aether
model the Universe appearance is really just an heavily expanded situation of waves spreading at the water surface, extrapolated into hyperdimensional perspective of nested foam, forming the
It essentially means, if some artefact exists at the water surface, it MUST somehow exist at the cosmic space too. The opposite direction unfortunately doesn't work, because the Universe is way more
complex, than the water surface. But the duality of observational perspectives and the values of cosmological constants and vacuum density can be deduced from the water surface analogy in quite
trivial way. You needn't to bother with boson condensates and with river flow, as these parables just obscure the geometry of the water surface analogy
1 / 5 (12) Mar 05, 2012
My question therefore is, why various physicists don't use the water analogy in its most straightforward way? If nothing else, it would make the mainstream physics more palatable for normal people.
The average physicists must know about it in the same way, like every reader of PO forum.
My suspicion is, the mainstream physics community avoids the using of all models, which would make the subject of their research more transparent for laymans. Many concepts of string theory or
supersymmetry could be explained in much more straightforward way with it. But you can get these analogies only in close range of experts and they're generally considered a heretical.
Edward Witten: "One thing I can tell you, though, is that most string theorists suspect that spacetime is a emergent phenomena in the language of condensed matter physics".
These guys know quite well about dense aether model. But they will not admit it.
1 / 5 (10) Mar 05, 2012
You havent heard of significant figures then or exponents ?
Where did those figures come from, can u show your working ?
Where does red shift place in your idea re those figures ?
What solar particles are you on about, other than protons (hydrogen) ?
What do you mean by "..more solar wind than is required.." - for what ?
At the beginning of the para you say there is no expansion then you say the expansion is increasing - can you explain this contradiction ?
Was the sun closer by ~0.712 miles 100 years ago ?
As the sun has no specific surface then how can one confirm aspects of your idea over such short comparative distances without knowing where the sun's center happens to be at the time ?
There is only the appearance of an increasing expansion. The numbers are based on the hubble constant of 75Km/sec expansion over a distance of 30,856,775,810,000,000,000 km. Red shift occurs when the
light passes through various densities(I think).
1.2 / 5 (10) Mar 05, 2012
The universe is simply becoming gradually more opaque to light.
This is essentially correct insight. But wait...
This is due to solar particles as the billions and billions of stars burn their fuel.
This is suspicious claim and it sounds "crackpotish" for me. What the "fuel" is supposed to mean?
The stars emit far more solar wind than is required to explain this.
And this is unfortunately apparent BS already. The contemporary cosmology doesn't bother with solar wind at all, so it cannot face any excess of it. In addition, the speed in which light is losing
its energy during travel trough vast areas of CMBR noise has nothing to do with intensity of solar wind at all. The intensity of solar wind has nothing to do with the speed of light dispersion (which
propagates a way faster than the solar wind particles, BTW).
1 / 5 (7) Mar 05, 2012
The conceptual thinking doesn't require some solar wind particles for explanation of light dispersion at all. In dense aether model the space-time simply must always remain inhomogeneous, or it
couldn't exist at all. The water surface appears so large for surface waves, because these waves are spreading so slowly in it. These waves are slowed down with myriads of tiny density fluctuations,
the gradient of which is serving as an environment for for surface wave spreading. The Brownian noise cannot be separated from space-time concept. And this noise is sufficient for explanation of the
light dispersion by itself. We are observing it as a CMBR noise fifty years already.
The similar result follows from general relativity, in which the space-time must be always curved - or it couldn't exist at all. After all, this is the original reason, for which Einstein adjusted
his theory with arbitrary constant - and it has lead to Friedman models developed later.
4.2 / 5 (5) Mar 05, 2012
Edward Witten: "One thing I can tell you, though, is that most string theorists suspect that spacetime is a emergent phenomena in the language of condensed matter physics".
Edward Witten also said people need to start thinking in eleven-dimensions and not in one and twos.
1 / 5 (9) Mar 05, 2012
The another thing is, whereas the CMBR noise is essential for whole existence of space-time in general relativity, it violates the general relativity from more general perspective, being a source of
dark matter and energy artefacts. Every tiny fluctuation follows the GR well at the local level - but in average their different density leads to the effects, which are inconsistent with GR at the
global level. This animation illustrates it for gravitational lensing. Locally every fluctuation behaves as a tiny gravitational lens, fulfilling the general relativity laws well - the light is
moving along geodesics with constant speed all the time. But the variable concentration of these curved paths is something, which general relativity cannot account to. So it becomes violated
gradually and the speed of light changes from place to place. In Czech we have a proverb for it:"A hundred times nothing killed the donkey."
1.4 / 5 (10) Mar 05, 2012
Edward Witten also said people need to start thinking in eleven-dimensions and not in one and twos.
And he was still correct because of it. In AWT these two approaches are dual, which is why it's called an AETHER WAVE theory: the nested density fluctuations of many particles in low number of
dimensions correspond the spreading of low number of waves in high number of dimensions. These two approaches are essentially interchangeable, but the former one is still easier to grasp with human
brain, which used to handle particle environment during whole human history.
There are a few more or less straightforward ways to understand this duality. In general, a hyperdimensional object becomes fuzzier, more random and more spatially separated the fewer dimensions we use to observe it. We could say we are observing a noncompact low-dimensional slice of it.
1 / 5 (8) Mar 05, 2012
In essence, if we compressed a dense gas into a small box, its foamy density fluctuations would form the same seemingly random mess as if we solved the wave equation in, let's say, fifty dimensions and observed the three-dimensional projection of the resulting solution into that small box.
2 / 5 (4) Mar 05, 2012
but the former one is still easier to grasp with human brain, which used to handle particle environment during whole human history.
My brain works quite well when it comes to math and physics. Edward Witten needs to explain how you can skip one and two to get to eleven.
1 / 5 (9) Mar 05, 2012
Witten needs to explain how you can skip one and two to get to eleven.
String theory explained it years ago, but this explanation is not palatable for those who don't understand the formal math very well. AWT provides a more accessible explanation based on real life experience: no matter how much the dense gas is compressed, its density fluctuations will always remain close to 3D spheres. Even when another level of density fluctuations is formed, the resulting nested hyperspheres remain closest to 3D spheres in their shape. It's because this way of energy spreading is most effective with respect to the principle of least action. The surface-to-radius ratio of a hypersphere doesn't increase with increasing number of dimensions in a monotonic way, but passes through a supremum just for 3D hyperspheres. It means that just the 3D particles and density fluctuations allow the most intensive propagation of energy over distance, and such a Universe would appear the most dense and observable.
1.6 / 5 (8) Mar 05, 2012
A string theory based derivation of space-time dimensionality can be found for example here http://news.disco...19.html, but similar solutions were provided many years ago already. Because it's essentially invariant to the geometry of the elements chosen, the same result is given by the optimization of "entropy flow" for holographic projection by number of dimensions: http://arxiv.org/abs/1106.4548
3 / 5 (2) Mar 05, 2012
String theory has explained it before years, but this explanation is not palatable for those, who don't understand the formal math very well.
Like I said: "My brain works quite well when it come to math and physics." I am not a proponent of string theory (M-theory). And from the looks of it with your Dense Aether Theory you are not a
string theory lover either.
5 / 5 (5) Mar 05, 2012
I'm disappointed. I was hoping for something more sunstansial than "It must be because of quantum gravity". I was expecting a breakthrough of some sort and very excited.
1 / 5 (5) Mar 05, 2012
I am not a proponent of string theory (M-theory). And from the looks of it with your Dense Aether Theory you are not a string theory lover either
String theory is too schematic. IMO the same result could be achieved in a much easier way by finding the speed of energy spreading with common wave equations in a gradually increasing number of dimensions. Why has nobody checked how the wave equation behaves in higher-dimensional space? We are missing the simplest numerical experiments in this direction.
5 / 5 (8) Mar 05, 2012
This corresponds well to the problem of predicting the total energy of the universe. General relativity has no such concept, it can hint at the right answer, but the real prediction (of zero energy)
is emergent from a system analysis. (Faraoni et al.)
@ KinedryL: "The cosmological constant controversy is easy to understand with water surface space-time analogy of AWT based on dense aether model."
We have known since the early 20th century that there is no aether. No go; and please do keep up!
1.8 / 5 (5) Mar 05, 2012
No offense but there has to be more than 11 dimensions:
Take a simple journey with me into conceptual space, try to imagine zero, the concept of it, try to imagine what zero really is. It's impossible to comprehend but we can say stuff like before the big
bang or after the entire universe decays in a few quadrillion years etc etc. Some also equate zero with death and perfect stillness.
But since here we are in a universe which has stuff instead of nothing, we start to progress conceptually: zero may be nothing, but the funny part is that zero is a concept to us and therefore a thing. When examined it becomes a thing, or ONE thing. Giving rise to the concept of one is interesting; it sets up a more complex relationship of concepts, i.e. zero and one.
Well now, if I take everything I have now, the concept of zero and the concept of one, well holy smokes that's two things. And as soon as you conceptualize two you now have three distinct concepts,
zero, one and two.
1 / 5 (2) Mar 05, 2012
So maybe now you can understand a little better how nothing led to an infinity, at least in terms of human understanding. I think this all has to do with simplex theory, but I came up with it myself before I heard or saw any other examples.
I personally believe it is a big part of the meaning of the universe, in the same way we might draw interesting conclusions about the origins of consciousness or reality simply by studying other cultures or employing hallucinogens.
Everything we do is adding another number to that chain of infinite quanta. Because if zero is "real" then every number you conceive of is really part of a system that is always exactly one higher
than the current level.
1 / 5 (6) Mar 05, 2012
The fact is, there is no acceleration of the expansion of the universe.
That is given as a "conclusion" from the "discovery" that "supernovas five billion light years away are receding faster than they should be". Up to five billion light years away, galaxies are receding according to Hubble's Law, but, at five billion light years, galaxies are moving away at a faster Hubble Constant. But galaxies closer to us, galaxies viewed less than five billion years ago, are moving at the lower Hubble Constant! But that means that expansion occurring in the recent past is less than that of galaxies five billion years ago! And that means that expansion now is less than it was five billion years ago, and that means the universe is not speeding up!
1.7 / 5 (6) Mar 05, 2012
We have known since the early 20th century that there is no aether. No go; and please do keep up!
In recent years, the idea that spacetime is a form of condensate is gaining momentum. Of course, to be able to get to theories as close as possible to general relativity, the microscopic models have
to be considerably more complex than BECs. However, it can be conjectured that spacetime is the final outcome of a phase transition for a large number of suitable microscopic constituents, and that
the determination of the resulting macroscopic dynamics might be essentially the same, at the conceptual level, of the determination of the dynamics of a BEC from the knowledge of effective molecular
or atomic dynamics, near a phase transition...
1 / 5 (4) Mar 05, 2012
"The fact is, there is not acceleration of the expansion of the universe."
We cannot determine if the universe is expanding or accelerating until we properly account for "red" shifting of light frequencies due to bending: shifting until the light changes from the visible spectrum to the radio spectrum and gets mistaken for the echo of the Big Bang.
3.7 / 5 (3) Mar 05, 2012
I am not a proponent of string theory (M-theory). And from the looks of it with your Dense Aether Theory you are not a string theory lover either
String theory is too schematic. IMO the same result could be achieved in much easier way with finding of speed of energy spreading with common wave equations in gradually increasing number of
dimensions. Why nobody checked how the wave equation behaves in higher dimensional space? We are missing the simplest numeric experiments in this directions.
Reported for spamming.
Also, you are clearly an idiot. The wave equation is just a Sturm-Liouville hyperbolic PDE. Its behaviour is already well-understood in arbitrary dimensions.
3.7 / 5 (3) Mar 05, 2012
Well now, if I take everything I have now, the concept of zero and the concept of one, well holy smokes that's two things. And as soon as you conceptualize two you now have three distinct
concepts, zero, one and two.
A qutrit.
5 / 5 (1) Mar 05, 2012
I'm disappointed. I was hoping for something more sunstansial than "It must be because of quantum gravity". I was expecting a breakthrough of some sort and very excited.
substantial. sorry
5 / 5 (5) Mar 05, 2012
The universe is simply becoming gradually more opaque to light. This is due to solar particles as the billions and billions of stars burn their fuel.
The last time I checked, hydrogen and helium (by far the greatest fractions of the solar wind) were completely transparent to visible light. Neither has there been any (reliable) report to the
1 / 5 (3) Mar 05, 2012
"You are playing the wrong note!"
"Impossible! The 'score' says play any note(s)or no note here!"
"There's no recognizable rhythm, rhyme, continuity, consistence or melody!"
Physics...of music.
5 / 5 (8) Mar 05, 2012
The cranks are out in force today.
*checks moon phase*
Yep, full moon.
1 / 5 (9) Mar 06, 2012
I will receive many "1s" but I will write what most people don't like to hear/read.
The universe is NOT expanding; red-shift is NOT due to expansion but to photons losing momentum due to encounters with electrically charged particles, etc.
We need to stop pulling theories from our mind just because they are "beautiful" so they MUST be true... and last but not least: There was never a Big Bang!
The universe is infinite.
Hopefully, the new space Webb Telescope will show that there are galaxies beyond 1e billion light-years that are complete, beautiful spirals, ellipticals, whatever, that could not have been formed before the BB.
4.4 / 5 (7) Mar 06, 2012
yeah, we don't like to hear it because it's nonsense.
1.4 / 5 (9) Mar 06, 2012
yeah, we dont like to hear it because its nonsense.
You just downvoted him, so why are you using the pluralis majestatis? Are you an infallible pope or something similar? Try to learn to speak for yourself.
1.5 / 5 (8) Mar 06, 2012
Hopefully, the new space Webb Telescope will show that there are galaxies beyond 1e billion light-years that are complete, beautifully spirals, elliptical, whatever that could not have been
formed before the BB.
IMO the JWST will never leave the Earth. It's heavily delayed, overpriced, and its cameras already have a large number of dysfunctional pixels - five years before the mission even starts.
Anyway, we have enough evidence that the universe is way larger than the Big Bang theory allows - despite the opinion of famous PO trolls who cannot swallow new facts. http://www.techno...iv/
4.3 / 5 (6) Mar 06, 2012
Anyway, we have enough of evidence, that the universe is way larger, than the Big Bang theory allows - despite the opinion of famous PO trolls, who cannot swallow new facts. http://
Wow, you're an idiot. Did you even read the article you cited? Everyone already knows that the universe is bigger than the observable universe.
This just places a lower limit on the size of the universe if it is positively curved.
Wow, so many people here who think they're experts in cosmology just from browsing the web. If you were actually interested in cosmology, you would care enough to read a basic introduction on the
1 / 5 (6) Mar 06, 2012
Everyone already knows that the universe is bigger than the observable universe.
Of course, but by Big Bang theory the diameter of the Universe can be only 93 billion light years - not 3425 billion light years. http://en.wikiped...rse#Size So it's just you who is the personality claimed above and who cannot read the basic articles about the subject... :-P I'm not sayin' "wow" though, because I'm not very surprised by it.
5 / 5 (4) Mar 06, 2012
Everyone already knows that the universe is bigger than the observable universe.
Of course, but by Big Bang theory the diameter of Universe can be 93 billion light years only - not 3425 billion light years. http://en.wikiped...rse#Size So it's just you, who is the personality
above claimed and who cannot read the basic articles about subject... :-P I'm not sayin' "wow" though, because I'm not very surprised with it.
No. You have no idea what you are talking about. The universe is INFINITE in every possible scenario except when omega>1. Go ahead and solve the Friedmann equation for a flat universe with the
current best estimates of energy densities for the different components and it will be obvious. You are an uneducated fool with no background in cosmology.
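For anyone who wants to try that without a cosmology course, here is a minimal Python sketch (my own illustration, with assumed round values H0 = 70 km/s/Mpc, Omega_m = 0.27, Omega_Lambda = 0.73, not anyone's official figures) that integrates the first Friedmann equation for a flat universe and recovers an age close to the quoted 13.7 billion years:
import math

H0 = 70.0                        # Hubble constant in km/s/Mpc (assumed round value)
Om, OL = 0.27, 0.73              # matter and dark-energy fractions, roughly WMAP-era
H0_per_Gyr = H0 / 978.0          # 1 km/s/Mpc is about 1/978 per Gyr

def E(a):                        # dimensionless expansion rate H(a)/H0 for flat LCDM
    return math.sqrt(Om / a**3 + OL)

# age: t0 = integral from a=0 to a=1 of da / (a * H0 * E(a)), done as a simple Riemann sum
N, t0 = 100000, 0.0
for i in range(1, N + 1):
    a = i / N
    t0 += (1.0 / N) / (a * H0_per_Gyr * E(a))

print("age of a flat universe with these densities: %.1f Gyr" % t0)   # about 13.9 Gyr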
1 / 5 (1) Mar 06, 2012
"Of course, to be able to get to theories as close as possible to general relativity, the microscopic models have to be considerably more complex than BECs."
Note Lorentz contraction, time dilation, and mass increase can be easily explained using a very simple quantum model of spacetime - hollow shells of energy carried on the surface, with propagation
along this surface. A 2d cut of one of these quanta would be a circle with diameter ab, for instance. The propagation occurs on the circumference of the circle but the distance which the quantum
travels would be the distance ab. Applied force (for example gravity) causes the circle to collapse so the distance ab becomes smaller. That is the distance which the quantum travels in the time it
takes to propagate around the perimeter becomes smaller, but the distance around the perimeter remains constant. So the quantum appears to slow down.
1.9 / 5 (9) Mar 06, 2012
No. You have no idea what you are talking about. The universe is INFINITE in every possible scenario except when omega>1
I don't believe you. You should rewrite the corresponding section at Wikipedia first - or you should consult the list of common misconceptions here http://en.wikiped...ceptions The infinite size of the Universe is valid only for a Universe of infinite age in the Friedmann models (which you apparently don't understand at all). Anyway, Wikipedia says clearly how large the Universe is predicted to be by Big Bang theory. You're not a relevant source of information for me. You should rewrite its record first, or I will simply not discuss it with you here.
You are an uneducated fool with no background in cosmology.
And you're a victim of the Dunning-Kruger effect. You're so silly you cannot even recognize who is actually clever here and who is not.
1 / 5 (2) Mar 06, 2012
The area of the circle is reduced as the quantum is compressed, so the energy density (mass) of the quantum is increased. Finally if an infinite force is applied to the quantum its area goes to zero
and its mass density goes to infinity. So in this model mass is the actual energy density of spacetime quanta. The spacetime quanta of matter take many different forms but its mass depends only on
the density of these quanta.
1 / 5 (5) Mar 06, 2012
The area of the circle is reduced as the quantum is compressed
A quantum? The "quantum" means nothing, so it cannot be compressed. It has no volume defined.
3 / 5 (2) Mar 06, 2012
The area of the circle is reduced as the quantum is compressed
A quantum? The "quantum" means nothing, so it cannot be compressed. It has no volume defined.
Try for example one planck volume of spacetime.
1 / 5 (5) Mar 06, 2012
Try for example one planck volume of spacetime.
I'm not obliged to check anything. This is not a "quantum". This is a "Planck volume", i.e. not a "quantum" and it's invariant until the Planck constant remains constant. Do you realize, how mentally
incoherent your posts actually are? You're using the words and concepts of mainstream physics outside the scope of their meaning. If the "quantum" can "expand", then everything is actually possible:
the "space-time" can "jump" or the "energy" can "sink" or whatever else.
5 / 5 (5) Mar 06, 2012
The universe is NOT expanding, red-shift is NOT due to expansion but to photons loosing momentum due to encounters with electric charged particles, etc.
Such interactions produce effects that are frequency dependent:
Cosmological redshift is independent of frequency. In 1985, Wolfe et al studied light from quasar PKS0458-02. The quasar itself is at z=2.29 but the light from it passes through a gas cloud at z=
2.039. The Lyman alpha line and the 21cm line have the same redshift to within 0.03% even though the frequencies differ by a factor of over 172,000:
In addition, distant supernova light curves last longer because the distance between us and the source increases during the time we can watch them so expansion is proven beyond doubt. Deal with it.
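To illustrate the frequency-independence point with a toy sketch of my own (not the Wolfe et al. analysis itself): expansion stretches every wavelength by the same factor (1+z), so two lines from the same cloud must show the same redshift no matter how different their rest frequencies are.
z_cloud = 2.039                        # redshift of the absorbing cloud quoted above
rest_nm = {"Lyman-alpha": 121.567,     # rest wavelengths in nanometres (standard values)
           "21 cm line": 2.11e8}

for name, rest in rest_nm.items():
    observed = rest * (1.0 + z_cloud)  # expansion multiplies every wavelength by (1+z)
    print(name, "measured z =", round(observed / rest - 1.0, 6))
# both print z = 2.039; a scattering mechanism acting on photons would in general
# depend on frequency and would not give the same shift for both lines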
1 / 5 (5) Mar 06, 2012
Such interactions produce effects that are frequency dependent
In the dense aether model the light is indeed not slowed down by the particles of the solar wind (which are of short-distance scope and interact with charged bodies only) - but by the omnipresent density fluctuations of the vacuum, which manifest themselves as CMBR photons. This effect is indeed frequency dependent. For example, the CMBR photons cannot disperse with themselves, which means that at the wavelength of the CMBR all red shift effects will effectively disappear from the Universe and the Universe will appear rock-steady at this wavelength. This prediction of the aether model has been confirmed many times already. In addition, it follows from AWT that for light of longer wavelength the red shift effects will be reversed and the Universe will appear to be collapsing. Again, we have observational evidence of these effects.
1 / 5 (1) Mar 06, 2012
Try for example one planck volume of spacetime.
I'm not obliged to check anything. This is not a "quantum". This is a "Planck volume", i.e. not a "quantum" and it's invariant until the Planck constant remains constant. Do you realize, how
mentally incoherent your posts actually are? You're using the words and concepts of mainstream physics outside the scope of their meaning. If the "quantum" can "expand", then everything is
actually possible: the "space-time" can "jump" or the "energy" can "sink" or whatever else.
You're right. I certainly wouldn't say the quantum can expand.
1 / 5 (4) Mar 06, 2012
The fluctuations which would be able to disperse the light while shifting its frequency must be very thin and sparse, so that the light wave can actually travel through them without changing its direction. Only the quantum fluctuations of the vacuum have such properties. Charged particles disperse the light by Compton scattering, which changes the direction of the light, and such light is effectively wasted for a terrestrial observer. In addition, the effective cross-section of such an interaction is very low, because the solar wind is very sparse in comparison to the photon flux from stars. We would always have two portions of light: a scattered one with frequency shifted with distance and a non-scattered one, which wouldn't depend on the distance. It would mean we wouldn't see the spectral lines of elements, but spectral bands in the light of distant galaxies.
You're right. I certainly wouldn't say the quantum can expand.
Then it cannot be compressed. Problem solved.
4 / 5 (4) Mar 06, 2012
After then it cannot be compressed. Problem solved.
Unless you compress it into some form of matter, or otherwise apply a force such as gravity, as in the model proposed.
1 / 5 (2) Mar 06, 2012
I believe that the purpose of these comments should be to bring forth new ideas, preferably as they pertain to the article. If the ideas are new then at least 95% of accredited scientists will disagree. Your personal opinions of someone else's ideas are of no importance. We don't need you to protect us from radical ideas. Express your own idea and be done. This is aimed at no one in particular. Sorry this doesn't pertain to this specific article.
5 / 5 (8) Mar 06, 2012
Such interactions produce effects that are frequency dependent
In dense aether model ...
There is no "dense aether model", a gaseous or liquid model cannot support transverse waves as I explained to you four days ago in this thread:
5 / 5 (8) Mar 06, 2012
I believe that the purpose of these comments should be to bring forth new ideas preferably as they pertain to the article. If the ideas are new then at least 95% of accredited scientists will
That is because the posters here generally have limited knowledge of the current state of observational evidence. For example one poster has suggested that cosmological redshift might be due to
particle interactions. He was obviously unaware of the use of that effect in measuring electron densities in the light path of pulsars and the widely known fact that cosmological redshift is
frequency independent.
No purpose is served by people spending their time posting ideas that are not viable due to their ignorance of what we already know. For example, my posting the link to the measurements made by Wolfe
et al. doesn't stop anyone thinking about the topic, but it does give them another data point that any workable theory must match.
5 / 5 (5) Mar 06, 2012
No. You have no idea what you are talking about. The universe is INFINITE in every possible scenario except when omega>1
I don't believe you. You should rewrite the corresponding section at Wikipedia first - or you should consult the list of common miconceptions here http://en.wikiped...eptions. You're so silly,
you even cannot recognize, who is actually clever here and who not.
You are so incredibly arrogant it's shocking. You don't even understand the Dunning-Kruger effect which you're referring to. The effect refers to the phenomenon where incompetent people (such as
yourself) are too incompetent to realize why they are wrong.
I actually know what I'm talking about. I've taken courses in cosmology and astrophysics and I have personally solved the Friedmann equation more times than I can count.
I will repeat again since you can't grasp it. The observable universe is NOT the same as the size of the actual universe!
5 / 5 (7) Mar 06, 2012
This just places a lower limit on the size of the universe if it is positively curved.
Wow, so many people here who think they're experts in cosmology just from browsing the web. If you were actually interested in cosmology, you would care enough to read a basic introduction on the
"Bewertow" is correct, the "concordance model" which is the best fit to what we know is as close to flat as we can measure implying that it is either infinite or at least much bigger than the tiny
patch we can observe. If anyone wants to learn a bit about the current models, this is probably the most cited tutorial around and reasonably accessible for anyone with some basic physics background:
For those who want to dispute the model, finding out what it says rather than tilting at windmills is probably a good idea too ;-)
5 / 5 (3) Mar 06, 2012
I don't believe you. You should rewrite the corresponding section at Wikipedia first
Solve the Friedmann equation yourself if you don't believe me. Even if you consider a simplified two-component model (which is trivial to solve) then you can easily see what is going on.
If you can't even solve a simple ODE for yourself, then your opinions are clearly meaningless and irrelevant.
1 / 5 (4) Mar 06, 2012
No Friedmann model predicts infinite size for finite age Universe. Period.
5 / 5 (5) Mar 06, 2012
No Friedmann model predicts infinite size for finite age Universe. Period.
Oh really? So you took my advice and solved the Friedmann equation and proved it?
Oh right, I forgot, you are too incompetent to even solve a simple ODE!
I will repeat again since you are so incredibly stupid: THE ONLY CASE WHERE THE UNIVERSE IS FINITE IN SIZE IS FOR OMEGA>1. There is no arguing with this. This is literally the first or second chapter
in any introductory cosmology book. Get your fat ass to the library and check out a book if you don't believe me.
5 / 5 (5) Mar 06, 2012
No Friedmann model predicts infinite size for finite age Universe. Period.
You don't even understand the diagram you linked to.
The separation between galaxies is NOT the same as the size of the universe. The separation between galaxies is determined by the scale factor.
Seriously, you don't even understand the most basic principles of cosmology, astrophysics or GR.
1 / 5 (6) Mar 06, 2012
THE ONLY CASE WHERE THE UNIVERSE IS FINITE IN SIZE IS FOR OMEGA >1
LOL... :-) http://www.physic...odel.gif It's evident you're shouting about a universe of infinite age, not about the Big Bang universe, which is just 13.7 Gyr old. These mathematicians and ODE solvers... Do you see? They will remain silly 4ever.....;-)
5 / 5 (6) Mar 06, 2012
No Friedmann model predicts infinite size for finite age Universe. Period.
The word "open" means infinite.
Like cometary orbits, the "flat" and "hyperbolic" cases are infinite; only the "hyperspherical" case is finite. The volume of the universe is then like the surface of a sphere, finite but without a boundary.
It's evident, you're shouting about universe of infinite age, not about the Big Bang universe, which is just 13.7 GYrs old.
Nope, he's teaching you cosmology 101.
5 / 5 (5) Mar 06, 2012
THE ONLY CASE WHERE THE UNIVERSE IS FINITE IN SIZE IS FOR OMEGA >1
LOL... :-) http://www.physic...odel.gif It's evident, you're shouting about universe of infinite age, not about the Big Bang universe, which is just 13.7 GYrs old. These mathematicians and ODE
solvers.. Do you see? They will remain silly 4ever.....;-)
I can't tell if you're just trolling, or if you are actually this incredibly stupid.
You keep linking to the same diagram. Scale factor is NOT the same as the size of the universe. I have been repeatedly telling you this over and over but you can't get it through your head.
The parameter which determines whether the universe is finite or infinite is the curvature constant. Hyperbolic and flat universes are ALWAYS INFINITE. We live in a flat, and therefore infinite, universe.
Unlike you I actually have a degree in physics. I have studied cosmology. I know what I'm talking about.
5 / 5 (6) Mar 06, 2012
"Also remember that the o = 1 spacetime is infinite in extent so the conformal space-time diagram can go on far beyond our past lightcone, ..."
Prof. Wright's background:
5 / 5 (5) Mar 06, 2012
The parameter which determines whether the universe is finite or infinite is the curvature constant. Hyperbolic and flat universes are ALWAYS INFINITE. We live in a flat, and therefore infinite, universe.
We can't be entirely sure of that though, inflation pushes curvature so close to flat that it could be just one side or the other and the difference would be immeasurable. Dark energy of course
ensures there won't be a crunch either way so it's still uncertain if the universe is infinite or merely vastly larger than our small observable portion.
There is a fundamental difference in philosophical terms between the two but pragmatically they are indistinguishable.
1 / 5 (6) Mar 06, 2012
The word "open" means infinite.... Unlike you I actually have a degree in physics. I have studied cosmology. I know what I'm talking about.
LOL, exactly as I expected...:-) Can you please depict on this diagram the actual size/scale factor of the Universe according to contemporary Big Bang cosmology? Just draw a line segment, label it and link back to this forum.
1 / 5 (6) Mar 06, 2012
inflation pushes curvature so close to flat that it could be just one side or the other and the difference would be immeasurable
You apparently didn't understand Big Bang cosmology, in which the Universe emerged from a sub-Planckian singularity and is of finite age. Such a universe may be flat or not (it's completely irrelevant in this particular context) - but it will always remain of finite size, simply because a limited singularity cannot expand into infinity in a limited time.
BTW it's just me and GuruShabu who have been opposed here for the opinion that the Universe is infinite. Now you're obstinately trying to convince us of the very same thing...;-)
1 / 5 (1) Mar 06, 2012
The string theory based derivation of space-time dimensionality you can find for example here
No luck.
1 / 5 (2) Mar 06, 2012
Remove the comma at the end of the link and it will work for you. Be careful what you click, as it can save you a lotta trouble.
1 / 5 (3) Mar 06, 2012
So a candidate for stratification is constant curvature?
When language falls short of describing observations...
everyone takes sides.
5 / 5 (3) Mar 07, 2012
The word "open" means infinite...
LOL, exactly as I expected...:-) Can you please depict on http://www.physic...odel.gif the actual size/scale factor of Universe by contemporary Big Bang cosmology? Just draw the segment of a
line, label it and link back to this forum.
What you ask for is already on the diagram. The line marked "Omega=1 flat" is where WMAP etc. put us (though it should curve up to the right), and it is also the boundary between finite and infinite.
Above or on that line, the universe is spatially infinite while below it space is finite but unbounded.
You don't appear to understand what the term "scale factor" means. It is a fractional change comparing distances between widely separated objects at different times. By convention, it has the value 1
at the present time. On its own it doesn't define the overall size. This is basic stuff, check any textbook on the subject.
1 / 5 (4) Mar 07, 2012
@Fleetfoot: OK, so do you agree that the Universe is always infinite in Friedmann models and in Big Bang cosmology it's always finite, which makes these two models mutually incompatible?
5 / 5 (3) Mar 07, 2012
@Fleetfoot: OK, so do you agree that the Universe is always infinite in Friedmann models and in Big Bang cosmology it's always finite, which makes these two models mutually incompatible?
No. When Fred Hoyle coined the name "Big Bang model", he was talking of the expanding, finite-age solution to the Friedmann Equations; they are one and the same.
Ignoring dark energy for a moment, the maths is simple. If the density of the universe was greater than a critical value, expansion would stop and reverse resulting in a "Big Crunch". That universe
is also spatially of finite volume but unbounded, like the area of the surface of a sphere.
If the density is less than or equal to the critical value, the expansion would continue forever but would always be slowing like a bullet fired from the Earth at greater than escape velocity. That
universe would be spatially infinite.
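To put a number on that critical value, here is a one-off sketch (mine, with an assumed round H0 of 70 km/s/Mpc) using the textbook relation rho_c = 3*H0^2/(8*pi*G):
import math

G  = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000.0 / 3.086e22      # 70 km/s/Mpc converted to 1/s
rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)

print("critical density: %.1e kg/m^3" % rho_c)                              # about 9e-27 kg/m^3
print("roughly %.0f hydrogen atoms per cubic metre" % (rho_c / 1.67e-27))   # about 5 or 6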
Observations say that the universe is probably flat:
1 / 5 (1) Mar 07, 2012
The twentieth century was the century of major scientific revolutions, triggered by Quantum Mechanics (which profoundly changed what we know about matter) and by Einstein's General Theory of Relativity (which radically changed what we know about time and space), in addition to a growing interest in the study of nonlinear dynamic systems (which changed what we know of the dynamics of physical phenomena). The question is: do we have a reference paradigm that properly integrates these three descriptions of the three fundamental aspects of reality - matter-energy, space-time, dynamics? So welcome to any effort that goes in this direction.
1 / 5 (3) Mar 07, 2012
When Fred Hoyle coined the name "Big Bang model", he was talking of the expanding, finite age solution to the Friedmann Equations, they are one and the same.
I don't care what some Fred Hoyle twaddled about fifty years ago - my question is related to Big Bang theory as it's accepted TODAY. In this theory (no matter whether it's called the L-CDM or FLRW model today) the Universe started its existence 13.7 billion years ago, so it's of FINITE age. The flat Universe in Friedmann's model for omega = 1 is INFINITE. Can you spot this difference?
5 / 5 (3) Mar 07, 2012
When Fred Hoyle coined the name "Big Bang model", he was talking of the expanding, finite age solution to the Friedmann Equations, they are one and the same.
I don't care, what some Fred Hoyle twaddled about before fifty years
Obviously, but if you learned a little about the subject before making wild statements, you wouldn't make so many errors.
my question is related to Big Bang theory, as it's accepted TODAY. In this theory (no-matter, whether it's called the L-CDM or FRLW model today) the Universe started its existence before 13,7
billions of years so it's of FINITE age. The flat Universe in Friedman's model for omega = 1 is INFINITE. Can you spot this difference?
For omega <= 1, the model says that spatial slices are and always have been of infinite extent.
The question is can you understand the difference between "age" and "spatial extent"?
1 / 5 (4) Mar 07, 2012
The question is can you understand the difference between "age" and "spatial extent"?
If you say the universe is flat and of infinite spatial extent in the Friedmann model (which is part of the Standard model of cosmology), then this universe cannot be of finite age. In addition, the Friedmann model apparently describes neither the initial explosion from the singularity nor the inflation which came later.
Anyway, the whole formal model is pretty bothersome to me; it just extrapolates the Universe's formation with relativity (in a pretty inconsistent way) - but it doesn't explain what really happened with it, why it exploded, why it inflated, why it's expanding at an accelerating speed. Contemporary cosmology is just a chain of formal regressions glued and fitted to observations. And because it lacks sense, it remains ad hoc.
5 / 5 (5) Mar 07, 2012
The question is can you understand the difference between "age" and "spatial extent"?
If you said, the universe is flat and of infinite spatial extent in Friedman model (which is part of Standard model of cosmology), then this universe cannot be of finite age. ...
You need to think before posting: the curves in that plot all reach a scale factor of zero at a finite time in the past; that is the basic Big Bang model. What did you think it showed?
Anyway, the whole formal model is pretty bothering for me, it just extrapolates the Universe formation with relativity .. but it doesn't explain, what really happened with it
We live a long time after that event and the universe was opaque for the first 378 thousand years. In the absence of a QM model, extrapolating from what we can see is the best we can do.
1 / 5 (4) Mar 07, 2012
the curves in that plot all reach a scale factor of zero at a finite time in the past
Well exactly - the Universe was of zero size at the very beginning, so by the same model it cannot be infinite at 13.7 Gyr, whatever the actual omega value.
5 / 5 (5) Mar 07, 2012
the curves in that plot all reach a scale factor of zero at a finite time in the past
Well exactly - the Universe was of zero size at the very beginning, so it cannot be infinite at the 13,7 Gyrs time by the same model, despite the actual omega value.
Zero scale factor times infinite extent doesn't give zero size; the product is undefined. That's the trouble with singularities.
To be more realistic, the Friedmann Equations are classical so they don't take account of QM effects. They can't tell us how that first event occurred or whether it resulted in a universe that is
finite or infinite, only that it happened 13.7 billion years ago.
1 / 5 (4) Mar 07, 2012
Zero scale factor times infinite extent doesn't give zero size
Once again: the extent handled by the Friedmann equations applies to the observable Universe, which is not infinite now, 13.7 Gyr after the Big Bang. So it cannot have been infinite at the "zero time" either, when the Universe was supposed to be way smaller.
5 / 5 (4) Mar 07, 2012
Zero scale factor times infinite extent doesn't give zero size
Once again: the extent handled with Friedmann equations applies to observable Universe, ...
No, the Friedmann Equations can be derived from the postulate that the universe is homogeneous and isotropic (as shown by Robertson and Walker), so if they apply anywhere, they apply everywhere.
Unfortunately there is no easy way to determine whether the universe is finite or infinite.
1 / 5 (4) Mar 07, 2012
Unfortunately there is no easy way to determine whether the universe is finite or infinite.
You cannot have infinite space-time of finite age in relativity nor in Big Bang theory. The observable Universe is definitely finite by observation, and from these observations the finite age and the Big Bang model were deduced. The Friedmann equations were derived from observable data, so they do apply to the observable Universe as well.
Here are other connections which don't follow directly from the above Friedmann model logic, but they're relevant to the Big Bang model as well. For example, a well known argument for a finite Universe is Olbers' paradox. It's believed the light of distant objects is hidden by the reionization epoch (dark ages), so they're unobservable. In Big Bang theory this epoch covers the particle horizon of the Universe, thus placing a strict limit not only on the observable Universe's size, but on the size of the whole Universe.
1 / 5 (4) Mar 07, 2012
If the Universe were really infinite, then we shouldn't observe the older epochs of Universe formation, because the particle horizon would be enough to hide them from our eyes. Just the fact that the Universe expands faster than the speed of light behind the Hubble deep field would be sufficient to hide all older objects. But Big Bang theory considers that the Universe is of finite age and it puts the origin of the Universe right before the particle event horizon, because the density of the observable Universe at this place reaches the GUT scale limit.
So, maybe it's easy to determine whether the universe is finite or infinite, maybe not - but Big Bang theory rather clearly implies a finite size of the Universe, which doesn't differ very much from the observable Universe's size. It can be ten times larger at best.
5 / 5 (4) Mar 08, 2012
You cannot have infinite space-time of finite age in relativity neither Big Bang theory.
On the contrary, for omega <= 1, that is the model that GR produces.
Observable Universe is definitely finite by observations and from these observation the finite age and Big Bang model were deduced. Friedman equations were derived from observable data, ..
That is wrong again, Friedmann published the equations as a purely theoretical solution to GR in 1922.
Hubble found the first Cepheid variable in the Andromeda Galaxy in 1923 proving that it wasn't just a gas cloud in our own galaxy as Shapley was arguing. Bear in mind the Great Debate was only 2
years before Friedmann published his solution.
5 / 5 (5) Mar 08, 2012
If the Universe would be really infinite, then we shouldn't observe the older epochs of Universe formation, because the particle horizon would be enough to cover them before our eyes.
We can observe that epoch for "nearby" material, stuff from which light has taken nearly 13.7 billion years to reach us, but there is more stuff farther away that we can't see, and never will.
Just the fact, the Universe expands faster, than the speed of light behind the Hubble deep field would be sufficient to hide all older objects.
That's another common misconception. The HDF includes galaxies up to a redshift of 6 while the distance between us and galaxies at a redshift of about 1 was increasing by about 1 light year per year. That's one reason why you cannot use a model with galaxies moving through space and redshift caused by Doppler shift; you have to use GR and the model of expanding space.
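That figure can be checked with a short numerical sketch (my own, assuming round flat-LCDM parameters Omega_m = 0.27, Omega_Lambda = 0.73, not a quoted result): the comoving distance to z=1 times H0 gives today's growth rate of the proper distance, and it comes out at roughly 0.8 light years per year.
import math

Om, OL = 0.27, 0.73

def E(z):                              # H(z)/H0 for flat LCDM
    return math.sqrt(Om * (1.0 + z)**3 + OL)

# comoving distance D = (c/H0) * integral from 0 to 1 of dz/E(z), trapezoid rule
N, s = 1000, 0.0
for i in range(N):
    z1, z2 = i / N, (i + 1) / N
    s += 0.5 * (1.0 / E(z1) + 1.0 / E(z2)) / N

# today the proper distance grows at v = H0 * D = c * s, so s is the rate in light years per year
print("distance to a z=1 galaxy grows by %.2f light years per year" % s)   # about 0.8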
1 / 5 (4) Mar 08, 2012
one reason why you cannot use a model with galaxies moving through space and redshift caused by Doppler shift
This is completely irrelevant to my objection. I know the Friedmann equations are based on general relativity and they don't provide a prediction of the actual Universe size. This size is defined by other constraints of the L-CDM model, by the critical Universe density in particular. In a pure GR model described with the FLRW metric the most distant objects would disappear from the sky due to their red shift. In Big Bang theory they disappear way sooner because of the dark epoch of reionization.
Whereas in the steady state Universe model the galaxies don't move at all and the space-time doesn't expand. The light propagates with increasing speed while dispersing itself. This is why the more distant galaxies appear relatively larger than the close ones, whereas in GR they should collapse with the condensing space-time accordingly.
5 / 5 (4) Mar 08, 2012
This is completely irrelevant to my objection.
Of course it's relevant, you were claiming we couldn't see older objects if expansion between us exceeded the speed of light and that's simply not true. You have so many misconceptions about the
model that your objections don't even make sense.
I know, the Friedman equations are based on general relativity and they don't provide the prediction of actual Universe size. This size is defined with another constrains of L-CDM model, with
critical Universe density in particular.
Nope, the density determines curvature and whether it is open or closed but not the size.
In pure GR model described with FLRW metric the most distant objects would disappear from sky due their red shift. In Big Bang theory they do disappear because of dark epoch of reionization way
Reionisation ended around z=6 (Gunn Peterson trough). The first stars are estimated at around z=65. We can see the CMBR at z=1089. Try again.
4.5 / 5 (8) Mar 08, 2012
They can't tell us how that first event occurred or whether it resulted in a universe that is finite or infinite, only that it happened 13.7 billion years ago -Fleetfoot
Indeed. I don't know which notion boggles the mind more - that the universe is infinite or that it has a finite limit.
2 / 5 (4) Mar 08, 2012
Seriously, you don't even understand the most basic principles of cosmology, astrophysics or GR.
If we are to have in the Universe an average density of matter which differs from zero, however small may be that difference, then the Universe cannot be quasi-Euclidean. On the contrary, the results
of calculation indicate that if matter be distributed uniformly, the Universe would necessarily be spherical (or elliptical). Since in reality the detailed distribution of matter is not uniform, the
real universe will deviate in individual parts from the spherical, but it will be necessarily finite. In fact the theory supplies us with a simple connection between the space-expanse of the universe
& the average density of matter in it.
Albert Einstein:Relativity-Section 30
Written: 1916 (revised edition 1924)
Part III: Considerations on the Universe as a Whole
"The Structure of Space According to the General Theory of Relativity"
5 / 5 (4) Mar 08, 2012
Seriously, you don't even understand the most basic principles of cosmology, astrophysics or GR. ...
This is from the WMAP site but it is a standard result you will find in most textbooks:
"The density of the universe also determines its geometry. If the density of the universe exceeds the critical density, then the geometry of space is closed and positively curved like the surface of
a sphere. .. If the density of the universe is less than the critical density, then the geometry of space is open (infinite), and negatively curved like the surface of a saddle. If the density of the
universe exactly equals the critical density, then the geometry of the universe is flat like a sheet of paper, and infinite in extent.
We now know that the universe is flat with only a 0.5% margin of error. This suggests that the Universe is infinite in extent; however, since the Universe has a finite age, we can only observe a
finite volume of the Universe."
5 / 5 (2) Mar 08, 2012
This is from the WMAP site but it is a standard result you will find in most textbooks:
The link wouldn't fit into the character limit, it is:
1 / 5 (1) Mar 08, 2012
"Of course, to be able to get to theories as close as possible to general relativity, the microscopic models have to be considerably more complex than BECs."
Not according to my posts which were censored yesterday.
1 / 5 (2) Mar 08, 2012
"In recent years, the idea that spacetime is a form of condensate is gaining momentum."
Isn't that ether again..?
I think they're talking about a condensate of energy. Spacetime itself is more like a void; it only contains energy. Spacetime doesn't expand, only the distribution of energy within it.
1 / 5 (4) Mar 08, 2012
I think they're talking about a condensate of energy
A condensate of energy has no shape and geometry, so you can deduce nothing from such a concept. In addition, it was never observed - we observed only some kind of photon condensation inside a dense environment, where the photons gain positive rest mass. Of course, such a condensate is still a better model of the vacuum with respect to aether theory than nothing - but this concept is fairly old already. Apparently, mainstream physics converges toward the dense aether model again.
1 / 5 (2) Mar 08, 2012
We now know that the universe is flat with only a 0.5% margin of error. This suggests that the Universe is infinite in extent; however, since the Universe has a finite age, we can only
observe a finite volume of the Universe."
What we know is that it's flat as far out as the measurement limits of present instrumentation reach into the Universe. If the Universe is a lot bigger than the present consensus among cosmologists holds, and I mean like hundreds or thousands of times bigger, that creates odds in Einstein's favor that the Universe is "spherical & closed".
1 / 5 (5) Mar 08, 2012
We now know that the universe is flat with only a 0.5% margin of error
We know the Universe expands at an accelerating rate, so it cannot be flat (if somebody claims the opposite, then the last Nobel prize should be returned). In addition, even if the Universe is flat, the limited speed of light implies a rather strict limit on the Universe's size, which may be only 250x larger than the observable Universe's size.
Everything else is speculation which doesn't follow from the L-CDM model, but from some other private cosmology which is inconsistent with it.
1 / 5 (5) Mar 08, 2012
After finding of red shift Hubble wrote six years later:
"The velocity-distance relation is linear, the distribution of the nebula is uniform, there is no evidence of expansion, no trace of curvature, no restriction of the time scale and we find ourselves
in the presence.... If redshifts are velocity shifts which measure the rate of expansion, the expanding models are definitely inconsistent with the observations that have been made, i.e. expanding
models are a forced interpretation of the observational results. If the redshifts are a Doppler shift, then observations as they stand lead to the anomaly of a closed universe, curiously small and
dense, and, it may be added, suspiciously young. On the other hand, if redshifts are not Doppler effects, these anomalies disappear and the region observed appears as a small, homogeneous, but
insignificant portion of a universe extended indefinitely both in space and time."
3.7 / 5 (9) Mar 08, 2012
I would normally not nitpick as there is lots of dirty laundry flying around in the comments sections (with most of it not even being worth the few atoms it occupies for storage), but after reading a few of the last posts by Fleetfoot feat. Teh Infamous Zephyr (aka Callipoo, aka Kynedril, aka..) I just couldn't help but quote this little bit..
This suggests that the Universe is infinite in extent; however, since the Universe has a finite age, we can only observe a finite volume of the Universe.
Well, it's all nice that you point out the errors of others, try to teach them proper physics, and free them from their (obvious) "misconceptions"..
But do you ever try to fully comprehend what you write yourself?
For starters - many of your arguments were about a model which is based on GR. But regardless of that, you managed to butcher one of the most fundamental tenets of relativity just within the above
quoted single sentence..
3.5 / 5 (8) Mar 08, 2012
The tenet being that SPACE and TIME are ONE entity.
There is no such thing as space without time (and vice-versa), up to a point where one could even say that space is an emergent property of time.
My point being that you simply can NOT suggest infinite space WITHOUT infinite time to support it (as you did in the above quoted sentence).
The fact that you can divide and multiply zeroes/infinities on paper does not imply that it has any resemblance to reality whatsoever.
Maybe I am being too picky here and you just used slightly wrong words to make yourself clear..
Seeing as you are basing it on Omega, perhaps "no limit" (from an intrinsic perspective) would have been more fitting than "infinite extent" in this case?
2.6 / 5 (5) Mar 08, 2012
I explain..
[Omega > 1] - would basically represent a black hole from the INSIDE perspective. Being seemingly infinitely large when observed from the inside (due to full 4pi curvature), but seemingly infinitesimally small when observed from the outside (being below the SS radius).
[Omega = 1] - could be called "Schwarzschild unity" in the BH slang :-)
[Omega < 1] - is where my brain refuses to cooperate, but suggests that we are part of an (ever faster) expanding "explosion", as is depicted by the BB model.
So yes, for [Omega > 1], from the "inside" perspective, you can move towards the "edge" but never be able to reach it, giving an impression of infinite freedom (e.g. infinite extent), but this is just an illusion, as essentially you would be "moving in circles" at some point.
This is not the same as "infinite extent", as is clearly demonstrated by the "outside" perspective.
And as I don't want to play the semantics violin, I omit the rest of my response.. x-D
1 / 5 (4) Mar 08, 2012
My point being, that you simply can NOT suggest infinite space WITHOUT infinite time to support it (as you did in the above quoted sentence)
This is essentially what I argued a few pages earlier in this thread.
1 / 5 (6) Mar 08, 2012
Funnily enough, both your comment and Fleetfoot's were upvoted by the same voting bots/trolls yyz and CardacianNeverid, who are apparently already confused about the subject of this discussion.
4.6 / 5 (11) Mar 09, 2012
Funnily enough, both your comment, both Fleetfoot's one were upvoted with the same voting bots/trolls yyz and CardacianNeverid, who are apparently confused with subject of this discussion already
On this point you are 100% correct. You bring delusion and confusion into every thread you post in, so is it any wonder?
1 / 5 (4) Mar 09, 2012
For contemporary scientists it's advantageous to keep as many conceptual models and theories as possible, because it helps them in employment. The more theories, the more theorists can keep their jobs. For this reason I'm providing a single general concept/model which makes it possible to understand all these particular models in a consistent way.
But because it threatens the social status and employment of many people involved in the development of contemporary cosmology and formal education, it's just me who ends up accused of creating confusion. Of course, emphasizing the fact that the contemporary theories are logically inconsistent may look like an attempt to confuse readers - but is that really my problem?
1 / 5 (2) Mar 09, 2012
I appreciate your posts. The biggest problem I see with any geometry of the universe that implies "infinite parameters" is "information loss". This is the conundrum Hawking got caught up in & which Einstein avoided with his stand on a "closed & spherical" universe.
To me, a universe that has an infinite parameter, such as the "flat" or "saddle" universe, is a "leaky universe", subject to "information loss" (i.e. energy). I'd like to get an opinion from one of the two of you why "information" (energy, photons) is not lost in a universe with an unbounded parameter such as the "saddle" or "flat".
4.4 / 5 (7) Mar 09, 2012
For contemporary scientists it's advantageous to keep as many conceptual models and theories as possible, because it helps them in employment -AnotherZephyrSock
The more theories, the more theorists can keep their jobs -AnotherZephyrSock
But because it threatens the social status and employment of many people involved into development of contemporary cosmology and formal education -AnotherZephyrSock
Yep, scientists are only in it for the money and social status! Delusional - the reality is quite the opposite.
but is it really the problem of mine? -AnotherZephyrSock
Yes. Absolutely. And pick a single handle already!
1 / 5 (3) Mar 09, 2012
Yep, scientists are only in it for the money and social status!
As we know, at least forty percent of them do it already. According to this Scientific American editorial, 40% is typical.
5 / 5 (1) Mar 09, 2012
In addition, it was never observed - we observed only some kind of photon condensation inside of dense environment, where the photons gain positive rest mass.
BEC has been observed many times but mostly in alkaline earths:
Apparently, mainstream physics converges to the dense aether model again.
Hardly. Try putting these together:
5 / 5 (1) Mar 09, 2012
We now know that the universe is flat with only a 0.5% margin of error
We know, the Universe expands with accelerating speed, so it cannot be flat ..
You understand that omega=1 corresponds to flat. The main contributions to omega are:
dark energy 0.72
dark matter 0.23
IGM plasma 0.04
visible matter 0.01
The total is 1.00 to the WMAP accuracy so the universe is very close to flat. The dark energy part is what causes expansion to accelerate, if it was all matter expansion would still be slowing.
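A flat total doesn't decide whether expansion speeds up or slows down; that depends on the mix. A tiny sketch of my own using the standard relation q0 = Omega_matter/2 - Omega_Lambda (radiation neglected) with the numbers above:
O_de, O_dm, O_igm, O_vis = 0.72, 0.23, 0.04, 0.01   # the contributions listed above
O_matter = O_dm + O_igm + O_vis
O_total  = O_de + O_matter

q0 = O_matter / 2.0 - O_de           # deceleration parameter for matter + dark energy
print("total omega = %.2f" % O_total)   # 1.00 -> flat geometry
print("q0 = %.2f" % q0)                 # negative -> the expansion is accelerating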
In addition, even if the Universe is flat, the limited speed of light implies rather strict limit for the Universe size, which may be only 250x larger, than the observable Universe size.
The important words there are "a lower limit": the finite speed of light only defines what is observable; an infinite universe is a standard prediction of the LCDM model.
5 / 5 (2) Mar 09, 2012
To me, a universe that has an infinite parameter, such as the "flat" or "saddle" universe, is a "leaky universe", subject to "information loss" (ie:energy). I'd like to get an opinion from one of
the two of you why "information" (energy, photons) is not lost in a universe with an unbounded parameter such as the "saddle" or "flat".
Interesting question. My immediate reaction would be that photons are only moving around within the universe, not being lost, so I can't see any mechanism for a "leak". Redshift seems to lose
energy but energy is frame-dependent so it is no worse than the Doppler effect. The whole subject is however much more complex. This is the physics FAQ article on the question which says more than I
5 / 5 (1) Mar 09, 2012
Hubble wrote:
"If the redshifts are a Doppler shift, then observations as they stand lead to the anomaly of a closed universe, curiously small and dense, and, it may be added, suspiciously young. On the other
hand, if redshifts are not Doppler effects, these anomalies disappear and the region observed appears as a small, homogeneous, but insignificant portion of a universe extended indefinitely both
in space and time."
Close, he only got the infinite age part wrong. Compare that with what I said a few posts back:
The HDF includes galaxies up to a redshift of 6 while the distance between us and galaxies at a redshift of about 1 was increasing by about 1 light year per year. That's one reason why you cannot
use a model with galaxies moving through space and redshift caused by Doppler shift, you have to use GR and the model of expanding space.
5 / 5 (1) Mar 09, 2012
The tenet being, that SPACE and TIME are ONE entity.
There is no such thing as space without time (and vice-versa), up to a point where one could even say that space is an emergent property of time. ... My point being, that you simply can NOT
suggest infinite space WITHOUT infinite time to support it (as you did in the above quoted sentence).
Emergence is a more complex question but certainly they are interchangeable to a degree. However, that doesn't mean both must be infinite, only that wherever you have space you also have time. Have a
look at the graphic below the Mercator Projection near the bottom of Ned Wright's tutorial here:
Seeing as you are baseing it on Omega, perhaps "no limit" (from intrinsic perspective) would have been more fitting than "infinite extent" in this case?
I'm using what is standard terminology in the subject. This paper may be of interest, compare figure 2 with Ned Wright's graphic:
5 / 5 (1) Mar 09, 2012
I've had to trim to <1000 chars
I explain..
[Omega > 1] - ...
[Omega = 1] - could be called "Schwarzschild unity" in the BH slang :-)
[Omega < 1] - is where my brain refuses to cooperate, ...
So yes, for [Omega > 1], from the "inside" perspective, you can move towards the "edge" but never be able to reach it, giving an impression of infinite freedom (eg. infinite extent), but this is
just an illusion, as essentially you would be "moving in circles" at some point.
This is not the same as "infinite extent", ...
The diagram on the right here may perhaps help:
The top image of a sphere shows a closed universe with the big bang at the bottom and the big crunch at the top. A horizontal slice is a circle representing the volume of the universe at that epoch.
You can think of a small patch near the "equator" as a Minkowski spacetime diagram.
The other two are for flat and negative curvature and both are infinite in extent.
5 / 5 (1) Mar 09, 2012
The first link got lost in my previous post:
Have a look at the graphic below the Mercator Projection near the bottom of Ned Wright's tutorial here:
This paper may be of interest, compare figure 2 with Ned Wright's graphic:
5 / 5 (1) Mar 09, 2012
Six years after finding the redshift, Hubble wrote:
Note at the bottom of page 507 it states that the paper adapted a formula previously derived by Tolman. It is now known as the "Tolman Test". Note also that in item 3 just above the footnotes it is
mentioned that Eddington had cautioned that there was an assumption that all galaxies had the same brightness.
This article gives a summary of more modern results:
1 / 5 (1) Mar 10, 2012
Most of the energy in the universe was concentrated in suns during the early universe, probably after inflation had finished. Now they are releasing that energy into space at an accelerating pace.
Seems logical that the expansion is speeding up.
5 / 5 (2) Mar 10, 2012
Most of the energy in the universe was concentrated in suns during the early universe, probably after inflation had finished.
If inflation happened as is currently thought, it finished around 10^-32s.
Nucleosynthesis happened when the universe was a few seconds to a few minutes old.
The light we see as the CMBR was emitted from the hot plasma when it was around 378,000 years old after which the universe was filled with little more than thin, cool hydrogen/helium gas mix.
The first stars couldn't form until it was 30 million to 130 million years old (depending on details of simulations).
3 / 5 (2) Mar 10, 2012
All these comments sound like a bunch of blind men trying to describe an elephant.
1 / 5 (2) Mar 10, 2012
Is it possible that the stretch is from orbiting a larger mass and just happen to be at max whipping point?
5 / 5 (1) Mar 10, 2012
Is it possible that the stretch is from orbiting a larger mass and just happen to be at max whipping point?
If by "stretch" you mean the Hubble expansion then no, that would cause expansion in two directions (towards the mass and along the orbit) but contraction in the direction perpendicular to the plane
of our orbit. There also seems to be no evidence of rotation though it is difficult to be sure.
1 / 5 (2) Mar 11, 2012
The first stars couldn't form until it was 30 million to 130 million years old (depending on details of simulations).
A gamma ray burst (GRB) has been recorded at 520 million years from BB/Infl. GRB's are associated with aging stars. This GRB proximity to the "first stars" at 30 - 130 million yrs seems awfully
close in time. I have looked at the charts which show the progression of a star's fusion rates until it starts to fuse iron.
If I subtract 130 from 520 I get a mere 390 million years to the first GRB of a supposedly aging star. I'm very suspicious a star can progress to forming iron in its core at such a young age,
leading me to believe the Universe is a lot older than 13.7 billion years. The redshift of that GRB is z=9.2, that is on the cusp of the supposed boundary of the universe that existed 13.7
billion light years ago. I will not be surprised when a GRB at z=10 shows up, that puts us inside the 30-130 million year before the first stars formed... cont'd..
1 / 5 (2) Mar 11, 2012
...If after a GRB at z=10 is detected we, in my opinion, are looking at a universe having formed longer than 13.7 billion years ago. Then if we start seeing them at z=11, then 12, then 13, cosmology
will go into a new metamorphosis.
The "new metamorphosis" will support Einstein's concept of a "spherical universe", & detract from the "flat universe" concept because the boundary of the universe at increased redshift beyond z=10
must shift; this allows for smaller curvature over a longer distance before the full circumference of Einstein's sphere is realized.
I look upon the "flat" or "saddle" universe with great suspicion due to the "infinity" parameter of each. Any infinity parameter strongly hints at "loss of information", hence Einstein's conclusion
of a spherically closed Universe in order that energy be conserved.
So far, no one has ever come up against Einstein & won, not even Einstein, he did it once & lost, the biggest blunder of his career.
5 / 5 (1) Mar 11, 2012
The first stars couldn't form until it was 30 million to 130 million years old (depending on details of simulations).
A gamma ray burst (GRB) has been recorded at 520 million years from BB/Infl. ... The redshift of that GRB is z=9.2
GRB's are associated with aging stars.
GRBs are thought to be from black holes, not stars.
I have looked at the charts which show the progression of a stars fusion rates until it starts to fuse iron.
You need to look up "Pop III" stars. They were the first to form so there was nothing beyond Helium in them. H and He radiate poorly so their mass was much higher, maybe 300 times our Sun so they had
lifetimes of less than 10 million years. The heavier elements weren't produced slowly but in the supernova at the end of its life, in just a few seconds perhaps.
5 / 5 (1) Mar 11, 2012
I will not be surprised when a GRB at z=10 shows up, that puts us inside the 30-130 million year before the first stars formed. .. If after a GRB at z=10 is detected we, in my opinion, are
looking at a universe having formed longer than 13.7 billion years ago.
Most textbooks say the first stars formed around z=25, which is 132 million years. Recent simulations, which note that since dark matter doesn't interact with light it doesn't feel radiation
pressure, allow it to collapse earlier and put the first stars at z=65, which is 32 million years.
WMAP suggests reionisation wasn't a sudden event but gradual, starting around z=25 but possibly higher. JWST is designed to investigate out to z=15 or more.
We can already see the CMBR from z=1090 and age 378,000 years so finding anything later than that can only raise questions about our star formation theories, not the big bang.
5 / 5 (1) Mar 11, 2012
I look upon the "flat" or "saddle" universe with great suspicion due to the "infinity" parameter of each. ... So far, no one has ever come up against Einstein & won, ...
Exactly, and infinite extent is what his models predict for a flat universe, to go against that you need to discard GR.
Any infinity parameter strongly hints at "loss of information", hence Einstein's conclusion of a spherically closed Universe in order that energy be conserved.
Energy isn't necessarily conserved in any of the models but it's a complex question. Dark energy conserves energy because it acts as a negative pressure in gravitational terms.
However, what you should consider is that for the flat universe, the negative gravitational potential energy exactly balances the positive energy of matter, radiation etc. so the total is zero. The
Hamiltonian of a closed universe is also zero so energy is conserved either way. That approach doesn't help us decide.
1 / 5 (2) Mar 11, 2012
at z=10 shows up, that puts us inside the 30-130 million year before the first stars formed. ..
Most textbooks say the first stars formed around z=25, which is 132 million years. Recent simulations, which note that since dark matter doesn't interact with light it doesn't feel radiation
pressure, allow it to collapse earlier and put the first stars at z=65, which is 32 million years.
We can already see the CMBR from z=1090 and age 378,000 years so finding anything later than that can only raise questions about our star formation theories, not the big bang.
@Fleet: Great points. I guess I don't have the scale for redshift properly scaled for distance, it appears to be logarithmic formula from the numbers you've given me.
I'm curious, how'd you like the point I made about Einstein going up against Einstein & losing. My point being that whatever issue in science Einstein leans most heavily toward, is where the rest of
us ought to be....
5 / 5 (1) Mar 11, 2012
@Fleet: Great points. I guess I don't have the scale for redshift properly scaled for distance, it appears to be logarithmic formula from the numbers you've given me.
Unfortunately it's a complex integral. The easy way is to use an applet like this one:
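If you want to see what's inside that "complex integral", here is a rough Python sketch of it - flat LCDM with round, assumed parameters (Omega_m ~ 0.27, Omega_L ~ 0.73, H0 ~ 71 km/s/Mpc), so the numbers are approximate and not necessarily what the applet uses:

from scipy.integrate import quad

# Flat LCDM age-vs-redshift integral; all parameters are assumed round values.
Om, OL, H0 = 0.27, 0.73, 71.0
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 in Gyr when H0 is in km/s/Mpc

def age_at_redshift(z):
    # Age of the universe at redshift z, in Gyr: integrate dz'/[(1+z')E(z')] from z to infinity
    integrand = lambda zp: 1.0 / ((1.0 + zp) * (Om * (1.0 + zp)**3 + OL) ** 0.5)
    integral, _ = quad(integrand, z, float("inf"))
    return HUBBLE_TIME_GYR * integral

print(age_at_redshift(0.0))   # ~13.7 Gyr, the age today
print(age_at_redshift(9.2))   # ~0.5 Gyr, roughly the GRB epoch mentioned earlier in the thread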
I'm curious, how'd you like the point I made about Einstein going up against Einstein & losing. My point being that whatever issue in science Einstein leans most heavily toward, is where the rest
of us ought to be ...
I tend not to respond to such points, authority doesn't count for much because the data on which opinions are based is always moving forward. The equations of his theory are all that matters, so far
they have never failed (in the range where they are applicable) so we have no reason to discard them. Einstein was wrong about QM and no human is infallible.
1 / 5 (4) Mar 12, 2012
Einstein was wrong about QM and no human is infallible.
This article just says the opposite. Did you miss it?
1 / 5 (3) Mar 12, 2012
It's quite easy to understand where the hole in quantum mechanics is. This is the result which QM predicts for the double slit experiment.
And this is the real result.
It looks the same, but it's not the same: the paths of individual electrons are still observable. QM just cannot predict them.
5 / 5 (1) Mar 12, 2012
Einstein was wrong about QM and no human is infallible.
http://www.physorg.com/news/2012-03-physicist-einstein-beaten-bohr-famous.html#firstCmt just says the opposite. Did you miss it?
I was referring to Einstein's often quoted "God doesn't play dice with the world.". The article you quote is not relevant to that but instead refers to non-locality. AFAIK, the outcome of specific
trials is still random in QM and can only be predicted statistically.
Regarding non-locality, Einstein's argument is stated mathematically in Bell's inequalities and the experiment by Aspect, repeated by many others for various particle types, showed that Bell's
Inequality is violated in reality as predicted by QM. Einstein might have been able to win the argument, but he would subsequently have been proven to be wrong.
5 / 5 (3) Mar 12, 2012
This is the result, which QM predicts for double slit experiment.
It looks the same, but it's not the same: the paths of individual electrons are still observable. QM just cannot predict them.
You can't see paths in either picture, all you see is where the particles were detected by the (photo-)multiplier plate.
1 / 5 (3) Mar 12, 2012
You can't see paths in either picture
You can see the dots in the second picture, or not? QM can predict only the density of these dots - their exact location on the target is an additional bonus of information, which is not predicted/supplied by QM in any way.
5 / 5 (3) Mar 12, 2012
You can't see paths in either picture
You can see the dots at the second picture or not?
Yes, those are the points where the particles hit. You can't see what path they took to get there (in fact there is no unique path, the pattern is determined by both slits). In the first picture,
there are simply a lot more dots so the image doesn't resolve them.
QM can predict only the density of these dots - their exact location on the target is an additional bonus of information, which is not predicted/supplied by QM in any way.
Exactly, QM predicts only the statistics, the place where the next dot will appear is random, to be determined by a throw of the dice in Einstein's phrase and contrary to his belief.
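To make that concrete, here is a toy sketch (every number in it is invented, it's not a model of any real apparatus): sample individual dot positions from an idealized two-slit intensity. Each position comes out random; only the histogram of many of them reproduces the smooth fringes.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 2001)                            # screen position, arbitrary units
intensity = np.cos(8 * np.pi * x)**2 * np.sinc(2 * x)**2    # idealized two-slit fringe pattern
prob = intensity / intensity.sum()

few_dots = rng.choice(x, size=100, p=prob)                  # looks like scattered dots
many_dots = rng.choice(x, size=100_000, p=prob)             # histogram approaches the smooth fringes

print(few_dots[:5])    # the individual positions are random - QM only fixes `prob`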
1 / 5 (4) Mar 12, 2012
In the first picture, there are simply a lot more dots so the image doesn't resolve them.
Nope, this is the result of solving the Schrodinger equation, i.e. the result of a simulation, which you can play with in the Java applet here. QM predicts only the statistics - the place where the next dot
will appear is random, but it's still observable. The result of the experiment is richer by just this information. You can use this additional information, for example, in a sequence of weak measurements
to determine the whole path of a particle during the double slit experiment. http://www.nature...371.html
5 / 5 (3) Mar 12, 2012
In the first picture, there are simply a lot more dots so the image doesn't resolve them.
Nope, this is a result of Schrodinger equation solution, i.e. the result of simulation,
OK, you see the same if you use a bright light and do the experiment for real.
You can use this additional information for example in the sequence of weak measurements for determination of the whole path of particle during double slit experiment.
I think you cited the wrong article, that one is about a triple slit experiment which only tested statistics, not paths.
Bottom line though is still that if you look at your original second image, there are only points of detection, no paths to those points. I have no idea what you think you are seeing.
1 / 5 (2) Mar 12, 2012
OK, you see the same if you use a bright light and do the experiment for real.
Please, don't cheat..;-) This is a prediction of the double slit experiment made with QM for both photons and electrons. But the experimental result below is valid only for electrons.
I think you cited the wrong article
It seems you're right - the citation of the experiment which I had in mind is here
Iourii Gribov
1 / 5 (5) Mar 12, 2012
The Cosmological Constant (CC) for quantum Cooper-like (electron/positron) vacuum must be ZERO. It is weightless ghost superfluid, with the composite-ghostly SUSY. The underlying Gribov
Pico-Periodical Multiverse (PPM) concept and Vallita's CPT- enlargement in the GR allow this conclusion. The PPM contains equal quantity of matter-antimatter, with theoretically estimated correct DE
/ (DM plus OM).. ratio ~74%/26%, if the CC=0. Matter and antimatter clusters are placed along 2D-bubble's surfaces and repel each other; voids are empty. The Higgs bosons are excluded by the
3D-waveguided rest-mass creation mechanism. The equal periodical-overlapped Universes/Antiuniverses have the same SM-particles and physics. Our civilization is very young between plenty of developed
hyper-civilizations (placed proximately near 10 -100 light minutes in a R4-distance around via Milky Way galaxy).
5 / 5 (1) Mar 12, 2012
OK, you see the same if you use a bright light and do the experiment for real.
Please, don't cheat..;-) This is a prediction of the double slit experiment made with QM for both photons and electrons. But the experimental result below is valid only for electrons.
Even though it is simulated, your upper picture would be typical for any double slit using a bright source, either light or electrons. Similarly, the lower image could also be the same experiment run
with dim light or a low current of electrons. That's why I used the generic term "particles", you can't tell what was used.
I think you cited the wrong article
It seems, you're right - the citation of experiment which I had on mind is http://www.scienc...abstract
That story is what I thought you meant. I don't see its relevance to what we were discussing.
1 / 5 (3) Mar 12, 2012
I thought they solved which path they take in the slit experiment by using entanglement?
5 / 5 (2) Mar 13, 2012
I thought they solved which path they take in the slit experiment by using entanglement?
There's a readable description here:
"They haven't done anything to prove orthodox quantum mechanics wrong, though I can predict with confidence that there will be at least one media report about this that is so badly written that it
implies that they did. In reality, though, their measurements are completely in accord with ordinary quantum theory. ... I confidently predict that there will be no shortage of crazy people trying to
claim this as conclusive proof for their particular favorite interpretation of quantum theory."
I still can't see a connection to the discussion of the cosmological redshift or even to my statement that I don't give much credence to authoritative opinion, it is only the equations that matter.
5 / 5 (1) Mar 13, 2012
I thought they solved which path they take in the slit experiment by using entanglement?
I should have said that what they did was find the mean momentum of a large number of photons at different locations and use that to map the average of many "trajectories" in the way that iron
filings map the flux lines of a magnet.
1 / 5 (2) Mar 13, 2012
I thought they solved which path they take in the slit experiment by using entanglement?
I should have said that what they did was find the mean momentum of a large number of photons at different locations and use that to map the average of many "trajectories" in the way that iron
filings map the flux lines of a magnet.
I was thinking of the quantum eraser, but maybe I read it wrong?
5 / 5 (1) Mar 13, 2012
I was thinking of the quantum eraser, but maybe I read it wrong?
I think so. I don't have access to the full paper but the reviews don't mention using entanglement at all and there is no attempt to measure individual trajectories, just the average of a very large
number. It only confirms QM's prediction of the overall statistics.
1 / 5 (2) Mar 25, 2012
Why are some scientists so determined to say that the Universe
is a result of expansion and that it still expands?
Does expansion sound more correct than explosion?
If we had a linear or nonlinear transformation of the Universe (expansion)
we would see something like a projection in space. Nothing more.
But the Universe is like a living object - new galaxies, stars, planets
come to life.
The elements near the beginning of the big bang were quite different
from those we see in stars (synthesis of heavier elements from the lighter ones).
So there is much more in the Universe than a plain expansion.
5 / 5 (2) Mar 25, 2012
Why are some scientists so determined to say that the Universe is a result of expansion and that it still expands?
Does expansion sound more correct than explosion?
If you think of a map of the galaxies, that map is currently expanding by about 1% every 200 million years everywhere. No matter how far away from here you looked, you would see the same overall
picture. An explosion suggests a region in space filled with matter expanding into a void which is incorrect.
So there is much more in the Universe than a plain expansion. | {"url":"http://phys.org/news/2012-03-weve-cosmological-constant-wrong.html","timestamp":"2014-04-18T15:03:04Z","content_type":null,"content_length":"301470","record_id":"<urn:uuid:d5145e8f-0892-4dd0-9373-0d7fa7a1bdb3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newest 'division-algebras involutions' Questions
Notations: Let $Q=(a,b)$ be a quaternion algebra over a field of characteristic $\neq 2$, i.e. $i^2=a, j^2=b, k=ij, ij=-ji$. Consider $K=k(t)(\alpha)$, where $\alpha=\sqrt{at^2+b}$. Let ...
Let $K$ be a skew-field, infinite dimensional over its center $F$. From Kaplansky's PI-theorem it then follows that $K$ cannot satisfy a polynomial identity (the theorem says that primitive ... | {"url":"http://mathoverflow.net/questions/tagged/division-algebras+involutions","timestamp":"2014-04-21T04:47:23Z","content_type":null,"content_length":"33609","record_id":"<urn:uuid:e188672d-c772-4ddf-ab25-7863ff8205d8>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
Poynting theorem and derivation
Posted by: amsh on November 20, 2012.
Poynting Theorem
Statement. This theorem states that the cross product of electric field vector, E and magnetic field vector, H at any point is a measure of the rate of flow of electromagnetic energy per unit area at
that point, that is
P = E x H
Here P → Poynting vector and it is named after its discoverer, J.H. Poynting. The direction of P is perpendicular to E and H and in the direction of vector E x H
Proof. Consider Maxwell’s fourth equation (Modified Ampere’s Circuital Law), that is
del x H = J + ε dE/dt
or J = (del x H) - ε dE/dt
The above equation has the dimensions of current density. Now, to convert the dimensions into rate of energy flow per unit volume, take the dot product of both sides of the above equation with E, that is
E. J = E. (del x H) – εE. dE/dt (1)
Use the vector identity
del. (E x H) = H. (del x E) – E. (del x H)
or E. (del x H) = H. (del x E) – del . (E x H)
By substituting value of E. (del x H) in equation (1) , we get
E. J. =H . (del x E) – del . (E x H) – εE dE/dt (2)
also from Maxwell’s third equation (Faraday’s law of electromagnetic induction).
del x E = -μ dH/dt
By substituting value of del x E in equation (2) we get
E. J = -μH . dH/dt – εE . dE/dt – del . (E x H) (3)
We can write
H. dH/dt = 1/2 dH^2/dt (4a)
E. dE/dt = 1/2 dE^2/dt (4b)
By substituting equations 4a and 4b in equation 3 , we get
E. J. = -μ/2 dH ^2/dt - ε/2 dE ^2/dt – del . (E x H)
E. J = -d/dt [ μH^2/2 + εE^2/2 ] – del . (E x H)
By taking volume integral on both sides, we get
∫E. J. dV = -d/dt ∫ [μ H ^2 /2 + εE^2/2 ] dV – ∫del . (E x H) dV (5)
apply Gauss’s Divergence theorem to second term of R.H.S., to change volume integral into surface integral, that is
∫del . (E x H) dV = ∫ (E x H) . dS
Substitute above equation in equation 5
∫E. J. dV = -d/dt ∫[ ε E^2/2 + μ H^2/2] dV – ∫(E x H) . dS (6)
or ∫ (E x H) . dS = ∫-d/dt [ ε E ^2 /2 + μ H^ 2 /2] dV –∫ E. J. dV
Interpretation of above equation :
L.H.S. Term
∫ (E x H) . dS → It represents the rate of outward flow of energy from a volume V, and the integral is over the closed surface S surrounding the volume. This rate of outward flow of power from a volume V is represented by
∫ P . dS = ∫ (E x H) . dS
where Poynting vector, P = E x H
Inward flow of power is represented by
- ∫ P . dS = - ∮ (E x H) . dS
R.H.S. First Term
-d/dt [ μH^2/2 + εE^2/2 ] dV → If the energy is flowing out of the region, there must be a corresponding decrease of electromagnetic energy. So here the negative sign
indicates decrease. Electromagnetic energy is the sum of magnetic energy, μH^2/2, and electric
energy, εE^2/2. So the first term of the R.H.S. represents the rate of decrease of stored electromagnetic energy.
R.H.S. Second Term
∫ (E. J) dV →Total ohmic power dissipated within the volume.
So from the law of conservation of energy, equation (6) can be written in words as
Rate of energy dissipation in volume V = Rate at which stored electromagnetic energy is decreasing in V + Inward rate of flow of energy through the surface of the volume.
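As a quick numerical illustration of P = E x H (the numbers below are made up for a plane wave in free space, just to show the direction and magnitude of the energy flow):

import numpy as np

E = np.array([100.0, 0.0, 0.0])          # V/m, electric field along x
eta0 = 376.73                            # impedance of free space, ohms
H = np.array([0.0, E[0] / eta0, 0.0])    # A/m, magnetic field along y for a plane wave

P = np.cross(E, H)                       # Poynting vector, W/m^2
print(P)                                 # ~[0, 0, 26.5]: energy flows along +z, perpendicular to E and H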
st: Converting SAS code into Stata code
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: Converting SAS code into Stata code
From "Hugh Colaco" <hmjc66@gmail.com>
To statalist <statalist@hsphsun2.harvard.edu>
Subject st: Converting SAS code into Stata code
Date Wed, 10 Dec 2008 08:34:35 -0500
Dear Statalisters,
I was given some code in SAS and need to translate it into Stata. My
dataset is in Stata. I have attempted the translation, but would
appreciate if someone would check it. I don't fully understand the
files that the author of the SAS code has created (at the beginning of
the code), but the bottom line is that the data consists of years
2002-2007. I have the same variables listed below for all these years,
each year in a separate file. In my Stata translation below, I have
used the 2002 data (original02.dta) as an example. But I will do the
same for the other years as well. Each file is very big (300MB, on
average), so I'd rather treat each one separately. I am using Stata10.
SAS code
libname tmp1 'c:\original';
data tr1; set tmp1.original1;
data tr22; set tmp1.original2;
data tr33; set tmp1.original3;
data tmp1.original0207;
set tmp1.original0203 tmp1.original04 tmp1.original05 tmp1.original06;
/* create v2 variable & recode largest values*/
data original; set tr1 tr22 tr33;
if v1='5MM+' then v1='5000000';
if v1='1MM+' then v1='1000000';
/* remove v1 under 100k)*/
data original;set original;
if v2>=100000;
data original; set original;
proc sort nodupkey; by v3 v4 v5 v6 v7;
/* remove canceled)*/
data canceled (keep= v8 v9 v10); set original;
if v8='C';
data canceled (drop=v8); set canceled;
rename v9=v4;
proc sort data=canceled; by v10 v4;
proc sort data=original; by v10 v4;
data original; merge original canceled; by v10 v4;
if x=1 then delete; if v8='C' then delete;
/* remove corrected)*/
data corrected (keep= v8 v9 v10); set original;
if v8='W';
data corrected (drop=v8); set corrected;
rename v9=v4;
proc sort data=corrected; by v10 v4;
data original; merge original corrected; by v10 v4;
if x=1 then delete;
/* remove price values)*/
data original; set original;
if v11 = 'N';
/* (create a file with the cleaned original data)*/
data tmp1.original_clean100k; set original; run;
Equivalent Stata code
use "C:\original02.dta", clear;
replace v1="5000000" if v1=="5MM+";
replace v1="1000000" if v1=="1MM+";
destring v1, gen(v2);
keep if v2>=100000;
sort v3 v4 v5 v6 v7;
duplicates drop v3 v4 v5 v6 v7, force;
save temp, replace;
keep if v8=="C";
keep v9 v10;
rename v9 v4;
gen x=1;
sort v10 v4;
save temp1, replace;
use temp, clear;
sort v10 v4;
merge v10 v4 using temp1;
drop if x==1 | v8=="C";
keep if v8=="W";
keep v9 v10;
rename v9 v4;
gen x=1;
sort v10 v4;
save temp2, replace;
use temp, clear;
sort v10 v4;
merge v10 v4 using temp2;
drop if x==1;
keep if v11 == "N";
save original02_clean100k, replace;
Thanks in advance,
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-12/msg00507.html","timestamp":"2014-04-21T10:23:21Z","content_type":null,"content_length":"7959","record_id":"<urn:uuid:4cf18f47-d86a-4e08-87b5-83b8deaef30f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00191-ip-10-147-4-33.ec2.internal.warc.gz"} |
What do you think?
Re: What do you think?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
A 2x2 matrix? How?
Re: What do you think?
Re: What do you think?
Hi bobbym
Re: What do you think?
Re: What do you think?
Hi bobbym,
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Hi gAr;
Re: What do you think?
Hi bobbym,
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Hi gAr;
Re: What do you think?
Hi Bobby,
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: What do you think?
Hi phrontister;
Re: What do you think?
Hi bobbym,
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Hi gAr;
Re: What do you think?
Hi Bobby,
The spreadsheet (which I've deleted by mistake
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: What do you think?
You're welcome!
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Hi gAr,
Thank you for your welcome! You must speed read/speed type!
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: What do you think?
Hi phrontister;
Yes, the answer is independent of the volumes of the urns and the starting amounts of water.
Hi gAr;
The Feller books have a lot of stuff in them!
Re: What do you think?
Hi phrontister,
I did not see that you had written up there, I actually replied to bobbym!
Hi bobbym,
Yes, you're right, he has covered a lot of topics.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Thanks to both of you for looking at the problem.
Re: What do you think?
Hi bobbym,
I have a question on numerical methods:
How do we test whether a huge integer is a perfect square, if I'm to implement it in a language like C?
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
How big is the integer?
Re: What do you think?
A number which fits the "long" data type, taking 64 bits.
I'll take a break, see you later..
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Hi gAr;
Comparing floating point numbers (like the ones the sqrt routine in C++ returns) with integers can be tricky.
I assume you want unsigned 64 bit numbers, so you will be testing numbers from 0 to 18446744073709551615. The roots would range from 0 to 4294967295.
Are your arguments going to be integers or floating point numbers?
Re: What do you think?
In particular, I'm trying to find values of n for which a polynomial like n*(5*n+14)+1 is a perfect square..
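Something like this is what I mean - Python's isqrt standing in for the exact integer square root I'd have to write myself in C (the polynomial and range are just taken from this thread):

from math import isqrt

def is_perfect_square(m):
    # Exact integer test - no floating point, so it stays correct near 2^64.
    if m < 0:
        return False
    r = isqrt(m)          # floor of the integer square root
    return r * r == m

# Values of n for which n*(5*n + 14) + 1 is a perfect square
hits = [n for n in range(1, 1_000_000) if is_perfect_square(n * (5 * n + 14) + 1)]
print(hits[:10])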
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=248959","timestamp":"2014-04-18T21:43:50Z","content_type":null,"content_length":"42316","record_id":"<urn:uuid:914fbe6b-3974-4f96-a92b-34e1f75b9f20>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00612-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Woodward-Hoffmann Rules
The conclusions (from correlation diagrams) about pericyclic reactions can be generalized to a set of selection rules for pericylic reactions -
the Woodward-Hoffmann Rules:
These rules can be expressed in a number of ways. A summary for pericyclic cycloaddition reactions is:
│ p + q  │ Thermally allowed          │ Photochemically allowed    │
│ 4n     │ p[s] + q[a] or p[a] + q[s] │ p[s] + q[s] or p[a] + q[a] │
│ 4n + 2 │ p[s] + q[s] or p[a] + q[a] │ p[s] + q[a] or p[a] + q[s] │
You don’t need to learn these for this course!
· p and q are the numbers of electrons in the two π systems which are undergoing the cycloaddition reaction.
· s indicates suprafacial attack with respect to one of the π components, and
· a indicates antarafacial attack with respect to one of the π components.
From the table outlining the rules, when p and q add up to give a number which can be written as (4n +2), with n as an integer (0, 1, 2…), the thermal cycloaddition reaction is allowed when it is
suprafacial with respect to both components (or antarafacial with respect to both components).
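A small sketch that simply encodes the table above (not something you need to learn for this course; p and q are the electron counts of the two components, s/a stand for suprafacial/antarafacial, and an even total electron count is assumed):

def cycloaddition_rule(p, q, photochemical=False):
    # Returns the allowed suprafacial/antarafacial combinations for a [p + q] cycloaddition
    total = p + q
    mixed = "p[s] + q[a] or p[a] + q[s]"   # one component suprafacial, the other antarafacial
    same = "p[s] + q[s] or p[a] + q[a]"    # both suprafacial or both antarafacial
    if total % 4 == 0:                     # 4n electrons
        return mixed if not photochemical else same
    return same if not photochemical else mixed   # 4n + 2 electrons

print(cycloaddition_rule(4, 2))                       # Diels-Alder, thermal: both-suprafacial allowed
print(cycloaddition_rule(2, 2, photochemical=True))   # [2+2], photochemical: both-suprafacial allowed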
An example of the Woodward-Hoffmann Rules in action: the Diels-Alder Reaction | {"url":"http://www.chm.bris.ac.uk/pt/ajm/sb04/L5_p11.htm","timestamp":"2014-04-19T17:02:09Z","content_type":null,"content_length":"18022","record_id":"<urn:uuid:394eb9bb-74d8-4d24-84b9-9ef6e6000ff5>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differentiation and Implicit Differentiation
February 6th 2009, 01:23 AM
Differentiation and Implicit Differentiation
implicit differentiation
1) y=x^2+yx
2) xy-y^3=1
3) 1/xy-y^2=2
1) f(x) = √(x^2-1) / (1+√(x^2+1))
please find
1) d/dx √(f(x)+g(x))
thanks sososososoos much!!!!!!!!!
February 6th 2009, 04:07 AM
implicit differentiation
1) y=x^2+yx
2) xy-y^3=1 ........ Use product rule and chain rule
3) 1/xy-y^2=2 ........ Use quotient rule and chain rule
1) f(x) = √(x^2-1) / (1+√(x^2+1)) ........ Use quotient rule and chain rule
please find
1) d/dx √(f(x)+g(x)) ........ Use chain rule
thanks sososososoos much!!!!!!!!! Are you sure your keyboard is working correctly?
I'll show you how to do the first example. I leave the following questions for you. I've mentioned above what you should do with the other questions.
to #1: Use product rule with the second summand:
$y=x^2+yx~\implies~ y'=2x+(y+x\cdot y')$
Collect all terms containing y' at the LHS:
$y'-x \cdot y' = 2x+y ~\implies~y'(1-x)=2x+y~\implies~ y'=\dfrac{2x+y}{1-x}$ | {"url":"http://mathhelpforum.com/calculus/72140-differentiation-implicit-differentiation-print.html","timestamp":"2014-04-21T00:06:47Z","content_type":null,"content_length":"5686","record_id":"<urn:uuid:ad97ddd0-9370-4fed-9f5a-e917852b991a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
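If you want to check an answer like this, a quick SymPy sketch for the first one (just a checking aid, the working above is what you should hand in):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

# 1) y = x^2 + y*x, differentiated implicitly with respect to x
eq = y - (x**2 + y*x)
dydx = sp.solve(sp.diff(eq, x), sp.Derivative(y, x))[0]
print(sp.simplify(dydx))   # (2*x + y(x))/(1 - x), possibly printed with the signs arranged differently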
can you tell me why this website is so slow??
Heron's Formula For Tetrahedra
In another article we gave a very direct derivation of Heron's formula based on Pythagoras's Theorem for right triangles. However, we might also observe that Heron's formula is essentially equivalent
to Pythagoras' Theorem for right tetrahedra. For a right tetrahedron with vertices (0,0,0), (a,0,0), (0,b,0), and (0,0,c), the base and height of the "hypotenuse" are
Base = (a^2 + b^2)^(1/2)          Height = (h^2 + c^2)^(1/2)
where (by similar triangles) h = ab/(Base) is the distance from the origin to the base, so we have
Area^2 = (Base * Height / 2)^2 = (ab/2)^2 + (ac/2)^2 + (bc/2)^2
Thus if B,C,D denote the areas of the three orthogonal faces of a right tetrahedron with orthogonal edge lengths a,b,c, and if A denotes the area of the "hypotenuse face", we have
A^2  =  B^2 + C^2 + D^2          (1)
where B = ab/2, C = ac/2, and D = bc/2. This is essentially Heron's formula. To make this explicit, note that the edges of the hypotenuse face d,e,f are directly related to a,b,c according to the
Pythagorean relations
d^2 = a^2 + b^2          e^2 = a^2 + c^2          f^2 = b^2 + c^2          (2)
so we can express the areas B,C,D in equation (1) in terms of d^2, e^2, and f^2 to give Heron's formula explicitly.
It might seem as if this derivation does not apply to obtuse triangles, because the "hypotenuse face" of a right tetrahedron is necessarily acute (i.e., each of its angles must be less than 90
degrees). However, any triangle can be the hypotenuse face of a right tetrahedron, provided the orthogonal edge lengths and areas are allowed to be imaginary. Thus for any values of d,e,f we can
solve equations (2) for the orthogonal edges of the right tetrahedron whose hypotenuse is the triangle with the edge lengths d, e, f. This gives
a^2 = (d^2 + e^2 - f^2)/2          b^2 = (d^2 - e^2 + f^2)/2          c^2 = (-d^2 + e^2 + f^2)/2
For example, suppose we want the area of a triangle with edge lengths 8, 5, and 5, which is an obtuse triangle. Substituting these into the above equations gives
a^2 = 32          b^2 = 32          c^2 = -7
so the "c" leg has imaginary length. Consequently, two of the three orthogonal faces (those given by ac/2 and bc/2) are also imaginary. However, these areas only appear squared in the Pythagorean
formula for right tetrahedrons, so we're guaranteed to get a real area for the hypotenuse face. As Hadamard said, "The shortest path to any truth involving real quantities often passes through the
complex plane".
I honestly wouldn't be surprised if the ancient Greeks were aware of the connection between the generalized Pythagorean theorem and Heron's formula, but refrained from presenting it in that form
because of difficulties with interpreting the obtuse case. Recall that Descartes, for one, believed the ancient Greeks had discovered most of their theorems analytically by means of coordinate
geometry and algebra, but concealed their methods, presenting them in synthetic form, so as to make the results seem more daunting and impressive to the uninitiated. (See the note on Prisca Sapientia
.) It has always seemed doubtful that Heron's formula was discovered via the thought process of Heron's proof, which is absurdly circuitous. In any case, this is a nice example of how imaginary
numbers can arise naturally in dealing with questions of purely real quantities.
As for higher dimensional simplexes, there is no complete generalization of Heron's formula giving the volume of a general tetrahedron in terms of the areas of its faces, because the face areas don't
uniquely determine the volume (in contrast to the case of triangles, where the three edge lengths determine the area). However, it is possible to derive a "Heron's formula" for tetrahedrons if we
restrict ourselves to just those that would fit as the "hypotenuse face" of a right four-dimensional solid. (Notice that every triangle is the face of a right tetrahedron, which explains why Heron's
formula is complete for triangles).
To review, remember that Heron's formula for triangles is essentially equivalent to Pythagoras' Theorem for right tetrahedrons. Let's let A[xyo], A[xoz], and A[oyz] denote the areas of the three
orthogonal faces of a right tetrahedron, and A[xyz] denote the area of the "hypotenuse face", so we have
Now if we let L[x], L[y], L[z] denote the three orthogonal edge lengths of the tetrahedron, then the areas of its orthogonal faces are simply
and so equation (3) can be re-written in the form
Furthermore, the three edges L[1], L[2], L[3] of the hypotenuse face are directly related to L[x], L[y], L[z] by the two-dimensional Pythagorean theorem
Equations (5) are three linear equations in the three squared edge lengths, so we can solve for these squared lengths in terms of L[1], L[2], and L[3], and then substitute these into equation (4) to
give the ordinary Heron's formula for triangles, as before.
Now, we can do the same thing for tetrahedrons based on the generalized Pythagorean theorem for volumes of right four-dimensional solids
If we let L[w], L[x], L[y], L[z], denote the orthogonal edge lengths of the four-dimensional solid, then the volumes of the four orthogonal "faces" are simply
so equation (3') can be rewritten as
Furthermore, the four areas A[1], A[2], A[3], A[4] of the hypotenuse "face" are directly related to L[x], L[y], L[z] by the three-dimensional Pythagorean theorem (4)
Thus, given the four face areas A[1], A[2], A[3], A[4], we have four equations in the four unknowns L[2], L[x], L[y], L[z], so we can solve for these values and then compute the volume of the
tetrahedron using (4').
At this point people usually turn away from this approach, for two reasons. First, everything we're doing is restricted to the "special" tetrahedrons that can serve as the hypotenuse of a "right"
four-dimensional simplex, so we're certainly not going to end up with a general formula applicable to every tetrahedron (as is clear from the fact that we have only four independent edge lengths
here, whereas the general tetrahedron has six). General formulas giving the volume in terms of the edge lengths do exist, such as the one give by the Italian painter Piero della Francesca. Of course,
all such formulae can be traced back to the well-known determinant expression for volumes.
The second reason that people usually give up on equations (5') is that they are somewhat messy to solve, since they are non-linear in the lengths. Still, we might decide to press on anyway. It turns
out (after extensive algebraic manipulation) that we can reduce (5')
to a single quartic in the square of any of the four edge lengths L[w], L[x], L[y], or L[z]. Arbitrarily selecting L[y], and letting A,B,C,D denote 4 times the squares of the face areas (i.e., the
left hand sides of equations (5')), we can express the quartic in x = L[y]^2 with coefficients that are functions of B and the elementary symmetric polynomials of A,C,D
In these terms the quartic for x = L[y]^2 is
Of course, the analogous quartics can be given for L[w]^2, L[x]^2, and L[z]^2, but once we have any one of them we can more easily compute the others. For example, given L[y] we can compute L[x] from
the relation
and the values of L[w] and L[z] follow easily, allowing us to compute the volume using equation (4'). It would be nice if we could express the volume as an explicit function of the face areas, but I
don't know if such a formula exists.
In the preceding discussion we developed a tetrahedral version of Heron's formula for a restricted class of tetrahedra, namely those that can serve as the hypotenuse of a "right" four-dimensional
simplex, but there are other special classes of tetrahedra that possess interesting volume formulas. The one that gives the closest analogue to Heron's formula is the class of tetrahedra whose
opposite edge lengths are equal. Thus there are only three independent edge lengths, and each face of the tetrahedron is identical. Letting (a,f), (b,e), and (c,d) denote the pairs of opposite edge
lengths, we can set a = f, b = e, and c = d in the basic determinant expression for the volume, or equivalently in Piero della Francesca's formula, and we find that the resulting expression for the
squared volume factors as
V^2  =  (1/72) (a^2 + b^2 - c^2)(a^2 + c^2 - b^2)(b^2 + c^2 - a^2)
which is certainly reminiscent of Heron's formula for the area of each face
16 A^2  =  (a + b + c)(-a + b + c)(a - b + c)(a + b - c)
This also shows that if each face is an identical right triangle, the volume is zero, as it must be, since four such triangles connected by their edges to give a tetrahedron necessarily all lie flat
in the same plane:
Obviously we can construct a regular tetrahedron with equilateral triangles of the same area as these right triangles, and the volume is [], which illustrates the fact that the face areas of a
tetrahedron do not in general determine it's volume.
Return to MathPages Main Menu | {"url":"http://mathpages.com/home/kmath226/kmath226.htm","timestamp":"2014-04-19T09:26:46Z","content_type":null,"content_length":"29712","record_id":"<urn:uuid:902f66bb-accf-423a-af8e-458f6033f374>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00387-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving a systems of differential equations in terms of x(t) and y(t)
1. The problem statement, all variables and given/known data
x' ={{-1,1},{-4, 3}}*x, with x(0) = {{1},{1}}
Solve the differential equation where x = {{x(t)}, {y(t)}}
2. Relevant equations
3. The attempt at a solution
I have e^t*{{1},{-2}} + e^t*{{t},{2t+1}}
but I'm not sure how to get it in terms of what it's asking.
Edit: Please be quick if you know how to do it. It's due at 4 AM :/ Crazy week on my end.
Estimate Website Traffic with Compete.Com by Using Regression Analysis - SEO Chat
Estimate Website Traffic with Compete.Com by Using Regression Analysis
As a webmaster who competes with other websites or has an interest in entering a new niche, you might want to get traffic numbers for sites that you don’t own. You can’t actually do that with Google
Analytics…but Compete.com, combined with Google Analytics, may give you at least a very reasonable estimate. Keep reading to find out how.
It is important as a webmaster to at least estimate the number of actual unique visitors to any website. Of course, you know that you can get accurate data using Google Analytics and other tools.
However, you need to be the owner of the website in order to see those data.
If you are not the verified owner of the website, then you cannot obtain website traffic data using Google Analytics or other tools such as Stat Counter.
A feasible but not entirely accurate approach is to use online tools that can estimate the traffic/unique visitors’ data of any website, and for free. One of these tools is Compete.com.
However, the main problem with using such tools is the accuracy of the result. The data given by Compete.com could never be the same as the Google Analytics data.
While the tool provides you with some data, you will never have a clue as to how it relates to Google Analytics which is a standard in web analytics.
This study aims to estimate the unique visitors of a website as if measured by Google Analytics but using Compete.com’s raw data.
At the end of this study, any webmaster will be able to estimate the number of unique monthly visitors to any website, if they are using Google Analytics, given its Compete traffic value, to a
certain accuracy level (an 83% confidence level, for example).
The main objective is that, even if you do not have access to the Google Analytics account of a certain website, you will still be able to estimate the number of unique visitors it receives, using
Compete.com data.
Methodology of the Study
In order to estimate traffic, a model needs to be generated using regression analysis. To conduct a regression analysis, the following steps are employed:
Step 1: Select a website with at least one full year of Google analytics data.
Step 2: Gather the Compete.Com unique visitors’ data of the website. Compete.com by default provides one full year of data (12 months maximum).
To do this, you need to go to this URL: http://compete.com/. Click “Site Profile,” enter the domain name and then hit “Go.”
Screen shot:
The unique visitors’ data is available from the resulting unique visitors’ plot.
Step 3: Gather the equivalent unique visitor’s data in Google Analytics for those months with Compete.com data.
To get the absolute unique visitors data in Google Analytics, first, click “View Report” after logging to your Google Analytics account. In the “Dashboard,” adjust the date range to reflect the same
date range used by Compete.com’s data gathering.
For example, if Compete.com provides September 2009 to September 2010 data, then adjust the date period to September 1, 2009 to September 30, 2010 in Google Analytics.
Finally, click “Visitors” -> click “Absolute Unique Visitors.” To get monthly data, click the ”Month” option beside “Graph by:” It should look like the screen shot below:
Step 4: Summarize all the data gathered in an Excel spreadsheet.
Step 5: Perform regression data analysis.
Step 6: Make conclusions and consider recommendations/case examples.
Below is the screen shot of the Excel spreadsheet containing the data:
You can also download the Excel regression analysis as discussed in this study at the link.
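If you prefer to run Step 5 outside of Excel, a rough Python equivalent is sketched below. The file name and column names ("compete" and "ga") are my own placeholders, not names taken from the study's workbook:

import pandas as pd
from scipy import stats

# One row per month: Compete.com uniques in "compete", Google Analytics uniques in "ga"
df = pd.read_csv("compete_vs_ga.csv")
fit = stats.linregress(df["compete"], df["ga"])

print("slope    :", fit.slope)        # ~1.2113 in the Excel output discussed below
print("intercept:", fit.intercept)    # ~3226.8
print("R squared:", fit.rvalue**2)    # ~0.38
print("p-value  :", fit.pvalue)       # compare against the 0.17 cutoff (83% confidence)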
The plot shows that Compete and Google Analytics are “positively” correlated. Refer to this page for the definition of positive correlation. This means that a high number of unique visitors in Google
Analytics relates to a high number of unique visitors in Compete.com.
The x-axis is the Compete.com data, while the y-axis is the Google Analytics data in the Scatter plot.
To do regression analysis in Excel, the “Analysis Toolpak” add-in must be installed.
Below are the results of the regression analysis:
The R squared is around 0.38, or 38%. To test if the regression model is significant or not, the P-value is compared to an acceptable error.
Suppose our confidence level is 83%. The acceptable error, then, is 17%. If the p value is less than 0.17, then you can say that the relationships between Google analytics and Compete data are
significant to an 83% confidence level.
Otherwise, if the p-value is greater than 0.17, then the relationship is not significant.
Based on the analysis, the p-value of the ANOVA (analysis of variance) is 0.05638, or 5.638%, which is less than 17%. Therefore the regression model is significant at an 83% confidence level.
If this is your first encounter with regression analysis, it is recommended that you read this linear regression analysis tutorial for details on doing regression analysis in MS Excel, as well as
interpreting the results.
The pink line in the graph as shown in the above screen shot is governed by this regression model:
Y= 1.2113x + 3226.8 where X is the Compete.com data and Y is the predicted Google analytics equivalent data.
However, for real-world applications, estimating the 83% confidence interval is much more useful and meaningful.
Based on the regression analysis, the following are the 83% confidence interval equations:
Upper 83%: Y = 2.031X + 4477.2945
Lower 83%: Y = 0.392x + 1976.291
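For quick calculations, the three lines of the model can be wrapped in a small function (Python here; the coefficients are simply the ones printed above, so treat the output as an estimate, not a measurement):

def estimate_ga_uniques(compete_uniques):
    # Returns (lower, point, upper) Google Analytics estimates at the 83% confidence level
    point = 1.2113 * compete_uniques + 3226.8
    lower = 0.392 * compete_uniques + 1976.291
    upper = 2.031 * compete_uniques + 4477.2945
    return lower, point, upper

# Case example 1 below: seochat.com, September 2010 (~186,389 Compete.com uniques)
lower, point, upper = estimate_ga_uniques(186_389)
print(round(lower), round(upper))   # ~75,041 and ~383,033, matching the worked example that follows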
Website Traffic/Unique Visitors Estimation Examples
Now that the regression analyses are done, it is time to use it to estimate Google Analytics website traffic using Compete.com’s unique visitors data.
Case Example 1: Estimate the latest monthly unique visitors measured by Google analytics of the seochat.com domain.
Step 1. Get the latest Compete.com (http://compete.com/) unique visitors data for the seochat.com domain. Go to “Site Profile,” enter seochat.com in the text box, and then press the go button.
According to Compete, the seochat.com domain’s unique visitors as of September 2010 (latest month in their report) are around 186,389.
Let’s use the 83% confidence interval equations in the regression analysis model to estimate the upper and lower Google Analytics equivalent number of unique visitors.
Upper 83%: Y = 2.031X + 4477.2945= 2.031*186389 + 4477.2945 = 383033
Lower 83%: Y = 0.392x + 1976.291= 0.392*186389 + 1976.291 = 75041
Interpretation of Results: The seochat.com domain's September 2010 unique visitors number somewhere between 75,041 and 383,033 at an 83% confidence level.
Of course, there is a 17% chance of error/prediction mistake, because the confidence level is 83%.
Using the above analysis, Compete.com data will be much more useful and meaningful if it is used to compute the equivalent upper and lower limit of Google Analytics unique visitors.
Case Example 2: Estimate the January 2010 monthly unique visitors as measured by Google Analytics for Americantowns.com using Compete.com data.
Given: Using the Compete.com site profile tool, Americantowns.com had around 1,558,548 unique visitors in January 2010.
Solution: Using the regression model with 83% confidence interval models:
Upper 83%: Y = 2.031X + 4477.2945= 2.031*1558548 + 4477.2945 = 3169888
Lower 83%: Y = 0.392x + 1976.291= 0.392*1558548 + 1976.291 = 612927
And this press release claims that Americantowns.com got 3 million unique visitors, according to Google Analytics, in January 2010.
So this means that the “actual” website traffic falls between the estimated range using Compete.com’s data, or between 612,927 and 3,169,888 estimated unique visitors to the website.
If you need an online tool for this regression model for quick calculations, you can find it here: http://www.php-developer.org/estimateuniquevisitors/
Of course, the accuracy of this regression model can be improved further by adding more samples and data to the calculation. The sample size analyzed is around 10, determined by this stat sampling.
These 10 samples result in an 83% confidence interval, which explains the wide gap between the lower and upper limits of the Google Analytics unique visitors prediction.
Bayside, NY Prealgebra Tutor
Find a Bayside, NY Prealgebra Tutor
...Good luck with the studying! I studied Physics with Astronomy at undergraduate level, gaining a master's degree at upper 2nd class honors level (approx. 3.67 GPA equivalent). I then proceeded
to complete a PhD in Astrophysics, writing a thesis on Massive star formation in the Milky Way Galaxy usin...
8 Subjects: including prealgebra, physics, geometry, algebra 1
I am a practicing attorney who enjoys working with young people. I am currently a mentor to young professionals and junior attorneys, and my practice focuses on litigation matters. I am very well-versed in English and writing, as I write frequently as an attorney.
26 Subjects: including prealgebra, reading, English, writing
...I am qualified to tutor in the Bar Exam subject because I have successfully passed the IL bar examination of July 2010, the patent bar examination in 2011, and the NY bar examination of February
2013. I am qualified to tutor Chemical Engineering courses, because I graduated with a B.S. in Chemical...
13 Subjects: including prealgebra, chemistry, physics, calculus
...I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and
will be done in a year. I have a lot of experience tutoring physics and math at all levels.
11 Subjects: including prealgebra, Spanish, calculus, physics
...Francis College and Berkeley College; overall I have been teaching for 15 years. For the past 5 years I have also been tutoring Elementary Math, Algebra, Precalculus, and Calculus students, amongst others, at Hunter College's Dolciani Math Learning Center. I have a Master of Arts and a Bachelor of Science in Pure Mathematics from City College of CUNY, where I also taught
for 2 years.
21 Subjects: including prealgebra, calculus, elementary math, economics
Journal of Applied Mathematics
Volume 2012 (2012), Article ID 715613, 16 pages
Research Article
Combined Visibility and Surrounding Triangles Method for Simulation of Crack Discontinuities in Meshless Methods
School of Mechanical Engineering, Iran University of Science and Technology, Tehran, Iran
Received 25 July 2012; Accepted 27 September 2012
Academic Editor: Khalida I. Noor
Copyright © 2012 H. Pirali et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
In this paper a combined node searching algorithm for simulation of crack discontinuities in meshless methods called combined visibility and surrounding triangles (CVT) is proposed. The element free
Galerkin (EFG) method is employed for stress analysis of cracked bodies. The proposed node searching algorithm is based on the combination of surrounding triangles and visibility methods; the
surrounding triangles method is used for support domains of nodes and quadrature points generated at the vicinity of crack faces and the visibility method is used for points located on the crack
faces. In comparison with the conventional methods, such as the visibility, the transparency, and the diffraction method, this method is simpler with reasonable efficiency. To show the performance of
this method, linear elastic fracture mechanics analyses are performed on a number of standard test specimens and stress intensity factors are calculated. It is shown that the results are in good
agreement with the exact solution and with those generated by the finite element method (FEM).
1. Introduction
Conventional finite element method (FEM) is usually used for solving fracture mechanics problems. This method has some drawbacks in calculation of fracture mechanics parameters. One of the major
drawbacks is that singularity cannot be captured correctly and therefore the results at the vicinity of the crack tip are not reliable [1]. Another problem with FEM is simulation of crack growth. FEM
requires remeshing to update the mesh in each step of the crack growth process, which is a time-consuming procedure. Although there are some methods, such as the node release method [2–5], to overcome this drawback, some problems still exist. The above drawbacks, as well as other shortcomings of FEM such as discontinuous results at the element faces, have caused researchers to seek other computational methods. The extended finite element method (XFEM) and the meshfree method are two different approaches that can be a good alternative to FEM in solving fracture mechanics
problems. In recent years many articles have been published in this field. In [6] a combination of FEM and the meshfree method is used for determination of crack-tip fields. Partition of unity methods are employed for three-dimensional modeling of crack growth [7]. In [8] generalized Gaussian quadrature rules are used for discontinuities in XFEM. The numerical simulation of fatigue crack growth using XFEM is addressed in [9]. A variational approach for evaluation of stress intensity factors using the element free Galerkin method is used in [10].
Among the above-mentioned methods, meshless methods have in recent years developed rapidly as a computational technique. These methods have some advantages over traditional methods such as FEM and the boundary element method (BEM). In meshless methods, the derivatives of the meshfree interpolations are smooth, leading in general to very desirable properties such as smooth stresses, whereas in FEM the results at the element faces are discontinuous and extra effort has to be spent on smoothing the strains and stresses [1]. Simulation of crack propagation and large deformation analysis are typical areas in which meshless methods perform better than FEM [1]. Besides these advantages, meshless methods also have some disadvantages: complex shape functions, difficulties in the implementation of essential boundary conditions, and extra effort for simulation of crack discontinuities are the major ones [1]. In stress analysis of cracks, the major disadvantage is the simulation of the crack discontinuity.
Different techniques are used for simulation of discontinuities in meshless methods. There are four approaches which can be implemented to model discontinuities in meshless methods [12]. The first method consists of modification of the weight function, such as the visibility method, the diffraction method, and the transparency method [13–16]. The second approach is based on modification of the intrinsic basis [17] to consider special functions. The third approach includes the methods based on an extrinsic MLS enrichment [17]. The last approach comprises the methods based on the extrinsic PUM enrichment [18–20
]. Moreover, the augmented Lagrangian method is used to model crack problems and material discontinuity [21, 22]. All of these approaches employ relatively complicated algorithms. The most popular methods for simulation of crack discontinuity in meshless methods are the visibility method, the transparency method, and the diffraction method [11].
In this paper a node searching algorithm for simulation of crack discontinuities in meshless methods is proposed. This approach follows a simple and effective procedure to obtain a better construction of the support domain in the vicinity of the crack faces. Using this method, the support domains of nodes and quadrature points generated in the vicinity of the crack faces are based on the surrounding triangles method, while for points located on the crack faces they are based on the visibility method. Unlike the other conventional methods, the proposed method does not need any special formulation for modification of the support domain. It works mainly with the surrounding triangles, and the major modification is done on the crack faces. This method can be used when the background cells are triangular meshes.
2. EFG Formulation
Consider a two-dimensional problem of solid mechanics in a domain $\Omega$ bounded by $\Gamma$. The strong form of the system equations is given by (2.1)–(2.3) [1].
Equilibrium equation:
$$\mathbf{L}^{T}\boldsymbol{\sigma}+\mathbf{b}=\mathbf{0}\quad\text{in }\Omega.\tag{2.1}$$
Natural boundary condition:
$$\boldsymbol{\sigma}\cdot\mathbf{n}=\bar{\mathbf{t}}\quad\text{on }\Gamma_{t}.\tag{2.2}$$
Essential boundary condition:
$$\mathbf{u}=\bar{\mathbf{u}}\quad\text{on }\Gamma_{u},\tag{2.3}$$
where $\mathbf{L}$ is the differential operator, $\boldsymbol{\sigma}$ is the stress vector, $\mathbf{u}$ is the displacement vector, $\mathbf{b}$ is the body force vector, $\bar{\mathbf{t}}$ is the prescribed traction on the traction (natural) boundaries $\Gamma_{t}$, $\bar{\mathbf{u}}$ is the prescribed displacement on the displacement (essential) boundaries $\Gamma_{u}$, and $\mathbf{n}$ is the vector of unit outward normal at a point on the natural boundary.
In the element-free Galerkin (EFG) method, the moving least squares (MLS) shape functions are used. The MLS shape functions do not have the Kronecker delta function property; hence, the constrained Galerkin weak form is used [1]:
$$\int_{\Omega}\delta(\mathbf{L}\mathbf{u})^{T}\mathbf{c}\,(\mathbf{L}\mathbf{u})\,d\Omega-\int_{\Omega}\delta\mathbf{u}^{T}\mathbf{b}\,d\Omega-\int_{\Gamma_{t}}\delta\mathbf{u}^{T}\bar{\mathbf{t}}\,d\Gamma+\delta\int_{\Gamma_{u}}\frac{1}{2}(\mathbf{u}-\bar{\mathbf{u}})^{T}\boldsymbol{\alpha}\,(\mathbf{u}-\bar{\mathbf{u}})\,d\Gamma=0,\tag{2.4}$$
where $\mathbf{c}$ is the matrix of elastic constants and $\boldsymbol{\alpha}$ is a diagonal matrix of penalty factors, $\boldsymbol{\alpha}=\operatorname{diag}(\alpha_{1},\alpha_{2})$ for 2D and $\boldsymbol{\alpha}=\operatorname{diag}(\alpha_{1},\alpha_{2},\alpha_{3})$ for 3D. Using the MLS shape functions built from the $n$ nodes in the local support domain, we can write
$$\mathbf{u}^{h}(\mathbf{x})=\sum_{I=1}^{n}\boldsymbol{\Phi}_{I}(\mathbf{x})\,\mathbf{u}_{I}=\boldsymbol{\Phi}(\mathbf{x})\,\mathbf{U}_{s},\tag{2.5}$$
where $\boldsymbol{\Phi}$ is a matrix of the MLS shape functions shown in the following form
$$\boldsymbol{\Phi}(\mathbf{x})=\begin{bmatrix}\phi_{1}&0&\phi_{2}&0&\cdots&\phi_{n}&0\\0&\phi_{1}&0&\phi_{2}&\cdots&0&\phi_{n}\end{bmatrix}.\tag{2.6}$$
In (2.5), $u_{I}$ and $v_{I}$ (the components of $\mathbf{u}_{I}$) are the parameters of displacements (not the nodal displacements) for the $I$th node. Substituting the approximation (2.5)–(2.6) for all the displacement components of $\mathbf{u}$ into the weak form (2.4) gives the global discretized system equations of the EFG method [1]
$$\left[\mathbf{K}+\mathbf{K}^{\alpha}\right]\mathbf{U}=\mathbf{F}+\mathbf{F}^{\alpha},\tag{2.7}$$
where $\mathbf{U}$ is the vector of nodal parameters of displacements for all nodes in the whole domain, $\mathbf{K}$ is the global stiffness matrix, and $\mathbf{F}$ is the global external force vector. The matrix $\mathbf{K}^{\alpha}$ is the global penalty stiffness matrix assembled in the same manner as for assembling $\mathbf{K}$, using the nodal penalty stiffness matrix defined by [1]
$$\mathbf{K}_{IJ}^{\alpha}=\int_{\Gamma_{u}}\boldsymbol{\Phi}_{I}^{T}\boldsymbol{\alpha}\,\boldsymbol{\Phi}_{J}\,d\Gamma.\tag{2.8}$$
In (2.7), the additional force vector $\mathbf{F}^{\alpha}$ is caused by the essential boundary conditions; it is formed in the same way as $\mathbf{F}$, but using the nodal penalty force vector defined by [1]
$$\mathbf{F}_{I}^{\alpha}=\int_{\Gamma_{u}}\boldsymbol{\Phi}_{I}^{T}\boldsymbol{\alpha}\,\bar{\mathbf{u}}\,d\Gamma.\tag{2.9}$$
Standard Gauss quadrature is used for the integrations in the penalty stiffness matrix and the penalty force vector. The integration is performed along the essential boundary, and hence the matrix $\mathbf{K}^{\alpha}$ will have entries only for the nodes near the essential boundaries, which are covered by the support domains of the Gauss quadrature points on $\Gamma_{u}$.
3. Description of the Proposed Method
This method is a technique for construction of support domain in cracked bodies. The proposed method is based on the combination of the surrounding triangles algorithm and the visibility method. The
resulting support domain using this method is almost the same as the transparency method and the diffraction method.
The visibility criterion can easily be understood by considering the discontinuity opaque for rays of light coming from the nodes [25]. That is, for the modification of a support of node I, one
considers light coming from the coordinates of node I and truncates the part of the support which is in the shadow of the discontinuity. This is depicted in Figure 1 and the discontinuity at the
crack faces can be simulated in meshless methods. A major shortcoming of this method is that, at the discontinuity tips, an artificial discontinuity inside the domain is constructed, as shown in Figure 1 [26].
The diffraction method [13, 15] considers the diffraction of the rays around the tip of the discontinuity. For the evaluation of the weighting function at a certain evaluation point $\mathbf{x}$ (usually an integration point), the input distance parameter $d_{I}(\mathbf{x})$ is changed in the following way. Let $s_{1}=\lVert\mathbf{x}_{I}-\mathbf{x}_{c}\rVert$ be the distance from the node to the crack tip $\mathbf{x}_{c}$, $s_{2}(\mathbf{x})=\lVert\mathbf{x}-\mathbf{x}_{c}\rVert$ the distance from the crack tip to the evaluation point, and $s(\mathbf{x})=\lVert\mathbf{x}-\mathbf{x}_{I}\rVert$ the direct distance from the node to the evaluation point. Then the modified distance is [15]
$$d_{I}(\mathbf{x})=\left(\frac{s_{1}+s_{2}(\mathbf{x})}{s(\mathbf{x})}\right)^{\lambda}s(\mathbf{x}).\tag{3.1}$$
For understanding the parameters see Figure 1. In [13] only $\lambda=1$, that is, $d_{I}(\mathbf{x})=s_{1}+s_{2}(\mathbf{x})$, has been proposed. Reasonable choices for $\lambda$ are 1 or 2 [15]; however, optimal values of $\lambda$ are not available for a specific problem. The derivatives of the resulting shape function are not continuous directly at the crack tip; however, this poses no difficulties as long as no integration point is placed there [15]. The modification of the support according to the diffraction method may be seen in Figure 1. A natural extension of the diffraction method for the case of multiple discontinuities per support may be found in [16].
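As an illustration of the distance modification just described, the following sketch (ours; the routine and variable names are not from the cited implementations) evaluates the diffraction-modified distance for one node and one evaluation point:

```java
/**
 * Minimal sketch of the diffraction-method distance modification,
 * assuming 2D points given as {x, y} arrays; names are illustrative only.
 * In practice the modification is applied only when the line of sight
 * from the node to the evaluation point is cut by the crack.
 */
final class Diffraction {
    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }

    /**
     * Modified node-to-evaluation-point distance used in the weight function.
     * node = x_I, point = x (evaluation point), tip = x_c (crack tip),
     * lambda = 1 or 2 as suggested in the text.
     */
    static double modifiedDistance(double[] node, double[] point, double[] tip, double lambda) {
        double s1 = dist(node, tip);   // node to crack tip
        double s2 = dist(point, tip);  // evaluation point to crack tip
        double s  = dist(node, point); // direct node-to-point distance
        return Math.pow((s1 + s2) / s, lambda) * s;
    }
}
```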
In [15] the transparency method is introduced. Here, the weight function is smoothed around a discontinuity by endowing the surface of the discontinuity with a varying degree of transparency. The tip of the discontinuity is considered completely transparent and becomes more and more opaque with increasing distance from the tip. The input parameter of the weighting function is modified as follows:
$$\bar{d}_{I}(\mathbf{x})=d_{I}(\mathbf{x})+d_{mI}\left(\frac{s_{c}(\mathbf{x})}{\bar{s}_{c}}\right)^{\lambda},\qquad\lambda\geq 2,\tag{3.2}$$
where $d_{mI}$ is the dilatation parameter of node $I$, $s_{c}(\mathbf{x})$ is the distance along the discontinuity from its tip to the intersection of the line $\overline{\mathbf{x}_{I}\mathbf{x}}$ with the discontinuity, and $\bar{s}_{c}$ is the distance from the crack tip where the discontinuity is completely opaque. For nodes directly adjacent to the discontinuity a special treatment is proposed [15]. The value of $\lambda$ in this approach is also a free parameter which has to be adjusted with empirical arguments. The resulting derivatives are continuous also at the crack tip. This method is shown in Figure 1.
The surrounding triangles algorithm is used for node searching when the background cells are triangular meshes. The construction of interpolation domains is slightly different for nodes (vertices)
and Gauss points. The procedure for node selection is as follows. First, the triangles that have the node under consideration as a vertex are determined (a node can belong to six different triangles). The vertices of these surrounding triangles are added to the interpolation domain of the node under consideration. These vertices can be referred to as the inner ring of selected nodes. In case a node is close to or on a
boundary, the surrounding triangles do not always form a ring, but a string. For the construction of the interpolation domain for a Gauss point only the first selection step is different from that in
the procedure for a node. The inner ring of nodes simply consists of the three vertices from the triangle in which the Gauss point is situated. The expansion step, furthermore, is the same as that for
nodes. In Figure 2(a) the support domain of a sample node generated by this method is shown for nodes attached to three layers of neighboring elements of the sample node. This procedure for
construction of interpolation domain, works well for regions far from the crack faces but it is not a proper method for regions near the crack faces.
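To make the selection procedure just described concrete, here is a rough sketch (our own illustrative names and data layout, not the authors' code) that collects the node indices attached to a given number of layers of neighboring triangles:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/**
 * Rough sketch of the surrounding-triangles node selection described above.
 * Triangles are given as arrays of three node indices; all names and the
 * data layout are illustrative, not the authors' implementation.
 */
final class SurroundingTriangles {

    /** Add the vertices of every triangle that touches the current selection. */
    static Set<Integer> expandOneLayer(List<int[]> triangles, Set<Integer> selected) {
        Set<Integer> result = new LinkedHashSet<>(selected);
        for (int[] tri : triangles) {
            boolean touches = selected.contains(tri[0])
                           || selected.contains(tri[1])
                           || selected.contains(tri[2]);
            if (touches) {
                result.add(tri[0]);
                result.add(tri[1]);
                result.add(tri[2]);
            }
        }
        return result;
    }

    /**
     * Support domain built from a seed (a single node, or the three vertices of
     * the triangle containing a Gauss point) expanded by the given number of layers.
     */
    static Set<Integer> supportDomain(List<int[]> triangles, Set<Integer> seed, int layers) {
        Set<Integer> domain = new LinkedHashSet<>(seed);
        for (int i = 0; i < layers; i++) {
            domain = expandOneLayer(triangles, domain);
        }
        return domain;
    }
}
```

With this sketch, the choices used in the text correspond to three layers for a single-node seed and two layers for the three-vertex seed of a Gauss point; near a crack face the selection is additionally restricted by the visibility criterion, which is the modification proposed in this paper.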
The proposed node searching algorithm is the same as surrounding triangles method, for nodes and quadrature points not located on the crack faces. But for nodes and quadrature points located on the
crack faces, the visibility method is employed. In Figure 2(a) a comparison of the support domain generated by the surrounding triangles method and the proposed method is made for a sample node
located on the crack face. Hollow inside circles plus hollow inside squares are nodes that make support domain of the sample node by the surrounding triangles method and hollow inside circles are
nodes that make support domain of the sample node by the proposed method. As it is shown in the figure, two extra nodes marked by hollow inside squares are selected as supporting nodes in the
surrounding triangles method. In the next section it will be shown that these two extra nodes cause virtual crack closure at the crack tip. Also in Figure 2(b) a comparison of the visibility method
and the proposed method is made. A node near the crack tip is chosen as sample node. Hollow inside circles are nodes that make support domain of the sample node by the visibility method and hollow
inside circles plus hollow inside squares are nodes that make support domain of the sample node by the proposed method. As it is shown in the figure, an artificial discontinuity is generated by the
visibility method; but using the proposed method this artificial discontinuity is removed by selecting four extra nodes as supporting nodes for the sample node shown by the filled circle in the
figure. According to Figure 2, it can be seen that five extra nodes are selected in the support domain of the sample node in comparison with the support domain generated by the visibility method.
These extra nodes help to eliminate virtual crack closure at the crack tip as shown in Figure 3.
In the proposed method, the number of layers of supporting elements is very important and affects the results. If too few nodes are selected for the support domain, there might be problems in inverting the moment matrix, and if more nodes are selected, the accuracy decreases. For the proposed method, nodes attached to three layers of neighboring elements of the sample node are selected for the support domain of that node. For quadrature points, nodes attached to two layers of neighboring elements of the sample quadrature point are used. Also, the number of nodes located in the
support domain is limited to eight nodes for getting the optimum results.
4. Simulation of Crack Discontinuity Using the Proposed Method
For implementation of the proposed method, a 2D meshless code has been developed. This meshless code has three major modules. The first module is the preprocessor: preprocessing is done in the ANSYS software, and a specific macro transmits the information to the processing module. The processing module is a FORTRAN program which solves the governing equations. The last module is the postprocessor: TECPLOT software has been chosen for postprocessing, and the obtained results can be displayed in it.
To show improper construction of support domain near the crack faces, first the surrounding triangles method is used for construction of support domain for a cracked body; and a virtual crack closure
occurred near the crack tip as shown in Figure 3(a). Using the proposed technique, virtual crack closure will be removed. In the second step, the proposed method has been implemented. In Figure 3(b),
the results of this procedure are shown for the Middle tension (MT) specimen and it can be seen that the above-mentioned problem has been resolved.
In Figure 4 vertical displacement of the MT specimen is compared with the result of ANSYS software. It can be seen that the results are in good agreement with each other.
In Figure 5(a) opening of crack face for the MT specimen is calculated with different approaches. As shown in the figure the results obtained with the proposed method is almost the same as the
results achieved with the visibility method. Finer mesh results for FEM analysis gave closer agreement with the proposed method. In Figure 5(b) plot of vertical stresses is illustrated. The proposed
method and ANSYS results are with the same nodal density. As it is shown in the figure, results obtained with the proposed method are between the ANSYS results with two different nodal densities; the
ordinary nodal density and the high nodal density. It can be concluded that if the same nodal density is used, then the proposed method gives better results.
5. Numerical Results for Benchmark Specimens
In Figure 6 specimens used for analyses are shown. These specimens are divided into two categories. Specimens shown at the right hand side of Figure 6 are subjected to concentrated load and specimens
shown at the left hand side of Figure 6 are subjected to uniform surface load. Parametric dimensions of the specimens are illustrated in Figure 6, and the geometrical dimensions for the present work are
listed in Table 1. Material properties and load applied to the specimens are presented in Table 2. These specimens are analyzed in plane strain condition.
For the point loaded specimens, shown at the left hand side of Figure 6, the following equation represents the stress intensity factor [27]:
$$K_{I}=\frac{P}{B\sqrt{W}}\,f\!\left(\frac{a}{W}\right),\tag{5.1}$$
where $P$ is the applied force, $B$ is the thickness, $W$ is the characteristic length dimension of the cracked body, and $f(a/W)$ is the dimensionless geometry factor. The geometry factors for the compact tension specimen (CT), the disk-shaped compact specimen, and the arc-shaped specimen are presented in Table 3.
For the surface loaded specimens, shown at the right hand side of Figure 6, the relation for the stress intensity factor is as follows [28]:
$$K_{I}=\sigma\sqrt{\pi a}\,f\!\left(\frac{a}{W}\right),\tag{5.2}$$
where $\sigma$ is the uniform applied stress, $a$ is the crack length, and $f(a/W)$ is the dimensionless geometry factor. The geometry factors for the single edge notched tension (SENT) panel, the middle tension (MT) panel, and the double edge notched tension (DENT) panel are presented in Table 4.
It should be noted that in the proposed procedure the number of nodes located in support domain of quadrature points was limited to 8 for obtaining reasonable results.
The $J$ integral calculation is used for determination of the stress intensity factor:
$$K_{I}=\sqrt{E'J},\qquad E'=\frac{E}{1-\nu^{2}}\ \text{(plane strain)},\tag{5.3}$$
where $E$ is the elastic modulus and $\nu$ is Poisson's ratio.
In the EFG meshless method, stresses and strains are continuous, so any arbitrary path can be selected for the integration. For simplicity, a circle centered at the crack tip with radius equal to $a$ is selected as the integrating path, as shown in Figure 7. This path is divided into small segments and the integral is calculated on each segment. At the end, the segment integrals are added together to obtain the total integral [24]. As shown, the natural (angular) coordinate $\theta$ is used for the integration. The coordinates of each point on this path can be obtained using the following formula [24]:
$$x=x_{tip}+a\cos\theta,\qquad y=y_{tip}+a\sin\theta.\tag{5.4}$$
The contribution of the integral from $\theta_{i}$ to $\theta_{i+1}$ is [24]
$$J_{i}=\int_{\theta_{i}}^{\theta_{i+1}}\left(W n_{x}-\mathbf{t}\cdot\frac{\partial\mathbf{u}}{\partial x}\right)a\,d\theta,\tag{5.5}$$
where $W$ is the elastic strain energy density, $\mathbf{n}$ is the unit outward normal vector, $\mathbf{t}$ is the traction vector, and $\mathbf{u}$ is the displacement vector along the integrating path. The tractions are calculated using the following relations [24]:
$$t_{x}=\sigma_{xx}n_{x}+\sigma_{xy}n_{y},\qquad t_{y}=\sigma_{xy}n_{x}+\sigma_{yy}n_{y}.\tag{5.6}$$
Numerical integration is done along each segment of the circular path using the Gauss quadrature method, and finally the total integral is determined as follows [24]:
$$J=\sum_{i}J_{i}.\tag{5.7}$$
Finally, (5.3) is used to calculate the stress intensity factors. Linear elastic fracture mechanics analyses are performed for the standard test specimens and the stress intensity factors are calculated. As shown in the results, several $a/W$ ratios are analyzed for each test specimen; for a sample demonstration, the scattered nodes, triangular background cells, and finite element meshes for the uniform surface loaded specimens and the point loaded specimens with a representative $a/W$ ratio are presented in Figures 8 and 9, respectively.
The results obtained by the proposed method are compared with the K-solutions obtained by ANSYS software and with target values [23, 28]. Figure 10 shows that there is good agreement among the results achieved by the different methods.
The percentage errors of the calculated stress intensity factors between the new method and the exact solution for a number of geometries with surface loading are shown in Table 5. As can be seen, there is close agreement between the results.
6. Conclusions
In this paper a node searching algorithm for simulation of crack discontinuities in meshless methods is proposed. This method can be implemented simply, without involving any additional mathematical relationships. The proposed method is based on the combination of the surrounding triangles and visibility methods. If the visibility method is used alone, the node searching algorithm will be complex; on the other hand, if the surrounding triangles method is used alone, the errors of the stress analysis increase due to the virtual crack closure phenomenon. Results for the calculated stress intensity factors showed that good agreement is obtained between the exact solution and the proposed method for different types of geometries and loading conditions, with an average error of 2.5 percent. The proposed technique for simulation of crack discontinuity can be used as a simple and efficient approach in meshless methods when the background cells are triangular meshes.
References
1. G. R. Liu, Mesh Free Methods, CRC Press, Boca Raton, Fla, USA, 1st edition, 2003.
2. T. N. Bittencourt, P. A. Wawrzynek, A. R. Ingraffea, and J. L. Sousa, "Quasi-automatic simulation of crack propagation for 2D LEFM problems," Engineering Fracture Mechanics, vol. 55, no. 2, pp. 321–334, 1996.
3. P. O. Bouchard, F. Bay, and Y. Chastel, "Numerical modelling of crack propagation: automatic remeshing and comparison of different criteria," Computer Methods in Applied Mechanics and Engineering, vol. 192, no. 35-36, pp. 3887–3908, 2003.
4. S. R. Beissel, G. R. Johnson, and C. H. Popelar, "An element-failure algorithm for dynamic crack propagation in general directions," Engineering Fracture Mechanics, vol. 61, no. 3-4, pp. 407–425, 1998.
5. F. R. Biglari, A. T. Kermani, M. H. Parsa, K. M. Nikbin, and N. P. O'Dowd, "Comparison of fine and conventional blanking based on ductile fracture criteria," in Proceedings of the 7th Biennial Conference on Engineering Systems Design and Analysis, pp. 265–270, Manchester, UK, July 2004.
6. Y. T. Gu and L. C. Zhang, "Coupling of the meshfree and finite element methods for determination of the crack tip fields," Engineering Fracture Mechanics, vol. 75, no. 5, pp. 986–1004, 2008.
7. T. Rabczuk, S. Bordas, and G. Zi, "On three-dimensional modelling of crack growth using partition of unity methods," Computers and Structures, vol. 88, no. 23-24, pp. 1391–1411, 2010.
8. S. E. Mousavi and N. Sukumar, "Generalized Gaussian quadrature rules for discontinuities and crack singularities in the extended finite element method," Computer Methods in Applied Mechanics and Engineering, vol. 199, no. 49-52, pp. 3237–3249, 2010.
9. I. V. Singh, B. K. Mishra, S. Bhattacharya, and R. U. Patil, "The numerical simulation of fatigue crack growth using extended finite element method," International Journal of Fatigue, vol. 36, no. 1, pp. 109–119, 2012.
10. P. H. Wen and M. H. Aliabadi, "A variational approach for evaluation of stress intensity factors using the element free Galerkin method," International Journal of Solids and Structures, vol. 48, no. 7-8, pp. 1171–1179, 2011.
11. T. P. Fries and H. G. Matthies, Classification and Overview of Meshfree Methods, Informatikbericht 2003-3, Technical University Braunschweig, Brunswick, Germany, 2003.
12. V. P. Nguyen, T. Rabczuk, S. Bordas, and M. Duflot, "Meshless methods: a review and computer implementation aspects," Mathematics and Computers in Simulation, vol. 79, no. 3, pp. 763–813, 2008.
13. T. Belytschko, Y. Krongauz, M. Fleming, D. Organ, and W. K. S. Liu, "Smoothing and accelerated computations in the element free Galerkin method," Journal of Computational and Applied Mathematics, vol. 74, no. 1-2, pp. 111–126, 1996.
14. Y. Krongauz and T. Belytschko, "EFG approximation with discontinuous derivatives," International Journal for Numerical Methods in Engineering, vol. 41, no. 7, pp. 1215–1233, 1998.
15. D. Organ, M. Fleming, T. Terry, and T. Belytschko, "Continuous meshless approximations for nonconvex bodies by diffraction and transparency," Computational Mechanics, vol. 18, no. 3, pp. 225–235, 1996.
16. B. Muravin and E. Turkel, "Advance diffraction method as a tool for solution of complex non-convex boundary problems," in Meshfree Methods for Partial Differential Equations, M. Griebel and M. A. Schweitzer, Eds., vol. 26, Springer, Berlin, Germany, 2002.
17. M. Fleming, Y. A. Chu, B. Moran, and T. Belytschko, "Enriched element-free Galerkin methods for crack tip fields," International Journal for Numerical Methods in Engineering, vol. 40, no. 8, pp. 1483–1504, 1997.
18. G. Ventura, J. X. Xu, and T. Belytschko, "A vector level set method and new discontinuity approximations for crack growth by EFG," International Journal for Numerical Methods in Engineering, vol. 54, no. 6, pp. 923–944, 2002.
19. T. Rabczuk and T. Belytschko, "Cracking particles: a simplified meshfree method for arbitrary evolving cracks," International Journal for Numerical Methods in Engineering, vol. 61, no. 13, pp. 2316–2343, 2004.
20. T. Rabczuk, P. M. A. Areias, and T. Belytschko, "A simplified mesh-free method for shear bands with cohesive surfaces," International Journal for Numerical Methods in Engineering, vol. 69, no. 5, pp. 993–1021, 2007.
21. A. Carpinteri, "Post-peak and post-bifurcation analysis of cohesive crack propagation," Engineering Fracture Mechanics, vol. 32, no. 2, pp. 265–278, 1989.
22. A. Carpinteri, "A scale-invariant cohesive crack model for quasi-brittle materials," Engineering Fracture Mechanics, vol. 69, no. 2, pp. 207–217, 2001.
23. H. Tada, P. C. Paris, and G. R. Irwin, The Stress Analysis of Cracks Handbook, Del Research, Hellertown, Pa, USA, 1975.
24. S. Hagihara, M. Tsunori, T. Ikeda, and N. Miyazaki, "Application of meshfree method to elastic-plastic fracture mechanics parameter analysis," Computer Modeling in Engineering and Sciences, vol. 17, no. 2, pp. 63–72, 2007.
25. T. Belytschko, Y. Y. Lu, and L. Gu, "Element-free Galerkin methods," International Journal for Numerical Methods in Engineering, vol. 37, no. 2, pp. 229–256, 1994.
26. P. Krysl and T. Belytschko, "Element-free Galerkin method: convergence of the continuous and discontinuous shape functions," Computer Methods in Applied Mechanics and Engineering, vol. 148, no. 3-4, pp. 257–277, 1997.
27. T. L. Anderson, Fracture Mechanics: Fundamentals and Applications, 2nd edition, 1995.
28. A. Saxena, Nonlinear Fracture Mechanics for Engineers, 1998.
Polynomials with roots in convex position
Let $\mathcal P_n$ denote the set of all monic polynomials of degree $n$ with real or complex coefficients such that $P\in\mathcal P_n$ if for all $k\in\lbrace 0,1,\dots,n-2\rbrace$ the $n-k$ roots
of $P^{(k)}=\left(\frac{d}{dx}\right)^kP$ are in strictly convex position (ie are the $n-k$ vertices of a convex polygon with $n-k$ extremal vertices).
The set $\mathcal P_n$ is clearly an open subset of all monic polynomials of degree $n$ over $\mathbb R$ or $\mathbb C$.
What is the geometry and topology of $\mathcal P_n$?
(1) Over the reals, $\mathcal P_n$ has at least $2^{n-1}$ connected components: Indeed, consider a very fast decreasing sequence (I guess $n\longmapsto 1/((1+n)^{(1+n)^{1+n}})!$ will probably work)
of strictly positive reals $\alpha_0>\alpha_1\dots$ and a sequence of signs $\epsilon_0,\epsilon_1,\dots\in\lbrace \pm 1\rbrace^{\mathbb N}$. Then
$$x^n+\sum_{k=0}^{n-2}\epsilon_k\alpha_k x^k\in \mathcal P_n(\mathbb R)$$ and different choices of signs correspond to different connected components. Are there other connected components?
(2) Over the complex numbers, all the polynomials described in (1) are in the same connected component. Choosing $\epsilon_i$ on the complex unit circle suggests however that $\pi_1(\mathcal C_n)$
might be $\mathbb Z^{n-1}$ where $\mathcal C_n$ denotes the connected component of $x^n+\sum_{k=0}^{n-2}\alpha_kx^k$ in $\mathcal P_n(\mathbb C)$. Do we have $\mathcal C_n=\mathcal P_n(\mathbb C)$?
(3) We have the equalities $\mathcal P_{n-1}=\lbrace P'/n|P\in\mathcal P_n\rbrace$.
gt.geometric-topology polynomials
Plymouth Meeting Prealgebra Tutor
Find a Plymouth Meeting Prealgebra Tutor
I am certified as a math teacher in Pennsylvania and spent ten years teaching math courses for grades 7-12 in the Philadelphia area. I enjoy tutoring students one-on-one, and watching them become
stronger math students. I like to help them build their confidence and problem solving ability as well as their skills.I taught Algebra to 8th and 9th grade students for over 5 years.
3 Subjects: including prealgebra, geometry, algebra 1
...I have also tutored college degree candidates who are diagnosed dyslexic in an effort to assist them with achieving passing scores for college entrance exams such as the SAT, ACT and TEAS. I am
currently working with two students, both of whom are dyslexic along with language and other learning ...
51 Subjects: including prealgebra, English, reading, geometry
...I also have a USSF State D license administered by New York State West Youth Soccer Association. Microsoft Access is a great information management tool that allows you to store, report, and
analyze information within a relational database. I can assist you with understanding the program; create tables, forms, report and queries; and how to form expressions and create functions.
27 Subjects: including prealgebra, calculus, ACT Math, economics
...I have a superior knowledge in Organic Chemistry having served as a Teaching assistant in Graduate School, teaching labs, recitations, making exams and grading exams. I have over 8 publications
and presentations in the field of Organic Chemistry from Graduate School and work as a Process Organic...
26 Subjects: including prealgebra, chemistry, GRE, biology
...I love the energy of kids. Whenever working with young students, I aim to make a positive contribution towards opening up and growing their minds. As a tutor, I strive to bring the same energy
to my work that kids bring to life.
20 Subjects: including prealgebra, reading, writing, algebra 1 | {"url":"http://www.purplemath.com/Plymouth_Meeting_prealgebra_tutors.php","timestamp":"2014-04-18T01:18:29Z","content_type":null,"content_length":"24470","record_id":"<urn:uuid:a8a262ee-d4be-4b11-8fc5-56d2d6429f59>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Brentwood, MD Prealgebra Tutor
Find a North Brentwood, MD Prealgebra Tutor
...I have taken 10 years of lessons, and I am able to perform many different pieces at a high level. I have been playing chess since I was 8 years old. I've read multiple strategy books and am
currently ranked #418/1364 on the itsyourturn.com chess ladder.
27 Subjects: including prealgebra, calculus, physics, geometry
...It is not really about memorization so much as visualization of the ideas. That is, it is more about why something is a certain way rather than memorizing ideas. I have taught high school math
for 44 years.
21 Subjects: including prealgebra, calculus, statistics, geometry
With over 3 years of tutoring experience in various subjects such as elementary math, prealgebra, algebra, elementary reading, history, and SSAT prep, I bring a sense of enthusiasm to my close
working relationships with students. My practical experience includes my two years in financial analysis, one...
23 Subjects: including prealgebra, English, geometry, algebra 1
...My goal is for a student to learn not just the material on any given school day, but also the confidence in their own ability to comprehend new subjects when working without a tutor's support.
My tutoring experience is broad, both in subject and student age. I have: - volunteered as a reading ...
25 Subjects: including prealgebra, English, reading, geometry
...As a result of my background in neuroscience, I emphasize metacognitive strategies—i.e., learning how to learn. My philosophy of teaching has been influenced by my background in social
sciences. Specifically, I strive to reduce the power differential between teacher and student and achieve a learner-learner relationship per the critical pedagogical approach of Paulo Freire.
15 Subjects: including prealgebra, calculus, algebra 1, geometry
Parker, TX Prealgebra Tutor
Find a Parker, TX Prealgebra Tutor
...I utilize a hands on approach to explain Math which can be a very abstract concepts. I have the ability to prepare my students for the state assessment and have a 95% success rate upon the
first assessment. I have worked with elementary children for 21 of my 25 years in public education.
21 Subjects: including prealgebra, reading, English, writing
...I mastered UNIX in college as a student in Electrical Engineering. We were required to take computer science courses on UNIX and even build our own multitasking operating system. I was a UNIX
System Administrator for 3+ years while serving in the U.S.
48 Subjects: including prealgebra, chemistry, calculus, ASVAB
...My background consists of tutoring the necessary math, reading, and language arts skills needed to pass this exam. I have assisted one student. I utilize previous TEAS V test questions that
focus primarily on a student's weaknesses. I am able to assess the weaknesses based on a diagnostic that I provide during our first lesson.
27 Subjects: including prealgebra, reading, geometry, statistics
...My name is Lois and I am interested in helping you with your English and Math subjects. I believe learning can actually be fun as well as rewarding. I myself am strong in English and Math, and
was a Technical Writer, Editor and Proofreader for 10 years in the business world.
10 Subjects: including prealgebra, English, writing, calculus
...Unfortunately I picked the wrong year of the century to become a teacher in Texas, so I accepted a position working in the Dallas County Community College District Service Center and plan to
be a tutor. I am highly qualified to teach grades 7-12 Mathematics in Texas. I passed the TExES 4-8th grade Math exam with a 273 out of a possible 300 score.
4 Subjects: including prealgebra, geometry, algebra 1, algebra 2
Counting as Descriptive Statistics
John Graunt's 1662 Observations on the Bills of Mortality is often cited as the first instance of descriptive statistics. In it he presents vast amounts of data in a few tables which can be easily
comprehended. This is the purpose of descriptive statistics, to communicate information. Although some of the original information has been lost, that which is in the tables can be comprehended. His
first table begins (for the year 1624):
Buried within the walls of London: 3386
Whereof the plague: 1
Buried outside the walls: 5924
Whereof the plague: 5
Buried in total: 9310
Whereof the plague: 6
This illustrates well that the essence of descriptive statistics is counting. From all the parish registers, he counted the number of persons who died, and who died of the plague.
Because the numbers sometimes were rather too large to comprehend, he also simplified them. For the year 1625, which had 51758 deaths, of which 35417 were of the plague, he wrote: "we finde the Plague
to bear unto the whole in proportion as 35 to 51. or 7 to 10." With these approximations, he is introducing the concept that relative proportions are sometimes of more interest than the raw numbers.
We would generally express the above proportion as 70%.
As a first exercise in descriptive statistics, you should be able to construct a table showing the frequency (raw count) of various events, and also express the results as relative frequencies
(percentages of the whole). Exercise:
In 1625 in Margarets Lothbury 114 persons died, of which 64 deaths were attributed to the plague; in Margarets Moses 37 persons died, of which 25 deaths were attributed to the plague; in Margarets
new Fishstreet 123 persons died, of which 82 deaths were attributed to the plague; and in Margarets Pattons 77 persons died, of which 50 deaths were attributed to the plague. Present this information
in a table, and represent it as relative frequencies.
Note that we have used the raw count data to calculate relative frequencies. If we only know the relative frequencies, we cannot calculate the raw count data (unless we also know the total number of observations).
A graphical presentation is often easier to comprehend than a table. Bar charts and pie charts are the most common graphical presentations. We will illustrate these in the case one has 7 green balls,
10 red balls, and 3 yellow balls. The number of each type (i.e., 7, 10, 3) is called the frequency. Such information can be communicated graphically with a bar chart or pie chart as follows:
    10 _          __
                 |  |
                 |  |
            __   |  |
number     |  |  |  |
of   5 _   |  |  |  |
balls      |  |  |  |
           |  |  |  |   __
           |  |  |  |  |  |
           |  |  |  |  |  |
             G     R     Y
            color of balls
      Frequency of Ball Colors
Note that we may divide the number (frequency) of each type by the total number (in this case 7+10+3=20) to get the percent or relative frequency of each type. This information can also be displayed
in a bar chart or histogram:
   50% _          __
                 |  |
                 |  |
            __   |  |
percent    |  |  |  |
of 25% _   |  |  |  |
balls      |  |  |  |
           |  |  |  |   __
           |  |  |  |  |  |
           |  |  |  |  |  |
             G     R     Y
            color of balls
  Relative Frequency of Ball Colors
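The relative frequencies shown in this chart can be checked with a short program; this sketch is ours and is not part of the original page:

```java
public class BallFrequencies {
    public static void main(String[] args) {
        String[] colors = {"Green", "Red", "Yellow"};
        int[] counts = {7, 10, 3};                 // frequencies from the example above

        int total = 0;
        for (int c : counts) total += c;            // 7 + 10 + 3 = 20 balls in all

        for (int i = 0; i < colors.length; i++) {
            double relative = 100.0 * counts[i] / total;   // relative frequency in percent
            System.out.printf("%-7s %2d  %5.1f%%%n", colors[i], counts[i], relative);
        }
        // Prints 35.0%, 50.0%, and 15.0%, matching the bar chart.
    }
}
```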
Note that the frequency or relative frequency of any sort of characteristic can be displayed with a bar or pie chart. Note also that some information which is not count information (such as miles per
gallon of different cars) can be displayed as a bar chart, but cannot be displayed as a pie chart since the information is not parts of a whole. Pie charts are only appropriate when data can be
interpreted as parts of a whole.
                  __
                 |  |
    20 _    __   |  |
MPG        |  |  |  |   __
           |  |  |  |  |  |
           |  |  |  |  |  |
            GM    BMW   Ford
             Make of car
Exercise: Represent the mortality data for the Margarets parishes with bar charts and pie charts; present some information which can be displayed with a bar chart, but not a pie chart.
Competency: Represent the data set {CJJMCJMMJCMMM} of the religion (Christian, Muslim, Jewish) of 13 people interviewed in Jerusalem with both a bar chart and a pie chart; label them with both
absolute and relative frequencies.
Identify some information that could be displayed with a bar chart, but not a pie chart.
Reflection: What are all the decisions which you must make in order to represent a data set with a bar chart or pie chart?
What are the advantages and disadvantages of a bar chart versus pie chart?
Challenge: If you had the per capita income of the New England states, how could you modify that information to represent it in a pie chart?
13 May 2003 | {"url":"http://cns2.uni.edu/~campbell/mdm/origin.html","timestamp":"2014-04-17T21:48:10Z","content_type":null,"content_length":"6167","record_id":"<urn:uuid:01a51f0a-892c-4128-8cb8-b139889eb1e8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00467-ip-10-147-4-33.ec2.internal.warc.gz"} |
Class Duration
All Implemented Interfaces:
Serializable, Comparable<ReadableDuration>, ReadableDuration
public final class Duration
extends BaseDuration
implements ReadableDuration, Serializable
An immutable duration specifying a length of time in milliseconds.
A duration is defined by a fixed number of milliseconds. There is no concept of fields, such as days or seconds, as these fields can vary in length. A duration may be converted to a Period to obtain
field values. This conversion will typically cause a loss of precision however.
Duration is thread-safe and immutable.
Author:
Brian S O'Neill, Stephen Colebourne
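As a quick, unofficial illustration of the behavior described above (this sketch is ours, not part of the Joda-Time documentation), the following code constructs a few durations and reads them back:

```java
import org.joda.time.Duration;

public class DurationExamples {
    public static void main(String[] args) {
        // A fixed length of time in milliseconds.
        Duration twoAndAHalfSeconds = new Duration(2500L);

        // Factory methods using the standard field lengths described below.
        Duration threeDays = Duration.standardDays(3);

        // Reading the length back; integer division truncates excess millis.
        System.out.println(twoAndAHalfSeconds.getStandardSeconds()); // 2
        System.out.println(threeDays.getStandardHours());            // 72

        // Durations are immutable: plus() returns a new instance.
        Duration total = threeDays.plus(twoAndAHalfSeconds);
        System.out.println(total.getMillis()); // 259202500
    }
}
```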
│ Field Summary │
│ static Duration │ ZERO │
│ │ Constant representing zero millisecond duration │
│ Constructor Summary │
│ Duration(long duration) │ │
│ Creates a duration from the given millisecond duration. │ │
│ Duration(long startInstant, long endInstant) │ │
│ Creates a duration from the given interval endpoints. │ │
│ Duration(Object duration) │ │
│ Creates a duration from the specified object using the ConverterManager. │ │
│ Duration(ReadableInstant start, ReadableInstant end) │ │
│ Creates a duration from the given interval endpoints. │ │
│ Method Summary │
│ long │ getStandardDays() │
│ │ Gets the length of this duration in days assuming that there are the standard number of milliseconds in a day. │
│ long │ getStandardHours() │
│ │ Gets the length of this duration in hours assuming that there are the standard number of milliseconds in an hour. │
│ long │ getStandardMinutes() │
│ │ Gets the length of this duration in minutes assuming that there are the standard number of milliseconds in a minute. │
│ long │ getStandardSeconds() │
│ │ Gets the length of this duration in seconds assuming that there are the standard number of milliseconds in a second. │
│ static Duration │ millis(long millis) │
│ │ Create a duration with the specified number of milliseconds. │
│ Duration │ minus(long amount) │
│ │ Returns a new duration with this length minus that specified. │
│ Duration │ minus(ReadableDuration amount) │
│ │ Returns a new duration with this length minus that specified. │
│ static Duration │ parse(String str) │
│ │ Parses a Duration from the specified string. │
│ Duration │ plus(long amount) │
│ │ Returns a new duration with this length plus that specified. │
│ Duration │ plus(ReadableDuration amount) │
│ │ Returns a new duration with this length plus that specified. │
│ static Duration │ standardDays(long days) │
│ │ Create a duration with the specified number of days assuming that there are the standard number of milliseconds in a day. │
│ static Duration │ standardHours(long hours) │
│ │ Create a duration with the specified number of hours assuming that there are the standard number of milliseconds in an hour. │
│ static Duration │ standardMinutes(long minutes) │
│ │ Create a duration with the specified number of minutes assuming that there are the standard number of milliseconds in a minute. │
│ static Duration │ standardSeconds(long seconds) │
│ │ Create a duration with the specified number of seconds assuming that there are the standard number of milliseconds in a second. │
│ Duration │ toDuration() │
│ │ Get this duration as an immutable Duration object by returning this. │
│ Days │ toStandardDays() │
│ │ Converts this duration to a period in days assuming that there are the standard number of milliseconds in a day. │
│ Hours │ toStandardHours() │
│ │ Converts this duration to a period in hours assuming that there are the standard number of milliseconds in an hour. │
│ Minutes │ toStandardMinutes() │
│ │ Converts this duration to a period in minutes assuming that there are the standard number of milliseconds in a minute. │
│ Seconds │ toStandardSeconds() │
│ │ Converts this duration to a period in seconds assuming that there are the standard number of milliseconds in a second. │
│ Duration │ withDurationAdded(long durationToAdd, int scalar) │
│ │ Returns a new duration with this length plus that specified multiplied by the scalar. │
│ Duration │ withDurationAdded(ReadableDuration durationToAdd, int scalar) │
│ │ Returns a new duration with this length plus that specified multiplied by the scalar. │
│ Duration │ withMillis(long duration) │
│ │ Creates a new Duration instance with a different milisecond length. │
│ Methods inherited from class org.joda.time.base.BaseDuration │
│ getMillis, setMillis, toIntervalFrom, toIntervalTo, toPeriod, toPeriod, toPeriod, toPeriodFrom, toPeriodFrom, toPeriodTo, toPeriodTo │
public static final Duration ZERO
Constant representing zero millisecond duration
public Duration(long duration)
Creates a duration from the given millisecond duration.
duration - the duration, in milliseconds
public Duration(long startInstant,
long endInstant)
Creates a duration from the given interval endpoints.
startInstant - interval start, in milliseconds
endInstant - interval end, in milliseconds
ArithmeticException - if the duration exceeds a 64 bit long
public Duration(ReadableInstant start,
ReadableInstant end)
Creates a duration from the given interval endpoints.
start - interval start, null means now
end - interval end, null means now
ArithmeticException - if the duration exceeds a 64 bit long
public Duration(Object duration)
Creates a duration from the specified object using the ConverterManager.
duration - duration to convert
IllegalArgumentException - if duration is invalid
public static Duration parse(String str)
Parses a Duration from the specified string.
This parses the format PTa.bS, as per AbstractDuration.toString().
str - the string to parse, not null
public static Duration standardDays(long days)
Create a duration with the specified number of days assuming that there are the standard number of milliseconds in a day.
This method assumes that there are 24 hours in a day, 60 minutes in an hour, 60 seconds in a minute and 1000 milliseconds in a second. This will be true for most days, however days with Daylight
Savings changes will not have 24 hours, so use this method with care.
A Duration is a representation of an amount of time. If you want to express the concepts of 'days' you should consider using the Days class.
days - the number of standard days in this duration
the duration, never null
ArithmeticException - if the days value is too large
public static Duration standardHours(long hours)
Create a duration with the specified number of hours assuming that there are the standard number of milliseconds in an hour.
This method assumes that there are 60 minutes in an hour, 60 seconds in a minute and 1000 milliseconds in a second. All currently supplied chronologies use this definition.
A Duration is a representation of an amount of time. If you want to express the concepts of 'hours' you should consider using the Hours class.
hours - the number of standard hours in this duration
the duration, never null
ArithmeticException - if the hours value is too large
public static Duration standardMinutes(long minutes)
Create a duration with the specified number of minutes assuming that there are the standard number of milliseconds in a minute.
This method assumes that there are 60 seconds in a minute and 1000 milliseconds in a second. All currently supplied chronologies use this definition.
A Duration is a representation of an amount of time. If you want to express the concepts of 'minutes' you should consider using the Minutes class.
minutes - the number of standard minutes in this duration
the duration, never null
ArithmeticException - if the minutes value is too large
public static Duration standardSeconds(long seconds)
Create a duration with the specified number of seconds assuming that there are the standard number of milliseconds in a second.
This method assumes that there are 1000 milliseconds in a second. All currently supplied chronologies use this definition.
A Duration is a representation of an amount of time. If you want to express the concepts of 'seconds' you should consider using the Seconds class.
seconds - the number of standard seconds in this duration
the duration, never null
ArithmeticException - if the seconds value is too large
public static Duration millis(long millis)
Create a duration with the specified number of milliseconds.
millis - the number of standard milliseconds in this duration
the duration, never null
public long getStandardDays()
Gets the length of this duration in days assuming that there are the standard number of milliseconds in a day.
This method assumes that there are 24 hours in a day, 60 minutes in an hour, 60 seconds in a minute and 1000 milliseconds in a second. This will be true for most days, however days with Daylight
Savings changes will not have 24 hours, so use this method with care.
This returns getMillis() / MILLIS_PER_DAY. The result is an integer division, thus excess milliseconds are truncated.
the length of the duration in standard seconds
public long getStandardHours()
Gets the length of this duration in hours assuming that there are the standard number of milliseconds in an hour.
This method assumes that there are 60 minutes in an hour, 60 seconds in a minute and 1000 milliseconds in a second. All currently supplied chronologies use this definition.
This returns getMillis() / MILLIS_PER_HOUR. The result is an integer division, thus excess milliseconds are truncated.
the length of the duration in standard seconds
public long getStandardMinutes()
Gets the length of this duration in minutes assuming that there are the standard number of milliseconds in a minute.
This method assumes that there are 60 seconds in a minute and 1000 milliseconds in a second. All currently supplied chronologies use this definition.
This returns getMillis() / 60000. The result is an integer division, thus excess milliseconds are truncated.
the length of the duration in standard minutes
public long getStandardSeconds()
Gets the length of this duration in seconds assuming that there are the standard number of milliseconds in a second.
This method assumes that there are 1000 milliseconds in a second. All currently supplied chronologies use this definition.
This returns getMillis() / 1000. The result is an integer division, so 2999 millis returns 2 seconds.
the length of the duration in standard seconds
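A small sketch (again mine, not from the Javadoc; the class name is illustrative) of the truncating integer division these getters perform:

import org.joda.time.Duration;

public class DurationGetterDemo {
    public static void main(String[] args) {
        // 2999 ms is just under 3 seconds; excess milliseconds are truncated.
        System.out.println(Duration.millis(2999).getStandardSeconds()); // 2

        // 150 standard minutes is 9,000,000 ms, i.e. 2.5 standard hours.
        Duration d = Duration.standardMinutes(150);
        System.out.println(d.getStandardHours());   // 2 (truncated)
        System.out.println(d.getStandardMinutes()); // 150
    }
}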
public Duration toDuration()
Get this duration as an immutable Duration object by returning this.
Specified by:
toDuration in interface ReadableDuration
toDuration in class AbstractDuration
public Days toStandardDays()
Converts this duration to a period in days assuming that there are the standard number of milliseconds in a day.
This method assumes that there are 24 hours in a day, 60 minutes in an hour, 60 seconds in a minute and 1000 milliseconds in a second. This will be true for most days; however, days with Daylight Savings changes will not have 24 hours, so use this method with care.
a period representing the number of standard days in this period, never null
ArithmeticException - if the number of days is too large to be represented
public Hours toStandardHours()
Converts this duration to a period in hours assuming that there are the standard number of milliseconds in an hour.
This method assumes that there are 60 minutes in an hour, 60 seconds in a minute and 1000 milliseconds in a second. All currently supplied chronologies use this definition.
a period representing the number of standard hours in this period, never null
ArithmeticException - if the number of hours is too large to be represented
public Minutes toStandardMinutes()
Converts this duration to a period in minutes assuming that there are the standard number of milliseconds in a minute.
This method assumes that there are 60 seconds in a minute and 1000 milliseconds in a second. All currently supplied chronologies use this definition.
a period representing the number of standard minutes in this period, never null
ArithmeticException - if the number of minutes is too large to be represented
public Seconds toStandardSeconds()
Converts this duration to a period in seconds assuming that there are the standard number of milliseconds in a second.
This method assumes that there are 1000 milliseconds in a second. All currently supplied chronologies use this definition.
a period representing the number of standard seconds in this period, never null
ArithmeticException - if the number of seconds is too large to be represented
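An illustrative sketch of the conversions to single-field periods (my own example; I am assuming the toStandardXxx methods truncate excess milliseconds the same way the getStandardXxx getters do, and the class name is arbitrary):

import org.joda.time.Duration;
import org.joda.time.Hours;
import org.joda.time.Seconds;

public class DurationToPeriodDemo {
    public static void main(String[] args) {
        Duration d = Duration.standardMinutes(90); // 5,400,000 ms
        Hours hours = d.toStandardHours();         // 90 minutes contains one whole standard hour
        Seconds seconds = d.toStandardSeconds();   // 5400 standard seconds
        System.out.println(hours.getHours());      // 1
        System.out.println(seconds.getSeconds());  // 5400
    }
}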
public Duration withMillis(long duration)
Creates a new Duration instance with a different millisecond length.
duration - the new length of the duration
the new duration instance
public Duration withDurationAdded(long durationToAdd,
int scalar)
Returns a new duration with this length plus that specified multiplied by the scalar. This instance is immutable and is not altered.
If the addition is zero, this instance is returned.
durationToAdd - the duration to add to this one
scalar - the amount of times to add, such as -1 to subtract once
the new duration instance
public Duration withDurationAdded(ReadableDuration durationToAdd,
int scalar)
Returns a new duration with this length plus that specified multiplied by the scalar. This instance is immutable and is not altered.
If the addition is zero, this instance is returned.
durationToAdd - the duration to add to this one, null means zero
scalar - the amount of times to add, such as -1 to subtract once
the new duration instance
public Duration plus(long amount)
Returns a new duration with this length plus that specified. This instance is immutable and is not altered.
If the addition is zero, this instance is returned.
amount - the duration to add to this one
the new duration instance
public Duration plus(ReadableDuration amount)
Returns a new duration with this length plus that specified. This instance is immutable and is not altered.
If the amount is zero, this instance is returned.
amount - the duration to add to this one, null means zero
the new duration instance
public Duration minus(long amount)
Returns a new duration with this length minus that specified. This instance is immutable and is not altered.
If the amount is zero, this instance is returned.
amount - the duration to take away from this one
the new duration instance
public Duration minus(ReadableDuration amount)
Returns a new duration with this length minus that specified. This instance is immutable and is not altered.
If the amount is zero, this instance is returned.
amount - the duration to take away from this one, null means zero
the new duration instance
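Finally, a brief sketch (mine, with an arbitrary class name) of the arithmetic methods; as the descriptions above state, every call returns a new instance and the original is left untouched:

import org.joda.time.Duration;

public class DurationArithmeticDemo {
    public static void main(String[] args) {
        Duration fiveMinutes = Duration.standardMinutes(5);                // 300,000 ms
        Duration longer = fiveMinutes.plus(Duration.standardSeconds(30));  // 330,000 ms
        Duration shorter = fiveMinutes.minus(90000L);                      // 210,000 ms
        // A scalar of -2 subtracts the amount twice: 300,000 - 2 * 60,000 ms.
        Duration adjusted = fiveMinutes.withDurationAdded(Duration.standardMinutes(1), -2);
        System.out.println(longer.getMillis());      // 330000
        System.out.println(shorter.getMillis());     // 210000
        System.out.println(adjusted.getMillis());    // 180000
        System.out.println(fiveMinutes.getMillis()); // 300000 -- unchanged
    }
}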
subexpressions (OT: math)
Stebanoid at gmail.com Stebanoid at gmail.com
Sun Jun 3 20:07:25 CEST 2007
On 3 Jun, 21:43, Gary Herron <gher... at islandtraining.com> wrote:
> Steban... at gmail.com wrote:
> > angle is dimensionless unit.
> Of course not! Angles have units, commonly either degrees or radians.
> However, sines and cosines, being ratios of two lengths, are unit-less.
> > To understand it: sin() can't have dimensioned argument. It is can't
> > to be - sin(meters)
> No it's sin(radians) or sin(degrees).
> > it is difficult to invent what is a "sqrt from a angle" but it can be.
> I don't know of any name for the units of "sqrt of angle", but that
> doesn't invalidate the claim that the value *is* a dimensioned
> quantity. In lieu of a name, we'd have to label such a quantity as
> "sqrt of degrees" or "sqrt of radians". After all, we do the same
> thing for measures of area. We have some units of area like "acre", but
> usually we label areas with units like "meters squared" or "square
> meters". That's really no stranger than labeling a quantity as "sqrt
> of degrees".
> Gary Herron, PhD.
> Department of Computer Science
> DigiPen Institute of Technology
An angle is a ratio of two lengths and is dimensionless.
Only dimensionless values can be arguments of a sine or an exponential!
Do you disagree?
Interpret this SAT math question.
this is level 5. I got all number 5 Math questions except this.
If k is a positive integer, which of the following must represent an even integer that is twice the value of an odd integer?
a) 2k
b) 2k+3
c) 2k+4
d) 4k+1
e) 4k+2
what in the world is the question asking? Will be much obliged for an answer.
is the answer E? if it is I'll explain, but if not...
Yes. E=4k+2=2(2k+1). Done.
2k + 1 is always one more than an even integer, thus it's odd; multiply it by two and you get 4k+2
Yes, on these even/odd problems, it's good to remember/relearn the basic rules dealing with even/odd numbers, unless you can easily come up with them on your own. 2x is always even, 2x+1 is always
odd, odd + odd = even, odd * odd=odd etc.
thanks everyone:
the wording kind of confused me.
the answer is E.
Last week I was at Oberwolfach for a meeting on geometric group theory. My friend and collaborator Koji Fujiwara gave a very nice talk about constructing actions of groups on quasi-trees (i.e. spaces
quasi-isometric to trees). The construction is inspired by the famous subsurface projection construction, due to Masur-Minsky, which was a key step in their proof that the complex of curves (a
natural simplicial complex on which the mapping class group acts cocompactly) is hyperbolic. Koji’s talk was very stimulating, and shook up my thinking about a few related matters; the purpose of
this blog post is therefore for me to put some of my thoughts in order: to describe the Masur-Minsky construction, to point out a connection to certain geometric phenomena like winding numbers of
curves on surfaces, and to note that a variation on their construction gives rise directly to certain natural chiral invariants of surface automorphisms (and their generalizations) which should be
relevant to 4-manifold topologists.
On page 10 of Besse’s famous book on Einstein manifolds one finds the following quote:
It would seem that Riemannian and Lorentzian geometry have much in common: canonical connections, geodesics, curvature tensor, etc. . . . But in fact this common part is only a common disposition
at the onset: one soon enters different realms.
I will not dispute this. But it is not clear to me whether this divergence is a necessary consequence of the nature of the objects of study (in either case), or an artefact of the schism between
mathematics and physics during much of the 20th century. In any case, in this blog post I have the narrow aim of describing some points of contact between Lorentzian (and more generally, causal)
geometry and other geometries (hyperbolic, symplectic), which plays a significant role in some of my research.
The first point of contact is the well-known duality between geodesics in the hyperbolic plane and points in the (projectivized) “anti de-Sitter plane”. Let $\mathbb{R}^{2,1}$ denote a $3$
-dimensional vector space equipped with a quadratic form
$q(x,y,z) = x^2 + y^2 - z^2$
If we think of the set of rays through the origin as a copy of the real projective plane $\mathbb{RP}^2$, the hyperbolic plane is the set of projective classes of vectors $v$ with $q(v)<0$, the
(projectivized) anti de-Sitter plane is the set of projective classes of vectors $v$ with $q(v)>0$, and their common boundary is the set of projective classes of (nonzero) vectors $v$ with $q(v)=0$.
Topologically, the hyperbolic plane is an open disk, the anti de-Sitter plane is an open Möbius band, and their boundary is the “ideal circle” (note: what people usually call the anti de-Sitter plane
is actually the annulus double-covering this Möbius band; this is like the distinction between spherical geometry and elliptic geometry). Geometrically, the hyperbolic plane is a complete Riemannian
surface of constant curvature $-1$, whereas the anti de-Sitter plane is a complete Lorentzian surface of constant curvature $-1$.
In this projective model, a hyperbolic geodesic $\gamma$ is an open straight line segment which is compactified by adding an unordered pair of points in the ideal circle. The straight lines in the
anti de-Sitter plane tangent to the ideal circle at these two points intersect at a point $p_\gamma$. Moreover, the set of geodesics $\gamma$ in the hyperbolic plane passing through a point $q$ are
dual to the set of points $p_\gamma$ in the anti de-Sitter plane that lie on a line which does not intersect the ideal circle. In the figure, three concurrent hyperbolic geodesics are dual to three
colinear anti de-Sitter points.
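To make the duality concrete, here is a small computation of my own in the coordinates above. Take the geodesic $\gamma$ cut out by the projective line $y=0$, i.e. the projective classes $[x:0:z]$ with $x^2 < z^2$. Its ideal endpoints are $[1:0:1]$ and $[1:0:-1]$ on the conic $q=0$. Writing $Q = \mathrm{diag}(1,1,-1)$, the tangent line to the conic at a point $p$ is $\{v : p^T Q v = 0\}$, so the tangents at the two endpoints are the lines $x-z=0$ and $x+z=0$. These meet at $p_\gamma = [0:1:0]$, and indeed $q(0,1,0) = 1 > 0$, so the dual point lies in the anti de-Sitter plane, as it should.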
The anti de-Sitter geometry has a natural causal structure. There is a cone field whose extremal vectors at every point $p$ are tangent to the straight lines through $p$ that are also tangent to the
ideal circle. A smooth curve is timelike if its tangent at every point is supported by this cone field, and spacelike if its tangent is everywhere not supported by the cone field. A timelike curve
corresponds to a family of hyperbolic geodesics which locally intersect each other; a spacelike curve corresponds to a family of disjoint hyperbolic geodesics that foliate some region.
One can distinguish (locally) between future and past along a timelike trajectory, by (arbitrarily) identifying the “future” direction with a curve which winds positively around the ideal circle. The
fact that one can distinguish in a consistent way between the positive and negative direction is equivalent to the existence of a nonzero section of timelike vectors. On the other hand, there does
not exist a nonzero section of spacelike vectors, so one cannot distinguish in a consistent way between left and right (this is a manifestation of the non-orientability of the Möbius band).
The duality between the hyperbolic plane and the anti de-Sitter plane is a manifestation of the fact that (at least at the level of Lie algebras) they have the same (infinitesimal) symmetries. Let $O
(2,1)$ denote the group of real $3\times 3$ matrices which preserve $q$; i.e. matrices $A$ for which $q(A(v)) = q(v)$ for all vectors $v$. This contains a subgroup $SO^+(2,1)$ of index $4$ which
preserves the “positive sheet” of the hyperboloid $q=-1$, and acts on it in an orientation-preserving way. The hyperbolic plane is the homogeneous space for this group whose point stabilizers are a
copy of $SO(2)$ (which acts as an elliptic “rotation” of the tangent space to their common fixed point). The anti de-Sitter plane is the homogeneous space for this group whose point stabilizers are a
copy of $SO^+(1,1)$ (which acts as a hyperbolic “translation” of the geodesic in hyperbolic space dual to the given point in anti de-Sitter space). The ideal circle is the homogeneous space whose
point stabilizers are a copy of the affine group of the line. The hyperbolic plane admits a natural Riemannian metric, and the anti de-Sitter plane a Lorentz metric, which are invariant under these
group actions. The causal structure on the anti de-Sitter plane limits to a causal structure on the ideal circle.
Now consider the $4$-dimensional vector space $\mathbb{R}^{2,2}$ and the quadratic form $q(v) = x^2 + y^2 - z^2 - w^2$. The ($3$-dimensional) sheets $q=1$ and $q=-1$ both admit homogeneous Lorentz
metrics whose point stabilizers are copies of $SO^+(1,2)$ and $SO^+(2,1)$ (which are isomorphic but sit in $SO(2,2)$ in different ways). These $3$-manifolds are compactified by adding the
projectivization of the cone $q=0$. Topologically, this is a Clifford torus in $\mathbb{RP}^3$ dividing this space into two open solid tori which can be thought of as two Lorentz $3$-manifolds. The
causal structure on the pair of Lorentz manifolds limits to a pair of complementary causal structures on the Clifford torus. (edited 12/10)
Let’s go one dimension higher, to the $5$-dimensional vector space $\mathbb{R}^{2,3}$ and the quadratic form $q(v) = x^2 + y^2 - u^2 - z^2 - w^2$. Now only the sheet $q=1$ is a Lorentz manifold,
whose point stabilizers are copies of $SO^+(1,3)$, with an associated causal structure. The projectivized cone $q=0$ is a non-orientable twisted $S^2$ bundle over the circle, and it inherits a causal
structure in which the sphere factors are spacelike, and the circle direction is timelike. This ideal boundary can be thought of in quite a different way, because of the exceptional isomorphism at
the level of (real) Lie algebras $so(2,3)= sp(4)$, where $sp(4)$ denotes the Lie algebra of the symplectic group in dimension $4$. In this manifestation, the ideal boundary is usually denoted $\
mathcal{L}_2$, and can be thought of as the space of Lagrangian planes in $\mathbb{R}^4$ with its usual symplectic form. One way to see this is as follows. The wedge product is a symmetric bilinear
form on $\Lambda^2 \mathbb{R}^4$ with values in $\Lambda^4 \mathbb{R}^4 = \mathbb{R}$. The associated quadratic form vanishes precisely on the “pure” $2$-forms — i.e. those associated to planes. The
condition that the wedge of a given $2$-form with the symplectic form vanishes imposes a further linear condition. So the space of Lagrangian $2$-planes is a quadric in $\mathbb{RP}^4$, and one may
verify that the signature of the underlying quadratic form is $(2,3)$. The causal structure manifests in symplectic geometry in the following way. A choice of a Lagrangian plane $\pi$ lets us
identify symplectic $\mathbb{R}^4$ with the cotangent bundle $T^*\pi$. To each symmetric homogeneous quadratic form $q$ on $\pi$ (thought of as a smooth function) is associated a linear Lagrangian
subspace of $T^*\pi$, namely the (linear) section $dq$. Every Lagrangian subspace transverse to the fiber over $0$ is of this form, so this gives a parameterization of an open, dense subset of $\
mathcal{L}_2$ containing the point $\pi$. The set of positive definite quadratic forms is tangent to an open cone in $T_\pi \mathcal{L}_2$; the field of such cones as $\pi$ varies defines a causal
structure on $\mathcal{L}_2$ which agrees with the causal structure defined above.
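Here is the signature computation, in a basis of my own choosing. Write a $2$-form as $\omega = a\,e_{12} + b\,e_{34} + c\,e_{13} + d\,e_{24} + e\,e_{14} + f\,e_{23}$ (where $e_{ij} = e_i \wedge e_j$) and take the symplectic form to be $\omega_0 = e_{12} + e_{34}$. Then $\omega \wedge \omega = 2(ab - cd + ef)\,e_{1234}$ and $\omega \wedge \omega_0 = (a+b)\,e_{1234}$. The quadratic form $ab - cd + ef$ on $\mathbb{R}^6$ has signature $(3,3)$; restricting to the hyperplane $a+b=0$ and substituting $b=-a$ gives $-a^2 - cd + ef$, which has signature $(2,3)$, as claimed.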
These examples can be generalized to higher dimension, via the orthogonal groups $SO(n,2)$ or the symplectic groups $Sp(2n,\mathbb{R})$. As well as two other infinite families (which I will not
discuss) there is a beautiful “sporadic” example, connected to what Freudenthal called octonion symplectic geometry associated to the noncompact real form $E_7(-25)$ of the exceptional Lie group,
where the ideal boundary $S^1\times E_6/F_4$ has an invariant causal structure whose timelike curves wind around the $S^1$ factor; see e.g. Clerc-Neeb for a more thorough discussion of the theory of
Shilov boundaries from the causal geometry point of view, or see here or here for a discussion of the relationship between the octonions and the exceptional Lie groups.
The causal structure on these ideal boundaries gives rise to certain natural $2$-cocycles on their groups of automorphisms. Note in each case that the ideal boundary has the topological structure of
a bundle over $S^1$ with spacelike fibers. Thus each closed timelike curve has a well-defined winding number, which is just the number of times it intersects any one of these spacelike slices. Let
$C$ be an ideal boundary as above, and let $\tilde{C}$ denote the cyclic cover dual to a spacelike slice. If $p$ is a point in $\tilde{C}$, we let $p+n$ denote the image of $p$ under the $n$th power
of the generator of the deck group of the covering. If $g$ is a homeomorphism of $C$ preserving the causal structure, we can lift $g$ to a homeomorphism $\tilde{g}$ of $\tilde{C}$. For any such lift,
define the rotation number of $\tilde{g}$ as follows: for any point $p \in \tilde{C}$ and any integer $n$, let $r_n$ be the smallest integer for which there is a causal curve from $p$ to $\tilde{g}^n(p)$ to $p+r_n$, and then define $rot(\tilde{g}) = \lim_{n \to \infty} r_n/n$. This function is a quasimorphism on the group of causal automorphisms of $\tilde{C}$, with defect equal to the least
integer $n$ such that any two points $p,q$ in $C$ are contained in a closed causal loop with winding number $n$. In the case of the symplectic group $Sp(2n,\mathbb{R})$ with causal boundary $\mathcal
{L}_n$, the defect is $n$, and the rotation number is (sometimes) called the symplectic rotation number; it is a quasimorphism on the universal central extension of $Sp(2n,\mathbb{R})$, whose
coboundary descends to the Maslov class (an element of $2$-dimensional bounded cohomology) on the symplectic group.
Causal structures in groups of symplectomorphisms or contactomorphisms are intensely studied; see for instance this paper by Eliashberg-Polterovich.
Last week, Michael Brandenbursky from the Technion gave a talk at Caltech on an interesting connection between knot theory and quasimorphisms. Michael’s paper on this subject may be obtained from the
arXiv. Recall that given a group $G$, a quasimorphism is a function $\phi:G \to \mathbb{R}$ for which there is some least real number $D(\phi) \ge 0$ (called the defect) such that for all pairs of
elements $g,h \in G$ there is an inequality $|\phi(gh) - \phi(g) - \phi(h)| \le D(\phi)$. Bounded functions are quasimorphisms, although in an uninteresting way, so one is usually only interested in
quasimorphisms up to the equivalence relation that $\phi \sim \psi$ if the difference $|\phi - \psi|$ is bounded. It turns out that each equivalence class of quasimorphism contains a unique
representative which has the extra property that $\phi(g^n) = n\phi(g)$ for all $g\in G$ and $n \in \mathbb{Z}$. Such quasimorphisms are said to be homogeneous. Any quasimorphism may be homogenized
by defining $\overline{\phi}(g) = \lim_{n \to \infty} \phi(g^n)/n$ (see e.g. this post for more about quasimorphisms, and their relation to stable commutator length).
Many groups that do not admit many homomorphisms to $\mathbb{R}$ nevertheless admit rich families of homogeneous quasimorphisms. For example, groups that act weakly properly discontinuously on
word-hyperbolic spaces admit infinite dimensional families of homogeneous quasimorphisms; see e.g. Bestvina-Fujiwara. This includes hyperbolic groups, but also mapping class groups and braid groups,
which act on the complex of curves.
Michael discussed another source of quasimorphisms on braid groups, those coming from knot theory. Let $I$ be a knot invariant. Then one can extend $I$ to an invariant of pure braids on $n$ strands
by $I(\alpha) = I(\widehat{\alpha \Delta})$ where $\Delta = \sigma_1 \cdots \sigma_{n-1}$, and the “hat” denotes plat closure. It is an interesting question to ask: under what conditions on $I$ is
the resulting function on braid groups a quasimorphism?
In the abstract, such a question is probably very hard to answer, so one should narrow the question by concentrating on knot invariants of a certain kind. Since one wants the resulting invariants to
have some relation to the algebraic structure of braid groups, it is natural to look for functions which factor through certain algebraic structures on knots; Michael was interested in certain
homomorphisms from the knot concordance group to $\mathbb{R}$. We briefly describe this group, and a natural class of homomorphisms.
Two oriented knots $K_1,K_2$ in the $3$-sphere are said to be concordant if there is a (locally flat) properly embedded annulus $A$ in $S^3 \times [0,1]$ with $A \cap S^3 \times 0 = K_1$ and $A \cap
S^3 \times 1 = K_2$. Concordance is an equivalence relation, and the equivalence classes form a group, with connect sum as the group operation, and orientation-reversed mirror image as inverse. The
only subtle aspect of this is the existence of inverses, which we briefly explain. Let $K$ be an arbitrary knot, and let $K^!$ denote the mirror image of $K$ with the opposite orientation. Arrange $K
\cup K^!$ in space so that they are symmetric with respect to reflection in a dividing plane. There is an immersed annulus $A$ in $S^3$ which connects each point on $K$ to its mirror image on $K^!$,
and the self-intersections of this annulus are all disjoint embedded arcs, corresponding to the crossings of $K$ in the projection to the mirror. This annulus is an example of what is called a ribbon
surface. Connect summing $K$ to $K^!$ by pushing out a finger of each into an arc in the mirror connects the ribbon annulus to a ribbon disk spanning $K \# K^!$. A ribbon surface (in particular, a
ribbon disk) can be pushed into a (smoothly) embedded surface in a $4$-ball bounding $S^3$. Puncturing the $4$-ball at some point on this smooth surface, one obtains a concordance from $K\#K^!$ to
the unknot, as claimed.
The resulting group is known as the concordance group $\mathcal{C}$ of knots. Since connect sum is commutative, this group is abelian. Notice as above that a slice knot — i.e. a knot bounding a
locally flat disk in the $4$-ball — is concordant to the unknot. Ribbon knots (those bounding ribbon disks) are smoothly slice, and therefore slice, and therefore concordant to the trivial knot.
Concordance makes sense for codimension two knots in any dimension. In higher even dimensions, knots are always slice, and in higher odd dimensions, Levine found an algebraic description of the
concordance groups in terms of (Witt) equivalence classes of linking pairings on a Seifert surface; (some of) this information is contained in the signature of a knot.
Let $K$ be a knot (in $S^3$ for simplicity) with Seifert surface $\Sigma$ of genus $g$. If $\alpha,\beta$ are loops in $\Sigma$, define $f(\alpha,\beta)$ to be the linking number of $\alpha$ with $\
beta^+$, which is obtained from $\beta$ by pushing it to the positive side of $\Sigma$. The function $f$ is a bilinear form on $H_1(\Sigma)$, and after choosing generators, it can be expressed in
terms of a matrix $V$ (called the Seifert matrix of $K$). The signature of $K$, denoted $\sigma(K)$, is the signature (in the usual sense) of the symmetric matrix $V + V^T$. Changing the orientation
of a knot does not affect the signature, whereas taking mirror image multiplies it by $-1$. Moreover, if $\Sigma_1,\Sigma_2$ are Seifert surfaces for $K_1,K_2$, one can form a Seifert surface $\
Sigma$ for $K_1 \# K_2$ for which there is some sphere $S^2 \in S^3$ that intersects $\Sigma$ in a separating arc, so that the pieces on either side of the sphere are isotopic to the $\Sigma_i$, and
therefore the Seifert matrix of $K_1 \# K_2$ can be chosen to be block diagonal, with one block for each of the Seifert matrices of the $K_i$; it follows that $\sigma(K_1 \# K_2) = \sigma(K_1) + \
sigma(K_2)$. In fact it turns out that $\sigma$ is a homomorphism from $\mathcal{C}$ to $\mathbb{Z}$; equivalently (by the arguments above), it is zero on knots which are topologically slice. To see
this, suppose $K$ bounds a locally flat disk $\Delta$ in the $4$-ball. The union $\Sigma':=\Sigma \cup \Delta$ is an embedded bicollared surface in the $4$-ball, which bounds a $3$-dimensional
Seifert “surface” $W$ whose interior may be taken to be disjoint from $S^3$. Now, it is a well-known fact that for any oriented $3$-manifold $W$, the inclusion $\partial W \to W$ induces a map $H_1(\
partial W) \to H_1(W)$ whose kernel is Lagrangian (with respect to the usual symplectic pairing on $H_1$ of an oriented surface). Geometrically, this means we can find a basis for the homology of $\
Sigma'$ (which is equal to the homology of $\Sigma$) for which half of the basis elements bound $2$-chains in $W$. Let $W^+$ be obtained by pushing off $W$ in the positive direction. Then chains in
$W$ and chains in $W^+$ are disjoint (since $W$ and $W^+$ are disjoint) and therefore the Seifert matrix $V$ of $K$ has a block form for which the lower right $g \times g$ block is identically zero.
It follows that $V+V^T$ also has a zero $g\times g$ lower right block, and therefore its signature is zero.
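As a concrete example (mine, with one common choice of sign conventions): a genus one Seifert surface for the right-handed trefoil has Seifert matrix $V = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}$, so $V + V^T = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}$, which is negative definite (eigenvalues $-1$ and $-3$), giving $\sigma = -2$; by the behavior under mirror image noted above, the left-handed trefoil has $\sigma = +2$. In particular the trefoil is not slice.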
The Seifert matrix (and therefore the signature), like the Alexander polynomial, is sensitive to the structure of the first homology of the universal abelian cover of $S^3 - K$; equivalently, to the
structure of the maximal metabelian quotient of $\pi_1(S^3 - K)$. More sophisticated “twisted” and $L^2$ signatures can be obtained by studying further derived subgroups of $\pi_1(S^3 - K)$ as
modules over group rings of certain solvable groups with torsion-free abelian factors (the so-called poly-torsion-free-abelian groups). This was accomplished by Cochran-Orr-Teichner, who used these
methods to construct infinitely many new concordance invariants.
The end result of this discussion is the existence of many, many interesting homomorphisms from the knot concordance group to the reals, and by plat closure, many interesting invariants of braids.
The connection with quasimorphisms is the following:
Theorem (Brandenbursky): A homomorphism $I:\mathcal{C} \to \mathbb{R}$ gives rise to a quasimorphism on braid groups if there is a constant $C$ so that $|I([K])| \le C\cdot\|K\|_g$, where $\|\cdot\|
_g$ denotes $4$-ball genus.
The proof is roughly the following: given pure braids $\alpha,\beta$ one forms the knots $\widehat{\alpha\Delta}$, $\widehat{\beta\Delta}$ and $\widehat{\alpha\beta\Delta}$. It is shown that the
connect sum $L:= \widehat{\alpha \Delta} \# \widehat{\beta\Delta} \# \widehat{\alpha\beta\Delta}^!$ bounds a Seifert surface whose genus may be universally bounded in terms of the number of strands
in the braid group. Pushing this Seifert surface into the $4$-ball, the hypothesis of the theorem says that $I$ is uniformly bounded on $L$. Properties of $I$ then give an estimate for the defect, and the theorem follows.
It would be interesting to connect these observations up to other “natural” chiral, homogeneous invariants on mapping class groups. For example, associated to a braid or mapping class $\phi \in \text
{MCG}(S)$ one can (usually) form a hyperbolic $3$-manifold $M_\phi$ which fibers over the circle, with fiber $S$ and monodromy $\phi$. The $\eta$-invariant of $M_\phi$ is the signature defect $\eta
(M_\phi) = \int_Y p_1/3 - \text{sign}(Y)$ where $Y$ is a $4$-manifold with $\partial Y = M_\phi$ with a product metric near the boundary, and $p_1$ is the first Pontriagin form on $Y$ (expressed in
terms of the curvature of the metric). Is $\eta$ a quasimorphism on some subgroup of $\text{MCG}(S)$ (eg on a subgroup consisting entirely of pseudo-Anosov elements)?
I am in Melbourne at the moment, in the middle of giving a lecture series, as part of the 2009 Clay-Mahler lectures (also see here). Yesterday I gave a lecture with the title “faces of the scl norm
ball”, and I thought I would try to give a sense of what it was all about. This also gives me an excuse to fiddle around with images in wordpress.
One starts with a basic question: given an immersion of a circle in the plane, when is there an immersion of the disk in the plane that bounds the given immersion of a circle? I.e., given an immersion
$\gamma:S^1 \to \bf{R}^2$, when is there an immersion $f:D^2 \to \bf{R}^2$ for which $\partial f$ factors through $\gamma$? Obviously this depends on $\gamma$. Consider the following examples:
The second circle does not bound such a disk. One way to see this is to use the Gauss map, i.e. the map $\gamma'/|\gamma'|:S^1 \to S^1$ that takes each point on the circle to the unit tangent to its
image under the immersion. The degree of the Gauss map for an embedded circle is $\pm 1$ (depending on a choice of orientation). If an immersed circle bounds an immersed disk, one can use this
immersed disk to define a 1-parameter family of immersions, connecting the initial immersed circle to an embedded immersed circle; hence the degree of the Gauss map is also $\pm 1$ for an immersed
circle bounding an immersed disk; this rules out the second example.
The third example maps under the Gauss map with degree 1, and yet it does not bound an immersed disk. One must use a slightly more sophisticated invariant to see this. The immersed circle divides the
plane up into regions. For each bounded region $R$, let $\alpha:[0,1] \to \bf{R}^2$ be an embedded arc, transverse to $\gamma$, that starts in the region $R$ and ends up “far away” (ideally “at
infinity”). The arc $\alpha$ determines a homological intersection number that we denote $\alpha \cap \gamma$, where each point of intersection contributes $\pm 1$ depending on orientations. In this
example, there are three bounded regions, which get the numbers $1$, $-1$, $1$ respectively:
If $f:S \to \bf{R}^2$ is any map of any oriented surface with one boundary component whose boundary factors through $\gamma$, then the (homological) degree with which $S$ maps over each region
complementary to the image of $\gamma$ is the number we have just defined. Hence if $\gamma$ bounds an immersed disk, these numbers must all be positive (or all negative, if we reverse orientation).
This rules out the third example.
The complete answer of which immersed circles in the plane bound immersed disks was given by S. Blank, in his Ph.D. thesis at Brandeis in 1967 (unfortunately, this does not appear to be available
online). The answer is in the form of an algorithm to decide the question. One such algorithm (not Blank’s, but related to it) is as follows. The image of $\gamma$ cuts up the plane into regions
$R_i$, and each region $R_i$ gets an integer $n_i$. Take $n_i$ “copies” of each region $R_i$, and think of these as pieces of a jigsaw puzzle. Try to glue them together along their edges so that they
fit together nicely along $\gamma$ and make a disk with smooth boundary. If you are successful, you have constructed an immersion. If you are not successful (after trying all possible ways of gluing
the puzzle pieces together), no such immersion exists. This answer is a bit unsatisfying, since in the first place it does not give any insight into which loops bound and which don’t, and in the
second place the algorithm is quite slow and impractical.
As usual, more insight can be gained by generalizing the question. Fix a compact oriented surface $\Sigma$ and consider an immersed $1$-manifold $\Gamma: \coprod_i S^1 \to \Sigma$. One would like to
know which such $1$-manifolds bound an immersion of a surface. One piece of subtlety is the fact that there are examples where $\Gamma$ itself does not bound, but a finite cover of $\Gamma$ (e.g. two
copies of $\Gamma$) does bound. It is also useful to restrict the class of $1$-manifolds that one considers. For the sake of concreteness then, let $\Sigma$ be a hyperbolic surface with geodesic
boundary, and let $\Gamma$ be an oriented immersed geodesic $1$-manifold in $\Sigma$. An immersion $f:S \to \Sigma$ is said to virtually bound $\Gamma$ if the map $\partial f$ factors as a
composition $\partial S \to \coprod_i S^1 \to \Sigma$ where the second map is $\Gamma$, and where the first map is a covering map with some degree $n(S)$. The fundamental question, then is:
Question: Which immersed geodesic $1$-manifolds $\Gamma$ in $\Sigma$ are virtually bounded by an immersed surface?
It turns out that this question is unexpectedly connected to stable commutator length, symplectic rigidity, and several other geometric issues; I hope to explain how in the remainder of this post.
First, recall that if $G$ is any group and $g \in [G,G]$, the commutator length of $g$, denoted $\text{cl}(g)$, is the smallest number of commutators in $G$ whose product is equal to $g$, and the
stable commutator length $\text{scl}(g)$ is the limit $\text{scl}(g) = \lim_{n \to \infty} \text{cl}(g^n)/n$. One can geometrize this definition as follows. Let $X$ be a space with $\pi_1(X) = G$,
and let $\gamma:S^1 \to X$ be a homotopy class of loop representing the conjugacy class of $g$. Then $\text{scl}(g) = \inf_S -\chi^-(S)/2n(S)$ over all surfaces $S$ (possibly with multiple boundary
components) mapping to $X$ whose boundary wraps a total of $n(S)$ times around $\gamma$. One can extend this definition to $1$-manifolds $\Gamma:\coprod_i S^1 \to X$ in the obvious way, and one gets
a definition of stable commutator length for formal sums of elements in $G$ which represent $0$ in homology. Let $B_1(G)$ denote the vector space of real finite linear combinations of elements in $G$
whose sum represents zero in (real group) homology (i.e. in the abelianization of $G$, tensored with $\bf{R}$). Let $H$ be the subspace spanned by chains of the form $g^n - ng$ and $g - hgh^{-1}$.
Then $\text{scl}$ descends to a (pseudo)-norm on the quotient $B_1(G)/H$ which we denote hereafter by $B_1^H(G)$ ($H$ for homogeneous).
There is a dual definition of this norm, in terms of quasimorphisms.
Definition: Let $G$ be a group. A function $\phi:G \to \bf{R}$ is a homogeneous quasimorphism if there is a least non-negative real number $D(\phi)$ (called the defect) so that for all $g,h \in G$
and $n \in \bf{Z}$ one has
1. $\phi(g^n) = n\phi(g)$ (homogeneity)
2. $|\phi(gh) - \phi(g) - \phi(h)| \le D(\phi)$ (quasimorphism)
A function satisfying the second condition but not the first is an (ordinary) quasimorphism. The vector space of quasimorphisms on $G$ is denoted $\widehat{Q}(G)$, and the vector subspace of
homogeneous quasimorphisms is denoted $Q(G)$. Given $\phi \in \widehat{Q}(G)$, one can homogenize it, by defining $\overline{\phi}(g) = \lim_{n \to \infty} \phi(g^n)/n$. Then $\overline{\phi} \in Q
(G)$ and $D(\overline{\phi}) \le 2D(\phi)$. A quasimorphism has defect zero if and only if it is a homomorphism (i.e. an element of $H^1(G)$) and $D(\cdot)$ makes the quotient $Q/H^1$ into a Banach space.
Examples of quasimorphisms include the following:
1. Let $F$ be a free group on a generating set $S$. Let $\sigma$ be a reduced word in $S^*$ and for each reduced word $w \in S^*$, define $C_\sigma(w)$ to be the number of copies of $\sigma$ in $w$.
If $\overline{w}$ denotes the corresponding element of $F$, define $C_\sigma(\overline{w}) = C_\sigma(w)$ (note this is well-defined, since each element of a free group has a unique reduced
representative). Then define $H_\sigma = C_\sigma - C_{\sigma^{-1}}$. This quasimorphism is not yet homogeneous, but can be homogenized as above (this example is due to Brooks; a small worked computation appears just after this list).
2. Let $M$ be a closed hyperbolic manifold, and let $\alpha$ be a $1$-form. For each $g \in \pi_1(M)$ let $\gamma_g$ be the geodesic representative in the free homotopy class of $g$. Then define $\
phi_\alpha(g) = \int_{\gamma_g} \alpha$. By Stokes’ theorem, and some basic hyperbolic geometry, $\phi_\alpha$ is a homogeneous quasimorphism with defect at most $2\pi \|d\alpha\|$.
3. Let $\rho: G \to \text{Homeo}^+(S^1)$ be an orientation-preserving action of $G$ on a circle. The group of homeomorphisms of the circle has a natural central extension $\text{Homeo}^+(\bf{R})^{\
bf{Z}}$, the group of homeomorphisms of $\bf{R}$ that commute with integer translation. The preimage of $G$ in this extension is an extension $\widehat{G}$. Given $g \in \text{Homeo}^+(\bf{R})^{\
bf{Z}}$, define $\text{rot}(g) = \lim_{n \to \infty} (g^n(0) - 0)/n$; this descends to a $\bf{R}/\bf{Z}$-valued function on $\text{Homeo}^+(S^1)$, Poincare’s so-called rotation number. But on $\
widehat{G}$, this function is a homogeneous quasimorphism, typically with defect $1$.
4. Similarly, the group $\text{Sp}(2n,\bf{R})$ has a universal cover $\widetilde{\text{Sp}}(2n,\bf{R})$ with deck group $\bf{Z}$. The symplectic group acts on the space $\Lambda_n$ of Lagrangian
subspaces in $\bf{R}^{2n}$. This is equal to the coset space $\Lambda_n = U(n)/O(n)$, and we can therefore define a function $\text{det}^2:\Lambda_n \to S^1$. After picking a basepoint, one
obtains an $S^1$-valued function on the symplectic group, which lifts to a real-valued function on its universal cover. This function is a quasimorphism on the covering group, whose
homogenization is sometimes called the symplectic rotation number; see e.g. Barge-Ghys.
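To make Example 1 concrete, here is a small computation of my own: take $F = \langle a,b \rangle$ and $\sigma = ab$. Then $H_{ab}(ab) = 1$ while $H_{ab}(a) = H_{ab}(b) = 0$, so the defect of $H_{ab}$ is at least $1$. On the other hand the reduced word $(ab)^n$ contains $n$ copies of $ab$ and none of $b^{-1}a^{-1}$, so $H_{ab}((ab)^n) = n$ and the homogenization takes the value $1$ on $ab$. Note also that $H_{ab}([a,b]) = 1$: the reduced word $aba^{-1}b^{-1}$ contains one copy of $ab$ and no copy of $b^{-1}a^{-1}$; since $[a,b]$ dies in the abelianization, this already shows $H_{ab}$ is not a homomorphism to $\mathbb{R}$.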
Quasimorphisms and stable commutator length are related by Bavard Duality:
Theorem (Bavard duality): Let $G$ be a group, and let $\sum t_i g_i \in B_1^H(G)$. Then there is an equality $\text{scl}(\sum t_i g_i) = \sup_\phi \sum t_i \phi(g_i)/2D(\phi)$ where the supremum is
taken over all homogeneous quasimorphisms.
This duality theorem shows that $Q/H^1$ with the defect norm is the dual of $B_1^H$ with the $\text{scl}$ norm. (this theorem is proved for elements $g \in [G,G]$ by Bavard, and in generality in my
monograph, which is a reference for the content of this post.)
What does this have to do with rigidity (or, for that matter, immersions)? Well, one sees from the examples (and many others) that homogeneous quasimorphisms arise from geometry — specifically, from
hyperbolic geometry (negative curvature) and symplectic geometry (causal structures). One expects to find rigidity in extremal circumstances, and therefore one wants to understand, for a given chain
$C \in B_1^H(G)$, the set of extremal quasimorphisms for $C$, i.e. those homogeneous quasimorphisms $\phi$ satisfying $\text{scl}(C) = \phi(C)/2D(\phi)$. By the duality theorem, the space of such
extremal quasimorphisms are a nonempty closed convex cone, dual to the set of hyperplanes in $B_1^H$ that contain $C/|C|$ and support the unit ball of the $\text{scl}$ norm. The fewer supporting
hyperplanes, the smaller the set of extremal quasimorphisms for $C$, and the more rigid such extremal quasimorphisms will be.
When $F$ is a free group, the unit ball in the $\text{scl}$ norm in $B_1^H(F)$ is a rational polyhedron. Every nonzero chain $C \in B_1^H(F)$ has a nonzero multiple $C/|C|$ contained in the boundary
of this polyhedron; let $\pi_C$ denote the face of the polyhedron containing this multiple in its interior. The smaller the codimension of $\pi_C$, the smaller the dimension of the cone of extremal
quasimorphisms for $C$, and the more rigidity we will see. The best circumstance is when $\pi_C$ has codimension one, and an extremal quasimorphism for $C$ is unique, up to scale and elements of $H^1$.
An infinite dimensional polyhedron need not necessarily have any top dimensional faces; thus it is natural to ask: does the unit ball in $B_1^H(F)$ have any top dimensional faces? and can one say
anything about their geometric meaning? We have now done enough to motivate the following, which is the main theorem from my paper “Faces of the scl norm ball”:
Theorem: Let $F$ be a free group. For every isomorphism $F \to \pi_1(\Sigma)$ (up to conjugacy) where $\Sigma$ is a compact oriented surface, there is a well-defined chain $\partial \Sigma \in B_1^H
(F)$. This satisfies the following properties:
1. The projective class of $\partial \Sigma$ intersects the interior of a codimension one face $\pi_\Sigma$ of the $\text{scl}$ norm ball
2. The unique extremal quasimorphism dual to $\pi_\Sigma$ (up to scale and elements of $H^1$) is the rotation quasimorphism $\text{rot}_\Sigma$ (to be defined below) associated to any complete
hyperbolic structure on $\Sigma$
3. A homologically trivial geodesic $1$-manifold $\Gamma$ in $\Sigma$ is virtually bounded by an immersed surface $S$ in $\Sigma$ if and only if the projective class of $\Gamma$ (thought of as an
element of $B_1^H(F)$) intersects $\pi_\Sigma$. Equivalently, if and only if $\text{rot}_\Sigma$ is extremal for $\Gamma$. Equivalently, if and only if $\text{scl}(\Gamma) = \text{rot}_\Sigma(\Gamma)/2$.
It remains to give a definition of $\text{rot}_\Sigma$. In fact, we give two definitions.
First, a hyperbolic structure on $\Sigma$ and the isomorphism $F\to \pi_1(\Sigma)$ determines a representation $F \to \text{PSL}(2,\bf{R})$. This lifts to $\widetilde{\text{SL}}(2,\bf{R})$, since $F$
is free. The composition with rotation number is a homogeneous quasimorphism on $F$, well-defined up to $H^1$. Note that because the image in $\text{PSL}(2,\bf{R})$ is discrete and torsion-free, this
quasimorphism is integer valued (and has defect $1$). This quasimorphism is $\text{rot}_\Sigma$.
Second, a geodesic $1$-manifold $\Gamma$ in $\Sigma$ cuts the surface up into regions $R_i$. For each such region, let $\alpha_i$ be an arc transverse to $\Gamma$, joining $R_i$ to $\partial \Sigma$.
Let $(\alpha_i \cap \Gamma)$ denote the homological (signed) intersection number. Then define $\text{rot}_\Sigma(\Gamma) = 1/2\pi \sum_i (\alpha_i \cap \Gamma) \text{area}(R_i)$.
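As a sanity check of these definitions (a computation of my own, not from the lecture): let $\Sigma$ be a once-punctured torus with a complete hyperbolic structure with geodesic boundary, so $F$ is free on $a,b$ and $\partial\Sigma$ represents the commutator $[a,b]$. By the second definition and Gauss-Bonnet, $\text{rot}_\Sigma(\partial\Sigma) = \text{area}(\Sigma)/2\pi = -\chi(\Sigma) = 1$ (this is the classical Milnor-Wood computation). Since $\text{rot}_\Sigma$ has defect $1$, Bavard duality gives $\text{scl}([a,b]) \ge \text{rot}_\Sigma([a,b])/2 = 1/2$, while the once-punctured torus itself, whose boundary wraps once around $[a,b]$, gives $\text{scl}([a,b]) \le -\chi^-(\Sigma)/2 = 1/2$. So $\text{scl}([a,b]) = 1/2$, and $\text{rot}_\Sigma$ is extremal for the chain $\partial\Sigma$, as the theorem predicts.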
We now show how 3 follows. Given $\Gamma$, we compute $\text{scl}(\Gamma) = \inf_S -\chi^-(S)/2n(S)$ as above. Let $S$ be such a surface, mapping to $\Sigma$. We adjust the map by a homotopy so that
it is pleated; i.e. so that $S$ is itself a hyperbolic surface, decomposed into ideal triangles, in such a way that the map is a (possibly orientation-reversing) isometry on each ideal triangle. By
Gauss-Bonnet, we can calculate $\text{area}(S) = -2\pi \chi^-(S) = \pi \sum_\Delta 1$. On the other hand, $\partial S$ wraps $n(S)$ times around $\Gamma$ (homologically) so $\text{rot}_\Sigma(\Gamma)
= \pi/2\pi n(S) \sum_\Delta \pm 1$ where the sign in each case depends on whether the ideal triangle $\Delta$ is mapped in with positive or negative orientation. Consequently $\text{rot}_\Sigma(\
Gamma)/2 \le -\chi^-(S)/2n(S)$ with equality if and only if the sign of every triangle is $1$. This holds if and only if the map $S \to \Sigma$ is an immersion; on the other hand, equality holds if
and only if $\text{rot}_\Sigma$ is extremal for $\Gamma$. This proves part 3 of the theorem above.
Incidentally, this fact gives a fast algorithm to determine whether $\Gamma$ is the virtual boundary of an immersed surface. Stable commutator length in free groups can be computed in polynomial time
in word length; likewise, the value of $\text{rot}_\Sigma$ can be computed in polynomial time (see section 4.2 of my monograph for details). So one can determine whether $\Gamma$ projectively
intersects $\pi_\Sigma$, and therefore whether it is the virtual boundary of an immersed surface. In fact, these algorithms are quite practical, and run quickly (in a matter of seconds) on words of
length 60 and longer in $F_2$.
One application to rigidity is a new proof of the following theorem:
Corollary (Goldman, Burger-Iozzi-Wienhard): Let $\Sigma$ be a closed oriented surface of positive genus, and $\rho:\pi_1(\Sigma) \to \text{Sp}(2n,\bf{R})$ a Zariski dense representation. Let $e_\rho
\in H^2(\Sigma;\mathbb{Z})$ be the Euler class associated to the action. Suppose that $|e_\rho([\Sigma])| = -n\chi(\Sigma)$ (note: by a theorem of Domic and Toledo, one always has $|e_\rho([\Sigma])|
\le -n\chi(\Sigma)$). Then $\rho$ is discrete.
Here $e_\rho$ is the first Chern class of the bundle associated to $\rho$. The proof is as follows: cut $\Sigma$ along an essential loop $\gamma$ into two subsurfaces $\Sigma_i$. One obtains
homogeneous quasimorphisms on each group $\pi_1(\Sigma_i)$ (i.e. the symplectic rotation number associated to $\rho$), and the hypothesis of the theorem easily implies that they are extremal for $\
partial \Sigma_i$. Consequently the symplectic rotation number is equal to $\text{rot}_{\Sigma_i}$, at least on the commutator subgroup. But this latter quasimorphism takes only integral values; it
follows that each element in $\pi_1(\Sigma_i)$ fixes a Lagrangian subspace under $\rho$. But this implies that $\rho$ is not dense, and since it is Zariski dense, it is discrete. (Notes: there are a
couple of details under the rug here, but not many; furthermore, the hypothesis that $\rho$ is Zariski dense is not necessary (but can be derived as a conclusion with more work), and one can just as
easily treat representations of compact surface groups as closed ones; finally, Burger-Iozzi-Wienhard prove more than just this statement; for instance, they show that the space of maximal
representations is always real semialgebraic, and describe it in some detail).
More abstractly, we have shown that extremal quasimorphisms on $\partial \Sigma$ are unique. In other words, by prescribing the value of a quasimorphism on a single group element, one determines its
values on the entire commutator subgroup. If such a quasimorphism arises from some geometric or dynamical context, this can be interpreted as a kind of rigidity theorem, of which the Corollary above
is an example.
I have struggled for a long time (and I continue to struggle) with the following question:
Question: Is the group of self-homeomorphisms of the unit disk (in the plane) that fix the boundary pointwise a left-orderable group?
Recall that a group $G$ is left-orderable if there is a total order $<$ on the elements satisfying $g<h$ if and only if $fg < fh$ for all $f,g,h \in G$. For a countable group, the property of being
left orderable is equivalent to the property that the group admits a faithful action on the interval by orientation-preserving homeomorphisms; however, this equivalence is not “natural” in the sense
that there is no natural way to extract an ordering from an action, or vice-versa. This formulation of the question suggests that one is trying to embed the group of homeomorphisms of the disk into
the group of homeomorphisms of the interval, an unlikely proposition, made even more unlikely by the following famous theorem of Filipkiewicz:
Theorem: (Filipkiewicz) Let $M_1,M_2$ be two compact manifolds, and $r_1,r_2$ two non-negative integers or infinity. Suppose the connected components of the identity of $\text{Diff}^{r_1}(M_1)$ and $
\text{Diff}^{r_2}(M_2)$ are isomorphic as abstract groups. Then $r_1=r_2$ and the isomorphism is induced by some diffeomorphism.
The hard(est?) part of the argument is to identify a subgroup stabilizing a point in purely algebraic terms. It is a fundamental and well-studied problem, in some ways a natural outgrowth of Klein’s
Erlanger programme, to perceive the geometric structure on a space in terms of algebraic properties of its automorphism group. The book by Banyaga is the best reference I know for this material, in
the context of “flexible” geometric structures, with big transformation groups (it is furthermore the only math book I know with a pink cover).
Left orderability is inherited under extensions. I.e. if $K \to G \to H$ is a short exact sequence, and both $K$ and $H$ are left orderable, then so is $G$. Furthermore, it is a simple but useful
theorem of Burns and Hale that a group $G$ is left orderable if and only if for every finitely generated subgroup $H$ there is a left orderable group $H'$ and a surjective homomorphism $H \to H'$.
The necessity of this condition is obvious: a subgroup of a left orderable group is left orderable (by restricting the order), so one can take $H'$ to be $H$ and the surjection to be the identity.
One can exploit this strategy to show that certain transformation groups are left orderable, as follows:
Example: Suppose $G$ is a group of homeomorphisms of some space $X$, with a nonempty fixed point set. If $H$ is a finitely generated subgroup of $G$, then there is a point $y$ in the frontier of $\
text{fix}(H)$ so that $H$ has a nontrivial image in the group of germs of homeomorphisms of $X$ at $y$. If this group of germs is left-orderable for all $y$, then so is $G$ by Burns-Hale.
Example: (Rolfsen-Wiest) Let $G$ be the group of PL homeomorphisms of the unit disk (thought of as a PL square in the plane) fixed on the boundary. If $H$ is a finitely generated subgroup, there is a
point $p$ in the frontier of $\text{fix}(H)$. Note that $H$ has a nontrivial image in the group of piecewise linear homeomorphisms of the projective space of lines through $p$. Since the fixed point
set of a finitely generated subgroup is equal to the intersection of the fixed point sets of a finite generating set, it is itself a polyhedron. Hence $H$ fixes some line through $p$, and therefore
has a nontrivial image in the group of homeomorphisms of an interval. By Burns-Hale, $G$ is left orderable.
Example: Let $G$ be the group of diffeomorphisms of the unit disk, fixed on the boundary. If $H$ is a finitely generated subgroup, then at a non-isolated point $p$ in $\text{fix}(H)$ the group $H$
fixes some tangent vector to $p$ (a limit of short straight lines from $p$ to nearby fixed points). Consequently the image of $H$ in $\text{GL}(T_p)$ is reducible, and is conjugate into an affine
subgroup, which is left orderable. If the image is nontrivial, we are done by Burns-Hale. If it is trivial, then the linear part of $H$ at $p$ is trivial, and therefore by the Thurston stability
theorem, there is a nontrivial homomorphism from $H$ to the (orderable) group of translations of the plane. By Burns-Hale, we conclude that $G$ is left orderable.
The second example does not require infinite differentiability, just $C^1$, the necessary hypothesis to apply the Thurston stability theorem. This is such a beautiful and powerful theorem that it is
worth making an aside to discuss it. Thurston’s theorem says that if $H$ is a finitely generated group of germs of diffeomorphisms of a manifold fixing a common point, then a suitable limit of
rescaled actions of the group near the fixed point converge to a nontrivial action by translations. One way to think of this is in terms of power series: if $H$ is a group of real analytic
diffeomorphisms of the line, fixing the point $0$, then every $h \in H$ can be expanded as a power series: $h(x) = c_1(h)x + c_2(h)x^2 + \cdots$. The function $h \to c_1(h)$ is a multiplicative
homomorphism; however, if the logarithm of $c_1$ is identically zero, then if $i$ is the first index for which some $c_i(h)$ is nonzero, then $h \to c_i(h)$ is an additive homomorphism. The choice of
coefficient $i$ is a “gauge”, adapted to $H$, that sees the most significant nontrivial part; this leading term is a character (i.e. a homomorphism to an abelian group), since the nonabelian
corrections have strictly higher degree. Thurston’s insight was to realize that for a finitely generated group of germs of $C^1$ diffeomorphisms with trivial linear part, one can find some gauge that
sees the most significant nontrivial part of the action of the group, and at this gauge, the action looks abelian. There is a subtlety, that one must restrict attention to finitely generated groups
of homeomorphisms: on each scale of a sequence of finer and finer scales, one of a finite generating set differs the most from the identity; one must pass to a subsequence of scales for which this
one element is constant (this is where the finite generation is used). The necessity of this condition is demonstrated by a theorem of Sergeraert: the group of germs of ($C^\infty$) diffeomorphisms
of the unit interval, infinitely tangent to the identity at both endpoints (i.e. with trivial power series at each endpoint) is perfect, and therefore admits no nontrivial homomorphism to an abelian group.
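To see why the leading coefficient is additive, as claimed in the power series discussion above, here is the one-line computation (my own, in the real analytic setting): if all coefficients below degree $i$ are trivial, so that $g(x) = x + a x^i + O(x^{i+1})$ and $h(x) = x + b x^i + O(x^{i+1})$, then $g(h(x)) = h(x) + a\,h(x)^i + O(x^{i+1}) = x + b x^i + a x^i + O(x^{i+1}) = x + (a+b)x^i + O(x^{i+1})$, so $c_i(gh) = c_i(g) + c_i(h)$; the nonabelian corrections only enter at higher order.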
Let us now return to the original question. The examples above suggest that it might be possible to find a left ordering on the group of homeomorphisms of the disk, fixed on the boundary. However, I
think this is misleading. The construction of a left ordering in either category (PL or smooth) was ad hoc, and depended on locality in two different ways: the locality of the property of left
orderability (i.e. Burns-Hale) and the tameness of groups of PL or smooth homeomorphisms blown up near a common fixed point. Rescaling an arbitrary homeomorphism about a fixed point does not make
things any less complicated. Burns-Hale and Filipkiewicz together suggest that one should look for a structural dissimilarity between the group of homeomorphisms of the disk and of the interval that
persists in finitely generated subgroups. The simplest way to distinguish between the two spaces algebraically is in terms of their lattices of closed (or equivalently, open) subsets. To a
topological space $X$, one can associate the lattice $\Lambda(X)$ of (nonempty, for the sake of argument) closed subsets of $X$, ordered by inclusion. One can reconstruct the space $X$ from this
lattice, since points in $X$ correspond to minimal elements. However, any surjective map $X \to Y$ defines an embedding $\Lambda(Y) \to \Lambda(X)$, so there are many structure-preserving morphisms
between such lattices. The lattice $\Lambda(X)$ is an $\text{Aut}(X)$-space in an obvious way, and one can study algebraic maps $\Lambda(Y) \to \Lambda(X)$ together with homomorphisms $\rho:\text
{Aut}(Y) \to \text{Aut}(X)$ for which the algebraic maps respect the induced $\text{Aut}(Y)$-structures. A weaker “localization” of this condition asks merely that for points (i.e. minimal elements)
$p,p' \in \Lambda(Y)$ in the same $\text{Aut}(Y)$-orbit, their images in $\Lambda(X)$ are in the same $\text{Aut}(X)$-orbit. This motivates the following:
Proposition: There is a surjective map from the unit interval to the unit disk so that the preimages of any two points are homeomorphic.
Sketch of Proof: This proposition follows from two simpler propositions. The first is that there is a surjective map from the unit interval to itself so that every point preimage is a Cantor set. The
second is that there is a surjective map from the unit interval to the unit disk so that the preimage of any point is finite. A composition of these two maps gives the desired map, since a finite
union of Cantor sets is itself a Cantor set.
There are many surjective maps from the unit interval to the unit disk so that the preimage of any point is finite. For example, if $M$ is a hyperbolic three-manifold fibering over the circle with
fiber $S$, then the universal cover of a fiber $\widetilde{S}$ is properly embedded in hyperbolic $3$-space, and its ideal boundary (a circle) maps surjectively and finitely-to-one to the sphere at
infinity of hyperbolic $3$-space. Restricting to a suitable subinterval gives the desired map.
To obtain the first proposition, one builds a surjective map from the interval to itself inductively; there are many possible ways to do this, and details are left to the reader. qed.
It is not clear how much insight such a construction gives.
Another approach to the original question involves trying to construct an explicit (finitely generated) subgroup of the group of homeomorphisms of the disk that is not left orderable. There is a
“cheap” method to produce finitely presented groups with no left-orderable quotients. Let $G = \langle x,y \; | \; w_1, w_2 \rangle$ be a group defined by a presentation, where $w_1$ is a word in the
letters $x$ and $y$, and $w_2$ is a word in the letters $x$ and $y^{-1}$. In any left-orderable quotient in which both $x$ and $y$ are nontrivial, after reversing the orientation if necessary, we can
assume that $x > \text{id}$. If further $y>\text{id}$ then $w_1 >\text{id}$, contrary to the fact that $w_1 = \text{id}$. If $y^{-1} >\text{id}$, then $w_2 >\text{id}$, contrary to the fact that $w_2
=\text{id}$. In either case we get a contradiction. One can try to build by hand nontrivial homeomorphisms $x,y$ of the unit disk, fixed on the boundary, that satisfy $w_1,w_2 =\text{id}$. Some
evidence that this will be hard to do comes from the fact that the group of smooth and PL homeomorphisms of the disk are in fact left-orderable: any such $x,y$ can be arbitrarily well-approximated by
smooth $x',y'$; nevertheless at least one of the words $w_1,w_2$ evaluated on any smooth $x',y'$ will be nontrivial. Other examples of finitely presented groups that are not left orderable include
higher Q-rank lattices (e.g. subgroups of finite index in $\text{SL}(n,\mathbb{Z})$ when $n\ge 3$), by a result of Dave Witte-Morris. Suppose such a group admits a faithful action by homeomorphisms
on some closed surface of genus at least $1$. Since such groups do not admit homogeneous quasimorphisms, their image in the mapping class group of the surface is finite, so after passing to a
subgroup of finite index, one obtains a (lifted) action on the universal cover. If the genus of the surface is at least $2$, this action can be compactified to an action by homeomorphisms on the unit
disk (thought of as the universal cover of a hyperbolic surface) fixed pointwise on the boundary. Fortunately or unfortunately, it is already known by Franks-Handel (see also Polterovich) that such
groups admit no area-preserving actions on closed oriented surfaces (other than those factoring through a finite group), and it is consistent with the so-called “Zimmer program” that they should
admit no actions even without the area-preserving hypothesis when the genus is positive (of course, $\text{SL}(3,\mathbb{R})$ admits a projective action on $S^2$). Actually, higher rank lattices are
very fragile, because of Margulis’ normal subgroup theorem. Every normal subgroup of such a lattice is either finite or finite index, so to prove the results of Franks-Handel and Polterovich, it
suffices to find a single element in the group of infinite order that acts trivially. Unipotent elements are exponentially distorted in the word metric (i.e. the cyclic subgroups they generate are
not quasi-isometrically embedded), so one “just” needs to show that groups of area-preserving diffeomorphisms of closed surfaces (of genus at least $1$) do not contain such distorted elements. Some
naturally occurring non-left orderable groups include some (rare) hyperbolic $3$-manifold groups, amenable but not locally indicable groups, and a few others. It is hard to construct actions of such
groups on a disk, although certain flows on $3$-manifolds give rise to actions of the fundamental group on a plane.
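For concreteness, one presentation of the required shape is $G = \langle x,y \; | \; xyxy^2, \; xy^{-1}xy^{-2} \rangle$ (an example chosen here just for illustration): the first relator is a word in $x$ and $y$, the second a word in $x$ and $y^{-1}$, so by the argument above every left-orderable quotient of this group must kill $x$ or $y$.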
Mapping class groups (also called modular groups) are of central importance in many fields of geometry. If $S$ is an oriented surface (i.e. a $2$-manifold), the group $\text{Homeo}^+(S)$ of
orientation-preserving self-homeomorphisms of $S$ is a topological group with the compact-open topology. The mapping class group of $S$, denoted $\text{MCG}(S)$ (or $\text{Mod}(S)$ by some people) is
the group of path-components of $\text{Homeo}^+(S)$, i.e. $\pi_0(\text{Homeo}^+(S))$, or equivalently $\text{Homeo}^+(S)/\text{Homeo}_0(S)$ where $\text{Homeo}_0(S)$ is the subgroup of homeomorphisms
isotopic to the identity.
When $S$ is a surface of finite type (i.e. a closed surface minus finitely many points), the group $\text{MCG}(S)$ is finitely presented, and one knows a great deal about the algebra and geometry of
this group. Less well-studied are groups of the form $\text{MCG}(S)$ when $S$ is of infinite type. However, such groups do arise naturally in dynamics.
Example: Let $G$ be a group of (orientation-preserving) homeomorphisms of the plane, and suppose that $G$ has a bounded orbit (i.e. there is some point $p$ for which the orbit $Gp$ is contained in a
compact subset of the plane). The closure of such an orbit $Gp$ is compact and $G$-invariant. Let $K$ be the union of the closure of $Gp$ with the set of bounded open complementary regions. Then $K$
is compact, $G$-invariant, and has connected complement. Define an equivalence relation $\sim$ on the plane whose equivalence classes are the points in the complement of $K$, and the connected
components of $K$. The quotient of the plane by this equivalence relation is again homeomorphic to the plane (by a theorem of R. L. Moore), and the image of $K$ is a totally disconnected set $k$. The
original group $G$ admits a natural homomorphism to the mapping class group of $\mathbb{R}^2 - k$. After passing to a $G$-invariant closed subset of $k$ if necessary, we may assume that $k$ is
minimal (i.e. every orbit is dense). Since $k$ is compact, it is either a finite discrete set, or it is a Cantor set.
The mapping class group of $\mathbb{R}^2 - \text{finite set}$ contains a subgroup of finite index fixing the end of $\mathbb{R}^2$; this subgroup is the quotient of a braid group by its center. There
are many tools that show that certain groups $G$ cannot have a big image in such a mapping class group.
Much less studied is the case that $k$ is a Cantor set. In the remainder of this post, we will abbreviate $\text{MCG}(\mathbb{R}^2 - \text{Cantor set})$ by $\Gamma$. Notice that any homeomorphism of
$\mathbb{R}^2 - \text{Cantor set}$ extends in a unique way to a homeomorphism of $S^2$, fixing the point at infinity, and permuting the points of the Cantor set (this can be seen by thinking of the
“missing points” intrinsically as the space of ends of the surface). Let $\Gamma'$ denote the mapping class group of $S^2 - \text{Cantor set}$. Then there is a natural surjection $\Gamma \to \Gamma'$
whose kernel is $\pi_1(S^2 - \text{Cantor set})$ (this is just the familiar Birman exact sequence).
The following is proved in the first section of my paper “Circular groups, planar groups and the Euler class”. This is the first step to showing that any group $G$ of orientation-preserving
diffeomorphisms of the plane with a bounded orbit is circularly orderable:
Proposition: There is an injective homomorphism $\Gamma \to \text{Homeo}^+(S^1)$.
Sketch of Proof: Choose a complete hyperbolic structure on $S^2 - \text{Cantor set}$. The Birman exact sequence exhibits $\Gamma$ as a group of (equivalence classes) of homeomorphisms of the
universal cover of this hyperbolic surface which commute with the deck group. Each such homeomorphism extends in a unique way to a homeomorphism of the circle at infinity. This extension does not
depend on the choice of a representative in an equivalence class, and one can check that the extension of a nontrivial mapping class is nontrivial at infinity. qed.
This property of the mapping class group $\Gamma$ does not distinguish it from mapping class groups of surfaces of finite type (with punctures); in fact, the argument is barely sensitive to the
topology of the surface at all. By contrast, the next theorem demonstrates a significant difference between mapping class groups of surfaces of finite type, and $\Gamma$. Recall that for a surface
$S$ of finite type, the group $\text{MCG}(S)$ acts simplicially on the complex of curves $\mathcal{C}(S)$, a simplicial complex whose simplices are the sets of isotopy classes of essential simple
closed curves in $S$ that can be realized mutually disjointly. A fundamental theorem of Masur-Minsky says that $\mathcal{C}(S)$ (with its natural simplicial path metric) is $\delta$-hyperbolic
(though it is not locally finite). Bestvina-Fujiwara show that any reasonably big subgroup of $\text{MCG}(S)$ contains lots of elements that act on $\mathcal{C}(S)$ weakly properly, and therefore
such groups admit many nontrivial quasimorphisms. This has many important consequences, and shows that for many interesting classes of groups, every homomorphism to a mapping class group (of finite
type) factors through a finite group. In view of the potential applications to dynamics as above, one would like to be able to construct quasimorphisms on mapping class groups of infinite type.
Unfortunately, this does not seem so easy.
Proposition: The group $\Gamma'$ is uniformly perfect.
Proof: Remember that $\Gamma'$ denotes the mapping class group of $S^2 - \text{Cantor set}$. We denote the Cantor set in the sequel by $C$.
A closed disk $D$ is a dividing disk if its boundary is disjoint from $C$, and separates $C$ into two components (both necessarily Cantor sets). An element $g \in \Gamma$ is said to be local if it
has a representative whose support is contained in a dividing disk. Note that the closure of the complement of a dividing disk is also a dividing disk. Given any dividing disk $D$, there is a
homeomorphism of the sphere $\varphi$ permuting $C$, that takes $D$ off itself, and so that the family of disks $\varphi^n(D)$ are pairwise disjoint, and converge to a limiting point $x \in C$.
Define $h$ to be the infinite product $h = \prod_i \varphi^i g \varphi^{-i}$. Notice that $h$ is a well-defined homeomorphism of the plane permuting $C$. Moreover, there is an identity $[h^{-1},\
varphi] = g$, thereby exhibiting $g$ as a commutator. The theorem will therefore be proved if we can exhibit any element of $\Gamma'$ as a bounded product of local elements.
Now, let $g$ be an arbitrary homeomorphism of the sphere permuting $C$. Pick an arbitrary $p \in C$. If $g(p)=p$ then let $h$ be a local homeomorphism taking $p$ to a disjoint point $q$, and define
$g' = hg$. So without loss of generality, we can find $g' = hg$ where $h$ is local (possibly trivial), and $g'(p) = q \ne p$. Let ${}E$ be a sufficiently small dividing disk containing $p$ so that $g'
(E)$ is disjoint from ${}E$, and their union does not contain every point of $C$. Join ${}E$ to $g'(E)$ by a path in the complement of $C$, and let $D$ be a regular neighborhood, which by
construction is a dividing disk. Let $f$ be a local homeomorphism, supported in $D$, that interchanges ${}E$ and $g'(E)$, and so that $f g'$ is the identity on $D$. Then $fg'$ is itself local,
because the complement of the interior of a dividing disk is also a dividing disk, and we have expressed $g$ as a product of at most three local homeomorphisms. This shows that the commutator length
of $g$ is at most $3$, and since $g$ was arbitrary, we are done. qed.
The same argument just barely fails to work with $\Gamma$ in place of $\Gamma'$. One can also define dividing disks and local homeomorphisms in $\Gamma$, with the following important difference. One
can show by the same argument that local homeomorphisms in $\Gamma$ are commutators, and that for an arbitrary element $g \in \Gamma$ there are local elements $h,f$ so that $fhg$ is the identity on a
dividing disk; i.e. this composition is anti-local. However, the complement of the interior of a dividing disk in the plane is not a dividing disk; the difference can be measured by keeping track of
the point at infinity. This is a restatement of the Birman exact sequence; at the level of quasimorphisms, one has the following exact sequence: $Q(\Gamma') \to Q(\Gamma) \to Q(\pi_1(S^2 - C))^{\Gamma'}$.
The so-called “point-pushing” subgroup $\pi_1(S^2 - C)$ can be understood geometrically by tracking the image of a proper ray from $C$ to infinity. We are therefore motivated to consider the
following object:
Definition: The ray graph $R$ is the graph whose vertex set is the set of isotopy classes of proper rays $r$, with interior in the complement of $C$, from a point in $C$ to infinity, and whose edges
are the pairs of such rays that can be realized disjointly.
One can verify that the graph $R$ is connected, and that the group $\Gamma$ acts simplicially on $R$ by automorphisms, and transitively on vertices.
Lemma: Let $g \in \Gamma$ and suppose there is a vertex $v \in R$, represented by a proper ray $r$, such that $v,g(v)$ share an edge. Then $g$ is a product of at most two local homeomorphisms.
Sketch of proof: After adjusting $g$ by an isotopy, assume that $r$ and $g(r)$ are actually disjoint. Let $E,g(E)$ be sufficiently small disjoint disks about the endpoint of $r$ and $g(r)$, and $\
alpha$ an arc from ${}E$ to $g(E)$ disjoint from $r$ and $g(r)$, so that the union $r \cup E \cup \alpha \cup g(E) \cup g(r)$ does not separate the part of $C$ outside $E \cup g(E)$. Then this union
can be engulfed in a punctured disk $D'$ containing infinity, whose complement contains some of $C$. There is a local $h$ supported in a neighborhood of $E \cup \alpha \cup g(E)$ such that $hg$ is
supported (after isotopy) in the complement of $D'$ (i.e. it is also local). qed.
It follows that if $g \in\Gamma$ has a bounded orbit in $R$, then the commutator lengths of the powers of $g$ are bounded, and therefore $\text{scl}(g)$ vanishes. If this is true for every $g \in \
Gamma$, then Bavard duality implies that $\Gamma$ admits no nontrivial homogeneous quasimorphisms. This motivates the following questions:
Question: Is the diameter of $R$ infinite? (Exercise: show $\text{diam}(R)\ge 3$)
Question: Does any element of $\Gamma$ act on $R$ with positive translation length?
Question: Can one use this action to construct nontrivial quasimorphisms on $\Gamma$?
The purpose of this post is to discuss my recent paper with Koji Fujiwara, which will shortly appear in Ergodic Theory and Dynamical Systems, both for its own sake, and in order to motivate some open
questions that I find very intriguing. The content of the paper is a mixture of ergodic theory, geometric group theory, and computer science, and was partly inspired by a paper of Jean-Claude Picaud.
To state the results of the paper, I must first introduce a few definitions and some background.
Let $\Gamma$ be a finite directed graph (hereafter a digraph) with an initial vertex, and edges labeled by elements of a finite set $S$ in such a way that each vertex has at most one outgoing edge
with any given label. A finite directed path in $\Gamma$ starting at the initial vertex determines a word in the alphabet $S$, by reading the labels on the edges traversed (in order). The set $L \
subset S^*$ of words obtained in this way is an example of what is called a regular language, and is said to be parameterized by $\Gamma$. Note that this is not the most general kind of regular
language; in particular, any language $L$ of this kind will necessarily be prefix-closed (i.e. if $w \in L$ then every prefix of $w$ is also in $L$). Note also that different digraphs might
parameterize the same (prefix-closed) regular language $L$.
If $S$ is a set of generators for a group $G$, there is an obvious map $L \to G$ called the evaluation map that takes a word $w$ to the element of $G$ represented by that word.
Definition: Let $G$ be a group, and $S$ a finite generating set. A combing of $G$ is a (prefix-closed) regular language $L$ for which the evaluation map $L \to G$ is a bijection, and such that every
$w \in L$ represents a geodesic in $G$.
The intuition behind this definition is that the set of words in $L$ determines a directed spanning tree in the Cayley graph $C_S(G)$ starting at $\text{id}$, and such that every directed path in the
tree is a geodesic in $C_S(G)$. Note that there are other definitions of combing in the literature; for example, some authors do not require the evaluation map to be a bijection, but only a coarse bijection.
Fundamental to the theory of combings is the following Theorem, which paraphrases one of the main results of this paper:
Theorem: (Cannon) Let $G$ be a hyperbolic group, and let $S$ be a finite generating set. Choose a total order on the elements of $S$. Then the language $L$ of lexicographically first geodesics in $G$
is a combing.
The language $L$ described in this theorem is obviously geodesic and prefix-closed, and the evaluation map is bijective; the content of the theorem is that $L$ is regular, and parameterized by some
finite digraph $\Gamma$. In the sequel, we restrict attention exclusively to hyperbolic groups $G$.
Given a (hyperbolic) group $G$, a generating set $S$, a combing $L$, one makes the following definition:
Definition: A function $\phi:G \to \mathbb{Z}$ is weakly combable (with respect to $S,L$) if there is a digraph $\Gamma$ parameterizing $L$ and a function $d\phi$ from the vertices of $\Gamma$ to $\
mathbb{Z}$ so that for any $w \in L$, corresponding to a path $\gamma$ in $\Gamma$, there is an equality $\phi(w) = \sum_i d\phi(\gamma(i))$.
In other words, a function $\phi$ is weakly combable if it can be obtained by “integrating” a function $d\phi$ along the paths of a combing. One furthermore says that a function is combable if it
changes by a bounded amount under right-multiplication by an element of $S$, and bicombable if it changes by a bounded amount under either left or right multiplication by an element of $S$. The
property of being (bi-)combable does not depend on the choice of a generating set $S$ or a combing $L$.
Example: Word length (with respect to a given generating set $S$) is bicombable.
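Indeed, word length is weakly combable in an especially transparent way: up to the convention of whether the initial vertex is counted, one can simply take $d\phi \equiv 1$ on the vertices of $\Gamma$; bicombability then just reflects the fact that word length changes by at most $1$ under multiplication by a generator on either side.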
Example: Let $\phi:G \to \mathbb{Z}$ be a homomorphism. Then $\phi$ is bicombable.
Example: The Brooks counting quasimorphisms (on a free group) and the Epstein-Fujiwara counting quasimorphisms are bicombable.
Example: The sum or difference of two (bi-)combable functions is (bi-)combable.
A particularly interesting example is the following:
Example: Let $S$ be a finite set which generates $G$ as a semigroup. Let $\phi_S$ denote word length with respect to $S$, and $\phi_{S^{-1}}$ denote word length with respect to $S^{-1}$ (which also
generates $G$ as a semigroup). Then the difference $\psi_S:= \phi_S - \phi_{S^{-1}}$ is a bicombable quasimorphism.
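As a toy sanity check, take $G = \mathbb{Z}$ written additively and $S = \lbrace 1, -2 \rbrace$, which generates $\mathbb{Z}$ as a semigroup. Then $\phi_S(n) = n$ for $n \ge 0$ and $\phi_S(-2k) = k$, while $\phi_{S^{-1}}(2k) = k$ and $\phi_{S^{-1}}(n) = |n|$ for $n \le 0$; consequently $\psi_S(n) = n/2 + O(1)$, a quasimorphism whose homogenization is the homomorphism $n \mapsto n/2$ (as it must be, since every homogeneous quasimorphism on an abelian group is a homomorphism).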
The main theorem proved in the paper concerns the statistical distribution of values of a bicombable function.
Theorem: Let $G$ be a hyperbolic group, and let $\phi$ be a bicombable function on $G$. Let $\overline{\phi}_n$ be the value of $\phi$ on a random word in $G$ of length $n$ (with respect to a certain
measure $\widehat{u}$ depending on a choice of generating set). Then there are algebraic numbers $E$ and $\sigma$ so that as distributions, $n^{-1/2}(\overline{\phi}_n - nE)$ converges to a normal
distribution with standard deviation $\sigma$.
One interesting corollary concerns the length of typical words in one generating set versus another. The first thing that every geometric group theorist learns is that if $S_1, S_2$ are two finite
generating sets for a group $G$, then there is a constant $K$ so that every word of length $n$ in one generating set has length at most $nK$ and at least $n/K$ in the other generating set. If one
considers an example like $\mathbb{Z}^2$, one sees that this is the best possible estimate, even statistically. However, if one restricts attention to a hyperbolic group $G$, then one can do much
better for typical words:
Corollary: Let $G$ be hyperbolic, and let $S_1,S_2$ be two finite generating sets. There is an algebraic number $\lambda_{1,2}$ so that almost all words of length $n$ with respect to the $S_1$
generating set have length almost equal to $n\lambda_{1,2}$ with respect to the $S_2$ generating set, with error of size $O(\sqrt{n})$.
Let me indicate very briefly how the proof of the theorem goes.
Sketch of Proof: Let $\phi$ be bicombable, and let $d\phi$ be a function from the vertices of $\Gamma$ to $\mathbb{Z}$, where $\Gamma$ is a digraph parameterizing $L$. There is a bijection between
the set of elements in $G$ of word length $n$ and the set of directed paths in $\Gamma$ of length $n$ that start at the initial vertex. So to understand the distribution of $\phi$, we need to
understand the behaviour of a typical long path in $\Gamma$.
Define a component of $\Gamma$ to be a maximal subgraph with the property that there is a directed path (in the component) from any vertex to any other vertex. One can define a new digraph $C(\Gamma)
$ without loops, with one vertex for each component of $\Gamma$, in an obvious way. Each component $C$ determines an adjacency matrix $M_C$, with $ij$-entry equal to $1$ if there is a directed edge
from vertex $i$ to vertex $j$, and equal to $0$ otherwise. A component $C$ is big if the biggest real eigenvalue $\lambda$ of $M_C$ is at least as big as the biggest real eigenvalue of the matrices
associated to every other component. A random long walk in $\Gamma$ will spend most of its time entirely in big components, so these are the only components we need to consider to understand the
statistical distribution of $\phi$.
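For instance (a toy digraph just to fix ideas), a component consisting of two vertices joined by all four possible directed edges has $M_C = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$, whose largest real eigenvalue is $2$, while a component which is a single directed cycle has largest eigenvalue $1$; since the number of directed paths of length $n$ inside a component grows roughly like $\lambda^n$, a long random path overwhelmingly prefers the components with the largest $\lambda$.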
A theorem of Coornaert implies that there are no big components of $C(\Gamma)$ in series; i.e. there are no directed paths in $C(\Gamma)$ from one big component to another (one also says that the big
components do not communicate). This means that a typical long walk in $\Gamma$ is entirely contained in a single big component, except for a (relatively short) path at the start and the end of the
walk. So the distribution of $\phi$ gets independent contributions, one from each big component.
The contribution from an individual big component is not hard to understand: the central limit theorem for stationary Markov chains says that for elements of $G$ corresponding to paths that spend
almost all their time in a given big component $C$ there is a central limit theorem $n^{-1/2}(\overline{\phi}_n - nE_C) \to N(0,\sigma_C)$ where the mean $E_C$ and standard deviation $\sigma_C$
depend only on $C$. The problem is to show that the means and standard deviations associated to different big components are the same. Everything up to this point only depends on weak combability of
$\phi$; to finish the proof one must use bicombability.
It is not hard to show that if $\gamma$ is a typical infinite walk in a component $C$, then the subpaths of $\gamma$ of length $n$ are distributed like random walks of length $n$ in $C$. What this
means is that the mean and standard deviation $E_C,\sigma_C$ associated to a big component $C$ can be recovered from the distribution of $\phi$ on a single infinite “typical” path in $C$. Such an
infinite path corresponds to an infinite geodesic in $G$, converging to a definite point in the Gromov boundary $\partial G$. Another theorem of Coornaert (from the same paper) says that the action
of $G$ on its boundary $\partial G$ is ergodic with respect to a certain natural measure called a Patterson-Sullivan measure (see Coornaert’s paper for details). This means that there are typical
infinite geodesics $\gamma,\gamma'$ associated to components $C$ and $C'$ for which some $g \in G$ takes $\gamma$ to a geodesic $g\gamma$ ending at the same point in $\partial G$ as $\gamma'$.
Bicombability implies that the values of $\phi$ on $\gamma$ and $g\gamma$ differ by a bounded amount. Moreover, since $g\gamma$ and $\gamma'$ are asymptotic to the same point at infinity, combability
implies that the values of $\phi$ on $g\gamma$ and $\gamma'$ also differ by a bounded amount. This is enough to deduce that $E_C = E_{C'}$ and $\sigma_C = \sigma_{C'}$, and one obtains a (global)
central limit theorem for $\phi$ on $G$. qed.
This obviously raises several questions, some of which seem very hard, including:
Question 1: Let $\phi$ be an arbitrary quasimorphism on a hyperbolic group $G$ (even the case $G$ is free is interesting). Does $\phi$ satisfy a central limit theorem?
Question 2: Let $\phi$ be an arbitrary quasimorphism on a hyperbolic group $G$. Does $\phi$ satisfy a central limit theorem with respect to a random walk on $G$? (i.e. one considers the distribution
of values of $\phi$ not on the set of elements of $G$ of word length $n$, but on the set of elements obtained by a random walk on $G$ of length $n$, and lets $n$ go to infinity)
All bicombable quasimorphisms satisfy an important property which is essential to our proof of the central limit theorem: they are local, which is to say, they are defined as a sum of local
contributions. In the continuous world, they are the analogue of the so-called de Rham quasimorphisms on $\pi_1(M)$ where $M$ is a closed negatively curved Riemannian manifold; such quasimorphisms
are defined by choosing a $1$-form $\alpha$, and defining $\phi_\alpha(g)$ to be equal to the integral $\int_{\gamma_g} \alpha$, where $\gamma_g$ is the closed oriented based geodesic in $M$ in the
homotopy class of $g$. De Rham quasimorphisms, being local, also satisfy a central limit theorem.
This locality manifests itself in another way, in terms of defects. Let $\phi$ be a quasimorphism on a hyperbolic group $G$. Recall that the defect $D(\phi)$ is the supremum of $|\phi(gh) - \phi(g) -
\phi(h)|$ over all pairs of elements $g,h \in G$. A quasimorphism is further said to be homogeneous if $\phi(g^n) = n\phi(g)$ for all integers $n$. If $\phi$ is an arbitrary quasimorphism, one may
homogenize it by taking a limit $\psi(g) = \lim_{n \to \infty} \phi(g^n)/n$; one says that $\psi$ is the homogenization of $\phi$ in this case. Homogenization typically does not preserve defects;
however, there is an inequality $D(\psi) \le 2D(\phi)$. If $\phi$ is local, one expects this inequality to be an equality. For, in a hyperbolic group, the contribution to the defect of a local
quasimorphism all arises from the interaction of the suffix of (a geodesic word representing the element) $g$ with the prefix of $h$ (with notation as above). When one homogenizes, one picks up
another contribution to the defect from the interaction of the prefix of $g$ with the suffix of $h$; since these two contributions are essentially independent, one expects that homogenizing a local
quasimorphism should exactly double the defect. This is the case for bicombable and de Rham quasimorphisms, and can perhaps be used to define locality for a quasimorphism on an arbitrary group.
This discussion provokes the following key question:
Question 3: Let $G$ be a group, and let $\psi$ be a homogeneous quasimorphism. Is there a quasimorphism $\phi$ with homogenization $\psi$, satisfying $D(\psi) = 2D(\phi)$?
Example: The answer to question 3 is “yes” if $\psi$ is the rotation quasimorphism associated to an action of $G$ on $S^1$ by orientation-preserving homeomorphisms (this is nontrivial; see
Proposition 4.70 from my monograph).
Example: Let $C$ be any homologically trivial group $1$-boundary. Then there is some extremal homogeneous quasimorphism $\psi$ for $C$ (i.e. a quasimorphism achieving equality $\text{scl}(C) = \psi
(C)/2D(\psi)$ under generalized Bavard duality; see this post) for which there is $\phi$ with homogenization $\psi$ satisfying $D(\psi) = 2D(\phi)$. Consequently, if every point in the boundary of
the unit ball in the $\text{scl}$ norm is contained in a unique supporting hyperplane, the answer to question 3 is “yes” for any quasimorphism on $G$.
Any quasimorphism on $G$ can be pulled back to a quasimorphism on a free group, but this does not seem to make anything easier. In particular, question 3 is completely open (as far as I know) when
$G$ is a free group. An interesting test case might be the homogenization of an infinite sum of Brooks functions $\sum_w h_w$ for some infinite non-nested family of words $\lbrace w \rbrace$.
If the answer to this question is false, and one can find a homogeneous quasimorphism $\psi$ which is not the homogenization of any “local” quasimorphism, then perhaps $\psi$ does not satisfy a
central limit theorem. One can try to approach this problem from the other direction:
Question 4: Given a function $f$ defined on the ball of radius $n$ in a free group $F$, one defines the defect $D(f)$ in the usual way, restricted to pairs of elements $g,h$ for which $g,h,gh$ are
all of length at most $n$. Under what conditions can $f$ be extended to a function on the ball of radius $n+1$ without increasing the defect?
If one had a good procedure for building a quasimorphism “by hand” (so to speak), one could try to build a quasimorphism that failed to satisfy a central limit theorem, or perhaps find reasons why
this was impossible.
A basic reference for the background to this post is my monograph.
Let $G$ be a group, and let $[G,G]$ denote the commutator subgroup. Every element of $[G,G]$ can be expressed as a product of commutators; the commutator length of an element $g$ is the minimum
number of commutators necessary, and is denoted $\text{cl}(g)$. The stable commutator length is the growth rate of the commutator lengths of powers of an element; i.e. $\text{scl}(g) = \lim_{n \to \
infty} \text{cl}(g^n)/n$. Recall that a group $G$ is said to satisfy a law if there is a nontrivial word $w$ in a free group $F$ for which every homomorphism from $F$ to $G$ sends $w$ to $\text{id}$.
The purpose of this post is to give a very short proof of the following proposition (modulo some background that I wanted to talk about anyway):
Proposition: Suppose $G$ obeys a law. Then the stable commutator length vanishes identically on $[G,G]$.
The proof depends on a duality between stable commutator length and a certain class of functions, called homogeneous quasimorphisms.
Definition: A function $\phi:G \to \mathbb{R}$ is a quasimorphism if there is some least number $D(\phi)\ge 0$ (called the defect) so that for any pair of elements $g,h \in G$ there is an inequality
$|\phi(g) + \phi(h) - \phi(gh)| \le D(\phi)$. A quasimorphism is homogeneous if it satisfies $\phi(g^n) = n\phi(g)$ for all integers $n$.
Note that a homogeneous quasimorphism with defect zero is a homomorphism (to $\mathbb{R}$). The defect satisfies the following formula:
Lemma: Let $\phi$ be a homogeneous quasimorphism. Then $D(\phi) = \sup_{g,h} \phi([g,h])$.
A fundamental theorem, due to Bavard, is the following:
Theorem: (Bavard duality) There is an equality $\text{scl}(g) = \sup_\phi \frac {\phi(g)} {2D(\phi)}$ where the supremum is taken over all homogeneous quasimorphisms with nonzero defect.
In particular, $\text{scl}$ vanishes identically on $[G,G]$ if and only if every homogeneous quasimorphism on $G$ is a homomorphism.
One final ingredient is another geometric definition of $\text{scl}$ in terms of Euler characteristic. Let $X$ be a space with $\pi_1(X) = G$, and let $\gamma:S^1 \to X$ be a free homotopy class
representing a given conjugacy class $g$. If $S$ is a compact, oriented surface without sphere or disk components, a map $f:S \to X$ is admissible if the map on $\partial S$ factors through $\partial
f:\partial S \to S^1 \to X$, where the second map is $\gamma$. For an admissible map, define $n(S)$ by the equality $[\partial S] = n(S) [S^1]$ in $H_1(S^1;\mathbb{Z})$ (i.e. $n(S)$ is the degree
with which $\partial S$ wraps around $\gamma$). With this notation, one has the following:
Lemma: There is an equality $\text{scl}(g) = \inf_S \frac {-\chi^-(S)} {2n(S)}$.
Note: the function $-\chi^-$ is the sum of $-\chi$ over non-disk and non-sphere components of $S$. By hypothesis, there are none, so we could just write $-\chi$. However, it is worth writing $-\chi^
-$ and observing that for more general (orientable) surfaces, this function is equal to the function $\rho$ defined in a previous post.
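For example, since the conjugacy class of a commutator $[g,h]$ bounds a map of a once-punctured torus (so $-\chi^- = 1$ and $n = 1$), the Lemma immediately gives the well-known estimate $\text{scl}([g,h]) \le 1/2$.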
We now give the proof of the Proposition.
Proof. Suppose to the contrary that stable commutator length does not vanish on $[G,G]$. By Bavard duality, there is a homogeneous quasimorphism $\phi$ with nonzero defect. Rescale $\phi$ to have
defect $1$. Then for any $\epsilon$ there are elements $g,h$ with $\phi([g,h]) \ge 1-\epsilon$, and consequently $\text{scl}([g,h]) \ge 1/2 - \epsilon/2$ by Bavard duality. On the other hand, if $X$
is a space with $\pi_1(X)=G$, and $\gamma:S^1 \to X$ is a loop representing the conjugacy class of $[g,h]$, there is a map $f:S \to X$ from a once-punctured torus $S$ to $X$ whose boundary represents
$\gamma$. The fundamental group of $S$ is free on two generators $x,y$ which map to the class of $g,h$ respectively. If $w$ is a word in $x,y$ mapping to the identity in $G$, there is an essential
loop $\alpha$ in $S$ that maps inessentially to $X$. There is a finite cover $\widetilde{S}$ of $S$, of degree $d$ depending on the word length of $w$, for which $\alpha$ lifts to an embedded loop.
This can be compressed to give a surface $S'$ with $-\chi^-(S') \le -\chi^-(\widetilde{S})-2$. However, Euler characteristic is multiplicative under coverings, so $-\chi^-(\widetilde{S}) = -\chi^-(S)
\cdot d$. On the other hand, $n(S') = n(\widetilde{S})=d$ so $\text{scl}([g,h]) \le 1/2 - 1/d$. If $G$ obeys a law, then $d$ is fixed, but $\epsilon$ can be made arbitrarily small. So $G$ does not
obey a law. qed.
// p7g2q11mo.m : This program is an analogue of p7exp.m

print "\nConstruct a 2-generator anti-Hughes 7-group of class 14",
      "\nwhich satisfies 11 commutator defining relators and in which",
      "\nthe normal closures of the generators both have class 7.",
      "\nThen repeatedly factor out complements to [b,a]^7 in the centre",
      "\nof the group, to obtain smaller counterexamples";

F := FreeGroup(2);
p := 7;
Q := quo < F | (b,a,a,a,a,b), (b,a,a,a,a,a,b), (b,a,a,a,a,a,a,b), (b,a,a,a,a,a,a,a,b),
               (b,a,b,b,b,a), (b,a,b,b,b,b,a), (b,a,b,b,b,b,b,a), (b,a,b,b,b,b,b,b,a),
               (b,a,a,a,b,a,b,a), (b,a,a,a,b,b,b,b,b), (b,a,a,a,b,a,b,b,a,a,b) >;

P := pQuotientProcess( Q, p, p : Exponent :=p );
NextClass( ~P : Exponent := p, MaxOccurrence := [p,p] );
NextClass( ~P : Exponent := p, MaxOccurrence := [p,p] );
NextClass( ~P : Exponent := p, MaxOccurrence := [p,p] );
NextClass( ~P : Exponent := p, MaxOccurrence := [p,p] );
NextClass( ~P : Exponent := p, MaxOccurrence := [p,p] );
NextClass( ~P : Exponent := p, MaxOccurrence := [p,p] );
printf "\nThe class 13 quotient of the group has order %o^%o\n", p, FactoredOrder(P)[1][2];

NextClass( ~P : Exponent := 0, MaxOccurrence := [p,p] );
G := ExtractGroup(P);
printf "\nThe p-covering group G has order %o^%o and class %o\n", p, FactoredOrder(G)[1][2], pClass(G);
print "\nG is generated by a and b; and [b,a] has order", Order( (b,a) ),
      "\n[b,a,a] has order", Order( (b,a,a) ), "; [b,a,b] has order", Order( (b,a,b) ),
      "\ngamma_3(G) is the normal closure of < [b,a,a], [b,a,b] >,",
      "\nand gamma_3(G) has class at most 4, so gamma_3(G) has exponent 7";

print "\nNow compute suitable 7th powers of elements outside the derived group";
load "tw7.m"; // should be already computed
S := [ x^7 : x in testwords ];
H := quo< G | S >;
printf "The quotient group H has order %o^%o", p, FactoredOrder(H)[1][2];
print "\nH is generated by a and b; and [b,a] has order", Order( (b,a) ),
      "\nH is anti-Hughes; the normal closures of a and b both have class 7 \n[b,a]^7 generates gamma_14(H)";

CurrentQ := H;
Z := Center(CurrentQ);
while Order(Z) ne 7 do
  printf "\n The centre, Z, of the current group has order %o^%o", FactoredOrder(Z)[1][1], FactoredOrder(Z)[1][2];
  print "\n Now build a complement for [b,a]^7 in Z";
  rank := FactoredOrder(Z)[1][2];
  ZGens := [ CurrentQ!Z.i : i in [1..rank] ];
  _, index := Max( Eltseq( (b,a)^7 ) );
  ComplGens := [ ZGens[i] * (CurrentQ.index)^-Eltseq(ZGens[i])[index] : i in [1..rank] ];
  NextQ := quo< CurrentQ | ComplGens >;
  printf " Factor it out to get an anti-Hughes group of order %o^%o", p, FactoredOrder(NextQ)[1][2];
  print "\n generated by a and b; and [b,a]^7 =", (b,a)^7;
  CurrentQ := NextQ;
  Z := Center(CurrentQ);
end while;

printf "\nThe centre is generated by %o and has order %o^%o", CurrentQ!(Z.1), p, FactoredOrder(Z)[1][2];
printf "\nSo this method reduces to an anti-Hughes group with order %o^%o", p, FactoredOrder(CurrentQ)[1][2];
print "\nwhich is as far as we can reduce the group (by this method!)";
Steady-State Operating Points (Trimming) from Specifications
Steady-State Operating Point Search (Trimming)
You can compute a steady-state operating point (or equilibrium operating point) using numerical optimization methods to meet your specifications. The resulting operating point consists of the
equilibrium state values and model input levels.
Optimization-based operating point computation requires you to specify initial guesses and constraints on the key operating point states, input levels, and model output signals.
You can usually improve your optimization results using simulation to initialize the optimization. For example, you can extract the initial values of the operating point at a simulation time when the
model reaches the neighborhood of steady state.
Optimization-based operating point search lets you specify and constrain the following variables at equilibrium:
● Initial state values
● States at equilibrium
● Maximum or minimum bounds on state values, input levels, and output levels
● Known (fixed) state values, input levels, or output levels
Your operating point search might not converge to a steady-state operating point when you overconstrain the optimization. You can overconstrain the optimization by specifying incompatible constraints
or initial guesses that are far away from the desired solution.
You can also control the accuracy of your operating point search by configuring the optimization algorithm settings.
Which States in the Model Must Be at Steady State?
When configuring a steady-state operating point search, you do not always need to specify all states to be at equilibrium. A pendulum is an example of a system where it is possible to find an
operating point with all states at steady state. However, for other types of systems, there may not be an operating point where all states are at equilibrium, and the application does not require
that all operating point states be at equilibrium.
For example, suppose you build an automobile model for a cruise control application with these states:
● Vehicle position and velocity
● Fuel and air flow rates into the engine
If your goal is to study the automobile behavior at constant cruising velocity, you need an operating point with the velocity, air flow rate, and fuel flow rate at steady state. However, the position
of the vehicle is not at steady state because the vehicle is moving at constant velocity. The lack of steady state of the position variable is fine for the cruise control application because the
position does not have significant impact on the cruise control behavior. In this case, you do not need to overconstrain the optimization search for an operating point by require that all states
should be at equilibrium.
Similar situations also appear in aerospace systems when analyzing the dynamics of an aircraft under different maneuvers.
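At the command line, you can express this kind of partial-equilibrium specification by clearing the SteadyState flag of any state that does not need to be at equilibrium. The sketch below assumes a hypothetical model named 'cruise' whose first state is the vehicle position; the model name and state index are placeholders to adapt.

opspec = operspec('cruise');        % 'cruise' is a hypothetical model name
% Do not require the position state (assumed to be States(1)) to be at
% equilibrium; all other states keep their default SteadyState = true.
opspec.States(1).SteadyState = false;
op = findop('cruise',opspec);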
Steady-State Operating Points from State Specifications
This example shows how to compute a steady-state operating point, or equilibrium operating point, by specifying known (fixed) equilibrium states and minimum state values.
1. Open Simulink® model.
sys = 'magball';
2. In the Simulink Editor, select Analysis > Control Design > Linear Analysis.
The Linear Analysis Tool for the model opens.
3. In the Linear Analysis tab, click Trim Model. Then click Specifications.
The Specifications for trim dialog box opens.
By default, the software specifies all model states to be at equilibrium (as shown by the check marks in the Steady State column). The Inputs and Outputs tabs are empty because this model does
not have root-level input and output ports.
4. In the States tab, select Known for the height state.
The height of the ball matches the reference signal height (specified in the Desired Height block as 0.05). This height value should remain fixed during the optimization.
5. Enter 0 for the minimum bound of the Current state.
6. Click to compute the operating point.
This action uses numerical optimization to find the operating point that meets your specifications.
The Trim progress viewer shows that the optimization algorithm terminated successfully. The (Maximum Error) Block area shows the progress of reducing the error of a specific state or output
during the optimization.
A new variable, op_trim1, appears in the Linear Analysis Workspace.
7. Double-click op_trim1 in Linear Analysis Workspace to evaluate whether the resulting operating point values meet the specifications.
The Actual dx values are about 0, which indicates that the operating point meets the steady state specification.
The Actual Value of the states falls within the Desired Value bounds.
8. (Optional) To automatically generate a MATLAB® script, click Trim and select Generate MATLAB Script.
The generated script contains commands for computing the operating point for this example.
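The command-line equivalent of this search is short. In the following sketch, the index of the height state — States(5) — is taken from the generated script shown later on this page; the index of the Current state varies, so inspect the specification object to find it before uncommenting that line.

sys = 'magball';
opspec = operspec(sys);
% Fix the ball height at the reference value (0.05) during the optimization
opspec.States(5).Known = true;
opspec.States(5).x = 0.05;
% Impose a lower bound of 0 on the coil current state; replace idx with the
% index of the Current state in opspec for your model
% opspec.States(idx).Min = 0;
[op_trim1,opreport] = findop(sys,opspec);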
Steady-State Operating Point to Meet Output Specification
This example shows how to specify an output constraint of an engine speed for computing the engine steady-state operating point.
1. Open Simulink model.
sys = 'scdspeed';
2. In the Simulink Editor, select Analysis > Control Design > Linear Analysis.
The Linear Analysis Tool for the model opens.
3. In the Linear Analysis tab, click Trim Model. Then click Specifications.
The Specifications for trim dialog box appears.
4. Examine the linearization outputs for scdspeed in the Outputs tab.
Currently there are no outputs specified for scdspeed.
5. In the Simulink Editor, right-click the output signal from the rad/s to rpm block. Select Linear Analysis Points > Trim Output Constraint.
This action adds the output signal constraint marker to the model.
The output signal from the rad/s to rpm block now appears under the Outputs tab.
6. Select Known and enter 2000 RPM for the engine speed as the output signal value. Press Enter.
7. Double-click op_trim1 in Linear Analysis Workspace to evaluate whether the resulting operating point values meet the specifications.
In the States tab, the Actual dx values are either zero or about zero. This result indicates that the operating point meets the steady state specification.
In the Outputs tab, the Actual Value and the Desired Value are both 2000.
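A command-line sketch of the same computation follows. The block path passed to addoutputspec is an assumption — it must match the rad/s to rpm block in your copy of the model, with the slash inside the block name escaped as //.

sys = 'scdspeed';
opspec = operspec(sys);
% Add an output specification on the engine speed signal
opspec = addoutputspec(opspec,'scdspeed/rad//s to rpm',1);
% Constrain the engine speed output to 2000 RPM
opspec.Outputs(1).Known = true;
opspec.Outputs(1).y = 2000;
op_trim1 = findop(sys,opspec);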
Initialize Steady-State Operating Point Search Using Simulation Snapshot
Initialize Operating Point Search Using Linear Analysis Tool
This example shows how to use the Linear Analysis Tool to initialize the values of an operating point search using a simulation snapshot.
If you know the approximate time when the model reaches the neighborhood of a steady-state operating point, you can use simulation to get the state values to be used as the initial condition for
numerical optimization.
1. Open Simulink model.
sys = ('watertank');
2. In the Simulink Editor, select Analysis > Control Design > Linear Analysis.
The Linear Analysis Tool for the model opens.
3. In the Linear Analysis tab, click Operating Point Snapshot
The Operating Point Snapshots tab opens.
4. Enter 10 in the Simulation Snapshot Times field to extract the operating point at this simulation time. Press Enter.
Click to take a snapshot of the system at the specified time.
op_snapshot1 appears in the Linear Analysis Workspace. The snapshot, op_snapshot1, contains all state values of the system at the specified time.
5. In the Linear Analysis tab, click Trim Model. Then click Specifications.
The Specifications for trim dialog box appears.
6. Click Import.
The Import initial values and specifications dialog opens.
7. Select op_snapshot1 and click Import to initialize the operating point states with the values you obtained from the simulation snapshot.
The state values displayed in the Specifications for trim dialog box update to reflect the new values.
8. Click to find the optimized operating point using the states at t = 10 as the initial values.
9. Double-click op_trim1 in Linear Analysis Workspace to evaluate whether the resulting operating point values meet the specifications.
The Actual dx values are near zero. This result indicates that the operating point meets the steady state specifications.
Initialize Operating Point Search (MATLAB Code)
This example show how to use initopspec to initialize operating point object values for optimization-based operating point search.
1. Open Simulink model.
sys = 'watertank';
2. Extract an operating point from simulation after 10 time units.
opsim = findop(sys,10);
3. Create operating point specification object.
By default, all model states are specified to be at steady state.
opspec = operspec(sys);
4. Configure initial values for operating point search.
opspec = initopspec(opspec,opsim);
5. Find the steady state operating point that meets these specifications.
[op,opreport] = findop(sys,opspec)
opreport describes the optimization algorithm status at the end of the operating point search.
Operating Report for the Model watertank.
(Time-Varying Components Evaluated at time t=0)
Operating point specifications were successfully met.
(1.) watertank/PID Controller/Integrator
x: 1.26 dx: 0 (0)
(2.) watertank/Water-Tank System/H
x: 10 dx: -1.1e-014 (0)
Inputs: None
Outputs: None
dx, which is the time derivative of each state, is effectively zero. This value of the state derivative indicates that the operating point is at steady state.
Compute Steady-State Operating Points for SimMechanics Models
This example shows how to compute the steady-state operating point of a SimMechanics™ model from specifications.
1. Open the SimMechanics model.
sys = 'scdmechconveyor';
2. Double-click the Env block to open the Block Parameters dialog box.
3. In the Parameters tab, select Trimming as the Analysis mode. Click OK.
This action adds an output port to the model with constraints that must be satisfied to a ensure a consistent SimMechanics machine.
4. In the Simulink Editor, select Analysis > Control Design > Linear Analysis.
The Linear Analysis Tool for the model opens.
5. In the Linear Analysis tab, click Trim Model. Then click Specifications.
The Specifications for trim dialog box appears.
By default, the software specifies all model states to be at equilibrium (as shown in the Steady State column). The Outputs tab shows the error constraints in the system that must be set to zero
for steady-state operating point search.
6. In the Outputs tab, select Known to set all constraints to 0.
You can now specify additional constraints on the operating point states and input levels, and find the steady-state operating point for this model.
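At the command line, the analogous setup might look like the following sketch; the output index and signal width are assumptions that depend on how the trimming output port was added to the model.

sys = 'scdmechconveyor';
opspec = operspec(sys);
% Require every error constraint at the trimming output port to be zero
opspec.Outputs(1).Known(:) = true;
opspec.Outputs(1).y(:) = 0;
% Add any further state or input constraints here, then search:
op = findop(sys,opspec);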
After you finish steady-state operating point search for the SimMechanics model, reset the Analysis mode to Forward dynamics in the Env block parameters dialog box.
Batch Compute Steady-State Operating Points Reusing Generated MATLAB Code
This example shows how to batch compute steady-state operating points for a model using generated MATLAB code. You can batch linearize a model using the operating points and study the change in model
If you are new to writing scripts, use the Linear Analysis Tool to interactively configure your operating points search. You can use Simulink Control Design™ to automatically generate a script based
on your Linear Analysis Tool settings.
1. Open the Simulink model.
sys = 'magball';
2. Open the Linear Analysis Tool for the model.
In the Simulink Editor, select Analysis > Control Design > Linear Analysis.
3. Open the Specifications for trim dialog box.
In the Linear Analysis tab, click Trim Model. The Trim Model tab should open.
Click Specifications.
By default, the software specifies all model states to be at equilibrium (as shown in the Steady State column).
4. In the States tab, select the Known check box for the magball/Magnetic Ball Plant/height state.
5. Click to compute the operating point using numerical optimization.
The Trim progress viewer shows that the optimization algorithm terminated successfully. The (Maximum Error) Block area shows the progress of reducing the error of a specific state or output
during the optimization.
6. Click Generate MATLAB Script in the Trim list to automatically generate a MATLAB script.
The MATLAB Editor window opens with the generated script.
7. Edit the script:
a. Remove unneeded comments from the generated script.
b. Define the height variable, height, with values at which to compute operating points.
c. Add a for loop around the operating point search code to compute a steady-state operating point for each height value. Within the loop, before calling findop, you must update the reference
ball height, specified by the Desired Height block.
Your script should now look similar to this (excluding most comments):
function [op,opreport] = myoperatingpointsearch
%% Specify the model name
sys = 'magball';
%% Create operating point specification object
opspec = operspec(sys)
% State (5) - magball/Magnetic Ball Plant/height
% - Default model initial conditions are used to initialize optimization.
opspec.States(5).Known = true;
%% Create the options
opt = findopOptions('DisplayReport','iter');
%% Specify the ball heights at which to compute operating points
height = [0.05;0.1;0.15];
%% Loop over height values to find the corresponding steady-state
%% operating points
for ct = 1:numel(height)
    % Set the ball height in the specification
    opspec.States(5).x = height(ct);
    % Update model parameter
    set_param('magball/Desired Height','Value',num2str(height(ct)));
    % Trim the model
    [op(ct),opreport(ct)] = findop(sys,opspec,opt);
end
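After the loop has run, you can reuse the stored operating points to batch linearize the model and compare the resulting linear models. A minimal sketch (no I/O set is passed, so linearize uses any linearization I/O points already marked in the model):

for ct = 1:numel(height)
    linsys{ct} = linearize(sys,op(ct));  % store each linearization in a cell array
end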
Change Operating Point Search Optimization Settings
This example shows how to control the accuracy of your operating point search by configuring the optimization algorithm.
Typically, you adjust the optimization settings based on the operating point search report, which is automatically created after each search.
1. In the Linear Analysis Tool, open the Linear Analysis tab. Click Trim Model and click Optimization Options.
This action opens the Options for trim dialog box.
2. Change the appropriate optimization settings.
This table lists the most common optimization settings.
Optimization Status | Option to Change | Comment
Optimization ends before completing (too few iterations) | Maximum iterations | Increase the number of iterations
State derivative or error in output constraint is too large | Function tolerance or Constraint tolerance (depending on selected algorithm) | Decrease the tolerance value

Note: You can get help on each option by right-clicking the option label and selecting What's This?.
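You can set the same options programmatically when trimming with findop. The option fields below belong to the underlying optimization options structure and are an assumption to verify for your release and chosen optimizer; sys and opspec are as in the earlier examples.

opt = findopOptions('DisplayReport','iter');
% Allow more iterations and tighten the function tolerance
opt.OptimizationOptions.MaxIter = 600;
opt.OptimizationOptions.TolFun  = 1e-8;
op = findop(sys,opspec,opt);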
There is an old joke that goes something like this: “If God is love, love is blind, and Ray Charles is blind, then Ray Charles is God.” Explain, in terms of first-order logic and predicate calculus, why this reasoning is incorrect. Can someone explain this in more detail?
I do not know the answer to this question in terms of first-order logic and predicate calculus. In terms of the logic studied in Geometry, the matter has to do with valid reasoning patterns, syllogisms. The two valid patterns are:
p --> q
p
------
Therefore, q
and
p --> q
~q
------
Therefore, ~p.
The pattern posted goes like this:
GL --> LB
B
-------
therefore, no conclusion
The fallacy is in reasoning from the converse. @jmh433
If Ray Charles is blind and love is blind, then Ray Charles is love.
Catalan Numbers with Applications by Koshy | 9780195334548 | Chegg.com
Details about this item
Catalan Numbers with Applications: Like the intriguing Fibonacci and Lucas numbers, Catalan numbers are also ubiquitous. "They have the same delightful propensity for popping up unexpectedly,
particularly in combinatorial problems," Martin Gardner wrote in Scientific American. "Indeed, the Catalan sequence is probably the most frequently encountered sequence that is still obscure enough
to cause mathematicians lacking access to Sloane's Handbook of Integer Sequences to expend inordinate amounts of energy re-discovering formulas that were worked out long ago," he continued.
As Gardner noted, many mathematicians may know the abc's of Catalan sequence, but not many are familiar with the myriad of their unexpected occurrences, applications, and properties; they crop up in
chess boards, computer programming, and even train tracks. This book presents a clear and comprehensive introduction to one of the truly fascinating topics in mathematics. Catalan numbers are named
after the Belgian mathematician Eugene Charles Catalan (1814-1894), who "discovered" them in 1838, though he was not the first person to discover them. The great Swiss mathematician Leonhard Euler
(1707-1783) "discovered" them around 1756, but even before then and though his work was not known to the outside world, Chinese mathematician Antu Ming (1692?-1763) first discovered Catalan numbers
about 1730.
Catalan numbers can be used by teachers and professors to generate excitement among students for exploration and intellectual curiosity and to sharpen a variety of mathematical skills and tools, such
as pattern recognition, conjecturing, proof-techniques, and problem-solving techniques. This book is not only intended for mathematicians but for a much larger audience, including high school
students, math and science teachers, computer scientists, and those amateurs with a modicum of mathematical curiosity. An invaluable resource book, it contains an intriguing array of applications to
computer science, abstract algebra, combinatorics, geometry, graph theory, chess, and World Series.
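For reference, the sequence itself is easy to generate; a short Python sketch (not taken from the book) using the standard closed form C(n) = (2n choose n)/(n+1):

from math import comb  # Python 3.8+

def catalan(n):
    return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(10)])
# [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]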
Avondale Estates SAT Math Tutor
Find an Avondale Estates SAT Math Tutor
...I also spent eight years at a private academy in Atlanta where I was responsible for the language arts program for grades K-8. It was in this capacity that I became proficient in teaching
phonics/reading. I have been retired for a year and am now returning to part-time teaching as a private tutor. My lessons are based on Latin and Greek prefixes, suffixes, and roots.
15 Subjects: including SAT math, reading, English, writing
I hold a bachelor's degree in Secondary Education and a master's degree in Education. I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High
School level in both private and public schools.
10 Subjects: including SAT math, geometry, algebra 1, algebra 2
...I am a mentor at my high school and involved in many honor societies as well as volunteered to teach at a homework club in an elementary school. If my students do not understand the way I am
teaching I will adjust my teachings to be more suitable for my students. I am flexible with my schedule and I am always punctual.
14 Subjects: including SAT math, chemistry, geometry, biology
...I specialize in tutoring students for standardized test including the ACT/SAT/SSAT/high school graduation test. I also enjoy assisting students with general needs such as raising grades and
performance in the academic setting. Willing to tutor in person (Home environment, Library, Coffee Shop, etc.) or online.
14 Subjects: including SAT math, calculus, geometry, biology
...My technique is based on a combination of the Socratic Method and a more traditional classroom-style approach. Students I work with first learn the fundamentals of what they need to improve in
class and on test day, but they also gain a broader knowledge designed to help them pursue any goal they choose. Initially, I aim to help students improve their scores and/or grades.
46 Subjects: including SAT math, English, reading, chemistry
Nearby Cities With SAT math Tutor
Atlanta Ndc, GA SAT math Tutors
Belvedere, GA SAT math Tutors
Clarkston, GA SAT math Tutors
Conley SAT math Tutors
Decatur, GA SAT math Tutors
Dunaire, GA SAT math Tutors
Fairburn, GA SAT math Tutors
Grayson, GA SAT math Tutors
Hapeville, GA SAT math Tutors
Pine Lake SAT math Tutors
Redan SAT math Tutors
Rex, GA SAT math Tutors
Scottdale, GA SAT math Tutors
Stone Mountain SAT math Tutors
Vista Grove, GA SAT math Tutors | {"url":"http://www.purplemath.com/Avondale_Estates_SAT_math_tutors.php","timestamp":"2014-04-18T05:50:36Z","content_type":null,"content_length":"24385","record_id":"<urn:uuid:d9251bc4-771f-4771-89cb-4f136b7f3681>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kinematics, Equations for Accelerated Motion
In introductory mechanics there are three equations that are used to solve kinematics problems:
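In terms of the symbols defined below, the three equations are the standard constant-acceleration relations (displacement and acceleration, velocity and acceleration, and the time-independent relation):

d = v[o]*t + (1/2)*a*t^2
v[f] = v[o] + a*t
v[f]^2 = v[o]^2 + 2*a*d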
These three equations are explained below. For each equation we will:
• Present an introduction.
• Understand a derivation, or origin.
• Use algebra to rearrange and solve the equation for all of its variables.
• Do some random valued multiple-choice problems.
• Cover some details and additional information regarding further understanding.
Symbols used in these equations:
│ d │ Displacement or change in position │
│ v[o] │ Original velocity, the velocity at the start of the acceleration │
│ v[f] │ Final velocity, the velocity at the end of the acceleration. │
│ a │ Acceleration, this is a constant acceleration │
│ t │ Time, this is the time period of the acceleration. │
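A quick worked example with these symbols: a car starting from rest (v[o] = 0) that accelerates at a = 2 m/s^2 for t = 5 s reaches v[f] = v[o] + a*t = 10 m/s and covers d = v[o]*t + (1/2)*a*t^2 = 25 m; the time-independent equation checks out, since v[f]^2 = v[o]^2 + 2*a*d = 100 (m/s)^2.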
There are many cross links in the discussions that follow.
If you get lost, at the top of each discussion page is a link back to the Home for Equations, which is this page.
So, if you need to, you can always jump back to here for a clear starting point in your navigation.
Displacement and Acceleration
Velocity and Acceleration
Time Independent Acceleration
The Gospel According to Frank Tipler: O’Leary’s review of The Physics of Christianity
July 28, 2007 Posted by O'Leary under Intelligent Design 26 Comments
When I asked a gifted Canadian physicist what he thought of Frank Tipler’s The Physics of Christianity, he said, “in one word: wacky”.
But readers will expect more than one word from me, and I think there is more than that to be said for Tipler’s book.
Frank Tipler is in an unusual position. He is a Christian physicist who is an exponent of “many worlds” theory. This theory, according to which new universes are constantly generated by each choice
that we make, is typically shunned by Christian physicists (including my friend, mentioned above). Apart from its dizzying implications, many worlds theory seems to make life’s choices meaningless.
(Tipler does not appear to see it that way.)
Now, one good thing about Tipler, he is no pussyfoot. He is NOT afraid to take on the implications of whatever he espouses. For example, he writes … (For the rest, click the link.)
26 Responses to The Gospel According to Frank Tipler: O’Leary’s review of The Physics of Christianity
1. The many worlds interpretation in quantum mechanics, that Tipler relies on, is a materialistic presumption. The Theistic presumption would be that each particle/wave is ultimately controlled by
the infinite mind and power of God in His sovereignty. As with everything else in science, those are the only two options we have from the two prevailing philosophies. Though I think the Theistic
presumption can and will be refined further, I see no way to refine the materialistic position any further.
The analysis of the Shroud that Tipler alludes to, to substantiate the virgin birth of Jesus, though he may not agree, in fact relies on the Theistic presumption of quantum mechanics to be
true in order for the probabilistic hurdles he proposes to be satisfied in logic.
2. He does indeed sound very nutty. Also, jumping to premature conclusions. I don’t think we yet understand the data well enough to come to the conclusions he does, accepting some but rejecting
other miracles, for example. Seems to me the acceptance of Big Bang is similarly premature, and equally silly for anyone to worry that Big Bang is a stronger argument for the existence of God
than some other system. The Hindus believe in God, and they think universes are cyclic, with one lasting 311 trillion years. What TOE is referred to here?:
“Contrary to what many physicists have claimed in the popular press, we have had a Theory of Everything for about thirty years. Most physicists dislike this Theory of Everything because it
requires the universe to begin in a singularity. That is, they dislike it because the theory is consistent only if God exists, and most contemporary scientists are atheists.”
3. Though I’m not really convinced by MWI, I do agree with Tipler on one point – even though it’s wielded by materialists, I hardly think it does much to the idea of God, even as traditionally
That said, I do enjoy how desperately many atheists now cling to MWI. What atheists of old would have thought that in the years to come their standard bearers would be talking about there being
an infinite number of (unfalsifiable?) universes as a means to explain things.
4. bornagain77, “The Theistic presumption would be that each particle/wave is ultimately controlled by the infinite mind and power of God in His sovereignty.”
I seriously question your perspective. The first error I see is when you say "The". As you get to know theists you will discover that there is by no means only one theistic perspective.
I, for instance, am quite convinced that man truly has free will. If God ultimately controls each particle/wave then free will does not exist. My theistic perspective, therefore, is that there is something truly outside of God's control within the universe, something that God chose to give up control of.
Does this mean that God is not sovereign? If by God's choice, He gave man free will, then God remains sovereign. As God is clearly outside of time, even though we have free will and free choice, He has the right of divine intervention, and He has the privilege of knowing the end from the beginning. So God is able to give us true free choice without in any way losing control of the system.
5. bfast, I agree with you, that God has given us free will. A Calvinist friend of mine however asserts that God controls what we do AND we have free will. There was a time I would have laughed at
him, but there was a time I would have laughed at the idea of something being a particle and a wave at the same time.
6. Free will is the great puzzle for us and as I stated earlier I can see where the Theistic presumption can and will be refined further. Yet, none the less, the starting Theistic presumption is
that God ultimately controls each and every particle/wave in his infinite power and sovereignty, whereas the materialistic presumption is forced into alluding to an infinite "array" of universes
that have the same foundational universal constants as the one we find ourselves in. I find the Theistic presumption more coherent since the personal miracles I’ve seen in my life testify to the
fact that God has ultimate control of each and every particle/wave in the universe.
To reconcile the starting presumption of Theism in quantum mechanics to the apparent free will we exercise would require that one say that free will is either an illusion or that God allows the
universe to operate under His “permissive will” in which He allows us the freedom to choose our own course of action. The latter “somewhat detached” sovereignty of control seems, from our present
limited perspective, to be the most logically coherent view.
7. Hello everyone,
Long time lurker and my first post.
Some background on myself since I know this site is very tightly moderated.
I am ID friendly and admire very much Dembski, Phil Johnson and all those who are standing up to the Darwinian fundamentalists even though it has cost them much in academia, as well as the assaults
they have had to endure by the ruling academic priesthood.
I used to post over at ARN and from time to time post at telic thoughts.
I thought I might offer my thoughts on free will. I have given this subject a lot of thought and I confess I have been heavily influenced by Edwards.
The problem is that the term “free will” can mean so many things so we really need to drill down and become more precise in our definitions.
To me free will cannot exist if we mean by this that nothing determines the will. But that is not what most people mean when they bandy about this term; they usually mean "free choice".
For myself I deny that man has a free will but assert we do have free choice. It seems to me they are entirely two different things.
In all cases something always “determines” the will so from that standpoint the will is never free from that which determines it. This is why I deny the existence of free will.
Free choice on the other hand can exist even if the will is not free. To have free choice is the capacity to choose what we want within the constraints of the choices we are able to make.
I want to comment more but I want to see if this gets posted before I write more extensively on the subject.
8. “A calvinist friend of mine however asserts that God controls what we do AND we have free will. There was a time I would have laughed at him,”
Hi Collin,
I would bet your friend is a compatibilist.
Like your friend I too think that God controls all and at the same time we have free choice.
Hi bfast
"I seriously question your perspective. The first error I see is when you say "The". As you get to know theists you will discover that there is by no means only one theistic perspective.
I, for instance, am quite convinced that man truly has free will. If God ultimately controls each particle/wave then free will does not exist. My theistic perspective, therefore, is that there is something truly outside of God's control within the universe, something that God chose to give up control of."
This theistic perspective is known as “open theism”.
9. “If God ultimately controlls each particle/wave then free will does not exist”
I meant to include this on my previous post but hit the wrong button and I don't know how to edit.
I guess my question to you bfast is why would free choice not exist if God ultimately controls every particle and wave?
10. What atheists of old would have thought that in the years to come their standard bearers would be talking about there being an infinite number of (unfalsifiable?) universes as a means to explain things.
Not only unfalsifiable, but inherently unobservable, sort of like, er, you know…God.
11. There is another option – God is unable to have complete control.
As one of the few non-Christian theists here, this can make sense to me.
12. Many Worlds Interpretation could be an artifact of the math we use. Tipler is perhaps making a forgivable error.
Electrical Engineers like myself describe our circuits with imaginary currents and voltages. Do such imaginary things really exist?
The number 3 = 8 + (-5)
3 people in a room are actually +8 people and -5 people in the room [only in our imagination, but it's correct mathematically]
We often describe electrical voltages as the sum of several components, in fact as an infinite series. So a voltage of 3 volts could be modeled as the sum of +8 volts and -5 volts [just like we did for 3 people in a room being modeled as +8 people and -5 people]. In fact Electrical Engineers go several steps farther, rather than +8 and -5 to describe the number 3, they make it an infinite series of numbers (amplitudes). To top it off it's an infinite series of both real and imaginary numbers (WHOA!). We call such monstrosities Fourier series and Fourier Transforms, which strangely enough we can use to help us describe quantum mechanics….
Thus with mathematical wizardry we can show that 3 people are really an infinite summation of positive, negative, positively imaginary and negatively imaginary people….. Divergent series combined in a way to make them convergent. Wonderful!
My point, MWI may only be something of an artifact of the mathematical gimmickry we use to solve difficult problems. Unless of course when you see 3 people in a room you presume there are
actually +8 and -5 people in the room…..
Tipler’s book, by the way is fabulous except for MWI, which is tolerable if read in the light of the math considerations I gave.
I don’t believe I’ve met an Electrical Engineer who, although injects imaginary numbers and measurements into his calculation, who really believes these entities really exist, any more than his
Fourier Inifinte Series interpretation of everything in his world suggest that 3 people in a room are really +8 people combined with -5 people.
Yet we Electrical Engineers pump these make believe entitities through our make believe mathematical worlds to solve real world problems. But at some point we have to draw the line between make
believe and reality.
Upon finishing our calculations for predicted voltage we have a real part and then an imaginary part. The professors admonish their students then, when they go to the lab to make measurements, they throw out the imaginary part, since it can't be measured. Maybe these imaginary parts were never there to begin with, just a gimmick to make the math of solving differential equations easier.
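To make that bookkeeping concrete, here is a short Python sketch (values and names are purely illustrative): a real cosine written as the sum of two complex exponentials whose imaginary parts cancel, so only the real part survives as something you could measure.

import numpy as np

t = np.linspace(0, 1, 8, endpoint=False)
w = 2 * np.pi                       # 1 Hz angular frequency
real_signal = np.cos(w * t)

# The same real signal written as a sum of two complex "phasors":
phasor_sum = 0.5 * (np.exp(1j * w * t) + np.exp(-1j * w * t))

print(np.allclose(phasor_sum.imag, 0))            # True: the imaginary parts cancel
print(np.allclose(phasor_sum.real, real_signal))  # True: only the real part is left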
13. Could you explain this a bit further?
By scordova
“gimmick to make the math of solving differential equations easier”
Differential equations I found difficult.
14. LOTF:
Diff eqns are equations that use rates of change of certain variables as terms.
Speed is actually such a variable:
V = dx/dt, the rate of change of distance, miles per hour or whatever.
If something is moving with steady speed, its position X = [dx/dt] *t.
or in more familiar terms, X = v.t.
55 mph * 10 hr = 550 miles. [MPH is the giveaway -- miles per hour. So would be gallons per minute for a flow rate . . .]
The “d” in the expression dx/dt, is a shorthand for tiny jump in distance divided by tiny jump in time, taken to the limit.
Hope that helps.
GEM of TKI
PS: Imaginary numbers are based on assuming that -1 has a square root, j or i depending on your field.
I usually taught 4th form classes to draw up x-y axes, and imagine j*1 rotates 90 degrees anticlockwise — puts it up the Y axis. j*j*1 then puts us to -1 along the negative x axis. Voila, j^2 = -1. j is the square root of -1.
[We then make the vectors into rotating vectors and therein lies all that stuff on using complex numbers in engineering and science. Fascinating stuff -- the shortest route between two real
number results is often through doing a complex number solution.]
Finally, negative numbers were discovered by borrowers: if you have $5 and buy what costs $13, you OWE $8, i.e. it would take $8 to clear your debt. Thence, negative numbers.
And so forth . . .
How about the Euler eqn:
e^(j*pi) + 1 = 0,
believe it or not, duly connecting the five most important numbers in Mathematics in one expression. Astonishing. (Sometimes, I sign off at that point, QED, God didit!)
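A quick numerical check of that identity in Python (purely illustrative):

import cmath

lhs = cmath.exp(1j * cmath.pi) + 1
print(lhs)                 # roughly 0 + 1.2e-16j, i.e. zero up to rounding error
print(abs(lhs) < 1e-12)    # True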
15. LOTF:
Certain differential equations (when certain boundary conditions are known) can be solved by gimmicks.
But let’s bring this down to Earth a bit. When you put light through a prism, you break it down into the various components. Each of the color components has different strengths of intensity.
We’re specifying a “multiverse” of components of the lightwave by putting it through a prism (figuratively speaking)….
Surprisingly, most any arbitrary wave form (well technically wave-forms that obey Dirichlet criteria) can be put through a "prism". We can do this physically with prism crystals, or the wonderful Corti organ in your ears, or spectral analyzers or something as crude as an audio graphic equalizer for your fancy stereo. Mathematically, the "prism" is known as the Fourier Transform or Fourier Series. There is a related "prism" known as the Laplace Transform……
Electrical Engineers realized that many times they simply wanted to know if they pumped a wave form into a circuit, what would be the output? One could solve it with quite a great deal of headaches the old-fashioned way by solving differential equations, or they could come up with a generic and relatively easy method. [This only works for certain kinds of diff-eqs...]
If the system (like a stereo) receives a sinewave and the only transformation it applies to it is an amplitude and phase change to the sinewave, then VOILA, no matter how complicated the
differential equation is that describes the circuit (perhaps 30th order equations!!!), the nice thing is "sine wave in, sine wave out". So if we can tell that a particular system is "sinewave in, sinewave out" for all sinewaves or all possible sums of sinewaves, we're good to go to solve all possible differential equations relating to the input or output of the system! We have a nice number-cranking method, versus having to think through every possible special case…
With this fact, it turns out there is a nice way to predict how even such complicated diffeqs will affect any given input sinewave. We call such things Transfer Functions
Now, it turns out since we can decompose arbitrary waves through a prism (your ear is one such “prism”), we can mathematically predict what will happen to just about any given waveform of
practical value going through a system without the agony of solving a differential equation in the old fashioned way. All we have to do is apply a Fourier Transform (the “prism”) and we can see
how each “color” component will behave at the output end.
This is nice, because when building an audio system, Electrical Engineers don't have to ponder every possible audio wave form and model it, they can summarize the behavior for every possible wave form succinctly. For example, when you crank up the bass on your CD player it doesn't matter whether Mariah Carey or Johnny Cash are singing, you expect a certain general change in
the audio quality. Cranking up the bass on your CD involves changes to the differential equation of the audio system, but this change can be described without an insane amount of pain because of
these gimmicks.
What you experience as the simple act of hearing cranked-up bass also has a simple and elegant mathematical interpretation through these gimmicks. For that matter, even the simple act of putting on colored sunglasses approximates how this mathematical gimmick works. We basically describe the complex differential systems as filters like colored lenses [ok, that's a pretty gross
simplification, but it should hopefully suffice].
The point remains, we might do well to be careful not to take our gimmickry too far and believe things exist which really don't because we were dazzled by the math.
3 = 8 + (-5) does not imply when there are 3 people in the room that it’s because there are actually 8 positive people and 5 negative people in the room!!!! That’s taking our math wizardry a bit
too far. Multiverses could be the same artifacts of taking things too far…
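A minimal Python sketch of the "sine wave in, sine wave out" idea (the filter coefficient, sample rate, and frequency are arbitrary illustrative choices):

import numpy as np

fs, f = 1000, 50                      # sample rate and input frequency in Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * f * t)         # sine wave in

a = 0.9                               # a simple first-order low-pass filter
y = np.zeros_like(x)
for n in range(1, len(x)):
    y[n] = a * y[n - 1] + (1 - a) * x[n]

# The steady-state output's spectrum has all its energy at the same 50 Hz line
# as the input -- same frequency out, only the amplitude and phase have changed.
spectrum = np.abs(np.fft.rfft(y[200:]))
print(np.argmax(spectrum) * fs / len(y[200:]))    # 50.0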
16. Perhaps another example can help Scordova's example of how imaginary numbers can be taken too far. Imaginary numbers are used everywhere in QM. Generally, instead of using sin and cos, we use complex exponentials for mathematical simplicity. Subatomic particles do not have definite known position and velocity, therefore, often a particle's position is represented by a complex valued wave function. Where scientists differ is on how the wave function should be interpreted. I take the squared absolute value of the wave function to mean the probability that a particle is at a certain point in space, which should be between 0 and 1. By taking the squared absolute value, the imaginary numbers disappear and we are left with real numbers. This interpretation of QM is called the Copenhagen interpretation. There are other interpretations such as the multi-world interpretation. The MWI works like this: I solve my set of differential equations and I come up with a number of possible outcomes. Instead of just one happening, with a certain probability (like what the Copenhagen interpretation says), they all happen. However, I want to point out that these are all INTERPRETATIONS and NOT based on empirical experiments. MWI people must interpret their complex valued wave functions differently than Copenhagen people.
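As a rough illustration of that squared-absolute-value step (a toy Gaussian wave packet, not any particular physical system), in Python:

import numpy as np

x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(2j * x)   # a complex-valued wave function

prob_density = np.abs(psi) ** 2            # squared absolute value: real and non-negative
prob_density /= prob_density.sum() * dx    # normalize so the total probability is 1

print(np.iscomplexobj(prob_density))       # False -- the imaginary part is gone
print(prob_density.sum() * dx)             # 1.0 (up to rounding)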
17. Since we are dealing with infinities I thought this portion of the following article I received from "reasons to believe" might be of interest:
An article in Scientific American gives more details of the relevant science, but the point pertinent to this discussion involves a transformation of the infinite future expansion of our bubble
into an actual spatially infinite universe. Craig argues that actual infinities of the type invoked here cannot exist because they lead to absurdities.
He outlines a few examples of absurdities arising in dynamical infinities in this article published in the Canadian Journal of Philosophy:
Consider an infinite hotel full of guests. Now suppose another infinite group arrives and asks for rooms. If the owner has each guest move to the room twice their current value (1 to 2, 2 to 4, 3
to 6, …), this leaves open the infinite number of odd-numbered rooms. So a completely full hotel can accommodate an infinite number of new guests.
Consider two planets where one orbits twice as fast as the other. After an infinite time, each planet has accumulated an identical number (the infinite value aleph-null) of orbits. However,
during every possible finite time interval, the faster planet accumulates twice as many orbits as the slower.
In the previous example, one could ask the question of whether the number of completed orbits is even or odd. After an infinite time the number of orbits is a value referred to as aleph-null. An
even number is a multiple of two; an odd number is one more than a multiple of 2. But aleph-null = (2 x aleph-null) = (2 x aleph-null) + 1. So the number of orbits after infinite time is both odd and even.
These examples highlight that basic rules which we take for granted cannot apply in physically existing infinities. Either we must rewrite basic arithmetic rules (addition, subtraction, multiplication, division, and comparison) or such infinities do not exist. I have glossed over many details, but the objections Craig raises are worth a serious look as a response to infinite,
dynamic universes. Additionally, scientists typically regard infinities as a sign that they have entered a region where their theories are no longer valid.
18. As a sidelight to this, I think, logically, all infinities of math should ultimately reside in “God”.
To clarify this, it seems to me in battling materialists, Theists have always taken away the materialists' source for infinities in defeating their particular theories, with the result being that the resultant need for an infinity is always fulfilled by God. Thus mathematically speaking, it seems simple to me that all problems encountered in math with infinities will only be "truthfully"
satisfied when alluding to God as the source of the needed infinity in the math problem.
19. Thanks all for the explanations.
Bornagain77 – can I ask why you think infinity requires a 'source'? Do all numbers require a source?
20. Sal, Dr Dan and BA77:
Very nice popular level summaries of some really nasty math when one has to actually do it the hard way!
Maybe we should think about doing what C S Lewis long ago suggested: do a sort of series of "in a nutshell guides for the layman" on a lot of the relevant sci and math. [Eng too . . .]. A Wiki on
101 type stuff tied to UD might be a good way to start?
That way when anyone needs to figure it out fast, he is talking with us . . . (That gets up to an idea I have about doing an online college . . .)
The point on MWI of the Q-mech numbers and observations is interesting, and also ties to some work by Feynman and his infamous sets of diagrams that were summed to give a result. Such models may
work but just because a model works does not mean it is “true” in any metaphysical sense.
GEM of TKI
PS: On Transfer Functions, I rather liked the “poles and nails under a stretchy rubber sheet” model of Laplace Transform based TFs. (My College level engg technology students loved it and it
helped them visualise where the frequency and transient responses were coming from. Years later they were telling me about how they were still looking at cars with bad suspension systems — common on Jamaica's roads — and figuring out where the poles were in the s-space from what happened when they hit a bump or pothole! BTW, we once did a push and spring-back game on a nice little Toyota
hotrod, and it was critically damped, unlike a lot of the nearby cars in the student car-park circa 8 pm that night . . . my idea of a class demonstration; make sure the cars don’t have
electronic alarms first.) Z transform space versions too, can be useful.
PPS: the s-variable is of course a complex number:
s = sigma + j* omega
The sigma part is useful on assessing damping behaviour, and the w part is related to the frequency and thus the frequency response.
21. lotf,
In dealing with materialists on the origin of the universe, Theists have the upper hand for a valid explanation. Materialists must allude to an infinite number of other universes that have tried
every other possible combination of universal constants in order to account for the fact that this universe is exceedingly finely tuned for carbon-based life to exist. Yet the materialist loses
in logic for if he must concede the need for an infinite number of other possible universes then he must also concede the fact that it is infinitely possible for God to exist. By strict logic if
it is infinitely possible for God to exist then God certainly does exist. So to answer your question yes there must be a source for the infinities that are required for explanations in origins science. Yet as I posted earlier, and illustrated here, physical infinities are logically absurd so this leaves us only the Theistic solution to our needed source for an infinity to satisfy the
logic of origin of the universe as well as the origin of the stunning complexity we find in biology, as well as the infinite possibilities dealt with in quantum mechanics.
I would like to point out, though I'm not that well versed in quantum mechanics, that Richard Feynman stated his diagrams were successful for they "did away with the infinities" in quantum electrodynamics.
Maybe one of you guys could clarify this one quantum mechanics point more clearly for it is beyond my grasp presently.
22. bornagain77,
thanks for your reply I understand your point though I will have to disagree, but now have another question.
You say –
"this universe is exceedingly finely tuned for carbon-based life to exist."
I would have to disagree with that as it seems most objects in the universe aren’t capable of supporting life.
23. Lotf,
Since you respectfully disagree with me I will clarify, I'm referring to strictly the anthropic principle of universal constants, yet, as you may be alluding to, there is another level of complexity that is required to be fulfilled as pointed out in "The Privileged Planet" and "Rare Earth".
But I am strictly addressing the anthropic principle. The numerical values of the universal constants in physics that are found for gravity which holds planets, stars and galaxies together; for
the weak nuclear force which holds neutrons together; for electromagnetism which allows chemical bonds to form; for the strong nuclear force which holds protons together; for the cosmological
constant of space/energy density which accounts for the universe's expansion; and for several dozen other constants (a total of 77 as of 2005) which are universal in their scope, "happen"
to be the exact numerical values they need to be in order for life, as we know it, to be possible at all. A more than slight variance in the value of any individual universal constant, over the
entire age of the universe, would have undermined the ability of the entire universe to have life as we know it. On and on through each universal constant scientists analyze, they find such
unchanging precision from the universe’s creation. There are many web sites that give the complete list, as well as explanations, of each universal constant. Search under anthropic
principle. One of the best web sites for this is found on Dr. Hugh Ross’s web site (reasonstobelieve.org). There are no apparent reasons why the value of each individual universal constant could
not have been very different than what they actually are. In fact, the presumption of any naturalistic theory based on blind chance would have expected a fair amount of flexibility in any
underlying natural laws for the universe. They “just so happen” to be at the precise unchanging values necessary to enable carbon-based life to exist in this universe. Some individual constants
are of such a high degree of precision as to defy human comprehension. For example, the individual cosmological constant is balanced to 1 part in 10^60 and the individual gravity constant is balanced to 1 part in 10^40. Although 1 part in 10^60 and 1 part in 10^40 far exceed any tolerances achieved in any man made machines, according to the esteemed British mathematical physicist
Roger Penrose (1931-present), the odds of one particular individual constant, the "original phase-space volume" constant required such precision that the "Creator's aim must have been to an accuracy of 1 part in 10^10^123". If this number were written out in its entirety, 1 with 10^123 zeros to the right, it could not be written on a piece of paper the
size of the entire visible universe, EVEN IF a number were written down on each atomic particle in the entire universe, since the universe only has 10^80 atomic particles in it. This staggering
level of precision is exactly why many theoretical physicists have suggested the existence of a "super-calculating intellect" to account for this fine-tuning. This is precisely why the anthropic hypothesis has gained such a strong foothold in many scientific circles. American geneticist Robert Griffiths jokingly remarked about these recent developments "If we need an atheist for a debate, I go to the philosophy department. The physics department isn't much use anymore." The only other theory possible for the universe's creation, other than a
God-centered hypothesis, is a naturalistic theory based on blind chance. Naturalistic blind chance only escapes being completely crushed, by the overwhelming evidence for design, by appealing to
an infinite number of other "un-testable" universes in which all other possibilities have been played out. Naturalism also tries to find a place for blind chance to hide by proposing a universe that expands and contracts (recycles) infinitely. Yet there is no hard physical evidence to support either of these blind chance conjectures. In fact, the "infinite universes"
conjecture suffers from some serious flaws of logic. For instance, exactly which laws of physics are telling all the other natural laws in physics what, how and when to do the many precise
unchanging things they do in these other universes? Plus, if an infinite number of other possible universes exist then why is it not also infinitely possible for God to exist? As well, the "recycling universe" conjecture suffers so many questions from the second law of thermodynamics (entropy) as to render it effectively implausible as a serious theory. The only hard
evidence there is, the stunning precision found in the universal constants, points overwhelmingly to intelligent design by an infinitely powerful and transcendent Creator who originally
established what the unchanging universal constants of physics could and would do at the creation of the universe. The hard evidence left no room for the blind chance of natural laws in this
universe. Thus, naturalism was forced into appealing to an infinity of other "un-testable" universes for it was left with no footing in this universe. These developments in science
make it seem like naturalism was cast into the abyss of nothingness so far as explaining the fine-tuning of the universe.
So as I hope I have made clear, lotf, the evidence overwhelmingly supports Theism.
24. "Maybe we should think about doing what C S Lewis long ago suggested: do a sort of series of 'in a nutshell guides for the layman' on a lot of the relevant sci and math. [Eng too . . .]. A Wiki on 101 type stuff tied to UD might be a good way to start?"
Great idea! This would be the sort of thing for a college level ID course. Maybe it can be put online just to vet the material and test it on students.
Are you a professor, by the way?
25. bornagain77
You say: '"happen" to be the exact numerical values they need to be in order for life, as we know it, to be possible at all'
But doesn’t that imply that slight changes in the constants might result in a different form of life, ‘not as we know it’? So maybe we’re not privileged just lucky?
26. Hi Sal:
In a former incarnation I was a “Lecturer.”
Glad to see you like the idea of a 101 level series of articles forming a Wiki that takes in the range of issues that crop up and are relevant to ID discussions. [Some of them could be even
gleaned and cleaned up from threads here; including in some cases dialogues.]
What BA is referring to is that for a lot of different cosmic level parameters, slight shifts yield a radically, shockingly different and non-life habitable universe.
So striking is the result that one of the discoverers of the pattern, the late great Sir Fred Hoyle, was moved to observe that:
From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to
produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are
actually found to be. Another put-up job? Following the above argument, I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed”
with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.]
[My always linked has a briefish discussion with links to more.]
GEM of TKI
Fortune at El Dorado
This problem has been solved by hjfreyer.
Fortune at El Dorado is problem number 2899 on the Peking University ACM site. The problem is to find, given some points in the x/y plane and a maximum area, the largest number of points which can be
enclosed in a rectangle of the given area or less.
Underlying Concept
The basic idea is to do an exhaustive search over all possible rectangles and see what is the largest number of points ever contained in a single rectangle. One way to do this (Christian's Solution)
is outlined in Figure 1. Every rectangle that is searched through is defined by three numbers, x1 (the left boundary), x2 (the right boundary), and y_top (the top boundary). The bottom boundary of
the rectangle is then set to be the lowest possible such that the total area is still below the maximum. In this way, we can choose all possible combinations of x1, x2, and y_top to obtain every
possible region with area less than the max. This is clearly n-cubed operations to find every rectangle, and then to count the number of points contained in that rectangle brings the complexity to
n-to-the-fourth, or around 1 trillion operations minimum. This could certainly do with some improvement, and it will have to be much more efficient to be accepted by the judges.
The goal is to strip out all redundant cases. For instance, the borders of the rectangle pictured in Figure 1 are not directly up against the points; there is room in between. If, say, x1 were moved
slightly right, clearly no points would be gained or lost, so why reprocess the whole thing for that change? The key is to only move the borders to positions where the number of points can change. It
can be shown that this will only happen at a border that contains a point. Processing any boundaries that don't contain a point would be wasteful, as they cannot possibly improve the count.
Data representation
The data should not be viewed as a continuous plane, but as discrete rows and columns, as shown in Figure 2. For each row, we need to record its y coordinate, and for each column, we record its x
coordinate, and a list of all points in that column, sorted by row number. This way we can easily jump from column to column without worrying how much spaces exists between them. This will let us
search every possible rectangle without wasting any computation on unnecessary boundary changes.
From here, the algorithm is fairly straightforward, if not a little bothersome. Again, we choose all possible x1 and x2, however, unlike last time they don't refer to x coordinates, but column
numbers. For each (x1,x2) pair, we iterate through every possible y_top; y_top being one of the rows. Like before, this defines a rectangle by giving 3 of its sides and its area. For a given triplet
(x1, x2, y_top), we want to find all points in columns between (inclusive) x1 and x2, which are below y_top, and above y_top+max_area/(coordinateOf(x2)-coordinateOf(x1)). So for every (x1, x2,
y_top), we search down the lists for x1 through x2, looking for points that fit within the y constraints. This gives a solution which clocks in just barely under the judge's limit, but under nonetheless.
import java.util.*;
public class Main{
public static Scanner in;
static int numtrees,maxarea;
static int[] xpos; //This specifies the x coordinate of each of the columns
static int[] ypos; //This specifies the y coordinate of each of the columns
//For each column, a sorted list of integers, one for
//each point in the column. The integers
//correspond to the y coordinate of these points.
static List[] column;
public static void main(String[] args){
in=new Scanner(System.in);
public static void doStuff(){
int N=in.nextInt();
for(int i=0;i<N;i++){
solve(); //Do this scenario
public static void solve(){
numtrees=in.nextInt(); //Get the number of trees
maxarea=in.nextInt(); //and max area
build(); //Build the data representation of the problem
int maxcount=-1;
for(int x1=0;x1<xpos.length;x1++){
for(int x2=x1;x2<xpos.length && xpos[x2]<=xpos[x1]+maxarea;x2++){
//X1 is the leftmost column, X2 the rightmost column
for(int ytop=0;ytop<ypos.length;ytop++){
//ytop goes through each of the rows
int count=0;
//Get the sum over all the coulmns
for(int x=x1;x<=x2;x++){
//Count up all the points in this column in range
for(Object o:column[x]){
int y=(Integer)o;
int width=(xpos[x2]-xpos[x1]);
//See if it's the max
static void build(){
List[] bin_x = new List[1001];
boolean[] bin_y =new boolean[1001];
int xcount=0,ycount=0;
for(int i=0;i<numtrees;i++){
int x=in.nextInt();
int y=in.nextInt();
bin_x[x]=new List();
xpos=new int[xcount];
ypos=new int[ycount];
column=new List[xcount];
next_y=new List[xcount][ycount];
int xi=0,yi=0;
for(int i=0;i<1001;i++){
class List<T extends Comparable> implements Iterable<T>{
/** Core stuff **/
class Node{
T el;
Node next;
class Iter implements Iterator<T>{
Node ptr;
public boolean hasNext(){
return ptr.next!=null;
public T next(){
ptr=ptr.next;return ptr.el;
public void remove(){}
/** Optional Methods **/
/** A list containing all the unvisited elements including this one **/
/** Note! The list returned by this cannot be modified! **/
public List<T> getTail(){
List<T> res=new List<T>();
return res;
Node head,tail;
int size;
public List(){
head=new Node();
public boolean add(T v){
Node n=new Node();
return true;
public Iter iterator(){
Iter i=new Iter();
return i;
/** Sorting Related Options **/
/** Assuming the list is sorted, insert the item into sorted order **/
public void insertSorted(T x){
Node tmp=head;
Node n=new Node();
Controlling Evaluation Order
Ensure correct MDX calculations by using solve order and pass numbers
Solve order and pass numbers are among the most complex concepts in SQL Server 2000 Analysis Services' MDX language. Many analytical applications require calculations—in the form of calculated
members, custom formulas, and cell calculations—to be embedded in the cubes. Individual MDX queries also frequently require embedded calculations. Because embedded and nested calculations are so
common, the order in which MDX evaluates these calculations is crucial to achieving the correct results. You can use a solve-order keyword and pass numbers to control the order of evaluation. This
topic is advanced, even for regular MDX users, so put on your crash helmet—we're diving in!
You often use two calculated members in combination in an MDX query, such as when you include a calculated member in the list of members for the columns and another calculated member in the list of
members for the rows. Listing 1 shows a simple example of a new FoodDrink member that's the sum of Food and Drink and a new CAOR member that's the sum of CA and OR (California and Oregon). I included
these members on the rows and columns, respectively. Figure 1 shows that these calculated members are totals of the columns and rows. When the two calculated members intersect (in the bottom right
cell), you end up with a grand total. You can think about the combination of these calculated members as the sum of the right column, which you can express as
(CA, FoodDrink) + (OR, FoodDrink)
This formula is equal to
((CA,Food) + (CA,Drink)) + ((OR,Food) + (OR,Drink))
Or the formula could be the sum of the bottom row. You can express that sum as
(CAOR, Food) + (CAOR, Drink)
which is equal to
((CA, Food) + (OR, Food)) + ((CA, Drink) + (OR, Drink))
In Listing 1's query, the order of the calculated members doesn't matter because the formulas are simple summations. But the order does matter when the mathematical operators aren't transitive.
The MDX query that Listing 2 shows has two calculated members. The first calculated member, called Canned Percent, returns Canned Foods' percent of the total of Canned Foods and Canned Products. The
second calculated member returns California's percent of all states in the United States. Both of these calculated members are percent-of-total calculations that are embedded within their own
dimensions; they don't specify which measure (or numeric quantity) they operate on. Creating a calculation in a dimension rather than as part of a measure's definition can be convenient because you
can combine such a calculation with any measure in a query to determine the percent of total for that measure. The MDX query in Listing 2 returns these two calculated members and a couple of the
members from which these calculations derive. Figure 2 shows the result of the MDX sample application we're working with.
Again, look at the lower right cell, in which the two calculated members intersect. If Analysis Services evaluates the Canned Percent member first, the expression looks like
(CA, Canned Percent) / (USA, Canned Percent)
Expanding Canned Percent's formula gives you
((CA, Canned Foods) / ((CA, Canned Foods)
+ (CA, Canned Products))) /
((USA, Canned Foods) / ((USA, Canned
Foods) + (USA, Canned Products)))
If you fill in the numbers, you get the totals that Figure 2 shows:
(5,268/(5,268 + 448)) / (19,026/(19,026 + 1,812)) = .9216/.9130 = 1.01
Now, look at how the formula appears if Analysis Services evaluates CA Percent before Canned Percent:
(CA Percent, Canned Foods) / ((CA Percent, Canned Foods) + (CA Percent, Canned Products))
If you expand CA Percent's formula, you get the following formula:
((CA, Canned Foods)/(USA, Canned Foods)) /
(((CA, Canned Foods)/(USA, Canned Foods)) +
((CA, Canned Products)/(USA, Canned Products)))
(5,268/19,026) / ((5,268/19,026) + (448/1,812)) = 0.528
The result of this evaluation order is different from the result that Figure 2 shows. If Canned Percent is evaluated first, the formula evaluates to 1.01 (or 101 percent), but if you evaluate CA
Percent first, the formula evaluates to 0.53 (or 53 percent). If you needed to evaluate CA Percent first to get the appropriate result, you could use the SOLVE_ORDER keyword to force Analysis
Services to give CA Percent priority. Listing 3 shows the query in Listing 2 with the SOLVE_ORDER keyword added so that CA Percent is evaluated first. You can use SOLVE_ORDER to control the
evaluation order of calculated members and of calculated cells and custom rollups.
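A generic sketch of the pattern (the member names, hierarchy paths, and SOLVE_ORDER values below are placeholders, not the article's actual Listing 3) looks like this:

WITH
  MEMBER [Store].[CA Percent] AS
    '[Store].[USA].[CA] / [Store].[USA]',
    SOLVE_ORDER = 1
  MEMBER [Product].[Canned Percent] AS
    '[Product].[Canned Foods] /
      ([Product].[Canned Foods] + [Product].[Canned Products])',
    SOLVE_ORDER = 2
SELECT
  { [Product].[Canned Foods], [Product].[Canned Products],
    [Product].[Canned Percent] } ON COLUMNS,
  { [Store].[USA].[CA], [Store].[USA], [Store].[CA Percent] } ON ROWS
FROM Sales

The member with the lower solve order is computed first, and the formula of the member with the higher solve order is the one applied where the two calculated members intersect.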
Now let's add complexity to the solve-order problem by introducing the concept of pass numbers. When executing an MDX query, Analysis Services resolves the embedded calculations in passes. Evaluation
passes are identified by number. The last pass (i.e., the most nested) is always pass number 0. If a query doesn't contain cell calculations, custom rollup formulas, or custom rollup operators,
Analysis Services executes the MDX query in a single pass—pass number 0. Analysis Services executes calculated members at pass number 0 and executes custom rollup formulas and custom rollup operators
at pass number 1. Analysis Services performs cell calculations at pass number 1 unless you specify otherwise by changing the value of CALCULATION_PASS_NUMBER.
You can use this multipass execution process with pass numbers to create iterative formulas, such as goal-seeking formulas. Goal-seeking formulas are formulas for which you know what the outcome
needs to be; you simply adjust formula input until you achieve the desired outcome. For example, you might want to know how much you need to decrease your product cost to increase your overall
profitability by 10 percent. To solve this problem, your formula must try different product costs until it finds the value that achieves a profit increase of 10 percent.
Another use of a goal-seeking formula might be to determine how much revenue your company must make next month to meet revenue objectives for the year. To determine this amount, you can create a
goal-seeking algorithm that tries various revenue values for next month; with each value, the formula uses a forecasting algorithm to determine what your total revenue for the year might be.
To control both iterative formulas and evaluation order, you can use pass numbers and solve order together. Solve order specifies the evaluation order within a single evaluation pass. You can then
use pass numbers to control which iteration of a multipass formula will use a cell calculation. The pass number specifies the first pass in which the formula is used, and pass depth specifies the
number of passes to use the formula. For example, you could specify that a cell calculation has a pass number of 3 and a pass depth of 2—meaning that Analysis Services will perform the calculation at
pass number 3 and pass number 2. The calculation won't be in effect for pass number 1 because a calculation defined with a pass number of 3 would have to be active for 3 passes to reach pass number 1
(e.g., a pass depth of 3).
If you can use solve order to control the evaluation order of two calculated members, why would you need to use pass numbers? Although you can use pass numbers to control evaluation order, that's not
generally why you use them. Pass numbers are important in more complicated formulas, such as recursive calculations (i.e., calculations that reference themselves) or goal-seeking calculations.
Consider the code example that Listing 4 shows. The first thing you might notice is the abundance of a function called CalculationPassValue(). This function lets you control which pass numbers
Analysis Services uses to determine a formula's value. For example, CalculationPassValue(Time.CurrentMember, 0) means that you want the value for Time.CurrentMember after pass number 0, ignoring all
other passes. Referencing pass number 0 is useful when you want the actual value that's loaded in the cube. In other words, you don't want other cell calculations to affect the value you're
The MDX query in Listing 4 returns the Unit Sales for the months of 1997, which Figure 3, page 68, shows. The row labeled OldUnitSales shows the Unit Sales values as they exist in the cube. The row
labeled Unit Sales contains values affected by the cell calculation MinRecentValue. MinRecentValue is a recursive formula that returns the lesser of the current month and the previous month values.
The formula is recursive because the same formula determines the previous month's value. The formula continues searching through previous months as long as the values continue to get smaller. Two
conditions cause the formula to stop searching: the formula finds a month value that is larger than the successive month or the formula has exceeded its pass depth.
Because the MDX query in Listing 4 has a pass depth of 1, the cell calculation looks back only 1 month. Figure 3 shows that month 6 returned the value 21,081.00. The cell calculation MinRecentValue
compared month 5 with month 6 and returned month 5 because it was smaller.
Figure 4 shows the results you get if you change the query's CALCULATION_PASS_DEPTH property to 3. Notice in this result that the month-6 value is 20,179.00. Because the pass depth increased, the
formula could recurse (call itself) three times before it reached pass number 0, in which the cell calculation was no longer in effect. For month 6, the formula goes back to month 4. The formula
determines that month 3 has a value greater than month 4, so it stops and returns the month-4 value.
Note that when a formula calls itself, that recursive call doesn't change the pass number—in other words, a recursive call doesn't constitute another execution pass. You can always reference another
pass with the CalculationPassValue() function, as the MinRecentValue function does. The reason the MinRecentValue formula changes pass numbers when it calls itself is that it uses
CalculationCurrentPass()-1 to reference the next lower pass number.
To write effective cell-calculation formulas, you need to have at least a cursory understanding of pass numbers. Even the simplest cell-calculation formulas can mushroom into infinite recursion if
you're not careful. For example, say you write the following formula to try to scale down all the month values in the Sales cube by 10:
(Time.CurrentMember, [Unit Sales]) / 10
The problem with this formula is that it references the same cell that initiated your cell calculation. Therefore, the formula will call itself indefinitely. The only way to avoid this
infinite-recursion problem is to reference the cell value from a lower pass number, as the following formula shows:
CalculationPassValue( (Time.CurrentMember, [Unit Sales]), 0 ) / 10
The concepts of solve order and pass numbers are complex, and you need practice to learn when and how to use them. If you're writing queries that select calculated members on more than one axis
(e.g., rows and columns), you definitely need to consider using solve order. If you're using calculated cell formulas, you probably need to use pass numbers. And if you use multiple calculated cell
formulas with recursion, you might need to use both solve order and pass numbers—but I'll leave those scenarios for you to explore.
To practice writing MDX queries that return ordered lists, tackle the puzzle in the Web sidebar "September MDX Puzzle," http://www.sqlmag.com, InstantDoc ID 21988. For the answer to the August
puzzle, see the Web sidebar "August MDX Puzzle Solution Revealed," InstantDoc ID 21989. | {"url":"http://sqlmag.com/database-development/controlling-evaluation-order","timestamp":"2014-04-19T20:35:20Z","content_type":null,"content_length":"73709","record_id":"<urn:uuid:cf0f6fb3-70a4-4f7f-9442-e26ecee21e85>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
The System Equation and Solving.
To solve this system of equations I used the solve-block "Given-Minerr" (automatic algorithm):
P.S. What version of Mathcad are you using? I also recommend to you reading the following book by Brent Maxfield "Engineering with Mathcad": http://amzn.to/vko999
Am I missing something here, or is your middle equation wrong?
The case 2 |M| = 0 - no solution (see above)
The case 3 |M| = 0 - a lot of solutions - see http://twt.mpei.ac.ru/ochkov/Mathcad_14/Chapter2/2_020_book.PNG | {"url":"http://communities.ptc.com/message/172702?tstart=0","timestamp":"2014-04-21T00:01:06Z","content_type":null,"content_length":"129452","record_id":"<urn:uuid:4c8ece84-b8f3-4f57-a0c4-bd0e842af81c>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newport, RI Algebra Tutor
Find a Newport, RI Algebra Tutor
I have been a mathematics educator for more than forty years. I have taught middle school and high school mathematics in Rhode Island, Maine, and the African country of Zambia. Also I have had
lecturer's positions at the Community College of Rhode Island and Gibbs College, Cranston, RI.
15 Subjects: including algebra 1, algebra 2, calculus, trigonometry
...I have also been tutoring for many years from elementary subjects, to chemistry and test preparation and enjoy working with students who need that little extra boost. I use the hands on
approach when tutoring, trying many different methods to get the content across. Learning should be relevant and fun.
31 Subjects: including algebra 2, grammar, reading, geometry
...I tutored in algebra and precalculus. When I transferred to UMass Dartmouth, I continued tutoring in math from algebra I to calculus. Right now I tutor in UMD Primes, a program for UMass
Dartmouth on Mondays for Algebra I.
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...Additionally, I was a member of the Rhode Island standard pilot program responsible for developing standards-based teaching for adult learners. I am a member of the Commission on Adult Basic
Education, a registered agent for CASAS Implementation, and an assessor for the National External Diploma...
30 Subjects: including algebra 2, algebra 1, English, reading
I'm a certified tutor for the North Kingstown School Department and also for the Literacy Volunteers of Washington County. I get along well with middle-school age students and am especially
patient with them. I enjoy helping students overcome problem areas with their schoolwork.
7 Subjects: including algebra 1, reading, English, prealgebra
Predicting the Times of Retweeting in Microblogs
Mathematical Problems in Engineering
Volume 2014 (2014), Article ID 604294, 10 pages
Research Article
^1School of Software, Central South University, Changsha 410075, China
^2Hangzhou Institute of Services Engineering, Hangzhou Normal University, Hangzhou 310012, China
^3School of Information Science and Engineering, Central South University, Changsha 410075, China
Received 8 August 2013; Accepted 20 August 2013; Published 11 February 2014
Academic Editor: Zhongmei Zhou
Copyright © 2014 Li Kuang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Recently, microblog services accelerate the information propagation among peoples, leaving the traditional media like newspaper, TV, forum, blogs, and web portals far behind. Various messages are
spread quickly and widely by retweeting in microblogs. In this paper, we take Sina microblog as an example, aiming to predict the possible number of retweets of an original tweet in one month
according to the time series distribution of its top n retweets. In order to address the problem, we propose the concept of a tweet’s lifecycle, which is mainly decided by three factors, namely, the
response time, the importance of content, and the interval time distribution, and then the given time series distribution curve of its top n retweets is fitted by a two-phase function, so as to
predict the number of its retweets in one month. The phases in the function are divided by the lifecycle of the original tweet and different functions are used in the two phases. Experiment results
show that our solution can address the problem of predicting the times of retweeting in microblogs with a satisfying precision.
1. Introduction
Microblog is a social network based platform where information can be shared, propagated, and obtained. Users can publish their tweets through SMS, instant messenger, email, web sites, or third-party
applications by inputting at most 140 words [1]. Microblog bloomed rapidly due to its numerous advantages such as real-time and high interaction. The number of Sina microblog users in China has
reached up to 250 million during 2 years [2], and it has become a very important Internet application for nearly half of Chinese netizens.
Retweeting is a very important user behavior in microblogs. Users can forward the tweets which they are interested in, so that the followers of the users can see the tweets as well. The tweet
publishing pattern and propagation form, as well as its concise presentation with multimedia added such as music, video, and pictures, make the information spreading faster in microblog than that in
traditional media, with the content and form being more diverse. Therefore, how to predict the times of retweeting in microblogs by analyzing the features of tweets propagation becomes a hot research
The result of the research can be applied in many areas: a tweet that is retweeted largely represents a hot topic, so the prediction on the times of retweeting can help find hot topics in microblog.
Second, a hot tweet can represent the focus that most people are concerned about so we can monitor public opinions in a better fashion by predicting the times of retweeting. Moreover, microblog
reacts more rapidly compared to traditional media, especially on social emergency, so traditional media like newspaper can draft news based on the latest hot tweets in microblog.
The 13th International Conference on Web Information System Engineering (WISE 2012) [3] organized a challenge on Sina microblog. The organizers collected a number of retweets related to 33 original
tweets from Sina microblog. There are about 100 retweeting records corresponding to each original tweet. One of the proposed challenges is to predict the times of retweeting of the 33 original tweets
in one month. Motivated by the challenge proposed in WISE 2012, we addressed the significant problem by three steps: first, the primitive data are divided into 33 groups, where the data in one group
correspond to the retweets of an original tweet. For each group, the primitive data are parsed by extracting the values of property tags, so that the time series distribution of top 100 retweets for
each original tweet can be derived. Second, calculate the lifecycle of each original tweet according to its content and the characteristic of the time series distribution of top 100 retweets
including response time and interval. Third, in order to predict the times of retweeting of the 33 original tweets in one month, the derived time series distribution curves of top 100 retweets are
fitted by a two-phase function, where the first phase is the calculated lifecycle of the original tweet and the second phase is the remainder time in one month. The value in the 1st phase is derived
by fitting the curve by a lineal function, while the value in the 2nd phase is by a logarithm function. The final predicted value of retweeting times is the sum of the values of two phases. The
experiments show that the proposed solution in this paper can greatly address the problem of predicting the times of retweeting in microblogs, and the average error is controlled within 20%.
The paper is organized as follows. Related work is introduced in Section 2. The form and volume of collected microblog data are introduced in Section 3. The detailed solution to predicting the times
of retweeting is illustrated in Section 4. The experiment results are presented in Section 5. And finally the conclusions and future work are given.
2. Related Work
The blossom of microblog aroused wide attention of many researchers. Presently, they begin to conduct research on the problems related to microblogs, including analyzing the contents of microblogs,
mining the association relation between microblogs and real society [4–11], and predicting whether a tweet will be retweeted as well as the characteristic of retweeting behavior [12–21].
In the related work on the analysis of microblog contents, researchers found that microblog plays an important role in many areas, for example, political elections, earthquake disaster, marketing
management, and various kinds of information spreading [4–11]. Tumasjan et al. [6] find that the political emotion of tweet users has close relation with election and tweets can reflect voters’
inclination in real society by using LIWC text analysis software. Bollen et al. [7] find that society, culture, politics, and economy have a great influence on public sentiment through extended
emotional analysis. Sakaki et al. [8] successfully find out the earthquake epicenter from Twitter messages through time probability model, and Qu et al. [9] pointed out that microblogs play an
important and positive role in disaster by comparing the content of microblogs before and after Yushu earthquake in 2010. Achananuparp et al. [10] proposed a model for describing users’ originating
and promoting behaviors so as to detect interesting events from sudden changes in aggregated information propagation behavior of Twitter users.
In the related work in retweeting tweets, many researchers study and analyze what contents and features of a tweet make it be retweeted more easily. For example, Chen and Zhang [12] predict whether a
tweet will be retweeted based on its emotional or content keywords, user tags, and historical retweeting frequency. Xiong et al. [13] studied information diffusion on microblogs based on retweeting
mechanism and proposed a diffusion model (SCIR) which contains four states, two of which are absorbing. Zhang et al. [14] predict whether a tweet will be retweeted by ranking tweets based on weighted
feature model. Hong et al. [15] discuss why and how people retweet messages, as well as what messages will be retweeted by making use of TF-IDF points. Zaman et al. [16] predict the information
spreading in Twitter through collaborative filtering algorithm. Petrovic et al. [1] decide whether a tweet will be retweeted by manual experiments and then predict it by improved passive progressing
algorithm. However, few works on predicting the times that a message is retweeted are published.
Zhang et al. [22] propose to compute the probability that a user retweets a tweet by considering several features first and then build a retweet model with the probability to predict the number of
possible views of a tweet. Unankard et al. [23] compare four different methods, of which the first one is discovering a regression function based on the popularity of messages and network
connectivity, the second one is learning a classification model based on users’ preferences in different fields of topics, the third one is simulating retweeting paths starting from a root message by
employing Monte Carlo method, and the fourth is building a recommendation model based on collaborative filtering. Luo et al. [24] propose to identify most similar message from training data based on
the similarity between their time series values in the same length period and then fit the ARMA models over the whole time series of the identified message, and finally the fitted model is applied to
the test tweet to predict future values. Compared with their work, in this paper, we propose a new perspective to differentiate the time period when a tweet may be largely retweeted and that when the
possibility of retweeting becomes small and propose a new concept, a tweet’s lifecycle, which is determined by analyzing the content of the tweet as well as the time series distribution of its top
retweets. Based on the calculated lifecycle, different functions are fitted within and out of its lifecycle, so as to predict the number of retweets of a tweet in one month.
3. Dataset
In this paper, we take the Sina microblog data as an example to study the prediction on the times of retweeting. This section will introduce the form and volume of the collected raw data.
3.1. Data Form
The basic form of each datum in the collected dataset is as follows:
Tweet: time:A mid:B uid:C ... isContainLink:F eventList:G rtTime:H rtMid:I rtUid:J rtIsContainLink:K rtEventList:L
in which the detailed meaning of each property tag is shown in Table 1.
In order to illustrate the detailed meaning of every property more clearly, we take the following datum as an example:
time:2011-06-05 11:26:56 mid:2709265102546262238 uid:6701001061010001018429227021838 isContainLink:false rtTime:2011-06-05 08:19:59 rtMid:2709258383303085289 rtUid:92560217202092828482 rtIsContainLink:false rtEventList:Li Na win French Open in tennis$Francesca Schiavone.
The datum shows the following: the original tweet ID (rtMid) is 2709258383303085289, it was created and published by a user with ID 92560217202092828482 (rtUid) at 2011-06-05 08:19:59 (rtTime), it
does not contain a link (rtIsContainLink: false), and it is about Li Na winning French Open in tennis with event tags “rtEventList:Li Na win French Open in tennis$Francesca Schiavone.” The original
tweet is retweeted by a user with uid 6701001061010001018429227021838 at 2011-06-05 11:26:56 (Time), its message ID (mid) is 2709265102546262238, and it does not contain a link (isContainLink:false).
Each primitive datum is constructed by such property-value pairs. We can find the retweeting time, retweeting message ID, the original tweet ID, event tags, and so forth from each datum, so as to
understand and use each datum.
3.2. Data Volume
We eliminate repeated messages and finally got 3292 valid messages by preprocessing data based on integrity constraints. The 33 original tweets are annotated with event tags, and the 33 groups of
data are mainly involved in 6 events, including the death of Steve Jobs, the earthquake in Japan, Li Na winning French Open tennis contest, Yao Jiaxin’s murder case, bombing in Fuzhou, and the
publishing of Xiaomi phones. Each of the 33 groups contains about 100 retweeting messages. The original tweet ID and corresponding number of collected retweeting messages for each group are shown in
Table 4.
4. Predicting the Times of Retweeting
Given the time series distribution of top retweets of an original tweet, we aim to predict the number of retweets in the future one month. In order to get a more accurate predicted value, we propose
to fit the given time series distribution curve by a two-phase function, whose phases are divided according to the lifecycle of the original tweet.
4.1. Lifecycle of a Tweet
Every creature in the earth has its own lifecycle. We think that every tweet has its lifecycle like the creatures on the earth as well. We find that the lifecycle of a tweet plays an important role
in predicting the times of retweeting. If the contents of two tweets are similar, the retweeting numbers per day of the two are nearly the same, and meanwhile their publishing time points are close,
the tweet with a longer lifecycle will have a larger number of retweets. Hence, in order to predict the retweeting times more accurately, we propose the concept of the lifecycle of a tweet, that is,
the time duration when a tweet can be retweeted in a large number.
We find that the lifecycle of a tweet is related to the response time of the first retweet, the importance of the content, and the interval distribution of retweets, and we will illustrate the three
factors in the following part.
4.1.1. The Response Time of the First Retweet
The response time of the first retweet means the time difference between the time of the first retweet and that of the origin tweet.
Generally speaking, the faster the first retweet is posted, the more attention is paid to the original one. And the more popular the original tweet is, the more likely it will be retweeted. Thus,
correspondingly, an original tweet which is retweeted in a short time may get more attention and thus have a longer lifecycle.
According to the 33 groups of retweeting records, we design a formula to calculate the score with respect to response time. We divide them into four levels according to different intervals of
response time, and each level corresponds to different functions on the response time. In general, the shorter time the first retweet is posted, the higher score will the original one get. The
response time in the high speed group is less than 10 seconds, and the corresponding score in this group is assigned a full score of 10 points. The response time in the 2nd group is between 10 and
100 seconds, and the range of corresponding score in this group is [6, 10] points, and the score declines with a speed. The response time in the 3rd group is between 100 and 10000 seconds, and the
range of corresponding score in this group is [0.6, 6] points; the score declines with speed. The slow ones are over 10000 seconds, some are even more than 70000 seconds, and the range of
corresponding score in this group is (0, 0.6] points; the score declines slower than the 3rd group with speed. The score on response time is proportional to the length of its lifecycle. The score
with respect to response time is therefore a piecewise decreasing function of the response time over the four ranges described above; it is referred to below as formula (1).
4.1.2. The Importance of the Content
The vast amount of retweeting happens only when the content is attractive, which is named as the importance of content. People tend to pay more attention to those tweets with attractive contents,
that is, with high grade of importance of content.
The contents of tweets involve all aspects of our lives. According to Sina microblog, tweets can be classified to the categories such as lifestyle, love, entertainment, film, television, sports,
finance, science, art, fashion, culture, and media. A tweet will be retweeted by a large number of times only when there is something attractive enough in its content, such as being about a pop
star’s affair or some big emergency. Take some pieces of news as examples.(1)Before the death of American singer Michael Jackson was published, there were numerous fans coming into the hospital of
the University of California in Los Angeles, where Michael Jackson had been, since they got the news from Facebook and Twitter. Moreover, only one hour later after the announcement of death, there
were more than 65000 reply messages and retweets in Twitter; over 5000 of them came out within one minute.(2)In February 2010, a 93-year-old Mrs. Xiao, who was from Chengdu, needed RH-AB blood
because of the fracture. Lacking blood, she was in danger at that time. In that case, her daughter came to send a tweet to ask for help. Only within 12 hours, there were more than 3000 people that
helped to retweet it. Fortunately, 3 friends from the Internet donated their blood and she was saved.
To conclude the cases above, the tweet about the death of Michael Jackson received more than 65000 comments and retweets within one hour, and the tweet about seeking RH-AB blood received more than
3000 people’s attention within half a day; therefore, we guess that the more attractive the content is, the more chances it would be retweeted.
But what kind of content would be attractive? We believe that if the content is related to the hot issue recently, such as Olympic Games, disaster, or a pop star’s affair and big social case, it
would be attractive. And moreover, if the time of the tweet issued is close to the time of the occurrence of the event, the tweet would attract much attention and the level of importance of content
is high. In comparison, if the tweet is posted in a relatively long time later, or the content is attractive only to some professional people in some specific field, the level of importance of
content is in the middle. Finally, if there are few people concentrating on it or the tweet is posted very long time after the event happens, the level of importance of content is low. The rank and
corresponding score on the importance of content with respect to different kinds of contents are shown in Table 2. The higher the importance of the content is, the higher the content score the tweet will get.
For instance, the case of Michel Jackson is about a pop star, and the tweet is issued on time, so that the content of tweet is very attractive, the rank is identified as T3, and the score on the
importance of content would be 9.
4.1.3. The Interval Time Distribution of Retweets
According to the observation of data, if the number of retweets grows up very fast, for example, the tweet is retweeted for thousands of times in a short time, the retweeting will be in saturation
soon; therefore, the lifecycle of the original tweet is relatively short; if the interval time distribution curve is even, that is, the number of retweeting grows up in a peace way, the life cycle of
the original tweet would be relatively long; if the distribution curve of retweets is scatter and discrete, the tweet needs more time to get saturation and the lifecycle would be very long. The rank
and corresponding score on the interval time distribution with respect to different type of curve are shown in Table 3.
For detailed values, we may make judgments based on the following standards. Divide the interval time distribution of all retweets according to the time equally. If the number of retweets is growing
fast, appearing as a line with a high slope (over 60 degrees) or as an exponential curve, as Figure 1(a) shows, the curve is of the type dense rise. In general, the score on the interval distribution
for this type is [0.1, 0.2]. If the growth of retweets is steady as Figure 1(b) shows, the curve is of the type general steady and the score is [1, 3]. If the growth of the retweets is small and
flat, as Figure 1(c) shows, the curve is of the type scatter, and the score is [3, 5]. In addition, if the number of retweets increases sharply at early stage but becomes more and more slow
afterwards, which means the trend is subsequent fatigue, the rank for this type of curve is deemed as T1, and the lifecycle would not be long, so the score is set around [0.2,1]. Despite all the
criteria, the accurate values need further studies. According to the above discussion, we design the rank and corresponding scores of interval time as Table 3 shows.
In summary, we compute the lifecycle of a tweet (in days) from the above three factors as
$$\text{Lifecycle} = \left(0.6\,S_{\text{content}} + 0.4\,S_{\text{response}}\right)\times S_{\text{interval}},$$
where $S_{\text{content}}$, $S_{\text{response}}$, and $S_{\text{interval}}$ denote the scores on the importance of content, the response time, and the interval time distribution, respectively. In the formula, the coefficients of the importance of content and the response time are 0.6 and 0.4, respectively, which are obtained by experiments on training data. The interval time distribution has a direct impact on the whole fitting of the function curve, so its score acts as a product factor.
Take the retweeting of an original tweet related to Steven Jobs’ death issued at 12:07:52 2011/10/6 as an example. First, the event of Jobs’ death belongs to the category of a star’s affair, so the rank of the importance of the content is T3; Steven Jobs was the ex-CEO and one of the founders of Apple and had a significant impact on the public, so we set $S_{\text{content}}$ to 9. Second, the response time of the first retweet is 22 seconds, so according to formula (1) $S_{\text{response}}$ is 8. Last, the number of retweets increases steadily, as Figure 2 shows, at the pace of about 10 more retweets per minute, and the retweeting saturates within 460 seconds; the interval time distribution is like Figure 1(b), which belongs to the general steady type, so $S_{\text{interval}}$ is set to 1. Therefore, the lifecycle of the original tweet is $(0.6 \times 9 + 0.4 \times 8) \times 1 = 8.6$ days.
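A minimal sketch of this computation in Python (not the authors' code; the 0.6/0.4 weights are taken from the text, and the overall form of the formula is inferred from the worked example) might be:

```python
def tweet_lifecycle(score_content, score_response, score_interval):
    """Estimated lifecycle of a tweet, in days.

    score_content  : importance-of-content score (rank T1-T3, see Table 2)
    score_response : response-time score of the first retweet (formula (1))
    score_interval : interval-time-distribution score (see Table 3)

    The 0.6/0.4 weights are the ones reported in the paper; treating the
    interval score as a multiplicative factor follows the text above.
    """
    return (0.6 * score_content + 0.4 * score_response) * score_interval

# Worked example (tweet about Steve Jobs' death):
# content rank T3 -> 9, response time 22 s -> 8, "general steady" curve -> 1
print(tweet_lifecycle(9, 8, 1))   # 8.6 days
```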
4.2. Two-Phase Function Curve Fitting
The given time series distribution curve of top 100 retweets of an original tweet is then fitted by a two-phase function whose phases are divided according to the lifecycle of the original tweet.
Main steps are illustrated as follows.
(1) We make use of Matlab, a mathematical analysis tool, for the purpose of function curve fitting. We first need to make a connection between MySQL and Matlab and then execute SQL statements through the exec function, so as to import data from MySQL into Matlab.
(2) Take a preliminary analysis and draw a scatter diagram based on the imported data. In the diagram, the x-axis data item “time” is not the accurate time point but is calculated from the time difference. In order to make the result more intuitive, we make the points in the scatter diagram more concentrated by dividing time into slots. Figure 2 shows the time distribution scatter diagram of the top 100 retweets of an original tweet related to Steven Jobs’ death, mentioned in Section 3.1.
In the following part, we will calculate the prediction value by fitting the curve with a two-phase function. In the first phase, that is, within the calculated lifecycle of the original tweet, a linear function is used to fit the curve. Most of the retweets occur within the lifecycle of the tweet, and the remainder appears as slow growth, so a logarithmic function is used to fit the curve in the 2nd phase. The detailed processes in the 3rd and 4th steps are as follows.
(3) In order to minimize the error, we select the linear function with the highest matching degree with the scatter points to fit the curve in the 1st phase; the line passes through as many points as possible. For every two points, a linear function is used to link them, and the whole curve is fitted from the relation among the points. The detailed slope and intercept are decided based on the model of double moving average [25] in Matlab, which avoids the lag deviation of the single moving average method: the double moving average adjusts the single one by adding a second moving average and then builds a linear model on both average values. The first moving average over the latest $N$ observations is
$$M_t' = \frac{y_t + y_{t-1} + \cdots + y_{t-N+1}}{N},$$
and the double moving average applies the same averaging to $M'$ itself:
$$M_t'' = \frac{M_t' + M_{t-1}' + \cdots + M_{t-N+1}'}{N}.$$
Since we have analyzed that the growth of retweets in the 1st phase appears as a linear function, we suppose the prediction model in the 1st phase is
$$\hat{y}_{t+T} = a_t + b_t T,$$
in which $t$ is the current time and $T$ is the number of time slots from $t$ to the end of the lifecycle of the tweet; $b_t$ is the slope and $a_t$ is the intercept, and the two are called the smooth coefficients. Under this linear model each moving average lags the series by $b_t(N-1)/2$, that is, $M_t' = y_t - b_t(N-1)/2$ and $M_t'' = M_t' - b_t(N-1)/2$; therefore the smooth coefficients can be calculated by
$$a_t = 2M_t' - M_t'', \qquad b_t = \frac{2}{N-1}\left(M_t' - M_t''\right).$$
According to the fitted curve, the function value when the x-axis value reaches the lifecycle of the original tweet is the predicted number of retweets in the 1st phase. An example scatter diagram and its corresponding fitted curve in the 1st phase are shown in Figure 3.
(4) For the remaining part that is beyond the lifecycle while being within one month, a logarithmic function is used to fit the curve. The coefficients in the logarithmic function are obtained by fitting the scatter points, and we get the predicted value in the 2nd phase by passing the remaining time into the function.
Take the retweeting of the original tweet about Steven Jobs’ death issued at 12:07:52 2011/10/6 as an example. Its lifecycle is 8.6 days, as calculated in Section 4.1. In the 1st phase, the fitted linear function can be derived from Matlab. We should translate the metric from days to seconds before the following calculation; that is, 8.6 days is equal to 743040 seconds (8.6 × 86400 seconds). As mentioned in step 2, the seconds are divided into time slots of 15 seconds each, so the number of slots in the lifecycle is 743040/15 = 49536, and we get the predicted retweeting number in the 1st phase, 99110, by passing this value into the linear function. In the 2nd phase, the logarithmic function is used to predict the retweeting number in the remaining 21.4 days. Its coefficients can be obtained directly from Matlab (here they are 2432, −714, and −1.599e+004), and the value of the 2nd phase obtained by passing the remaining time into the logarithmic function is 117. Finally, the values of the two phases are summed up, and the final result of the prediction on the retweeting number in 30 days is 99110 + 117 = 99227. Compared to the actual retweeting number of 110904, the deviation of our result is about 10.5%.
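The same procedure is easy to reproduce outside Matlab. The sketch below is only an illustration (it is not the authors' code: ordinary least squares stands in for the double moving average in phase 1, and the a·ln(t)+c form assumed for phase 2 is one plausible choice):

```python
import numpy as np

def predict_retweets(slots, counts, lifecycle_slots, month_slots):
    """Two-phase extrapolation of the cumulative retweet count.

    slots, counts   : time-slot indices and cumulative counts of the
                      observed (top ~100) retweets, in 15-second slots
    lifecycle_slots : computed lifecycle of the tweet, in slots
    month_slots     : 30 days expressed in the same slot units
    """
    t = np.asarray(slots, dtype=float)
    y = np.asarray(counts, dtype=float)

    # Phase 1: linear growth up to the end of the lifecycle.  The paper
    # derives the slope/intercept with a double moving average; plain
    # least squares is used here for brevity.
    slope, intercept = np.polyfit(t, y, 1)
    phase1 = slope * lifecycle_slots + intercept

    # Phase 2: slow logarithmic growth over the remaining time.  The exact
    # functional form used in the paper is not spelled out, so a generic
    # a*ln(t) + c curve fitted to the observed points is assumed here.
    mask = t > 0
    a, _ = np.polyfit(np.log(t[mask]), y[mask], 1)
    phase2 = a * (np.log(month_slots) - np.log(lifecycle_slots))

    return phase1 + max(phase2, 0.0)
```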
5. Experiment Analysis
The result of prediction on the times of retweeting of the 33 original tweets is presented in Table 5.
In this table we can see that the average error is less than 20%, so we can conclude that our prediction is close to the real number of retweets. Although different events have different lifecycles, the prediction values in the 1st phase play a dominant role, while those in the 2nd phase account for a smaller proportion.
6. Conclusions and Future Work
Predicting the times of retweeting in microblogs quantifies the speed of information spread and identifies the focus of public attention at all times, which is the key point
of our research. In this paper, we analyze the behavior characteristics of retweeting in microblog and predict the times of retweeting of an original tweet in one month by a two-phase function curve
fitting. The experiment shows that our approach can work out the prediction on retweeting times, and the average error is controlled within 20%.
Even so, our work still leaves room for improvement, which points to directions for future work. First, the selected functions may not always be appropriate, which leads to some exceptional results, so we may try other function models. Second, we may do experiments on big data in order to optimize and adjust the curve fitting, so as to reduce the error.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
The work is supported in part by the following funds: the National Natural Science Foundation of China under the Grant no. 61202095 and 61173176 and the Scientific Research Project of Central South
University under the Grant no. 7608010001.
1. S. Petrovic, M. Osborne, and V. Lavrenko, “RT to win! Predicting message propagation in twitter,” in Proceedings of 5th International AAAI Conference on Weblogs and Social Media, pp. 586–589,
2. China Internet Network Information Center (CNNIC), The 29th Internet Development Statistics Report in China, 2012.
3. “WISE 2012 challenge,” http://www.wise2012.cs.ucy.ac.cy/challenge.html.
4. B. J. Jansen, M. Zhang, K. Sobel, and A. Chowdury, “Twitter power: tweets as electronic word of mouth,” Journal of the American Society for Information Science and Technology, vol. 60, no. 11,
pp. 2169–2188, 2009.
5. R. Long, H. F. Wang, Y. Q. Chen, O. Jin, and Y. Yu, “Towards effective event detection, tracking and summarization on microblog data,” in Web-Age Information Management, H. Wang, S. Li, S. Oyama,
X. Hu, and T. Qian, Eds., vol. 6897 of Lecture Notes in Computer Science, pp. 652–663, 2011.
6. A. Tumasjan, T. O. Sprenger, P. G. Sandner, and I. M. Welpe, “Predicting elections with twitter: what 140 characters reveal about political sentiment,” in Proceedings of 4th International AAAI
Conference on Weblogs and Social Media, pp. 178–185, 2010.
7. J. Bollen, H. Mao, and A. Pepe, “Determining the public mood state by analysis of microblogging posts,” in Proceedings of the 12th International Conference on the Synthesis and Simulation of
Living Systems, pp. 667–668, 2010.
8. T. Sakaki, M. Okazaki, and Y. Matsuo, “Earthquake shakes twitter users: real-time event detection by social sensors,” in Proceedings of the 19th International World Wide Web Conference (WWW '10),
pp. 851–860, April 2010.
9. Y. Qu, C. Huang, P. Zhang, and J. Zhang, “Microblogging after a major disaster in China,” in Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW '11), pp. 25–34, March
2011.
10. P. Achananuparp, E. P. Lim, J. Jiang, and T. A. Hoang, “Who is retweeting the tweeters? Modeling, originating, and promoting behaviors in the twitter network,” ACM Transactions on Management
Information Systems, vol. 3, no. 3, article 13, 2012.
11. J. Tang, X. Wang, H. Gao, X. Hu, and H. Liu, “Enriching short text representation in microblog for clustering,” Frontiers of Computer Science in China, vol. 6, no. 1, pp. 88–101, 2012.
12. J. Chen and C. Zhang, “Research on prediction of comprehensive forwarding probability based on emotional word content, user tags, historical forward rate in MicroBlogging community,” 2012, http:/
13. F. Xiong, Y. Liu, Z. J. Zhang, J. Zhu, and Y. Zhang, “An information diffusion model based on retweeting mechanism for online social media,” Physics Letters A, vol. 376, no. 30-31, pp. 2103–2108,
2012.
14. Y. Zhang, R. Lu, and Q. Yang, “Predicting retweeting in microblogs,” Journal of Chinese Information Processing, vol. 26, no. 4, pp. 109–114, 2012.
15. L. Hong, O. Dan, and B. D. Davison, “Predicting popular messages in twitter,” in Proceedings of the 20th International Conference Companion on World Wide Web (WWW '11), pp. 57–58, April 2011.
16. T. R. Zaman, R. Herbrich, J. V. Gael, and D. Stern, “Predicting information spreading in twitter,” in Proceedings of the Workshop on Computational Social Science and the Wisdom of Crowds (NIPS
'10), 2010.
17. Y. Zhang, R. Lu, and Q. Yang, “Prediction of the micro-blog retweet behavior,” in Proceedings of the National Conference on Information Retrieval, 2011.
18. D. Boyd, S. Golder, and G. Lotan, “Tweet, tweet, retweet: conversational aspects of retweeting on twitter,” in Proceedings of the 43rd Annual Hawaii International Conference on System Sciences
(HICSS-43 '10), January 2010.
19. R. Lanham, The Economics of Attention, University of Chicago Press, 2006.
20. B. Suh, L. Hong, P. Pirolli, and E. H. Chi, “Want to be retweeted? Large scale analytics on factors impacting retweet in twitter network,” in Proceedings of the 2nd IEEE International Conference
on Social Computing (SocialCom '10), pp. 177–184, August 2010.
21. J. Berger and K. L. Milkman, “Social transmission, emotion, and the virality of online content,” Wharton Research Paper, 2010.
22. H. B. Zhang, Q. Zhao, H. Y. Liu, J. He, X. Y. Du, and H. Chen, “Predicting retweet behavior in weibo social network,” in Web Information Systems Engineering—WISE 2012, X. S. Wang, I. Cruz, A.
Delis, and G. Huang, Eds., vol. 7651 of Lecture Notes in Computer Science, pp. 737–743, 2012.
23. S. Unankard, L. Chen, P. Li et al., “On the prediction of re-tweeting activities in social networks—a report on WISE 2012 challenge,” in Web Information Systems Engineering—WISE 2012, X. S. Wang,
I. Cruz, A. Delis, and G. Huang, Eds., vol. 7651 of Lecture Notes in Computer Science, pp. 744–754, 2012.
24. Z. L. Luo, Y. Wang, and X. T. Wu, “Predicting retweeting behavior based on autoregressive moving average model,” in Web Information Systems Engineering—WISE 2012, X. S. Wang, I. Cruz, A. Delis,
and G. Huang, Eds., vol. 7651 of Lecture Notes in Computer Science, pp. 777–782, 2012.
25. C. T. Ragsdale, Spreadsheet Modeling and Decision Analysis, Cengage Learning, 6th edition, 2010. | {"url":"http://www.hindawi.com/journals/mpe/2014/604294/","timestamp":"2014-04-18T06:53:06Z","content_type":null,"content_length":"191597","record_id":"<urn:uuid:4d25ca37-6f9c-4cd5-9b2b-24f0c844369b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calamoneri, Tiziana - Dipartimento di Informatica, Università di Roma "La Sapienza"
• On Three-Dimensional Layout of Interconnection Networks
• Exact Solution of a Class of Frequency Assignment Problems in Cellular Networks
• Variable density deployment and topology control for the solution of the sink-hole problem
• University of Rome "Sapienza", Computer Science Department,
• International Journal of Foundations of Computer Science World Scientific Publishing Company
• Maximizing the Number of Broadcast Operations in Static Random Geometric Ad-Hoc Networks
• Maximizing the Number of Broadcast Operations in Random Geometric Ad-Hoc Wireless Networks
• ARTICLE IN PRESS Discrete Mathematics ( )
• The L(h, k)-Labelling Problem: A Survey and Annotated Bibliography
• Does Cubicity Help to Solve Problems? T. Calamoneri
• L(2, 1)-Labeling of Unigraphs (Extended Abstract)
• L(2, 1)-Labeling of Unigraphs Tiziana Calamoneri Rossella Petreschi
• Recognition of Unigraphs through Superposition (Extended Abstract)
• A General Approach to L(h, k)-Label Interconnection networks
• Calamoneri T, Caminiti S, Petreschi R. A general approach to L(h, k)-label interconnection networks. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 23(4): 652–659 July 2008
• Exact Solution of a Class of Frequency Assignment Problems in Cellular Networks and Other Regular
• -Coloring Matrogenic Graphs Tiziana Calamoneri Rossella Petreschi
• L(2, 1)-Labeling of Oriented Planar Graphs (Extended Abstract)
• Minimum-energy broadcast in random-grid ad-hoc networks: approximation and distributed algorithms
• Proxy Assignments for Filling Gaps in Wireless Ad-hoc Lattice Computers
• On the Approximability of the L(h, k)-Labelling Problem on Bipartite Graphs
• Minimum-Energy Broadcast and Disk Cover in Grid Wireless Networks
• A Simple Parallel Algorithm to Draw Cubic Graphs Tiziana Calamoneri, Stephan Olariu, Rossella Petreschi
• On the Radiocoloring Problem Tiziana Calamoneri and Rossella Petreschi
• Interval Routing & Layered Cross Product: Compact Routing Schemes for Butterflies, Mesh
• On the L(h, k)-Labeling of Co-Comparability Tiziana Calamoneri1
• Labeling trees with a condition at distance two Tiziana Calamoneri
• A New Approach to the Rearrangeability of (2 log N 1) Stage MINs Tiziana Calamoneri Annalisa Massini
• -Coloring of Regular Tiling (Extended Abstract)
• L(2; 1)-Coloring Matrogenic Graphs (Extended Abstract)
• L(h,1) -Labeling Subclasses of Planar Graphs Tiziana Calamoneri Rossella Petreschi
• Parallel and Distributed Computing and Systems November 3-6, 1999 in Cambridge Massachusetts, USA
• L(h,1,1)-Labeling of Outerplanar Graphs Tiziana Calamoneri a
• An Optimal Layout of Multigrid Networks Tiziana Calamoneri 1 and Annalisa Massini
• A Parallel Approximation Algorithm for the Max Cut Problem on Cubic Graphs
• Noname manuscript No. (will be inserted by the editor)
• Discrete Mathematics and Theoretical Computer Science DMTCS vol. (subm.), by the authors, 11 Optimal L(h, k)-Labeling of Regular Grids
• Nearly Optimal Three Dimensional Layout of Hypercube Networks
• Efficient Algorithms for Checking the Equivalence of Multistage Interconnection Networks
• Journal of Graph Algorithms and Applications http://jgaa.info/ vol. 0, no. 0, pp. 00 (0)
• New Results on Edge-Bandwidth Tiziana Calamoneri and Annalisa Massini
• On the L(h, k)-Labeling of Co-Comparability Graphs and Circular-Arc Graphs
• Optimal Three-Dimensional Layout of Interconnection Networks
• A Parallel Approximation Algorithm for the Max Cut Problem on Cubic Graphs
• Journal of Graph Algorithms and Applications http://www.cs.brown.edu/publications/jgaa/
• A New 3D Representation of Trivalent Cayley Networks Tiziana Calamoneri and Rossella Petreschi
• Autonomous deployment of heterogeneous mobile N. Bartolini, T. Calamoneri
• Impact of Information on the Complexity of Asynchronous Radio Broadcasting
• Nearly Optimal Three Dimensional Layout of Hypercube Networks
• Minimum Energy Broadcast and Disk Cover in Grid Wireless Networks
• Sensor Activation and Radius Adaptation (SARA) in Heterogeneous Sensor Networks | {"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/45/480.html","timestamp":"2014-04-18T05:41:49Z","content_type":null,"content_length":"15068","record_id":"<urn:uuid:dc9c1e20-b558-4dcc-b258-c7262a89328b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
Floating point numbers - what else can be done?
Avoiding errors
Rational numbers
How about rational numbers? Most of the numbers we deal with can be expressed as a fraction or ratio, so why not store these numbers as a numerator and a denominator so that we can represent them exactly?
We're all familiar with the math for manipulating fractions; for addition and subtraction you rewrite both sides to have a common denominator, and for multiplication and division you multiply numerators and denominators (for division, after inverting the divisor). There are some drawbacks with this approach, however. Firstly, as the numerator and denominator are stored separately they are calculated separately. This means there will be twice as many operations per calculation as there are with floats.
Secondly, numerators and denominators can get big very quickly as calculations are performed. This means there needs to be an overflow protection that will factorise the numerator and denominator to
make their values smaller as necessary. As this factorisation is not always possible, rational numbers can overflow.
Thirdly, any expressions involving addition, subtraction or comparison are going to have to determine lowest common denominators.
A final point - it's my impression that most interfaces use real numbers - when was the last time the store had a can of soda at $37/100? So at least in the presentation there are going to be extra conversions going from the rational format back to a real number. Still, this is an approach that works, and for the languages that don't incorporate rational number types there are almost certainly libraries available that are easy to understand.
In terms of performance, rational numbers may not compare very well to floating points and in terms of storage they'll be twice the size for a similar range. Rational numbers are also only suitable
for fractions and not every number can be represented this way (e.g. √2). That said, if you want to see more, for C++ there is a boost implementation of rational numbers available here.
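Python, for example, ships such a library in its standard distribution; the snippet below (just an illustration, not tied to any particular application) shows the exact arithmetic and the conversion back to a real number for display mentioned above.

```python
from fractions import Fraction

price = Fraction(37, 100)            # $0.37 stored exactly as 37/100
total = 3 * price + Fraction(1, 3)   # exact arithmetic, no rounding error
print(total)                         # 433/300
print(float(total))                  # converted back for display: 1.44333...

# Numerators and denominators grow as calculations accumulate; Python's
# arbitrary-precision integers never overflow, but fixed-width
# implementations would need the factorisation/overflow handling
# described above.
x = Fraction(355, 113)
print((x ** 8).denominator)          # 113**8, already a 17-digit integer
```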
Base-10 floating point numbers
Which brings us to base-10 floating point numbers. If we remember from the previous article, the problem of large errors came from the small approximation errors that arose when base-10 real numbers were converted to the form x/2^y. Therefore, it seems that if we could instead represent our number as x/10^y then there would be no conversion to a base-2 format and consequently no approximation error. And because there's no conversion back to a base-10 number, there are no large errors arising as described in the last article.
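That is exactly how software decimal arithmetic behaves. As an illustration, here is Python's standard decimal module, used purely as a convenient example of a base-10 implementation:

```python
from decimal import Decimal, getcontext

print(0.1 + 0.2)                        # 0.30000000000000004  (binary float)
print(Decimal("0.1") + Decimal("0.2"))  # 0.3                  (decimal arithmetic)

getcontext().prec = 28                  # working precision, in decimal digits
print(Decimal(1) / Decimal(3))          # 0.3333333333333333333333333333

# Careful: building a Decimal from a float keeps the binary approximation
# error, so quote the literal instead.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```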
The IEEE floating point standard currently undergoing revision allows for base-10 to be used, so this idea has been around for a long time. That said, current floating point hardware tends not to
support the base-10 mode. The reason is that using base-10 implies using binary coded decimal, a number representation format about 20 per cent less storage efficient than base-2. This is because in
general binary coded decimal uses four bits per decimal digit; in base-2 these four bits can represent 16 distinct values whereas in the same space BCD can represent only, well 10 distinct values.
There are schemes that reduce the amount of redundant space, but not to the efficiency of base-2 and these schemes also render calculations more computationally expensive. | {"url":"http://www.theregister.co.uk/2006/09/20/floating_point_numbers_2?page=2","timestamp":"2014-04-21T03:04:57Z","content_type":null,"content_length":"50896","record_id":"<urn:uuid:ec4ad1f9-6fc6-4971-9053-da15abafef50>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
Philadelphia SAT Math Tutor
Find a Philadelphia SAT Math Tutor
...I believe that I have a unique ability to present and demonstrate various topics in mathematics in a fun and effective way. I have worked three semesters as a computer science lab TA at North
Carolina State University, as well as three semesters as a general math tutor for the tutoring center at...
22 Subjects: including SAT math, calculus, geometry, statistics
I'm a retired college instructor and software developer and live in Philadelphia. I have tutored SAT math and reading for The Princeton Review, tutored K-12 math and reading and SAT for Huntington
Learning Centers for over ten years, and developed award-winning math tutorials.
14 Subjects: including SAT math, geometry, GRE, algebra 1
...I have experience tutoring math at the levels of pre-algebra through calculus, and would also be able to tutor probability, statistics, and actuarial math. I graduated with a degree in Russian
Language, and spent a full year living in St. Petersburg, Russia.
14 Subjects: including SAT math, Spanish, calculus, geometry
I have been tutoring and teaching for the past 10 years. I have tutored all levels of Math from pre-algebra all the way up to Multivariable Calculus. I also have experience in tutoring chemistry,
Organic Chemistry, physics and many other classes!
18 Subjects: including SAT math, chemistry, physics, calculus
...One of the most fulfilling aspects of the experience was knowing that I was positively impacting the lives of these students while having a hand, albeit small, in bolstering this small
community. Prior to working in schools, my ample customer service experience shaped my ability to communicate w...
29 Subjects: including SAT math, English, reading, writing | {"url":"http://www.purplemath.com/philadelphia_pa_sat_math_tutors.php","timestamp":"2014-04-18T06:10:10Z","content_type":null,"content_length":"24184","record_id":"<urn:uuid:2b85fe0f-a878-4710-8dc3-3c68626b6278>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Controlling rewriting by rewriting
Results 1 - 10 of 37
- Proceedings of the International Conference on Functional Programming (ICFP'98), 1998
"... We describe a language for defining term rewriting strategies, and its application to the production of program optimizers. Valid transformations on program terms can be described by a set of
rewrite rules; rewriting strategies are used to describe when and how the various rules should be applied in ..."
Cited by 110 (33 self)
We describe a language for defining term rewriting strategies, and its application to the production of program optimizers. Valid transformations on program terms can be described by a set of rewrite
rules; rewriting strategies are used to describe when and how the various rules should be applied in order to obtain the desired optimization effects. Separating rules from strategies in this fashion
makes it easier to reason about the behavior of the optimizer as a whole, compared to traditional monolithic optimizer implementations. We illustrate the expressiveness of our language by using it to
describe a simple optimizer for an ML-like intermediate representation. The basic strategy language uses operators such as sequential composition, choice, and recursion to build transformers from a
set of labeled unconditional rewrite rules. We also define an extended language in which the side-conditions and contextual rules that arise in realistic optimizer specifications can themselves be
expressed as strategy-driven rewrites. We show that the features of the basic and extended languages can be expressed by breaking down the rewrite rules into their primitive building blocks, namely
matching and building terms in variable binding environments. This gives us a low-level core language which has a clear semantics, can be implemented straightforwardly and can itself be optimized.
The current implementation generates C code from a strategy specification.
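To make the rule/strategy separation concrete, here is a small illustrative sketch (plain Python, not Stratego or ELAN syntax) in which a rewrite rule is a function from terms to terms and strategy combinators such as sequential composition, choice and a bottom-up traversal are built on top of it:

```python
# Terms are nested tuples: ("+", ("x",), ("0",)) stands for x + 0.
# A strategy maps a term to a rewritten term, or to None on failure.

def seq(s1, s2):
    """Sequential composition: apply s1, then s2 to its result."""
    def go(t):
        r = s1(t)
        return None if r is None else s2(r)
    return go

def choice(s1, s2):
    """Try s1; if it fails, try s2 on the original term."""
    def go(t):
        r = s1(t)
        return r if r is not None else s2(t)
    return go

def try_(s):
    """Never fail: fall back to the identity rewrite."""
    return choice(s, lambda t: t)

def all_children(s):
    """One-step traversal: apply s to every immediate child."""
    def go(t):
        if not isinstance(t, tuple) or len(t) == 1:
            return t                      # leaves have no children
        kids = [s(c) for c in t[1:]]
        return None if any(k is None for k in kids) else (t[0],) + tuple(kids)
    return go

def bottomup(s):
    """Full bottom-up traversal defined from the one-step traversal."""
    return lambda t: seq(all_children(bottomup(s)), s)(t)

# A single labeled rewrite rule:  x + 0  ->  x
def plus_zero(t):
    if isinstance(t, tuple) and len(t) == 3 and t[0] == "+" and t[2] == ("0",):
        return t[1]
    return None

term = ("+", ("+", ("x",), ("0",)), ("0",))
print(bottomup(try_(plus_zero))(term))    # ('x',)
```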
, 1998
"... This paper presents a comprehensive introduction to the ELAN rule-based programming language. We describe the main features of the language, the ELAN environment, and introduce bibliographic
references to various papers addressing foundations, implementation and applications of ELAN. 1 Introduction ..."
Cited by 101 (24 self)
This paper presents a comprehensive introduction to the ELAN rule-based programming language. We describe the main features of the language, the ELAN environment, and introduce bibliographic
references to various papers addressing foundations, implementation and applications of ELAN. 1 Introduction The ELAN system [18] provides an environment for specifying and prototyping deduction
systems in a language based on rules controlled by strategies. Its purpose is to support the design of theorem provers, logic programming languages, constraints solvers and decision procedures and to
offer a modular framework for studying their combination. ELAN takes from functional programming the concept of abstract data types and the function evaluation principle based on rewriting. But
rewriting is inherently non-deterministic since several rules can be applied at different positions in a same term, and in ELAN, a computation may have several results. This aspect is taken into
account through choice...
- Theoretical Computer Science , 2002
"... ELAN implements computational systems, a concept that combines two first class entities: rewrite rules and rewriting strategies. ELAN can be used either as a logical framework or to describe and
execute deterministic as well as non-deterministic rule based processes. With the general goal to make pr ..."
Cited by 54 (5 self)
ELAN implements computational systems, a concept that combines two first class entities: rewrite rules and rewriting strategies. ELAN can be used either as a logical framework or to describe and
execute deterministic as well as non-deterministic rule based processes. With the general goal to make precise a rewriting logic based semantics of ELAN, this paper has three contributions: a
presentation of the concepts of rules and strategies available in ELAN, an expression of rewrite rules with matching conditions in conditional rewriting logic, and finally an enrichment mechanism of
a rewrite theory into a strategy theory in conditional rewriting logic.
, 1999
"... In this work, we consider term rewriting from a functional point of view. A rewrite rule is a function that can be applied to a term using an explicit application function. From this starting
point, we show how to build more elaborated functions, describing first rewrite derivations, then sets of d ..."
Cited by 42 (10 self)
In this work, we consider term rewriting from a functional point of view. A rewrite rule is a function that can be applied to a term using an explicit application function. From this starting point,
we show how to build more elaborated functions, describing first rewrite derivations, then sets of derivations. These functions, that we call strategies, can themselves be defined by rewrite rules and
the construction can be iterated leading to higher-order strategies. Furthermore, the application function is itself defined using rewriting in the same spirit. We present this calculus and study its
properties. Its implementation in the ELAN language is used to motivate and exemplify the whole approach. The expressiveness of ELAN is illustrated by examples of polymorphic functions and
strategies. Keywords: Rewriting Calculus, Rewriting Logic, Strategy, Rewrite Based Language, Term Rewriting, Strategy, Matching. 1. Introduction Rule-based reasoning is present in many domains of
- Rewriting Techniques and Applications (RTA'99), 1999
"... Stratego is a language for the specification of transformation rules and strategies for applying them. The basic actions of transformations are matching and building instantiations of
first-order term patterns. The language supports concise formulation of generic and data type-specific term traversa ..."
Cited by 34 (7 self)
Stratego is a language for the specification of transformation rules and strategies for applying them. The basic actions of transformations are matching and building instantiations of first-order
term patterns. The language supports concise formulation of generic and data type-specific term traversals. One of the unusual features of Stratego is the separation of scope from matching, allowing
sharing of variables through traversals. The combination of first-order patterns with strategies forms an expressive formalism for pattern matching. In this paper we discuss three examples of
strategic pattern matching: (1) Contextual rules allow matching and replacement of a pattern at an arbitrary depth of a subterm of the root pattern. (2) Recursive patterns can be used to characterize
concisely the structure of languages that form a restriction of a larger language. (3) Overlays serve to hide the representation of a language in another (more generic) language. These techniques are
illustrated by...
, 2007
"... We present the Tom language that extends Java with the purpose of providing high level constructs inspired by the rewriting community. Tom furnishes a bridge between a general purpose language
and higher level specifications that use rewriting. This approach was motivated by the promotion of rewriti ..."
Cited by 34 (6 self)
We present the Tom language that extends Java with the purpose of providing high level constructs inspired by the rewriting community. Tom furnishes a bridge between a general purpose language and
higher level specifications that use rewriting. This approach was motivated by the promotion of rewriting techniques and their integration in large scale applications. Powerful matching capabilities
along with a rich strategy language are among Tom’s strong points, making it easy to use and competitive with other rule based languages.
, 1998
"... In a similar way as 2-categories can be regarded as a special case of double categories, rewriting logic (in the unconditional case) can be embedded into the more general tile logic, where also
side-effects and rewriting synchronization are considered. Since rewriting logic is the semantic basis o ..."
Cited by 33 (25 self)
In a similar way as 2-categories can be regarded as a special case of double categories, rewriting logic (in the unconditional case) can be embedded into the more general tile logic, where also
side-effects and rewriting synchronization are considered. Since rewriting logic is the semantic basis of several language implementation efforts, it is useful to map tile logic back into rewriting
logic in a conservative way, to obtain executable specifications of tile systems. We extend the results of earlier work by two of the authors, focusing on some interesting cases where the
mathematical structures representing configurations (i.e., states) and effects (i.e., observable actions) are very similar, in the sense that they have in common some auxiliary structure (e.g., for
tupling, projecting, etc.). In particular, we give in full detail the descriptions of two such cases where (net) process-like and usual term structures are employed. Corresponding to these two cases,
we introduce two ca...
, 1998
"... Rewriting logic expresses an essential equivalence between logic and computation. System states are in bijective correspondence with formulas, and concurrent computations are in bijective
correspondence with proofs. Given this equivalence between computation and logic, a rewriting logic axiom of the ..."
Cited by 31 (12 self)
Rewriting logic expresses an essential equivalence between logic and computation. System states are in bijective correspondence with formulas, and concurrent computations are in bijective
correspondence with proofs. Given this equivalence between computation and logic, a rewriting logic axiom of the form t → t′ has two readings. Computationally, it means that a fragment of a system's state that is an instance of the pattern t can change to the corresponding instance of t′ concurrently with any other state changes; logically, it just means that we can derive the formula t′ from the formula t. Rewriting logic is entirely neutral about the structure and properties of the formulas/states t. They are entirely user-definable as an algebraic data type satisfying certain
equational axioms. Because of this ecumenical neutrality, rewriting logic has, from a logical viewpoint, good properties as a logical framework, in which many other logics can be naturally
represented. And, computationally, it has also good properties as a semantic framework, in which many different system styles and models of concurrent computation and many different languages can be
naturally expressed without any distorting encodings. The goal of this paper is to provide a relatively gentle introduction to rewriting logic, and to paint in broad strokes the main research
directions that, since its introduction in 1990, have been pursued by a growing number of researchers in Europe, the US, and Japan. Key theoretical developments, as well as the main current
applications of rewriting logic as a logical and semantic framework, and the work on formal reasoning to prove properties of specifications are surveyed.
- Journal of Logic and Algebraic Programming , 2002
"... A typed model of strategic term rewriting is developed. The key innovation is that generic. The calculus traversal is covered. To this end, we define a typed rewriting calculus S ′ γ employs a
many-sorted type system extended by designated generic strategy types γ. We consider two generic strategy t ..."
Cited by 26 (8 self)
A typed model of strategic term rewriting is developed. The key innovation is that generic traversal is covered. To this end, we define a typed rewriting calculus S′γ. The calculus employs a many-sorted type system extended by designated generic strategy types γ. We consider two generic strategy types, namely the types of type-preserving and type-unifying strategies. S′γ offers
traversal combinators to construct traversals or schemes thereof from many-sorted and generic strategies. The traversal combinators model different forms of one-step traversal, that is, they process
the immediate subterms of a given term without anticipating any scheme of recursion into terms. To inhabit generic types, we need to add a fundamental combinator to lift a many-sorted strategy s to a
generic type γ. This step is called strategy extension. The semantics of the corresponding combinator states that s is only applied if the type of the term at hand fits, otherwise the extended
strategy fails. This approach dictates that the semantics of strategy application must be type-dependent to a certain extent. Typed strategic term rewriting with coverage of generic term traversal is
a simple but expressive model of generic programming. It has applications in program
- Electronic Notes in Theoretical Computer Science , 1998
"... System S is a calculus providing the basic abstractions of term rewriting: matching and building terms, term traversal, combining computations and handling failure. The calculus forms a core
language for implementation of a wide variety of rewriting languages, or more generally, languages for specif ..."
Cited by 25 (8 self)
System S is a calculus providing the basic abstractions of term rewriting: matching and building terms, term traversal, combining computations and handling failure. The calculus forms a core language
for implementation of a wide variety of rewriting languages, or more generally, languages for specifying tree transformations. In this paper we showhow a conventional rewriting language based on
conditional term rewriting can be implemented straightforwardly in System S. Subsequently we show how this implementation can be extended with features such as matching conditions, negative
conditions, default rules, non-strictness annotations and alternative evaluation strategies. 1 Introduction Term rewriting is a theoretically well-defined paradigm that consists of reducing a term to normal form with respect to a set of rewrite rules [12,5,1]. However, in practical instantiations of this paradigm a wide variety of features are added to this basic paradigm. This has resulted in
the design and impl... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=175718","timestamp":"2014-04-17T19:51:04Z","content_type":null,"content_length":"41371","record_id":"<urn:uuid:f6999642-b8c6-4314-947a-18f7c196fdf4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume 54
Volume 54, Issue 7, July 2013
We consider a polyharmonic operator in dimension two with l ⩾ 2, l being an integer, and a quasi-periodic potential . We prove that the absolutely continuous spectrum of H contains a semiaxis and
there is a family of generalized eigenfunctions at every point of this semiaxis with the following properties. First, the eigenfunctions are close to plane waves at the high energy region.
Second, the isoenergetic curves in the space of momenta corresponding to these eigenfunctions have a form of slightly distorted circles with holes (Cantor type structure). A new method of
multiscale analysis in the momentum space is developed to prove these results.
• ARTICLES
□ Partial Differential Equations
The paper analyzes basic mathematical questions for a model of chemically reacting mixtures. We derive a model of several (finite) component compressible gas taking rigorously into
account the thermodynamical regime. Mathematical description of the model leads to a degenerate parabolic equation with hyperbolic deviation. The thermodynamics implies that the diffusion
terms are non-symmetric, not positively defined, and cross-diffusion effects must be strongly marked. The mathematical goal is to establish the existence of weak solutions globally in
time for arbitrary number of reacting species. A key point is an entropy-like estimate showing possible renormalization of the system.
It is established the existence of nontrivial solutions for quasilinear Schrödinger equations with subcritical or critical exponents, which appear from plasma physics as well as
high-power ultrashort laser in matter.
Within the theoretical framework of differential constraints method a nonhomogeneous model describing traffic flows is considered. Classes of exact solutions to the governing equations
under interest are determined. Furthermore, Riemann problems and generalized Riemann problems which model situations of interest for traffic flows are solved.
In this paper, we are concerned with the Cauchy problem for one-dimensional compressible isentropic Navier-Stokes equations with density-dependent viscosity μ(ρ) = ρ^α (α > 0) and pressure P(ρ) = ρ^γ (γ > 1). We will establish the global existence and asymptotic behavior of weak solutions for any α > 0 and γ > 1 under the assumption that the density function keeps a constant
state at far fields. In particular, in the case that , we obtain the large time behavior of the strong solution obtained by Mellet and Vasseur when the solution has a lower bound (no
□ Representation Theory and Algebraic Methods
We show that subsingular vectors may exist in Verma modules over W(2, 2), and present the subquotient structure of these modules. We prove conditions for irreducibility of the tensor
product of intermediate series module with a highest weight module. Relation to intertwining operators over vertex operator algebra associated with W(2, 2) is discussed. Also, we study
the tensor product of intermediate series and a highest weight module over the twisted Heisenberg-Virasoro algebra, and present series of irreducible modules with infinite-dimensional
weight spaces.
We introduce the most general quartic Poisson algebra generated by a second and a fourth order integral of motion of a 2D superintegrable classical system. We obtain the corresponding
quartic (associative) algebra for the quantum analog, extend Daskaloyannis construction obtained in context of quadratic algebras, and also obtain the realizations as deformed oscillator
algebras for this quartic algebra. We obtain the Casimir operator and discuss how these realizations allow to obtain the finite-dimensional unitary irreducible representations of quartic
algebras and obtain algebraically the degenerate energy spectrum of superintegrable systems. We apply the construction and the formula obtained for the structure function on a
superintegrable system related to type I Laguerre exceptional orthogonal polynomials introduced recently.
□ Quantum Mechanics
We report a solution of the one-dimensional Schrödinger equation with a hyperbolic double-well confining potential via a transformation to the so-called confluent Heun equation. We
discuss the requirements on the parameters of the system in which a reduction to confluent Heun polynomials is possible, representing the wavefunctions of bound states.
In the present paper, we consider effects of quantization in a topos approach of quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of
presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical
observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi
related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck
topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a
sheafification functor associated with the quantization topology.
We develop a fully fledged theory of quantum dynamical patterns of behavior that are nonlocally induced. To this end we generalize the standard Laplacian-based framework of the
Schrödinger picture quantum evolution to that employing nonlocal (pseudodifferential) operators. Special attention is paid to the Salpeter (here, m ⩾ 0) quasirelativistic equation and the
evolution of various wave packets, in particular to their radial expansion in 3D. Foldy's synthesis of “covariant particle equations” is extended to encompass free Maxwell theory, which
however is devoid of any “particle” content. Links with the photon wave mechanics are explored.
□ Quantum Information and Computation
The dual of a matrix ordered space has a natural matrix ordering that makes the dual space matrix ordered as well. The purpose of these notes is to give a condition that describes when
the linear map taking a basis of M_n to its dual basis is a complete order isomorphism. We exhibit “natural” orthonormal bases for M_n such that this map is an order isomorphism, but not a complete order isomorphism. Included among such bases is the Pauli basis. Our results generalize the Choi matrix by giving conditions under which the role of the standard basis {E_ij}
can be replaced by other bases.
We describe a construction that maps any connected graph G on three or more vertices into a larger graph, H(G), whose independence number is strictly smaller than its Lovász number which
is equal to its fractional packing number. The vertices of H(G) represent all possible events consistent with the stabilizer group of the graph state associated with G, and exclusive
events are adjacent. Mathematically, the graph H(G) corresponds to the orbit of G under local complementation. Physically, the construction translates into graph-theoretic terms the
connection between a graph state and a Bell inequality maximally violated by quantum mechanics. In the context of zero-error information theory, the construction suggests a protocol
achieving the maximum rate of entanglement-assisted capacity, a quantum mechanical analogue of the Shannon capacity, for each H(G). The violation of the Bell inequality is expressed by
the one-shot version of this capacity being strictly larger than the independence number. Finally, given the correspondence between graphs and exclusivity structures, we are able to
compute the independence number for certain infinite families of graphs with the use of quantum non-locality, therefore highlighting an application of quantum theory in the proof of a
purely combinatorial statement.
□ Relativistic Quantum Mechanics, Quantum Field Theory, Quantum Gravity, and String Theory
The theory of α*-cohomology is studied thoroughly and it is shown that in each cohomology class there exists a unique 2-cocycle, the harmonic form, which generates a particular
Groenewold-Moyal star product. This leads to an algebraic classification of translation-invariant non-commutative structures and shows that any general translation-invariant
non-commutative quantum field theory is physically equivalent to a Groenewold-Moyal non-commutative quantum field theory.
We establish conceptually important properties of the operator product expansion (OPE) in the context of perturbative, Euclidean φ4-quantum field theory. First, we demonstrate,
generalizing earlier results and techniques of hep-th/1105.3375, that the 3-point OPE, , usually interpreted only as an asymptotic short distance expansion, actually converges at finite,
and even large, distances. We further show that the factorization identity is satisfied for suitable configurations of the spacetime arguments. Again, the infinite sum is shown to be
convergent. Our proofs rely on explicit bounds on the remainders of these expansions, obtained using refined versions, mostly due to Kopper et al., of the renormalization group flow
equation method. These bounds also establish that each OPE coefficient is a real analytic function in the spacetime arguments for non-coinciding points. Our results hold for arbitrary but
finite loop orders. They lend support to proposals for a general axiomatic framework of quantum field theory, based on such “consistency conditions” and akin to vertex operator algebras,
wherein the OPE is promoted to the defining structure of the theory.
We present a field theoretical model of point-form dynamics which exhibits resonance scattering. In particular, we construct point-form Poincaré generators explicitly from field operators
and show that in the vector spaces for the in-states and out-states (endowed with certain analyticity and topological properties suggested by the structure of the S-matrix) these
operators integrate to furnish differentiable representations of the causal Poincaré semigroup, the semidirect product of the semigroup of spacetime translations into the forward
lightcone and the group of Lorentz transformations. We also show that there exists a class of irreducible representations of the Poincaré semigroup defined by a complex mass and a
half-integer spin. The complex mass characterizing the representation naturally appears in the construction as the square root of the pole position of the propagator. These
representations provide a description of resonances in the same vein as Wigner's unitary irreducible representations of the Poincaré group provide a description of stable particles.
In twistor theory, the canonical quantization procedure, called twistor quantization, is performed with the twistor operators represented as and . However, it has not been clarified what
kind of function spaces this representation is valid in. In the present paper, we intend to find appropriate (pre-)Hilbert spaces in which the above representation is realized as an
adjoint pair of operators. To this end, we define an inner product for the helicity eigenfunctions by an integral over the product space of the circular space S 1 and the upper half of
projective twistor space. Using this inner product, we define a Hilbert space in some particular case and indefinite-metric pre-Hilbert spaces in other particular cases, showing that the
above-mentioned representation is valid in these spaces. It is also shown that only the Penrose transform in the first particular case yields positive-frequency massless fields without
singularities, while the Penrose transforms in the other particular cases yield positive-frequency massless fields with singularities.
□ General Relativity and Gravitation
The infinitesimal transformations that leave invariant a two-covariant symmetric tensor are studied. The interest of these symmetry transformations lays in the fact that this class of
tensors includes the energy-momentum and Ricci tensors. We find that in most cases the class of infinitesimal generators of these transformations is a finite dimensional Lie algebra, but
in some cases exhibiting a higher degree of degeneracy, this class is infinite dimensional and may fail to be a Lie algebra. As an application, we study the Ricci collineations of a type
B warped spacetime.
□ Dynamical Systems
For the Kuramoto oscillators with small inertia, we present several quantitative estimates on the relaxation dynamics and formational structure of a phase-locked state (PLS) for some
classes of initial configurations. In a super-critical regime where the coupling strength is strictly larger than the diameter of natural frequencies, we present quantitative relaxation
dynamics on the collision numbers and the structure of PLS. In a critical coupling regime where the coupling strength is exactly the diameter of natural frequencies, we provide a
sufficient condition for an asymptotically PLS solution. In particular, we show the existence of slow relaxation to a PLS, when there are exactly two natural frequencies. This generalizes
the earlier results of Choi et al. ["Asymptotic formation and orbital stability of phase locked states for the Kuramoto model," Physica D 241, 735–754 (2012), doi:10.1016/j.physd.2011.11.011] and Choi et al. ["Complete synchronization of Kuramoto oscillators with finite inertia," Physica D 240, 32–44 (2011), doi:10.1016/j.physd.2010.08.004].
The application of the Nekhoroshev theorem to many problems arising in different fields of Physics and Astronomy depends on a non-degeneracy property, called steepness, that a suitable
Hamiltonian approximation must satisfy. Since steepness is implicitly defined, we have the problem of recognizing whether a given function is steep or not. For this purpose, we here
consider some sufficient conditions for steepness provided by Nekhoroshev in 1979, based on the solvability of a collection of systems depending on the number n of degrees of freedom, the
derivatives of the function up to a certain order r, and some auxiliary parameters. These conditions are really explicit only for r = 2, corresponding to quasi-convexity , and for r = 3.
Instead, for r ⩾ 4, the conditions are implicit, since they require an elaborate computation of the closure of a certain set. In this paper, we first revisit Nekhoroshev's result and we
show that the number of parameters in the collections of systems can be suitably reduced. Then, we show that for r = 4 Nekhoroshev's result is interesting only for n = 2, 3, and 4, and in
these cases we find explicit conditions for steepness which are formulated in a purely algebraic form.
We report on the results of a study of the motion of a four particle non-relativistic one-dimensional self-gravitating system. We show that the system can be visualized in terms of a
single particle moving within a potential whose equipotential surfaces are shaped like a box of pyramid-shaped sides. As such this is the largest N-body system that can be visualized in
this way. We describe how to classify possible states of motion in terms of Braid Group operators, generalizing this to N bodies. We find that the structure of the phase space of each of
these systems yields a large variety of interesting dynamics, containing regions of quasiperiodicity and chaos. Lyapunov exponents are calculated for many trajectories to measure
stochasticity and previously unseen phenomena in the Lyapunov graphs are observed.
In this paper we analyze some normal forms of a general quadratic Hamiltonian system defined on the dual of the Lie algebra of real K-skew-symmetric matrices, where K is an arbitrary 3×3
real symmetric matrix. A consequence of the main results is that any first-order autonomous three-dimensional differential equation possessing two independent quadratic constants of
motion, which admit a positive/negative definite linear combination, is affinely equivalent to the classical “relaxed” free rigid body dynamics with linear control parameters. | {"url":"http://scitation.aip.org/content/aip/journal/jmp/54/7/","timestamp":"2014-04-16T21:50:59Z","content_type":null,"content_length":"169359","record_id":"<urn:uuid:a9f43f3f-5722-4518-a3fe-484e0496fab0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Haskell-cafe] Making monadic code more concise
C. McCann cam at uptoisomorphism.net
Mon Nov 15 14:19:59 EST 2010
On Mon, Nov 15, 2010 at 12:43 PM, Ling Yang <lyang at cs.stanford.edu> wrote:
> Specifically: There are some DSLs that can be largely expressed as monads,
> that inherently play nicely with expressions on non-monadic values.
> We'd like to use the functions that already work on the non-monadic
> values for monadic values without calls to liftM all over the place.
It's worth noting that using liftM is possibly the worst possible way
to do this, aesthetically speaking. To start with, liftM is just fmap
with a gratuitous Monad constraint added on top. Any instance of Monad
can (and should) also be an instance of Functor, and if the instances
aren't buggy, then liftM f = (>>= return . f) = fmap f.
Additionally, in many cases readability is improved by using (<$>), an
operator synonym for fmap, found in Control.Applicative, I believe.
> The probability monad is a good example.
> I'm interested in shortening the description of 'test', as it is
> really just a 'formal addition' of random variables. One can use liftM
> for that:
> test = liftM2 (+) (coin 0.5) (coin 0.5)
Also on the subject of Control.Applicative, note that independent
probabilities like this don't actually require a monad, merely the
ability to lift currying into the underlying functor, which is what
Applicative provides. The operator ((<*>) :: f (a -> b) -> f a -> f b)
is convenient for writing such expressions, e.g.:
test = (+) <$> coin 0.5 <*> coin 0.5
Monads are only required for lifting control flow into the functor,
which in this case amounts to conditional probability. You would not,
for example, be able to easily use simple lifted functions to write
"roll a 6-sided die, flip a coin as many times as the die shows, then
count how many flips were heads".
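(A concrete sketch of that last example, in case it helps. The Dist type below is a toy finite-distribution monad made up on the spot for illustration -- the probability monad being discussed may well differ in detail -- and die/headsAfterDie are hypothetical helpers; coin follows the usage above, with 1 standing for heads.)

import Control.Monad (replicateM)

-- Toy probability monad: a weighted list of outcomes.
newtype Dist a = Dist { runDist :: [(a, Double)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x              = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  return        = pure
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

coin :: Double -> Dist Int          -- 1 = heads, 0 = tails, as in 'test' above
coin p = Dist [(1, p), (0, 1 - p)]

die :: Dist Int                     -- fair six-sided die
die = Dist [ (n, 1/6) | n <- [1..6] ]

-- "Roll a die, flip a coin that many times, count the heads": the number of
-- flips depends on the die roll, so lifted pure functions (Applicative) are
-- not enough and we really use the Monad interface.
headsAfterDie :: Dist Int
headsAfterDie = do
  n     <- die
  flips <- replicateM n (coin 0.5)
  return (sum flips)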
> I think a good question as a starting point is whether it's possible
> to do this 'monadic instance transformation' for any typeclass, and
> whether or not we were lucky to have been able to instance Num so
> easily (as Num, Fractional can just be seen as algebras over some base
> type plus a coercion function, making them unusually easy to lift if
> most typeclasses actually don't fit this description).
Part of the reason Num was so easy is that all the functions produce
values whose type is the class parameter. Your Num instance could
almost be completely generic for any ((Applicative f, Num a) => f a),
except that Num demands instances of Eq and Show, neither of which can
be blindly lifted the way the numeric operations can.
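(For concreteness, here's roughly what that generic instance looks like, wrapped in a newtype so the instance stays well-behaved without FlexibleInstances or overlap -- a sketch, not a recommendation. On compilers where Num still carries Eq and Show superclasses you would additionally need instances for those, which is precisely the catch described above; on later GHCs those superclasses are gone and this compiles as written.)

import Control.Applicative

newtype Lifted f a = Lifted { unLifted :: f a }

instance (Applicative f, Num a) => Num (Lifted f a) where
  Lifted x + Lifted y = Lifted ((+) <$> x <*> y)
  Lifted x - Lifted y = Lifted ((-) <$> x <*> y)
  Lifted x * Lifted y = Lifted ((*) <$> x <*> y)
  negate (Lifted x)   = Lifted (negate <$> x)
  abs    (Lifted x)   = Lifted (abs    <$> x)
  signum (Lifted x)   = Lifted (signum <$> x)
  fromInteger         = Lifted . pure . fromInteger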
I imagine it should be fairly obvious why you can't write a
non-trivial generic instance (Show a) => Show (M a) that would work
for any possible monad M--you'd need a function (show :: M a ->
String) which is impossible for abstract types like IO, as well as
function types like the State monad. The same applies to (==), of
course. Trivial instances are always possible, e.g. show _ = "[not
showable]", but then you don't get sensible behavior when a
non-trivial instance does exist, such as for Maybe or [].
> Note that if we consider this in a 'monadification' context, where we
> are making some choice for each lifted function, treating it as
> entering, exiting, or computing in the monad, instancing the typeclass
> leads to very few choices for each: the monadic versions of +, -, *
> must be obtained with "liftM2",the monadic versions of negate, abs,
> signum must be obtained with "liftM", and the monadic version of
> fromInteger must be obtained with "return . "
Again, this is pretty much the motivation and purpose of
Control.Applicative. Depending on how you want to look at it, the
underlying concept is either lifting multi-argument functions into the
functor step by step, or lifting tuples into the functor, e.g. (f a, f
b) -> f (a, b); the equivalence is recovered using fmap with either
(curry id) or (uncurry id).
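(A two-line sketch of that equivalence, since it is easy to state in code; nothing here beyond what the paragraph above already says.)

import Control.Applicative

-- lifting tuples into the functor, built from (<*>):
pairA :: Applicative f => f a -> f b -> f (a, b)
pairA fa fb = (,) <$> fa <*> fb

-- and back again: application recovered from pairing via fmap,
-- using uncurry id = \(g, x) -> g x
apFromPair :: Applicative f => f (a -> b) -> f a -> f b
apFromPair ff fa = fmap (uncurry id) (pairA ff fa)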
Note that things do get more complicated if you have to deal with the
full monadic structure, but since you're lifting functions that have
no knowledge of the functor whatsoever they pretty much have to be
independent of it.
> I suppose I'm basically suggesting that the 'next step' is to somehow
> do this calculation of types on real type values, and use an inductive
> programming tool like Djinn to realize the type signatures. I think
> the general programming technique this is getting at is an orthogonal
> version of LISP style where one goes back and forth between types and
> functions, rather than data and code. I would also appreciate any
> pointers to works in that area.
Well, I don't think there's any good way to do this in Haskell
directly, in general. There's a GHC extension that can automatically
derive Functor for many types, but nothing to automatically derive
Applicative as far as I know (other than in trivial cases with newtype
deriving)--I suspect due to Applicative instances being far less often
uniquely determined than for Functor. And while a fully generic
instance can be written and used for any Applicative and Num, the
impossibility of sensible instances for Show and Eq, combined with the
context-blind nature of Haskell's instance resolution, means that it
can't be written directly in full generality. It would, however, be
fairly trivial to manufacture instance declarations for specific types
using some sort of preprocessor, assuming Show/Eq instances have been
written manually or by creating trivial ones.
Anyway, you may want to read the paper that introduced Applicative,
since that seems to describe the subset of generic lifted functions
you're after: http://www.soi.city.ac.uk/~ross/papers/Applicative.html
If for some reason you'd rather continue listening to me talk about
it, I wrote an extended ode to Applicative on Stack Overflow some time
back that was apparently well-received:
- C.
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2010-November/086456.html","timestamp":"2014-04-18T13:43:44Z","content_type":null,"content_length":"9191","record_id":"<urn:uuid:e0111a9b-90a3-4994-9223-f77bcfd67b00>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00309-ip-10-147-4-33.ec2.internal.warc.gz"} |
degenerating immersion
Hi, I would like to know whether there exists a sequence of $C^2$ immersions $f_k: S^2 \rightarrow \mathbb{R}^3$ which converges (in $C^2$) to $z^2$ except on a finite set of points, i.e. $f_k \rightarrow z^2$ in
$C^2_{loc}(S^2\setminus \{ a_1, \dots , a_n \})$.
Here $S^2$ is identified to $\hat{\mathbb{C}}$ the Riemann sphere, hence $z^2: \hat{\mathbb{C}} \rightarrow \hat{\mathbb{C}} \sim S^2 \subset \mathbb{R}^3$ makes sense. In fact my question is about
$P/Q$ where $P$ and $Q$ are two element of $\mathbb{C}[z]$, but we can start with $z^2$ in order to make it more clear.
It looks very hard topologically. For instance if I assume "embedded" instead of "immersed" it is not very difficult to prove that such a sequence doesn't exist. But I can't show more, especially,
would like to know if
1) does it exist, assuming we have a sequence of immersions from a ball to $\mathbb{R}^3$ which satisfies the same hypothesis on the boundary, or if the curvature is bounded from above?
2) how to produce such a sequence in the general case? In fact, looking at a proof, it seems to look like something like the sphere eversion: no topological obstruction but no way to see the effective
I hope to be clear, Thanks in advance for your contribution.
topology dg.differential-geometry
1 I agree with jc's question. Perhaps your maps are immersions into ${\mathbb R}^4$ and you are projecting. In that case the projection of $z\mapsto z^2$ has two branch points --- one at 0 and the
other at infinity. There is certainly a sequence of immersions that will do this. You can start from a figure 8 in 3-space and change the crossings for example. But you are interested in curvature
properties, so I don't really understand the question. – Scott Carter Nov 15 '11 at 0:45
Sorry, i mean $z^2 : \hat{\mathbb{C}} \rightarrow \hat{\mathbb{C}} \sim S^2 \subset \mathbb{R}^3$. – Paul Nov 15 '11 at 9:07
I guess it's very interesting, but not quite clear as it is. What "looks very hard topologically" ? What is "possible" in question 1? A counterexample to what, in question 2? And, what's the
regularity of your maps? – Pietro Majer Nov 15 '11 at 14:11
I have edited my post in order to make it more clear. – Paul Nov 15 '11 at 14:22
thank you ! – Pietro Majer Nov 15 '11 at 15:01
4 Answers
There is no such sequence.
For an immersion $f_k\colon \mathbb S^2 \rightarrow \mathbb{R}^3$ (after a small perturbation) the set of self-intersections is formed by some number of closed curves $\gamma_1,\gamma_2,\dots,\gamma_n$ in $\mathbb R^3$. So any plane which intersects all $\gamma_i$ transversally has to intersect them at an even number of points.
On the other hand the equator plane, say $\Pi$ (or its small perturbation), has to intersect it an odd number of times. Indeed, the curve in $f_k^{-1}(\Pi)$ is close to the equator of $\mathbb S^2$; the turning number of its image in $\Pi$ is $2$; so it has an odd number of self-intersections. (This works if $f_k$ is $C^1$-close to $z^2$ near $\Pi$, which is easy to arrange.)
Could you precise your answer, i don't understand what your last argument? is it work with any polynomial $P/Q$ of $\hat{\mathbb{C}}$. – Paul Nov 15 '11 at 9:05
I add one sentence, see also the answer of Sergey Melikhov. – Anton Petrunin Nov 15 '11 at 20:54
To clarify: my current answer is only a little elaboration on Anton's. I first thought that there's a serious gap in Anton's argument, and tried to fill it by a nontrivial argument
which turned out to be wrong. Now I see that there was no real gap after all. – Sergey Melikhov Nov 15 '11 at 22:00
The idea of Anton Petrunin can be made into an accurate proof. One does not need $C^2$ convergence, $C^1$ convergence is enough. That is, I claim that there is no $C^1$ immersion
sufficiently $C^1$-close to the composition $\phi:S^2\xrightarrow{z^2}S^2\subset\Bbb R^3$. (By the way, any map $S^2\to\Bbb R^3$ is $C^0$-close to a $C^\infty$ immersion, according to the $C
^0$-dense $h$-principle and using that $S^2$ immerses in $\Bbb R^3$.)
Let $f:S^2\to\Bbb R^3$ be a self-transverse map (not necessarily an immersion) that is $C^1$-close to $\phi$. The image of $f$ lies in a tubular neighborhood $S^2\times\Bbb R$ of the image
of $\phi$. Consider the composition $\psi:S^2\xrightarrow{f}S^2\times\Bbb R\xrightarrow{\text{projection}}S^2$. It is $C^1$-close to $\phi$, so it is equivalent to $\phi$ by a change of
coordinates outside a small neighborhood of the poles (which are the singular points of $\phi$).
So we may assume that, outside of a small neighborhood of the poles, $f$ is a vertical lift of $\phi$ (with respect to the projection $S^2\times\Bbb R\to S^2$). Then, in particular, $f$
sends the equator of $S^2$ into the plane $\Pi$ in $\Bbb R^3$ that contains the equator of $S^2$. This equatorial map is a $C^1$-approximation to the composition $S^1\xrightarrow{\text
{double covering}}S^1\subset\Pi$, so it is an immersion and has an odd number of double points. But then the double point set of $f$ cannot be a union of closed curves. So $f$ cannot be an immersion.
Thank you, it makes the argument more clear. In fact it looks specific to $z^2$, if i have have well understood it won't works for $z^3$ for instance, because in fact i was looking for an
answer for any $P/Q$ where $P$ and $Q$ are two element of $\mathbb{C}[z]$. I will edit my post in this sense. – Paul Nov 16 '11 at 10:01
The same argument works for any branched cover $f$ between surfaces that has at least one branch point $f(z)$ of even index. That is, the composition $M\xrightarrow{f}N\subset\Bbb R^3$ is
not $C^1$-close to an immersion. To see this, take a small closed curve $S$ in $N$ going around $f(z)$, and then apply the above argument with $S$ in place of the equator (with precision
still smaller than the distance from $S$ to $f(z)$). If all branch points of $f$ have odd indexes, I believe $f$ is $C^1$-close (and hence also $C^\infty$-close) to an immersion. I'll
consider $f=z^3$ in a separate answer. – Sergey Melikhov Nov 16 '11 at 13:00
New answer to the generalized question. It's shown in previous answers that for $z^2$, and some other branched coverings, there are no immersions that are $C^1$-close except at the branch
points. (I believe this should also imply that there are no immersions that are $C^1$-close except on a finite set.)
But $z^3:S^2\to S^2$ is arbitrarily $C^\infty$-close, except at the two branch points, to a $C^\infty$ immersion in $\Bbb R^3$. (Also, any $C^\infty$ map $S^2\to S^2$ that is equivalent to $z
^3$ by a $C^0$ change of coordinates is $C^\infty$-close on the entire $S^2$ to an immersion in $\Bbb R^3$). To see this, pick a generic lift $f:S^1\to S^1\times\Bbb R$ of the $3$-fold covering $S^1\to S^1$. It suffices to show that the composition $f':S^1\xrightarrow{f} S^1\times\Bbb R\subset S^2$ bounds an immersion of a $2$-disk in a $3$-ball. Equivalently, we want to find a regular homotopy from $f'$ to an embedding. But it is an exercise that there are only two regular homotopy classes of immersions $S^1\to S^2$, distinguished by the parity of the
number of double points (in the case of self-transverse immersions).
Ok you have a disc whose boundary is $z^3$ and hence you can be $C^\infty$ closed to $z^3$ on $S^2\setminus \{ S,N\})$ which answer to 2) but can you extend your immersion of $S^2$ to an
immersion of $B^3$ OR is your sequence of approximation of $z^3$ get it Gaussian curvature bounded from above, i.e. the blow-up are given by necks and there is no pinching, this will
answer to 1). – Paul Nov 16 '11 at 14:10
Paul, you're right, on $S^2\setminus\{S,N\}$. I don't think I fully understand what exactly 1) and 2) ask for. – Sergey Melikhov Nov 16 '11 at 14:43
Sergey, 1) and 2) are my initial questions in the first post; I can rephrase them as follows: thanks to your last answer, we know that there exists a sequence of immersions $f_k : S^2 \rightarrow \mathbb{R}^3$ which converges in $C_{loc}^2(S^2\setminus\{S,N\})$ to $z^3$. My question is: is it still true if we assume one of the following additional properties: i) $f_k$ is the restriction
of an immersion of $B^3$. ii)the Gaussian curvature of $f_k(S^2)$ is bounded from above. Of course $z^3$ is example but i look for an answer for any branched covering of the sphere of the
form $P/Q$. – Paul Nov 16 '11 at 14:53
OK, this makes it clear enough. I have no idea about (ii), and as to (i) it seems not so easy in general (should be doable for one specific map such as $z^3$). Note that every immersion $S
^2\to\Bbb R^3$ is regular homotopic to an embedding, and so bounds an immersed $3$-ball in $\Bbb R^3\times [0,\infty)$. There is some theory on which immersed curves in the plane bound
immersed surfaces in that plane, see for instance ams.org/journals/tran/1974-187-00/S0002-9947-1974-0341505-0, projecteuclid.org/euclid.ijm/1256049897, projecteuclid.org/euclid.hmj/
1150922487. – Sergey Melikhov Nov 16 '11 at 15:30
The answer is no. Two 2-dim smooth immersed in $\mathbb R^3$ objects generically intersect by line, so if intersection is a point then it can be eliminated. But it is clear that near $z^
2$ there are no embeddings.
Therefore what do you want it is a immersions with self-intersections as a small circles and these circles collapse to points when $k\to\infty$. But if a selfintersection is a small
circle, it can be eliminated too. Large circles in selfintersection can't disappear in limit.
added. Sorry, this answer is about absolutely different problem.
"Large circles in selfintersection can't disappear in limit." This is of course not true. For instance consider a generic immersion $f$ approximating the composition $\phi:S^1\times S^
1\xrightarrow{2\times 1}S^1\times S^1\subset\Bbb R^3$. Such an $f$ ought to have large self-intersection circles (even though $\phi$ doesn't). – Sergey Melikhov Nov 15 '11 at 20:04
(I guess it depends on your linguistic conventions whether $\phi$ in the above comment is said to have "large self-intersection circles", because its self-intersection is a
$2$-manifold; what I wanted to say is that whatever you call it, it's just like for the map in question, $S^2\xrightarrow{z^2}S^2\subset\Bbb R^3$.) – Sergey Melikhov Nov 15 '11 at
I mean they can't disappear for required type of degeneration (so, required limit should have only finite number of points in intersection), of course. Let's consider preimages of
large circles. Some subsequence of them has a limit. It means that the limit of immersions has infinite number of points in intersection. – Nikita Kalinin Nov 15 '11 at 23:30
Sorry, I don't understand. The composition $S^2\xrightarrow{z^2}S^2\subset\Bbb R^3$ has infinitely many intersection points, in fact every point except for north and south poles has
the same image as some other point. – Sergey Melikhov Nov 16 '11 at 0:48
1 aa. I see, It's my night misunderstanding. Sorry. – Nikita Kalinin Nov 16 '11 at 9:00
Dedekind or Klein ?
Posted by lieven on Tuesday, 22 April 2008
Dedekind tessellation in this post, following the reference given by John Stillwell in his excellent paper Modular Miracles, The American Mathematical Monthly, 108 (2001) 70-76.
But is this correct terminology? Nobody else uses it apparently. So, let's try to track down the earliest depiction of this tessellation in the literature...
Richard Dedekind's 1877 paper "Schreiben an Herrn Borchard uber die Theorie der elliptische Modulfunktionen", which appeared beginning of september 1877 in Crelle's journal (Journal fur die reine und
angewandte Mathematik, Bd. 83, 265-292).
There are a few odd things about this paper. To start, it really is the transcript of a (lengthy) letter to Herrn Borchardt (at first, I misread the recipient as Herrn Borcherds which would be really
weird...), written on June 12th 1877, just 2 and a half months before it appeared... Even today in the age of camera-ready-copy it would probably take longer.
There isn't a single figure in the paper, but, it is almost impossible to follow Dedekind's arguments without having a mental image of the tessellation. He gives a fundamental domain for the action
of the modular group $\Gamma = PSL_2(\mathbb{Z}) $ on the hyperbolic upper-half plane (a fact already known to Gauss) and goes on in section 3 to give a one-to-one mapping between this domain and the
complex plane using what he calls the 'valenz' function $v $ (which is our modular function $j $, making an appearance in moonshine, and responsible for the black&white tessellation, the two colours
corresponding to pre-images of the upper or lower half-planes).
Then there is this remarkable opening sentence.
Sie haben mich aufgefordert, eine etwas ausfuhrlichere Darstellung der Untersuchungen auszuarbeiten, von welchen ich, durch das Erscheinen der Abhandlung von Fuchs veranlasst, mir neulich erlaubt
habe Ihnen eine kurze Ubersicht mitzuteilen; indem ich Ihrer Einladung hiermit Folge leiste, beschranke ich mich im wesentlichen auf den Teil dieser Untersuchungen, welcher mit der eben genannten
Abhandlung zusammenhangt, und ich bitte Sie auch, die Ubergehung einiger Nebenpunkte entschuldigen zu wollen, da es mir im Augenblick an Zeit fehlt, alle Einzelheiten auszufuhren.
Well, just try to get a paper (let alone a letter) accepted by Crelle's Journal with an opening line like : "I'll restrict to just a few of the things I know, and even then, I cannot be bothered to
fill in details as I don't have the time to do so right now!" But somehow, Dedekind got away with it.
So, who was this guy Borchardt? How could this paper be published so swiftly? And, what might explain this extreme 'je m'en fous'-opening ?
Carl Borchardt was a Berlin mathematician whose main claim to fame seems to be that he succeeded Crelle in 1856 as main editor of the 'Journal fur reine und...' until 1880 (so in 1877 he was still in
charge, explaining the swift publication). It seems that during this time the 'Journal' was often referred to as "Borchardt's Journal" or in France as "Journal de M Borchardt". After Borchardt's
death, the Journal für die Reine und Angewandte Mathematik again became known as Crelle's Journal.
As to the opening sentence, I have a toy-theory of what was going on. In 1877 a bitter dispute was raging between Kronecker (an editor for the Journal and an important one as he was the one
succeeding Borchardt when he died in 1880) and Cantor. Cantor had published most of his papers at Crelle and submitted his latest find : there is a one-to-one correspondence between points in the
unit interval [0,1] and points of d-dimensional space! Kronecker did everything in his power to stop that paper to the extend that Cantor wanted to retract it and submit it elsewhere. Dedekind
supported Cantor and convinced him not to retract the paper and used his influence to have the paper published in Crelle in 1878. Cantor greatly resented Kronecker's opposition to his work and never
submitted any further papers to Crelle's Journal.
Clearly, Borchardt was involved in the dispute and it is plausible that he 'invited' Dedekind to submit a paper on his old results in the process. As a further peace offering, Dedekind included a few
'nice' words for Kronecker
Bei meiner Versuchen, tiefer in diese mir unentbehrliche Theorie einzudringen und mir einen einfachen Weg zu den ausgezeichnet schonen Resultaten von Kronecker zu bahnen, die leider noch immer so
schwer zuganglich sind, enkannte ich sogleich...
Probably, Dedekind was referring to Kronecker's relation between class groups of quadratic imaginary fields and the j-function, see the miracle of 163. As an added bonus, Dedekind was elected to the
Berlin academy in 1880...
Anyhow, no visible sign of 'Dedekind's' tessellation in the 1877 Dedekind paper, so, we have to look further. I'm fairly certain to have found the earliest depiction of the black&white tessellation
(if you have better info, please drop a line). Here it is
It is figure 7 in Felix Klein's paper "Uber die Transformation der elliptischen Funktionen und die Auflosung der Gleichungen funften Grades" which appeared in may 1878 in the Mathematische Annalen
(Bd. 14 1878/79). He even adds the j-values which make it clear why black triangles should be oriented counter-clockwise and white triangles clockwise. If Klein would still be around today, I'm
certain he'd be a metapost-guru.
So, perhaps the tessellation should be called Klein's tessellation??
Well, not quite. Here's what Klein writes wrt. figure 7
Diese Figur nun - welche die eigentliche Grundlage fur das Nachfolgende abgibt - ist eben diejenige, von der Dedekind bei seiner Darstellung ausgeht. Er kommt zu ihr durch rein arithmetische
Case closed : Klein clearly acknowledges that Dedekind did have this picture in mind when writing his 1877 paper!
But then, there are a few odd things about Klein's paper too, and, I do have a toy-theory about this as well... (tbc) | {"url":"http://www.neverendingbooks.org/index.php/dedekind-or-klein.html","timestamp":"2014-04-17T12:54:36Z","content_type":null,"content_length":"18338","record_id":"<urn:uuid:f77bd522-ce28-4f4a-9470-c464a8c8a445>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistical Significance – The Simple One
Joanne Nova recently branched out of her comfort zone (spreading doubt) to try her hand at statistics claiming that surface temperature data since 2005 shows statistically significant cooling in 4 of
5 datasets.
Her words …
The cooling for the last eight years is statistically significant in 4 of the 5 major air temperature datasets.
Statistical significance is very rarely found on such short timeframes, so how did Nova manage to find it? Simple! She invented her own concept of statistical significance; “The easy one” as she
put it.
When shown how much variability existed, and how statistical significance methods showed large variability Nova replied …
So you can come up with a different more complex method but don’t recognise the simple one? How odd. You have referred to one in a paper by Foster and Rahmstorf that is complicated. I used a
simple, reasonable, rough and ready method and drew conclusions that fitted (see all the caveats). It is not sophisticated but it is still valid.
Nova has posted about statistical significance before and insists “no half-decent scientist would claim that the world was warming knowing that it was statistically insignificant.“
The Simple One?
Nova goes on to explain her unique SS method.
Is the trend greater than the errors in the individual measurements (about 0.05C per data point here)? If the trend is less than the errors of individual instruments it is not really credible.
With your preferred measure it looks like a trend could, under some circumstances, be statistically “significant” even though it is less than the individual measurement error. Hmm.
Oh dear. Nova is wrong, for so many reasons; I suspect her misunderstanding stems from mistakenly confusing the term “significant trend” with “a large trend”. When talking about statistical
significance we’re after a confidence that the trend is not just by accident, but rather that it represents the true trend. In “Nova world”, she believes the gradient of the trend must be large in
order for it to be called significant.
Error 1
Trends are more affected by natural variation, such as ENSO, than by instrumental error (let's assume that Nova's 0.05°C instrumental error is correct). Each month global surface temps can jump by as
much as 0.4°C. This is the natural monthly variation caused as heat transfers into and out of the oceans, distorting the real trend. The month to month variations are NOT caused by instrument errors.
So any method for comparing the trend against the noise (for statistical significance purposes) needs to evaluate against the natural variability (noise). The noise is inherently in the data, not
some “instrument error” evaluated externally.
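For reference, the standard test — in simplified form, ignoring the autocorrelation of monthly data that Foster and Rahmstorf handle properly — goes like this. Fit an ordinary least-squares trend $\hat{b}$ through the monthly values $T_i$ at times $t_i$, take the residuals $e_i$ about that fit, and compute the standard error of the slope,

$\mathrm{SE}(\hat{b}) = \sqrt{ \dfrac{\sum_i e_i^2/(n-2)}{\sum_i (t_i - \bar{t})^2} }.$

The trend is conventionally called statistically significant at the 95% level only when $|\hat{b}| > t_{0.975,\,n-2} \cdot \mathrm{SE}(\hat{b})$ (roughly $2\,\mathrm{SE}$ for long records). Everything in that test is set by the scatter of the data about the fit and by the length of the record; the instrument error never enters it directly.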
Error 2
By Nova's definition a neutral trend of less than 0.05°C, neither warming nor cooling, would never be statistically significant. If the data were perfectly flat, Nova would say the trend was statistically insignificant because it neither went up nor down. Sheesh!
Error 3
Nova’s significance test depends on what scale is used for the trend, in her case, °C/decade. Does this mean if we stated the trend as °C/year it would suddenly become invalid? How on earth did Nova
arbitrarily decide that °C/decade can be compared to instrument error?
Error 4
With Nova’s SS test, we find almost any timeframe we pick passes the Nova SS test and that it passes more easily with less data. According to Nova logic, the following trends are ALL “Nova
statistically significant” because the magnitude of the trend is greater than 0.05.
Example 1 – Since 2008, warming with “Nova statistical significance” because the trend is 0.25°C/decade.
Example 2 – Since 2010, the planet is cooling with “Nova statistical significance” because the trend is -1.03°C/decade.
Example 3 – Since 2012, warming with “Nova statistical significance” because the trend is 2.45°C/decade!!
Example 4 – Since 2013, plunging into an ice age with “Nova statistical significance” because the trend is -29.76°C/decade!!
Using Nova’s SS test, we can be 95% sure that we are plunging into an ice age this year based on Jan and Feb’s global surface values. How ridiculous! Not only that, the examples show that with less
data, the magnitude of the “Nova significance” has increased, which is opposite to statistical theory.
Statistical significance in surface temperature trends usually takes a decade or more to achieve simply because of the large monthly and yearly variations. These variations occur naturally as heat is
transported into and out of the oceans, not because of instrumental inaccuracy. More data gives greater statistical confidence, not less. Cherry picking 2005 and announcing that it is cooling with
statistical significance is as stupid as cherry picking 2008 and claiming it is warming with statistical significance – something no climate scientist has ever done.
But that’s Nova logic for you. A frightening example of someone who overestimates their own ability.
Tags: Cherry Pick, Joanne Nova
Vince Whirlwind Says:
March 26, 2013 at 11:47 pm | Reply
I too thought that quite hilarious when she tried to deny she’d misused the term “significant” by trying to imply it had alternative uses.
So in addition to over-estimating her ability, she is in a constant state of denial over the many errors she makes.
FIN Says:
April 4, 2013 at 9:11 pm | Reply
Great stuff, I’ve only just recently stumbled across this site. For some time now I’ve been grinding my teeth in frustration at the egregious bullshit “Cherry” Nova regurgitates at her site. Thank
you for putting the record straight, much appreciated, well done.
MMM Says:
April 8, 2013 at 8:48 pm | Reply
Error 2 isn’t quite right… “statistically significant”, as usually used, refers to a statistically significant difference from a null trend. Therefore, a perfectly flat trend would, in fact, never be
statistically significant.
Now, on the other hand, a trend of, say, 0.04 degree/decade could be statistically significant, despite a measurement error of 0.05 degrees, as long as the trend is long enough in comparison to the
• Mark F Says:
April 22, 2013 at 10:19 pm | Reply
What if the “null trend” is the linear trend of the past 30 years of temps, then a perfectly flat line is the deviation from that? | {"url":"http://itsnotnova.wordpress.com/2013/03/15/statistical-significance-the-simple-one/","timestamp":"2014-04-18T21:16:46Z","content_type":null,"content_length":"57195","record_id":"<urn:uuid:cf1e5607-b72d-4aff-a2e8-2a8317a393f4>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00181-ip-10-147-4-33.ec2.internal.warc.gz"} |
<philosophy of science, logic> an informal method for solving problems in the absence of an algorithm for formal proof. Heuristics typically have only restricted applicability and limited likelihood
of success but, as George Polya showed, contribute significantly to our understanding of mathematical truths. Recommended Reading: George Polya, How to Solve It (Princeton, 1971); Gerd Gigerenzer and
Peter M. Todd, Simple Heuristics That Make Us Smart (Oxford, 1999); and George Polya, Mathematics and Plausible Reasoning (Princeton, 1990).
[A Dictionary of Philosophical Terms and Names]
1. <PI> A rule of thumb, simplification or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. Unlike algorithms, heuristics do not
guarantee optimal, or even feasible, solutions and are often used with no theoretical guarantee.
[What is a "feasible solution"?]
2. <algorithm> approximation algorithm.
electric susceptibility
The electric susceptibility $\chi_e$ of a dielectric material is a measure of how easily it polarizes in response to an electric field. This, in turn, determines the electric permittivity of the material and thus influences many other phenomena in that medium, from the capacitance of capacitors to the speed of light.
It is defined as the constant of proportionality (which may be a tensor) relating an electric field E to the induced dielectric polarization density P such that
$\mathbf{P} = \varepsilon_0 \chi_e \mathbf{E},$
where $\varepsilon_0$ is the electric permittivity of free space.
The susceptibility of a medium is related to its relative permittivity $\varepsilon_r$ by
$\chi_e = \varepsilon_r - 1.$
So in the case of a vacuum,
$\chi_e = 0.$
The electric displacement D is related to the polarization density P by
$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} = \varepsilon_0 (1+\chi_e) \mathbf{E} = \varepsilon_r \varepsilon_0 \mathbf{E}.$
Dispersion and causality
In general, a material cannot polarize instantaneously in response to an applied field, and so the more general formulation as a function of time is
$\mathbf{P}(t) = \varepsilon_0 \int_{-\infty}^{t} \chi_e(t-t')\, \mathbf{E}(t')\, dt'.$
That is, the polarization is a convolution of the electric field at previous times with the time-dependent susceptibility given by $\chi_e(\Delta t)$. The upper limit of this integral can be extended to infinity as well if one defines $\chi_e(\Delta t) = 0$ for $\Delta t < 0$. An instantaneous response corresponds to a Dirac delta function susceptibility $\chi_e(\Delta t) = \chi_e \delta(\Delta t)$.
It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency. Due to the convolution theorem, the integral becomes a simple product,
$\mathbf{P}(\omega) = \varepsilon_0\, \chi_e(\omega)\, \mathbf{E}(\omega).$
This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of
the material.
Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e. $\chi_e(\Delta t) = 0$ for $\Delta t < 0$), a consequence of causality, imposes Kramers–Kronig constraints on the susceptibility $\chi_e(\omega)$.
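For reference (the entry above mentions but does not state them), the Kramers–Kronig relations tie the real and imaginary parts of $\chi_e(\omega) = \chi_e'(\omega) + i\chi_e''(\omega)$ together; in one standard form,

$\chi_e'(\omega) = \frac{1}{\pi}\, \mathcal{P} \int_{-\infty}^{\infty} \frac{\chi_e''(\omega')}{\omega' - \omega}\, d\omega',$

where $\mathcal{P}$ denotes the Cauchy principal value, and a companion relation gives $\chi_e''$ in terms of $\chi_e'$.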
There are an infinite number of regular polyhedra??
July 27th 2009, 02:47 AM
There are an infinite number of regular polyhedra??
Iam new to this forum so hope u peolpe can help. I am in my first year of a 4 yr university course training to be a primary/elementary school teacher.
I have beeen given the statement.. 'there are an infinite number of regular polyhedra'
The statment is said by an 8 yr old child. My task is to identify what is wrong with the statement, and how i would explain to the child the correct statement/ what i could show to the child to
make them realise this statement is wrong, ansd the correct course of action to put them on th e right track.
I have been told by my maths teacher the statement is incorrect, but that is all the information i have been given, i dont even know what a rugular polyhedra is!!!
Hope sumone on this forum can help???
July 27th 2009, 03:51 AM
Well, what exactly does that mean? Certainly, you can have a cube with side length n inches for any integer n so there are an infinite number of cubes alone!
A "regular polyheron", also called a "Platonic solid", is a solid figure with all faces the same polygon, all edges the same length, all angles the same.
If you mean "an infinite number of different platonic solids", that is, different numbers of sides, etc., then, far from being an infinite number of them, there are only five of them:
Tetrahedron: Four faces each being an equilateral triangle. Four faces, six edges, four vertices.
Hexahedron (cube): Six faces each being a square. Six faces, twelve edges, eight vertices.
Octahedron: Eight faces, each being an equilateral triangle. Eight faces, twelve edges, four vertices.
Dodecahedron: Twelve faces, each being a regular pentagon. Twelve faces, thirty edges, twenty vertices.
Icosahedron: twenty faces, each being an equilateral triangle. Twenty faces, thirty edges, twenty vertices.
Notice that all of these satisfy "Euler's formula": faces - edges + vertices = 2.
Also they come in pairs or "duals", swapping number of faces with number of vertices: If you were to mark the center point of each face and then connect those points, the result would be the
"dual" polyhedron. The hexahedron is dual to the octahedron, the dodecahedron is dual to the icosahedron and the tetrahedron is dual to itself.
You can see pictures of them here:
Platonic Solid -- from Wolfram MathWorld
July 27th 2009, 04:01 AM
Consider a vertex of a regular polyhedron. There will be a meeting of internal angles of some regular polygons. What number can the angle at this vertex not exceed? When you think about this you
should soon see that there can only be a finite number of regular polyhedra.
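In symbols: if $q$ regular $p$-gons meet at each vertex of such a solid, the face angles at that vertex must sum to less than $360^\circ$, so
$$q\left(180^\circ - \frac{360^\circ}{p}\right) < 360^\circ \quad\Longleftrightarrow\quad (p-2)(q-2) < 4,$$
and with $p, q \ge 3$ the only integer solutions are $(p,q) = (3,3), (4,3), (3,4), (5,3), (3,5)$, i.e. exactly the five Platonic solids listed above.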
July 27th 2009, 05:21 AM
A couple of suggestions:
1: drop the chat board lingo (u for you, sum for some ...); you're in university
2: start using Google for questions like you just posted:
regular polyhedra - Google Search
Re: matsnf segfaults
Alexander Shumakovitch on Thu, 29 May 2003 23:41:37 +0200
On Thu, May 29, 2003 at 11:06:41PM +0200, Karim BELABAS wrote:
> Having fiddled some more with the routine, I committed another patch to
> matsnf to allow arbitrary rectangular matrices [ for integer matrices only ],
> as a preliminary cleanup for the introduction a modular algorithm.
> When the matrix is singular, modular HNF reductions are done to reduce to
> that case. This already reduces your example to a trivial form (dealt with
> in half a second).
> It's quite straightforward to make the resulting algorithm modular
> [ provided one doesn't ask for transformation matrices ], but I'd like to
> know first whether the above works as expected !
Yes it does. Partially ;-) SNF works perfectly now and going through HNF
takes about half as much time as before. The reason could be that, since I'm
interested in the torsion only, I remove all columns on the right side
of the matrix brought to HNF that have pivot 1.
I used to pad the matrix with zeros to make it square. Now if I remove
this padding, matsnf doesn't complain anymore and produces the correct
result 10% faster than before. But if I go through HNF again, Pari
immediately complains about low memory (200MB stack not enough!). The
padding is obviously done _after_ mathnf is completed, so it's not to blame.
I can send you the original matrix, if you like to test it.
Thanks for your quick response!
--- Alexander. | {"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-0305/msg00057.html","timestamp":"2014-04-18T00:24:23Z","content_type":null,"content_length":"5387","record_id":"<urn:uuid:0f04b40f-957f-436e-8158-61d853ab03c5>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00548-ip-10-147-4-33.ec2.internal.warc.gz"} |
In the figure, the horizontal lines are parallel and AB = BC = CD. Find the measure of JM. The diagram is not drawn to scale. A 14 B 21 C 28 D 7
you essentially use the side splitter theorem ( http://www.cliffsnotes.com/study_guide/Proportional-Parts-of-Triangles.topicArticleId-18851,articleId-18813.html ), which allows you to say: "If a line is parallel to one side of a triangle and intersects the other two sides, it divides those sides proportionally." What this means is that you can effectively say that
AD/AB = JM/LM
(4AB)/AB = JM/7
4 = JM/7
4*7 = JM
JM = 28
So JM is 28 units
oh wait, it should be
AD/AB = JM/LM
(3AB)/AB = JM/7
3 = JM/7
3*7 = JM
JM = 21
So JM is 21 units
[C++-sig] function with >15 args yields get_signature error
troy d. straszheim troy at resophonic.com
Mon Oct 26 18:08:42 CET 2009
Eilif Mueller wrote:
> Hi,
> Wrapping a function f with 16 arguments:
> int f(int x1, int x2, int x3, int x4, int x5,
> int x6, int x7, int x8, int x9, int x10,
> int x11, int x12, int x13, int x14, int x15,
> int x16) {
> return x1;
> }
> BOOST_PYTHON_MODULE(test)
> {
> def("f",f);
> }
> yields
> /usr/include/boost/python/make_function.hpp: In function ‘boost::python::api::object boost::python::make_function(F) [with F = int (*)(int, int, int, int, int, int, int, int, int, int, int, int, int, int, int, int)]’:
> /usr/include/boost/python/def.hpp:82: instantiated from ‘boost::python::api::object boost::python::detail::make_function1(T, ...) [with T = int (*)(int, int, int, int, int, int, int, int, int, int, int, int, int, int, int, int)]’
> /usr/include/boost/python/def.hpp:91: instantiated from ‘void boost::python::def(const char*, Fn) [with Fn = int (*)(int, int, int, int, int, int, int, int, int, int, int, int, int, int, int, int)]’
> test_module.cpp:20: instantiated from here
> /usr/include/boost/python/make_function.hpp:104: error: no matching function for call to ‘get_signature(int (*&)(int, int, int, int, int, int, int, int, int, int, int, int, int, int, int, int))’
> make: *** [stream.o] Error 1
> whereas all is fine if that last arg "int x16" is removed. All of gcc-4.2, gcc-4.3 and gcc-4.4 seem to exhibit the same behaviour. Same effect for libboost-python1.38-dev on Ubuntu karmic and libboost-python1.35-dev on Ubuntu jaunty.
> I need that 16th arg and more ... about 32 args, I think.
> Thanks for any help you can offer.
This came up recently. You're going to have various problems (not all
boost.python problems) getting arity that large. The workaround is to
introduce an intermediate function that takes a boost::python::tuple
and forward to the zillion-arguments one, example here:
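For illustration, the tuple-forwarding pattern looks roughly like the sketch below (this is not the example linked above, just an outline of the idea):

#include <boost/python.hpp>
namespace bp = boost::python;

// The original 16-argument function we want to expose.
int f(int x1, int x2, int x3, int x4, int x5, int x6, int x7, int x8,
      int x9, int x10, int x11, int x12, int x13, int x14, int x15, int x16)
{
    return x1;
}

// Thin wrapper: takes a single Python tuple and forwards to f().
// Boost.Python's get_signature machinery only ever sees a 1-argument
// callable here, so the arity limit is never hit.  (A length check on
// the tuple is omitted for brevity.)
int f_from_tuple(bp::tuple args)
{
    int a[16];
    for (int i = 0; i < 16; ++i)
        a[i] = bp::extract<int>(args[i]);   // throws if the element is not an int
    return f(a[0], a[1], a[2],  a[3],  a[4],  a[5],  a[6],  a[7],
             a[8], a[9], a[10], a[11], a[12], a[13], a[14], a[15]);
}

BOOST_PYTHON_MODULE(test)
{
    bp::def("f", f_from_tuple);   // call from Python as: test.f((1, 2, ..., 16))
}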
Physics-like computation, Wolfram’s PCE and Church’s thesis
The lack of correspondence between the abstract and the physical world seems sometimes to suggest that there are profound incompatibilities between what can be thought and what actually happens in
the real world. One can ask, for example, how often one faces undecidable problems. However, the question of undecidability has been considered to be better formulated (and understood) in
computational terms because it is closer to our physical mechanical reality (through the concept of computation developed by Turing): Whether a computing machine enters a certain configuration is, in
general, an undecidable question (called the Halting problem). In other words, no machine can predict whether another machine will halt (under the assumption of Church's thesis, aka the Church–Turing thesis).
An interesting example of the gap between what the abstract theory says and what can be empirically ascertained was recently suggested by Klaus Sutner from Carnegie Mellon. He rightly points out that
if no concrete instance is known of a machine with an intermediate Turing degree, and consonant with Wolfram’s Principle of Computational Equivalence, is because intermediate degrees are artificial
constructions that do not necessarily correspond to anything in the real physical world.
In David Deutsch's words, physics is at the bottom of everything and therefore everything relies on physics (ultimately on quantum physics, according to Deutsch himself). This is true for the core objects of study and practice in mathematics (proofs) and in computer science (computer programs). In the end, what they are and how they are is possible only through what is feasible in the physical world. It is as if it were sometimes forgotten that mathematics and computation also follow, in practice, the same laws of physics as everything else.
Sutner defines what he calls “physics-like” computation and concludes that machines with intermediate Turing degrees are artificial constructions unlikely to exist. According to Sutner, in practice
machines seem to follow a zero-one law: either they are as computationally powerful as a machine at the bottom of the computational power hierarchy (what Wolfram empirically calls “trivial behavior”)
or they are at the level of the first Turing degree (i.e. capable of universal computation). This seems to imply, by the way, that what Wolfram identifies as machines of equivalent sophistication
cannot be other but capable of universal computation, strengthening the principle itself (although one has to assume also Church’s thesis, otherwise PCE could be referring to a higher
So is PCE a conflation of Church’s thesis?
No. Church’s thesis could be wrong and PCE be still true, since by the negation of Church’s thesis the upper limit of the feasible computational power would just be shifted further, and even if it
turns out that the hypothesis of a Turing universe is false, PCE could be still true disregarding whether the universe is of a digital nature or not since it would refer then to the non-Turing limit
as the one holding the maximal sophistication (not that I think that C-T is false though).
Is PCE tantamount to the Church thesis in the provable sense?
Wolfram’s PCE would be still falsifiable if the distribution of the intermediate degrees is proven to be larger than what informally PCE suggests. However, so far that hasn’t been the case and there
are nice examples supporting PCE suggesting that very simple and small non-trivial programs can easily reach universal computation. Such as recent (weak) small universal Turing machines discovered by
Neary and Woods and particularly the smallest TM proven universal by Alex Smith (a 2-state 3-color machine that Wolfram conjectured in his NKS book). However PCE could be as hard to prove or disprove
as the Church thesis is. Unlike Church’s thesis PCE could not be disproved by exhibiting a single negative case but proving that the distribution of machines is different to what PCE suggests. A
positive proof however may require an infinite verification of cases which is evidently non-mechanically feasible (and only negating Church’s thesis itself one would be able to verify all the
infinite number of cases).
I see PCE acting below the curve while Church’s thesis acting from above determining a computational limit (known as the Turing limit).
1. Alberto says:
Where is your plotting coming from? Is it a personal conclusion?
Shady Lake, NJ Math Tutor
Find a Shady Lake, NJ Math Tutor
...TEACHING PHILOSOPHY I am a big believer in “hands-on” learning, in which the instructor regularly elicits responses from the student to ensure s/he understands the concepts being discussed and
is actively involved in absorbing the material. I think mistakes are great as I see them as learning o...
18 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...The difficulty comes from 2 sources, time limits and the unusual form in which the questions are posed. One of the keys to excelling on the SAT is getting comfortable with the way Math topics
are tested, and knowing how to approach the different questions. In school math, the way in which you a...
10 Subjects: including geometry, algebra 1, algebra 2, American history
...Through my experience both at Princeton High School and as a private tutor, I have also worked extensively with students who have special needs and students who speak limited or no English. My
goal is to impart the knowledge I have gained through diligent study and daily practice, and to help ot...
37 Subjects: including algebra 1, English, ACT Math, geometry
...I have had excellent results with students who may be struggling with science or math courses, as well as those who are looking for an edge in subjects where they already are doing well. When
I was a sophomore at Boston University, I thought I wanted to be a high school science or math teacher. ...
10 Subjects: including probability, algebra 1, algebra 2, chemistry
...I am a pre-med student in my second year of college and I have 3 years of tutoring experience. I currently work at the Math Center in South Orange, NJ. I have also tutored elementary school
kids in math and writing.
29 Subjects: including algebra 2, calculus, grammar, Microsoft Excel
Quantum Random Number Generator
QRBG121 is a fast non-deterministic random bit (number) generator whose randomness relies on intrinsic randomness of the quantum physical process of photonic emission in semiconductors and subsequent
detection by photoelectric effect. In this process photons are detected at random, one by one independently of each other. Timing information of detected photons is used to generate random binary
digits - bits. The unique feature of this method is that it uses only one photon detector to produce both zeros and ones, which results in a very small bias and high immunity to component variation and aging. Furthermore, detection of individual photons is made by a photomultiplier (PMT). Compared to solid-state photon detectors, PMTs have drastically superior signal-to-noise performance and a much lower probability of afterpulses appearing, which could be a source of unwanted correlations.
Because of their non-deterministic nature and near-to-maximal entropy, quantum random number generators are ideally suited for most critical applications such as cryptography, production of PIN and
TAN numbers, simulations in industry or science, statistics research etc.
QRBG121 can be used either connected to a PC computer or as a standalone device.
In the computer mode, the device is connected to a PC computer via a USB(2) port. Drivers and applications provided on the software disk allow random numbers to be downloaded into a file. Furthermore, a library and software examples in C, C#, and VisualBasic are provided, which makes easy integration of the QRBG into any user-developed application possible.
In standalone mode, the device is powered through the OEM connector provided at the rear side. The random bits are output in serial manner.
Technical specifications
│ Bit rate │ 12Mbit/s +/-5% │
│ Bias (b)* │ < 0.00001 │
│ Serial autocorrelation (a) ** │ < 0.0002 │
│ Thermal noise │ < 0.0005% (5ppm) │
│ Interface │ USB2 and OEM │
│ OEM outputs │ 5V CMOS logic level, serial │
│ Supported OS's │ Win98/Me/2000/Xp, Linux │
│ Power supply │ none (powered by the USB/OEM port) │
│ Size │ 55 x 65 x 90 mm │
│ Weight │ 370g │
* Bias is defined as the difference between the measured probability of ones and the ideal probability: b = |p(1) - 0.5|.
** The serial autocorrelation coefficient a is defined in D. Knuth, The Art of Computer Programming, Vol. 2:
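(Presumably the serial correlation statistic of TAOCP Vol. 2, Sec. 3.3.2; for samples $x_1,\dots,x_n$ and lag $k$, with indices taken cyclically, $a_k = \dfrac{n\sum_i x_i x_{i+k} - \left(\sum_i x_i\right)^2}{n\sum_i x_i^2 - \left(\sum_i x_i\right)^2}$.)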
We actually check that the first 16 coefficients (k=1...16) are within the limit given in the table.
Randomness tests results
The US National Institute of Standards and Technology has proposed the "Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications", STS. We have tested 500
sequences of 1,000,000 bits produced by the QRBG121 with all 16 tests available in the suite. The tests were passed with excellent marks. Detailed results can be found here.
We have also performed tests with the "DIEHARD" battery of strong statistical randomness tests, probably the most respected test suite, invented by prof. George Marsaglia from Florida State University. We have used the latest version, which comprises 17 different statistical tests. Some of them require very long sequences: the minimum to run all the tests is 268 megabytes. Typical test results for 300 megabytes of random data produced by the QRBG121 can be found here. Most of the tests produce one p-value, whereas some produce several p-values. A test is said to fail if a single
p-value related to it is either very close to 0 or to 1 (up to 6 or more decimal places). The QRBG121 passes all the DIEHARD tests.
Min-entropy is an important parameter in theory of extractors and information-theoretic secure communications. Its definition is (Wikipedia):
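For a distribution taking value $i$ with probability $p_i$, the standard definition is $H_{\min} = -\log_2\left(\max_i p_i\right)$.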
The min-entropy is always less than or equal to the Shannon entropy; it is equal when all the probabilities pi are equal. When min-entropy is measured over a set of data (for example a string of random bits) it represents the worst-case probability of all possible patterns of a given length. We have carried out measurements of min-entropy (block length n=8 bits, non-overlapping) of several sample files of random binary bits produced with the QRBG121. The expected min-entropy is a function of the number of tested blocks N (file length) and block length n, and should slowly converge towards n as the number of blocks goes to infinity. The table below shows measured min-entropy and theoretically expected values for the test.
N Measured Theoretical
1E5 7.8132 7.8135 +/- 0.0222
1E6 7.9360 7.9384 +/- 0.0077
1E7 7.9820 7.9802 +/- 0.0025
1E8 7.9929 7.9937 +/- 0.0008
1E9 7.9979 7.9998 +/- 0.0003
The conclusion from these measurements is that the min-entropy test does not show any anomalies of random data produced by the QRBG121 even for strings as long as 1,000,000,000 bytes.
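A small sketch (illustrative only) of how such a per-byte (block length n = 8, non-overlapping) min-entropy estimate can be computed from a file of raw random bytes:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Estimate the min-entropy, in bits per 8-bit block, of a file of raw bytes:
// H_min = -log2(max_i p_i) over the 256 possible byte values.
int main(int argc, char** argv)
{
    if (argc != 2) { std::fprintf(stderr, "usage: %s file.bin\n", argv[0]); return 1; }
    std::FILE* fp = std::fopen(argv[1], "rb");
    if (!fp) { std::perror("fopen"); return 1; }

    std::vector<double> count(256, 0.0);
    double total = 0.0;
    int c;
    while ((c = std::fgetc(fp)) != EOF) { count[c] += 1.0; total += 1.0; }
    std::fclose(fp);
    if (total == 0.0) { std::fprintf(stderr, "empty file\n"); return 1; }

    double pmax = 0.0;
    for (int i = 0; i < 256; ++i) pmax = std::max(pmax, count[i] / total);
    std::printf("min-entropy = %.4f bits per byte\n", -std::log2(pmax));
    return 0;
}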
Sample Binary Files
The generator QRBG121 actually produces random bits one by one. Our downloading software arranges bits in bytes (in little-endian order) and simply writes them to a file on disk.
Some sample binary files produced by the QRBG121 can be found below.
qrbg-10k.bin (10,000 bytes)
qrbg-100k.bin (100,000 bytes)
qrbg-1M.bin (1,000,000 bytes)
You are kindly invited to read the scientific article about the principle of operation of this product: 0609043v2.pdf also published as: Rev. Sci. Instrum. 78, 045104 (2007). | {"url":"http://qrbg.irb.hr/","timestamp":"2014-04-20T03:26:08Z","content_type":null,"content_length":"10195","record_id":"<urn:uuid:130838f2-4be6-49b6-8f80-c349baaba363>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Winston, GA Algebra 1 Tutor
Find a Winston, GA Algebra 1 Tutor
...He is a Christian who loves God, life, people and serving others. The focus of much of his teaching in recent years is to students who are missing key math fundamentals, students that need
help understanding new math concepts and students that have diverse learning styles and special needs. Set...
5 Subjects: including algebra 1, geometry, algebra 2, prealgebra
...I have over a decade of educational experience, with a very diverse background. Starting my educational career in New York (working the private and public education sectors in both general and
special education) and moving to Georgia teaching high and middle school math and social studies, and p...
22 Subjects: including algebra 1, reading, Microsoft Excel, elementary math
Hello, my name is Daniel, and I am a College graduate with a degree in Computer Science, as well as Business Marketing. I am currently looking for a position at a great company, but for now am
tutoring. I receive a 1320 on my SAT, a 710 in math and a 610 in verbal.
11 Subjects: including algebra 1, algebra 2, Microsoft Excel, precalculus
...I also became even more proficient with Microsoft Excel, Word, and PowerPoint. So, if there are questions or invaluable tips that I can help someone with, I can also provide direction with
these Microsoft Office programs. I recently tested my own math skills by taking and passing the GACE Content Assessment for Mathematics (022 -023) for teacher certification in the State of
21 Subjects: including algebra 1, calculus, statistics, geometry
...I know that it is important for your child to grasp these concepts early on, so that in the future, the transition to more difficult material will be smooth. I hope to target the problem
areas, while allowing him/her to enjoy the subject more. I have recently been certified for most of the general subjects for tutoring on Wyzant.
38 Subjects: including algebra 1, English, reading, algebra 2
Wayne State, Detroit, MI
West Bloomfield, MI 48322
Master Certified Coach for Exam Prep, Mathematics, & Physics
...I look forward to speaking with you and to establishing a mutually beneficial arrangement in the near future! Best Regards, Brandon S.
Algebra 1 covers topics such as linear equations, systems of linear equations, polynomials, factoring, quadratic equations,...
Offering 10+ subjects including algebra 1 and algebra 2 | {"url":"http://www.wyzant.com/Wayne_State_Detroit_MI_Algebra_tutors.aspx","timestamp":"2014-04-18T21:05:17Z","content_type":null,"content_length":"58630","record_id":"<urn:uuid:99a51ad8-0856-478e-8de4-f1f061cb728c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00489-ip-10-147-4-33.ec2.internal.warc.gz"} |
RSK Insertion for Set Partitions and Diagram Algebras
We give combinatorial proofs of two identities from the representation theory of the partition algebra ${\Bbb C} A_k(n)$, $n \ge 2k$. The first is $n^k = \sum_\lambda f^\lambda m_k^\lambda$, where
the sum is over partitions $\lambda$ of $n$, $f^\lambda$ is the number of standard tableaux of shape $\lambda$, and $m_k^\lambda$ is the number of "vacillating tableaux" of shape $\lambda$ and length
$2k$. Our proof uses a combination of Robinson-Schensted-Knuth insertion and jeu de taquin. The second identity is $B(2k) = \sum_\lambda (m_k^\lambda)^2$, where $B(2k)$ is the number of set
partitions of $\{1, \ldots, 2k\}$. We show that this insertion restricts to work for the diagram algebras which appear as subalgebras of the partition algebra: the Brauer, Temperley-Lieb, planar
partition, rook monoid, planar rook monoid, and symmetric group algebras.
Elementary Statistics, A Brief Version with MathZone
ISBN: 9780078004759 | 0078004756
Edition: 5th
Format: Paperback
Publisher: McGraw-Hill Science/Engineering/Math
Pub. Date: 9/15/2009
Erosion and Sedimentation
ISBN: 9780521537377 | 0521537371
Edition: 2nd
Format: Paperback
Publisher: Cambridge University Press
Pub. Date: 7/12/2010
visible surface detection w.r.t. ray tracing [Archive] - OpenGL Discussion and Help Forums
03-29-2000, 11:41 AM
I'm not sure if this is an advanced topic or not. If not, I apologize. Anyways, here's my question/problem:
I need to raytrace a surface of revolution(a spline revolved around the z-axis). In order to do this I've created a polygon mesh representation of the surface and stored the polygons in a list.
The part I'm not sure of is how to calculate the intersection of rays with the mesh. My initial thoughts on it are to loop through the list of polygons and check to see if N dot V is negative (N =
normal of polygon, V = ray sent out from camera). If so, it is a front-facing polygon. I can then keep track of the polygon closest to the camera that is front-facing. I believe this will work, but
perhaps there's a better/more efficient way???
Secondly, how can I test for intersection between the ray and the polygon? Using the method described above, I only get the closest, front-facing polygon. But how do I test if the ray intersects this polygon?
as always, all help appreciated. | {"url":"https://www.opengl.org/discussion_boards/archive/index.php/t-151365.html?s=897fb9095f96828d9ec8e749595b81fa","timestamp":"2014-04-24T16:57:42Z","content_type":null,"content_length":"7016","record_id":"<urn:uuid:caaa93da-c31c-4ad8-add8-8887e718a9d0>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
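One common way to handle the second part (a sketch, assuming the mesh polygons are triangles; quads can be split into two triangles first): test the ray against each candidate triangle with the Moller-Trumbore intersection algorithm and keep the hit with the smallest positive distance t. The N dot V sign test then serves only as an optional back-face culling optimization, not as the visibility test itself.

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3   cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle intersection.
// Returns true and the distance t along the ray if the ray (orig + t*dir, t > 0)
// hits triangle (v0, v1, v2); u and v are barycentric coordinates of the hit.
bool intersectTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, double& t)
{
    const double EPS = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray parallel to triangle plane
    double inv = 1.0 / det;
    Vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;                     // distance along the ray
    return t > EPS;                           // keep only hits in front of the camera
}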
Patent application title: Method and System for Mapping Motion Vectors between Different Size Blocks
Inventors: Jun Xin (Quincy, MA, US) Anthony Vetro (Arlington, MA, US)
IPC8 Class: AH04N1102FI
USPC Class: 37524016
Class name: Television or motion video signal predictive motion vector
Publication date: 2008-10-09
Patent application number: 20080247465
A method and system for mapping motion vectors. A weight is determined for each motion vector of a set of input blocks of an input bitstream. Then, the set of motion vectors are mapped to an output
motion vector of an output block of an output bitstream according to the set of weights.
A computer implemented method for mapping motion vectors, comprising the steps of:determining a set of weights for a set of motion vectors of a set of input blocks, there being one weight for each
motion vector of each input block; andmapping the set of motion vectors to an output motion vector of an output block according to the set of weights.
The method of claim 1, in which the weight depends on a distance of a geometric center of the input block to a geometric center of the output block.
The method of claim 1, in which the weight depends on a size of the input block.
The method of claim 1, in which the weight depends on a distance of a geometric center of the input block to a geometric center of the output block, and on a size of the input block.
The method of claim 1, in which the set of input blocks is encoded according to a MPEG-2 standard, and the output block is encoded according to a H.264/AVC standard.
The method of claim 1, in which the set of input blocks has sizes different than the output block.
The method of claim 1, in which the set of input blocks overlaps the output block.
The method of claim 1, in which the set of input blocks is neighboring to the output block.
The method of claim 1, in which the set of input blocks overlaps and are neighboring to the output block.
The method of claim 1, in which the mapping uses a weighted median.
The method of claim 1, further comprising:refining the output motion vector.
The method of claim 2, in which the weight is inversely proportional to the distance.
The method of claim 3, in which the weight is proportional to the size.
The method of claim 2, in which the weight is $\omega_i = \dfrac{1/d_i}{\sum_{i=1}^{6} 1/d_i}$, where $d_i$ is the distance.
The method of claim 3, in which the weight is $\omega_i = \dfrac{(1/d_i) \times b_i}{\sum_{i=1}^{6} \left((1/d_i) \times b_i\right)}$, where $d_i$ is the distance and $b_i$ is a smaller dimension of the input block.
The method of claim 1, in which the weight is zero if the motion vector is an outlier.
The method of claim 14, in which the output motion vector is $V_o = \dfrac{\sum_{i=1}^{N} (\omega_i \times V_i)}{\sum_{i=1}^{N} \omega_i}$, where $\{V_i\}$ is the set of input motion vectors.
The method of claim 14, in which the output motion vector is $V_o \in \{V_i\}$ such that $\sum_{i=1}^{N} \omega_i\, |V_o - V_i| \le \sum_{i=1}^{N} \omega_i\, |V_j - V_i|$ for $j = 1, 2, \ldots, N$ (equation (5)), where $\{V_i\}$ is the set of input motion vectors.
The method of claim 1, in which the set of input blocks are obtained from the input bitstream and the output block is for the output bitstream.
The method of claim 1, in which the set of input blocks are obtained from blocks previously encoded in the input bitstream, and output block is an output block of a decoded picture.
The method of claim 1, in which the output motion vector is a predicted motion vector that is used to reconstruct the output block of a decoded picture.
The method of claim 21, in which a residual motion vector of the output block of the decoded picture is decoded.
The method of claims 21, in which a sum of the predicted motion vector and the residual motion vector yields a reconstructed motion vector that is used to reconstruct the output block of the decoded
A transcoder for mapping motion vectors, comprising:means for determining a set of weights for a set of motion vectors of a set of input blocks of an input bitstream, there being one weight for each
motion vector of each input block; andmeans for mapping the set of motion vectors to an output motion vector of an output block of an output bitstream according to the set of weights.
The transcoder of claim 24, further comprising:means for refining the output motion vector.
A decoder for mapping motion vectors, comprising:means for determining a set of weights for a set of motion vectors of a set of input blocks of an input bitstream, there being one weight for each
motion vector of each input block; andmeans for mapping the set of motion vectors to an output motion vector of an output block of a decoded picture according to the set of weights.
FIELD OF THE INVENTION [0001]
The invention related generally to video signal processing, and more particularly to mapping motion vectors.
BACKGROUND OF THE INVENTION [0002]
MPEG-2 is currently the primary format for coding videos. The H.264/AVC video coding standard promises the same quality as MPEG-2 in about half the storage requirement, ITU-T Rec. H.264|ISO/IEC
14496-10, "Advanced Video Coding," 2005, incorporated herein by reference. The H.264/AVC compression format is being adopted into storage format standards, such as Blu-ray Disc, and other consumer
video recording systems. As more high-definition content becomes available and the desire to store more content or record more channels simultaneously increases, long recording mode will become a key
feature. Therefore, there is need to develop techniques for converting MPEG-2 videos to the more compact H.264/AVC format with low complexity. The key to achieving low complexity is to reuse
information decoded from an input MPEG-2 video stream.
An MPEG-2 decoder connected to a H.264/AVC encoder can form a transcoder. This is referred to as a reference transcoder. The reference transcoder is very computationally complex due to the need to
perform motion estimation in the H.264/AVC encoder. It is well understood that one can reduce the complexity of the reference transcoder by reusing the motion and mode information form the input
MPEG-2 video bitstream, see A. Vetro, C. Christopoulos, and H. Sun, "Video transcoding architectures and techniques: an overview, " IEEE Signal Processing Mag. 20(2): 18-19, March 2003. However, the
reuse of such information in the most cost-effective and useful manner is a known problem.
FIG. 1 shows a prior art video transcoder 100. An input MPEG-2 bitstream 101 is provided to an MPEG-2 video decoder 110. The decoder outputs decoded picture data 111 and control data 112, which
includes MPEG-2 header information and macroblock data. The MPEG-2 macroblock data includes motion information 121 and mode information 131 for each input macroblock of the MPEG-2 bitstream. This
information is provided as input to motion mapping 120 and mode decision 130, which estimates H.264 macroblock data including motion and mode information for each output macroblock of the H.264
bitstream. The H.264 macroblock data and the decoded picture data are then used to perform a simplified H.264/AVC encoding, which includes prediction 140, difference 150 between decoded picture data
and prediction, transform/quantization (HT/Q) 160, entropy coding 170, inverse transform/quantization (Inverse Q/Inverse HT) 180 to yield a reconstructed residual signal, summation 185 of the
reconstructed residual signal with the prediction, deblocking filter 190 and storage of a reconstructed picture into frame buffers 195. The encoder is "simplified" relative to the reference
transcoder, because the motion and mode information are based on the input MPEG-2 video bitstream and corresponding MPEG-2 macroblock data.
Methods for motion mapping in a transcoder are described by Z. Zhou, S. Sun, S. Lei, and M. T. Sun "Motion information and coding mode reuse for MPEG-2 to H.264 transcoding," IEEE Int. Symposium on
Circuits and Systems, pages 1230-1233, 2005, and X. Lu, A. Tourapis, P. Yin, and J. Boyce, "Fast mode decision and motion mapping for H.264 with a focus on MPEG-2/H.264 transcoding," In IEEE Int.
Symposium on Circuits and Systems, 2005.
However, those methods require a complex motion mapping process. For inter 16×16 prediction, the motion vectors from the input MPEG-2 video bitstream are used as additional motion vector predictors.
For smaller block sizes, e.g., 16×8, 8×16 and 8×8, motion vectors cannot be estimated directly from the input motion vectors because MPEG-2 does not include such motion vectors. Instead, the motion
vectors are estimated using conventional encoding processes without considering the MPEG-2 motion vectors. Therefore, such methods still need very complicated motion search processes.
There are no prior art methods that perform efficient mapping of mapping MPEG-2 motion vectors directly to H.264/AVC motion vectors, regardless of the block sizes. There is a need to perform such a
mapping without complex motion search processes.
SUMMARY OF THE INVENTION [0008]
The embodiments of the invention provide a method for mapping motion vectors between blocks with different sizes. A motion vector for a output block is estimated from a set of input motion vectors
and spatial properties of a set of input blocks. A input block either overlaps or is neighboring to the output block. A motion refinement process can be applied to the estimated motion vector.
BRIEF DESCRIPTION OF THE DRAWINGS [0009]
FIG. 1 is a block diagram of a prior art transcoder;
FIG. 2 is a block diagram of a method for mapping motion vectors between blocks with different sizes according to an embodiment of the invention;
FIG. 3 is a block diagram of motion vector mapping for a 16×8 macroblock partition from a set of input motion vectors according to an embodiment of the invention;
FIG. 4 is a block diagram of the motion vector mapping for an 8×16 macroblock partition from a set of input motion vectors according to an embodiment of the invention;
FIG. 5 is a block diagram of motion vector mapping for an 8×8 macroblock partition from a set of input motion vectors according to an embodiment of the invention;
FIG. 6 is a block diagram of the motion vector mapping for a 8×8 macroblock partition from a set of input motion vectors of different block sizes according to an embodiment of the invention; and
[0015]FIG. 7
is a block diagram of the motion vector mapping for a 16×8 macroblock partition from a set of input motion vectors of causal neighbors according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT [0016]
The H.264/AVC standard specifies seven block sizes for inter prediction, i.e. 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4, while the MPEG-2 standard specifies two sizes, 16×16 or 16×8. This requires
mapping map motion vectors corresponding to given block sizes to a much wider range of block sizes, when transcoding videos from MPEG-2 to H.264/AVC.
As shown in FIG. 2, our invention provides a method 200 for motion vector mapping 203, which determines a motion vector 208 of a output block 220 using a set of input motion vectors 201 based on a
set of input blocks 210. The set of input blocks 210 either overlap or are neighboring to the output block 220. The output block can be a different size than the input block. As defined herein, a set
can include one or more members. There is a trade-off between the extent of the neighborhood and the effectiveness of the mapping. Too few blocks may not provide enough input data, while too many
blocks may introduce noise.
The set of input motion vectors 201 associated with the set of input blocks 210 is subject to motion vector mapping 203 to yield an estimated motion vector 204. The motion vector mapping 203 makes
use of a set of weights 205. There is one weight for each input block 210. The mapping 203 is determined as either a weighted average or a weighted median. Other operations can also be applied. The
weights 205 are based on the input motion vectors 201 and spatial properties 202 of the set of input blocks 201 using a weight determination 206. The estimated motion vector 204 is then subject to an
optional motion vector refinement 207 to yield a refined motion vector 208 for the output block 220. Further details on the motion vector mapping method 203 and weight determination 206 are described
Without loss of generality, it is assumed that an input MPEG-2 video is encoded using frame pictures, which is the more popular MPEG-2 encoding method. Also, it is assumed that the output is encoded
using H.264/AVC frame pictures without a use of a macroblock adaptive frame/field (MBAFF). These assumptions are made only to simplify the description of the invention, and are not required to work
the invention. It is understood that the embodiments of the invention are generally applicable for field picture input, frame picture output with MBAFF, or field picture output, i.e., any block based
vide encoding method.
The motion vector of a block is the same as the motion vector of its geometric center. Consequently, one input to the motion vector mapping 203 is the geometric centers of the set of input blocks
210, and the output is the motion vector 208 of the geometric center of the output block 220. The motion vector can be derived as a weighted average or weighted median of the set of input motion
vectors 210.
It should be noted that the set of input blocks can be obtained from an input bitstream and the output block is for an output bitstream. Alternatively, the set of input blocks are obtained from
blocks previously encoded in the input bitstream, and the output block is an output block of a decoded picture. In addition, the output motion vector can be a predicted motion vector that is used to
reconstruct the output block of a decoded picture. A residual motion vector of the output block of the decoded picture can be decoded, and a sum of the predicted motion vector and the residual motion
vector yields a reconstructed motion vector that is used to reconstruct the output block of the decoded picture.
Weight Determination
In the embodiments of the invention, the weights 205 are based on the spatial properties 202 of the input blocks 201 and the set of input motion vectors 201. Alternative embodiments are described
One embodiment of the invention, the weight 205 for each input motion vector 201 is inversely proportional to the distance between geometric centers of the corresponding input block and the output
FIG. 3 shows an output macroblock of size 16×16 (heavy line) 300, a cross-hatched output macroblock partition "A" 305 of size 16×8, and six input macroblocks 310, labeled as "a_1" through "a_6", respectively. One of the input macroblocks overlaps the output macroblock 300. The geometric centers of each input macroblock 310 and the output macroblock partition "A" 300 are shown as dots 320.
If one motion vector is associated with each of the input macroblocks "a_1" through "a_6", then a weight ω_i is assigned that is inversely proportional to the distance between the geometric center of the input macroblock "a_i" and that of the target macroblock partition "A". Each distance d_i between the geometric center of each input block and the partition 305 is shown as a line 325.
In this case, the distances d_i are {5/2, 3/2, 5/2, √17/2, 1/2, √17/2}, assuming an eight-pixel distance is equal to 1. We normalize these distances to obtain the respective weights:
$$\omega_i = \frac{1/d_i}{\sum_{i=1}^{6} 1/d_i}. \qquad (1)$$
That is, the weights are inversely proportional to the distance. For this particular case, the set of weights for the set of input motion vectors is
$$\{\omega_i\} = \{0.0902,\ 0.1503,\ 0.0902,\ 0.1093,\ 0.4508,\ 0.1093\}, \qquad (2)$$
which sum to 1.
FIG. 4 shows an output macroblock (heavy line) 410, an output macroblock partition "B" 420 of size 8×16, and a set of six input macroblocks, labeled as "b_1" through "b_6", respectively. The geometric centers and distances are also shown.
FIG. 5 shows an output macroblock 510, an output macroblock partition "C" 520 of size 8×8, and a set of four input macroblocks, labeled as "c_1" through "c_4", respectively.
Similar to the descriptions for FIG. 3, the motion vectors of the output macroblock partitions "B" and "C", as shown in FIG. 4 and FIG. 5, can be estimated using weighted average of the set of input
motion vectors.
In another embodiment, the weights ω also depend on the sizes of the input blocks. This is particularly useful when the input blocks are different sizes than the output block. In this case, the
weight is proportional to the size.
FIG. 6 shows an output macroblock (heavy line) 610 of size 16×16, an output macroblock partition "F" 620 of size 16×8, and a set of six input macroblocks, labeled as "f_1" through "f_6", respectively. The geometric centers and distances are also shown. In this case, each weight is determined as
$$\omega_i = \frac{(1/d_i) \times b_i}{\sum_{i=1}^{6} \left((1/d_i) \times b_i\right)}, \qquad (3)$$
where d_i is the distance between the geometric center of each input block "f_i" and the output macroblock partition "F" 620, and b_i is the smaller dimension of the input block, which is determined by the block size. For example, b_i is 8 for some of the input blocks and 4 for others, depending on their sizes. Alternatively, b_i can be the area (size) of the input block. The weights can be determined in a similar manner for other input and output block sizes. Thus, the weight can be based on a distance, a dimension, an area, or combinations thereof. The weight is set to zero for an input motion vector if the input motion vector is not available, or if the input motion vector is determined to be an outlier not to be used.
One process for determining whether a motion vector V is an outlier is described in the following. Let $\bar V$ be the average of all input motion vectors. Then, V is considered an outlier if $|V - \bar V|$ is greater than a predetermined threshold T, where $|V - \bar V| = |V.x - \bar V.x| + |V.y - \bar V.y|$, V.x and V.y are the x and y components of the vector V, and $\bar V.x$ and $\bar V.y$ are the x and y components of the vector $\bar V$.
Motion Vector Mapping and Refinement
With the set of weights $\{\omega_i\}$, for $i = 1, 2, \ldots, N$, and the set of input motion vectors $\{V_i\}$, we estimate the output motion vector $V_o$ for the output block using a weighted average
$$V_o = \frac{\sum_{i=1}^{N} (\omega_i \times V_i)}{\sum_{i=1}^{N} \omega_i} \qquad (4)$$
or a weighted median
$$V_o \in \{V_i\} \quad \text{such that} \quad \sum_{i=1}^{N} \omega_i\, |V_o - V_i| \le \sum_{i=1}^{N} \omega_i\, |V_j - V_i|, \quad j = 1, 2, \ldots, N. \qquad (5)$$
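A minimal sketch of equations (4) and (5) (illustrative only; the struct and function names are choices made here, and the |·| norm follows the component-wise definition used for the outlier test above):

#include <cmath>
#include <vector>

struct MV { double x, y; };   // one motion vector per input block

// Weighted average, as in equation (4); w holds one pre-computed weight per input block.
MV weightedAverage(const std::vector<MV>& v, const std::vector<double>& w)
{
    MV out = {0.0, 0.0};
    double wsum = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i) {
        out.x += w[i] * v[i].x;
        out.y += w[i] * v[i].y;
        wsum  += w[i];
    }
    out.x /= wsum;
    out.y /= wsum;
    return out;
}

// Weighted median, as in equation (5): pick the input vector that minimises the
// weighted sum of component-wise absolute distances to all input vectors.
MV weightedMedian(const std::vector<MV>& v, const std::vector<double>& w)
{
    std::size_t best = 0;
    double bestCost = 1e300;
    for (std::size_t i = 0; i < v.size(); ++i) {
        double cost = 0.0;
        for (std::size_t j = 0; j < v.size(); ++j)
            cost += w[j] * (std::fabs(v[i].x - v[j].x) + std::fabs(v[i].y - v[j].y));
        if (cost < bestCost) { bestCost = cost; best = i; }
    }
    return v[best];
}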
After the weighted average or median operation, the resulting motion vector can be subject to the refinement process 207, e.g., when the estimated motion vector is used to perform motion compensated
prediction. Motion vector refinement is a well known method for making relatively small adjustments to a motion vector so that a prediction error is minimized within a small local area of interest,
see A. Vetro, C. Christopoulos, and H. Sun, "Video transcoding architectures and techniques: an overview," IEEE Signal Processing Mag. 20(2): 18-29, March 2003, incorporated herein by reference.
During transcoding from MPEG-2 and H.263 to H.264/AVC, the invention can be used to efficiently estimate motion vectors of different block sizes for H.264/AVC encoding from motion vectors decoded from the input video bitstream.
The invention can also be used to efficiently encode motion vectors during video encoding. The output motion vector can use the motion vector estimated from motion vectors of neighboring blocks as a
predictor, and then only the difference between the output motion vector and the predictor is signaled to the decoder. Decoding is the reverse process.
This idea is shown in
FIG. 7
, where a output macroblock partition "P" 710 and four causal neighboring blocks "p
" through "p
" are shown. In this case, the motion vector for the partition (shaded) "P" 620 can be encoded using the motion vector estimated from the motion vector of the set of blocks "p
" through "p
" as a predictor.
This approach is more general than a translational macroblock motion model used in conventional encoding. Even when there are motions like zoom-in or zoom-out, the motion vector of a rectangular
macroblock can be considered to be approximately the same as the motion vector of its geometric center.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope
of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
Binary operations
So I have really been struggling with this question. The original question said: The map [tex]\varphi[/tex]:Z->Z defined by [tex]\varphi[/tex](n)=n+1 for n in Z is one to one and onto Z. For (Z, . )
onto (Z,*) (i am using . for usual multiplication) define * and show that * makes phi into an isomorphism.
I know that the operation must be m*n=mn-m-n+2. But I get stuck in proving that the operations are preserved. When I do [tex]\varphi[/tex](m.n) i get mn+1. and i can't get [tex]\varphi[/tex](m).
[tex]\varphi[/tex](n) to work. I think I am doing something wrong. Can any one help? | {"url":"http://www.physicsforums.com/showthread.php?t=286305","timestamp":"2014-04-17T18:33:36Z","content_type":null,"content_length":"22066","record_id":"<urn:uuid:d45bacd4-460a-4825-a9c1-6cd13b23e6de>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
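(For reference, a direct check that the proposed operation does work: $\varphi(m)*\varphi(n) = (m+1)*(n+1) = (m+1)(n+1) - (m+1) - (n+1) + 2 = mn + 1 = \varphi(m\cdot n)$, so $m*n = mn - m - n + 2$ does make $\varphi$ operation-preserving; the likely slip is in expanding $(m+1)(n+1)$ or in dropping the $+2$.)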
Cogging Torque Analysis of a Permanent Magnet Machine in a Wind Turbine
Sunday, 01 September 2013
Finite element analysis is used to analyze the effects of different designs on the reduction of cogging torque.
Permanent magnet machines are used in many industrial applications because of their ability to produce high power densities. The market for such machines has been expanding due to the availability of
affordable magnet materials, technological improvements, and advances in design and control. While still a relatively new phenomenon in wind turbines, permanent magnet generators are increasingly the
focus of R&D in that field.
In any application, the interaction of the permanent magnets with the stator teeth or rotor poles in permanent magnet machines can give rise to cogging torque, which is unwanted pulsation in the
shaft torque that causes structural vibrations and noise. Due to the absence of axisymmetry in rotor geometry, cogging torque will vary with the angular position of the rotor. The periodicity is
determined by the number of stator slots and rotor poles, while the magnitude is determined by a number of geometric factors such as pole arc angle, magnet dimensions, geometry of the stator teeth,
The cogging variation in the torque may interfere with other components such as position sensors. Vibration and noise are amplified further when the frequency of the cogging torque matches the
mechanical resonant frequency of the stator or rotor. It is therefore essential to evaluate the cogging torque produced by various design choices for a permanent magnet machine.
Finite element analysis (FEA) is used to analyze the effects of different designs on the reduction of cogging torque, and to enable faster prototyping of the final product. Abaqus FEA from SIMULIA,
Dassault Systèmes, was used to compute cogging torque in a permanent magnet generator designed for a wind turbine. To reduce numerical noise, techniques were employed involving a sliding mesh, and
then repeated meshes, using the advanced meshing functionality.
One of the challenging aspects of computing a cogging torque curve using the finite element method is reducing the numerical noise generated from the mesh. The noise arises due to the complex
variation of the magnetic field in the air gap, often leading to numerical cancellation errors that are sensitive to the nature of the chosen mesh. The analyses were based on the geometry of the
stator-internal permanent magnet generator proposed by Zhang et al. for wind power generation applications (Figure 1).
The simulation consisted of a number of individual analyses, each considering a different angular position of the rotor. Cogging torque computations are sensitive to the nature of the mesh; noise may
be introduced if the mesh topology varies in each angular position. The subsequent cogging torque curve can be noisy, and the periodicity of the curve may be lost.
To minimize the numerical noise, a sliding mesh technique was used. With this approach, the stator mesh remains fixed and the rotor mesh is circumferentially re-positioned for each individual
analysis. The rotor and stator meshes have fixed topologies and their common interface (the center of the air gap) is divided into equally spaced segments with every angular degree of the interface
spanned by seven equally spaced nodes. This allows the rotor mesh to be moved circumferentially in discrete angular increments for each analysis while maintaining spatially coincident nodes at the interface between the rotor and stator meshes.
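As a rough illustration (a Python sketch that simply restates the meshing numbers given in the article), seven equally spaced nodes per angular degree (presumably counting both endpoints of each one-degree arc) corresponds to one node every 1/6 of a degree, which is exactly the rotor step used in the angular sweep described below:

from fractions import Fraction

# Interface nodes: seven equally spaced nodes per angular degree, i.e. one node every 1/6 degree.
node_spacing = Fraction(1, 6)
interface_nodes = {k * node_spacing for k in range(6 * 360 + 1)}

# Rotor positions for the sweep described below: 0 to 12 degrees in 1/6-degree steps.
rotor_positions = [k * Fraction(1, 6) for k in range(6 * 12 + 1)]

# Every rotor position lands exactly on an interface node, so the re-positioned rotor
# mesh keeps its boundary nodes coincident with the stator-side nodes.
assert all(pos in interface_nodes for pos in rotor_positions)
print(len(rotor_positions), "rotor positions, all coincident with interface nodes")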
The nature of the mesh in the stator teeth also influences the numerical noise. Abaqus/CAE was used to generate a repeated mesh in the stator teeth to help reduce the numerical noise. To maintain
mesh quality, controls such as edge seeds and single/double biased edge seeds are used. The biased edge seeds can be used to generate smaller elements at the stator-air and the rotor-air interfaces
in the air gap. Biased meshing helps resolve the field variation at these interfaces as the magnetic flux leaves the stator tooth and enters the rotor poles, and vice versa. The two-dimensional
magnetostatic problem is modeled using an extruded three-dimensional mesh that has only one element along the thickness direction.
A 2D magnetostatic analysis was performed for various angular positions of the rotor. A 2D analysis ignores the end effects and assumes that the field is invariant along the length of the device.
This is a reasonable assumption for many motor applications, and allows for fast prototyping of the device. A full 3D analysis of the model can be performed at the end of the design either to confirm
or make minor modifications to the design. The angular positions of the rotor considered here range from 0° to 12° with an increment of one-sixth of a degree.
The magnetic field output was postprocessed for each angular position of the rotor to compute the torque on the rotor. The Maxwell stress-tensor-based approach was adopted to compute the torque. In
this approach, the torque is computed as an integral on a surface that encompasses the rotor. For the current analysis, the integration surface is chosen at the center of the air gap to minimize
numerical noise.
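For a two-dimensional model, that surface integral reduces to a line integral around a circle in the air gap. The sketch below (Python) shows one standard way such an evaluation can look; the field samples, radius, and axial length are placeholder assumptions rather than data from the article:

import numpy as np

MU0 = 4e-7 * np.pi     # permeability of free space [H/m]
r = 0.05               # radius of the integration circle in the air gap [m] (assumed)
L = 0.10               # active axial length of the machine [m] (assumed)

theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
dtheta = theta[1] - theta[0]

# Placeholder radial and tangential flux-density samples on the circle; in practice
# these would be read from the FEA field output at each rotor position.
B_r = 0.9 * np.cos(5 * theta) + 0.05 * np.cos(30 * theta)
B_t = 0.1 * np.sin(5 * theta) + 0.02 * np.sin(30 * theta)

# Maxwell-stress torque for a 2D model: T = (L * r^2 / mu0) * integral of B_r * B_t dtheta
torque = L * r**2 / MU0 * np.sum(B_r * B_t) * dtheta
print(torque, "N*m")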
The contour plot of the magnetic flux density at zero angular rotation of the rotor is shown in Figure 2. Notice that the magnetic field is saturated (red regions) in the bridge regions. If the
analysis did not account for nonlinearity, the magnetic flux would completely pass through the bridge regions and avoid the high-reluctance air gap, and hence the rotor, altogether.
The cogging torque as a function of angular position is extracted by postprocessing the field output. The torque curve is very smooth and does not exhibit any noise. For an electrical machine, the
periodicity of the cogging torque in degrees is given by 360/LCM(M,N), where M is the number of stator slots, N is the number of rotor poles, and LCM signifies the least common multiple. The current
model has 12 stator slots and 10 rotor poles and hence the periodicity is six degrees. The computed torque curve has the expected periodicity of six degrees.
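That periodicity is easy to check numerically (a small Python helper written here for illustration):

import math

def cogging_period_degrees(stator_slots, rotor_poles):
    # 360 / LCM(M, N), as stated above
    lcm = stator_slots * rotor_poles // math.gcd(stator_slots, rotor_poles)
    return 360 / lcm

print(cogging_period_degrees(12, 10))   # 6.0 degrees, matching the computed torque curve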
This work was done by Krishna Gundu, Engineering Specialist, at SIMULIA, Dassault Systèmes. For more information, visit http://info.hotims.com/45607-122.
Digital Edition | {"url":"http://www.techbriefs.com/component/content/article/17173","timestamp":"2014-04-21T02:11:28Z","content_type":null,"content_length":"35035","record_id":"<urn:uuid:bcb89456-188d-4294-bf2b-07ef107e66ad>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
Compact Kaehler manifolds that are isomorphic as symplectic manifolds but not as complex manifolds (and vice-versa)
1. What are some examples of compact Kaehler manifolds (or smooth complex projective varieties) that are not isomorphic as complex manifolds (or as varieties), but are isomorphic as symplectic
manifolds (with the symplectic structure induced from the Kaehler structure)? Elliptic curves should be an example, but I can't think of any others. I'm sure there should be lots...
2. In the other direction, if I have two compact Kaehler manifolds (or smooth complex projective varieties) that are isomorphic as complex manifolds (or as varieties), then are they necessarily
isomorphic as symplectic manifolds?
3. And one last question that just came to mind: If two smooth complex (projective, if need be) varieties are isomorphic as complex manifolds, then they are isomorphic as varieties?
ag.algebraic-geometry sg.symplectic-geometry complex-geometry
5 Answers
1. Well, there are stupid examples like the fact that $\mathbb{P}^n$ has Kähler structures where any rational multiple of the hyperplane class is the Kähler class which are
compatible with the standard complex structure (you just rescale the symplectic structure and metric). I think you should get similar examples with multi-parameter families on
things like toric varieties with higher dimensional $H^2$.
2. I know some non-compact examples where you can deform the complex structure without changing the symplectic one. I don't know any compact examples, but they probably exist. The
thing is, the only thing you can deform about a symplectic structure on a compact thing is its cohomology class (by the Moser trick), so anything with a big enough family of
Kähler metrics will work.
3. This probably follows from GAGA, but you'd have to ask someone more expert than me to be sure. Edit: David's answer made me realize I forgot to say projective here. That's
Re 3: If you say projective, then yes. GAGA tells you that an analytic isomorphism is also an algebraic one.
If you don't say projective, then no. See the appendix to Hartshorne for a family of nonisomorphic algebraic structures on C^2/Z^2.
So here are some examples: When X has no continuous families of automorphisms (H^0(X, TX)=0), complex deformations of X to first order are given by H^1(X, TX). For compact Calabi-Yaus
this is H^{(n-1, 1)} and moreover by Bogomolov-Tian-Todorov the deformations are unobstructed.
Symplectic deformations as Ben noted are controlled by H^2(X, R) by Moser's trick. If we want to deform while staying Kahler, then in H^{(1,1)}(X, R). In mirror symmetry (where this
discussion is stolen from) one allows a B-field and correspondingly a complexified space of deformations H^{(1,1)}. Then for mirror manifolds these two spaces of deformations are exchanged.
This is discussed in Denis Auroux's notes on mirror symmetry (http://math.mit.edu/~auroux/18.969/, any misinterpretation is my fault).
Mirror symmetry is cool and all, but if we just stay on the same Calabi-Yau the deformation spaces for symplectic and complex structures can have different dimensions - with either one
bigger, giving examples for both 1 and 2.
That's a very nice observation! – Kevin H. Lin Oct 16 '09 at 15:24
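As a standard concrete instance of this dimension count (the usual textbook example, given here for illustration): for the quintic threefold $X \subset \mathbb{P}^4$ one has

$$h^{1,1}(X) = 1, \qquad h^{2,1}(X) = h^{n-1,1}(X) = 101,$$

so the complex deformations form a 101-dimensional space while the (complexified) Kähler, and hence symplectic, deformations form a 1-dimensional one; for the mirror quintic the two numbers are swapped.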
In case anybody is curious, there are still examples of (1) even if one replaces the requirement that the complex manifolds be nonisomorphic with the requirement that they be not even
deformation equivalent. In fact in arXiv:0608110 Catanese showed that Manetti's examples of general type surfaces which are diffeomorphic but not deformation equivalent are
symplectomorphic (with respect to their canonical Kahler forms).
If $M \to X$ is smooth and proper, and $M$ is K\"ahler, then the fibers are all symplectomorphic. (Proof: the Levi-Civita connection generates symplectomorphisms.) The family of elliptic
curves was already mentioned, but another interesting one has every general fiber being $F_0$ and the special fiber $F_2$ (Hirzebruch surfaces).
A curious example is the family $\{ xy = t \}$ of hypersurfaces in ${\mathbb C}^2$ as $t$ varies (away from $0$). There, the fibers are all holomorphic, and symplectomorphic, but not by
the same diffeomorphism (their unique closed geodesics are of varying length).
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry sg.symplectic-geometry complex-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/243/compact-kaehler-manifolds-that-are-isomorphic-as-symplectic-manifolds-but-not-as/248","timestamp":"2014-04-21T10:23:33Z","content_type":null,"content_length":"68899","record_id":"<urn:uuid:e0ef4ee1-9a44-4d0f-a757-02a43bf5c562>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: January 2011 [00662]
Re: The new (v.8.0) distribution plots using a numerical
• To: mathgroup at smc.vnet.net
• Subject: [mg115754] Re: The new (v.8.0) distribution plots using a numerical
• From: Darren Glosemeyer <darreng at wolfram.com>
• Date: Thu, 20 Jan 2011 06:27:22 -0500 (EST)
On 1/19/2011 4:29 AM, Mac wrote:
> Statistical visualisation of data has improved dramatically in v 8.0
> and provide a very useful way of summarising statistical uncertainty
> in measurements or simulations. I've been particularly impressed with
> the types of graphs that can be produced using the BoxWhiskerChart[]
> or DistributionChart[] functions.
> Unfortunately I've been frustrated with the lack of support of these
> functions for typical plots of time series or variables which have
> either a numerical or date X-axis. Imagine for instance that you would
> like to plot the distribution of temperatures within a 24 period and
> plot these either in terms of date and/or day of the year, and going
> further compare these to another time series of measurements. It is
> not possible to specify an X-coordinate for each BoxWisker or
> DistributionChart.
> In short, what I would like to achieve is something like this
> data = Table[RandomReal[ExponentialDistribution[1], 10], {10}];
> DistributionChart[data, ChartStyle -> 47,
> ChartElementFunction -> "PointDensity"]
> with an X-axis which is either a number (e.g. day of the year) or even
> better a date. I would anticipate that this can be achieved either
> playing around with the ChartLabel[] function or (more generally)
> using the PlotMarker[] functionality as this would allow the
> superposition of several time series, but I've made little progress
> here.
> Your help would be much appreciated.
> Mac
This can be accomplished by storing the date and list of values for that
date (for instance) together and picking the appropriate parts of the
data. The following labels by day of week.
data = Table[{{2011, 1, i}, RandomReal[ExponentialDistribution[1], 10]},
{i, 10}];
DistributionChart[data[[All, 2]], ChartStyle -> 47,
ChartElementFunction -> "PointDensity",
ChartLabels -> Table[DateString[j, "DayNameShort"], {j, data[[All, 1]]}]]
Your data may need a little processing first to get the individual dates
and associated data, though. For instance, if the measurements are
stored as pairs of dates and numbers, you could first group by date and
then proceed as above. The following does that but labels with month/day
instead of day of week.
(* make a bunch of {date, number} pairs*)
data2 = Table[{{2011, 1, RandomInteger[{1, 10}]},
RandomVariate[ExponentialDistribution[1]]}, {i, 100}];
(* group them by their date coordinates, then construct {date,
listOfValuesForThatDate} pairs and sort to order by the date lists *)
data2 = Sort[Map[{#[[1, 1]], #[[All, 2]]} &, GatherBy[data2, First]]];
DistributionChart[data2[[All, 2]], ChartStyle -> 47,
ChartElementFunction -> "PointDensity",
ChartLabels -> Table[DateString[j, {"MonthShort", "/", "DayShort"}],
{j, data2[[All, 1]]}]]
Darren Glosemeyer
Wolfram Research | {"url":"http://forums.wolfram.com/mathgroup/archive/2011/Jan/msg00662.html","timestamp":"2014-04-17T21:50:51Z","content_type":null,"content_length":"28111","record_id":"<urn:uuid:d58c3148-dfb1-4939-b50b-9c4772817dff>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00143-ip-10-147-4-33.ec2.internal.warc.gz"} |
POISSON DISTRIBUTION
The Poisson distribution arises in many situations. It is safe to say that it is one of the three most important discrete probability distributions (the other two being the uniform and the binomial
distributions). The Poisson distribution can be viewed as arising from the binomial distribution or from the exponential density.
This law was introduced by Poisson in his 1837 treatise "Research on the Probability of Judgments in Criminal and Civil Matters". His goal was to estimate the influence of
the size of the jury and of the majority rule on the reliability of verdicts.
We will take an "industrial" example to introduce this law, which has been used in very varied fields (communications, astrophysics, nuclear power, queues...). A factory manufactures a textile whose
density of defects (the average number of defects per m²) is λ. Assuming the defects are distributed independently of one another, we want to determine the distribution of the number of
defects in a 1 m² piece of textile.
Let N denote the number of defects in the piece of textile; we propose the following model.
We divide the piece of textile into n sufficiently small pieces of equal size. Writing N_i for the number of defects in the i-th piece, we may assume that the random variables N_1, ..., N_n are
independent, take values in {0, 1}, and have expectation λ/n. In other words, N_1, ..., N_n all follow the Bernoulli law B(λ/n), and consequently N = N_1 + ... + N_n follows the binomial law B(n, λ/n).
Proposition (convergence of the binomial distribution to the Poisson distribution)
Let (p_n), n ≥ 1, be a sequence of reals in ]0, 1[ and let λ > 0 be such that n·p_n → λ as n → ∞.
Then, if S_n follows the binomial law B(n, p_n), we have, for every fixed integer k ≥ 0:
P(S_n = k) → e^(-λ) λ^k / k!  as n → ∞.
Proof:
We just have to write
P(S_n = k) = C(n,k) p_n^k (1 - p_n)^(n-k) = [n(n-1)···(n-k+1) / n^k] · [(n p_n)^k / k!] · (1 - p_n)^(n-k)
and to pass to the limit in each term of the expression. ∎
II Definition of the Poisson distribution
A real random variable N follows a Poisson distribution with parameter λ > 0 (written P(λ)) when we have:
P(N = k) = e^(-λ) λ^k / k!  for every integer k ≥ 0.
In this case we have E(N) = Var(N) = λ. (The proof is left to the reader.)
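For completeness, a short verification of that claim (a sketch; the document itself leaves the proof to the reader):

$$E(N) = \sum_{k \ge 1} k\, e^{-\lambda} \frac{\lambda^k}{k!} = \lambda e^{-\lambda} \sum_{k \ge 1} \frac{\lambda^{k-1}}{(k-1)!} = \lambda,$$

and similarly $E[N(N-1)] = \lambda^2$, so $\mathrm{Var}(N) = E[N(N-1)] + E(N) - E(N)^2 = \lambda^2 + \lambda - \lambda^2 = \lambda$.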
The Poisson distribution also arises, however, in modeling phenomena of a type quite different from the repetitions of rare events that we have just considered; with this in mind we now carry out the first
non-trivial "dynamic" probabilistic model of this course.
Suppose that we have a situation in which a certain kind of occurrence happens at random over a period of time. For example, the occurrences that we are interested in might be incoming telephone
calls to a police station in a large city. We want to model this situation so that we can consider the probabilities of events such as more than 10 phone calls occurring in a 5-minute time interval.
Presumably, in our example, there would be more incoming calls between 6:00 and 7:00 P.M. than between 4:00 and 5:00 A.M., and this fact would certainly affect the above probability. Thus, to have a
hope of computing such probabilities, we must assume that the average rate, i.e., the average number of occurrences per minute, is a constant. This rate we will denote by l. (Thus, in a given
5-minute time interval, we would expect about 5¸ occurrences.) This means that if we were to apply our model to the two time periods given above, we would simply use different rates for the two
time periods, thereby obtaining two different probabilities for the given event.
Our next assumption is that the number of occurrences in two non-overlapping time intervals are independent. In our example, this means that the events that there are j calls between 5:00 and 5:15
P.M. and k calls between 6:00 and 6:15 P.M. on the same day are independent.
We can use the binomial distribution to model this situation. We imagine that a given time interval is broken up into n subintervals of equal length. If the subintervals are sufficiently short, we
can assume that two or more occurrences happen in one subinterval with a probability which is negligible in comparison with the probability of at most one occurrence. Thus, in each subinterval, we
are assuming that there is either 0 or 1 occurrence. This means that the sequence of subintervals can be thought of as a sequence of Bernoulli trials, with a success corresponding to an occurrence in
the subinterval.
To decide upon the proper value of p, the probability of an occurrence in a given subinterval, we reason as follows. On the average, there are λt occurrences in a time interval of length t. If this
time interval is divided into n subintervals, then we would expect, using the Bernoulli trials interpretation, that there should be np occurrences. Thus we want λt = np, so p = λt/n.
We now wish to consider the random variable X, which counts the number of occurrences in a given time interval. We want to calculate the distribution of X. For ease of calculation, we will assume
that the time interval is of length 1; the case of a time interval of arbitrary length t is handled similarly (with λ replaced by λt).
We know that P(X = 0) = b(n; p; 0) = (1 - p)^n = (1 - λ/n)^n.
For large n, this is approximately e^(-λ). It is easy to calculate that for any fixed k, we have
b(n; p; k) / b(n; p; k - 1) = (λ - (k - 1)p) / kq,
which, for large n (and therefore small p) is approximately λ/k.
Thus, we have P(X = 1) ≈ λ e^(-λ),
and in general:
P(X = k) ≈ (λ^k / k!) e^(-λ).
The above distribution is the Poisson distribution. We note that it must be checked that the distribution really is a distribution, i.e., that its values are non-negative and sum to 1.
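A small numerical check of this approximation (Python, standard library only; the choice λ = 4 and the values of n below are arbitrary):

import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam, k = 4.0, 3
for n in (10, 100, 1000, 10000):
    print(n, binom_pmf(n, lam / n, k))      # approaches the Poisson value below
print("Poisson limit:", poisson_pmf(lam, k))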
The Poisson distribution is used as an approximation to the binomial distribution when the parameter n is large and p is small. However, the Poisson distribution also arises in situations where it
may not be easy to interpret or measure the parameters n and p.
Example 1: In his book, Feller discusses the statistics of flying bomb hits in the south of London during the Second World War. Assume that you live in a district of size 10 blocks by 10 blocks so
that the total district is divided into 100 small squares. How likely is it that the square in which you live will receive no hits if the total area is hit by 400 bombs? We assume that a particular
bomb will hit your square with probability 1/100. Since there are 400 bombs, we can regard the number of hits that your square receives as the number of successes in a Bernoulli trials process with n
= 400 and p = 1/100. Thus we can use the Poisson distribution with λ = 400 * 1/100 = 4 to approximate the probability that your square will receive j hits. This probability is p(j) = e^(-4) 4^j / j!.
The expected number of squares that receive exactly j hits is then 100 * p(j).
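Carrying out that computation (a Python sketch that only restates the numbers already given in the example):

import math

lam, squares = 4.0, 100
for j in range(7):
    p_j = math.exp(-lam) * lam**j / math.factorial(j)
    print(j, round(squares * p_j, 1))   # expected number of squares receiving exactly j hits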
If the reader would rather not consider flying bombs, he is invited to instead consider an analogous situation involving cookies and raisins. We assume that we have made enough cookie dough for 500
cookies. We put 600 raisins in the dough, and mix it thoroughly. One way to look at this situation is that we have 500 cookies, and after placing the cookies in a grid on the table, we throw 600
raisins at the cookies.
Example 2: Suppose that in a certain fixed amount A of blood, the average human has 40 white blood cells. Let X be the random variable which gives the number of white blood cells in a random sample
of size A from a random individual. We can think of X as binomially distributed with each white blood cell in the body representing a trial. If a given white blood cell turns up in the sample, then
the trial corresponding to that blood cell was a success. Then p should be taken as the ratio of A to the total amount of blood in the individual, and n will be the number of white blood cells in the
individual. Of course, in practice, neither of these parameters is very easy to measure accurately, but presumably the number 40 is easy to measure. But for the average human, we then have 40=np, so
we can think of X as being Poisson distributed, with parameter λ = 40. In this case, it is easier to model the situation using the Poisson distribution than the binomial distribution.
Some exercises:
A little bit of linguistics... : exo6
A story of TGV : exo7 | {"url":"http://www.emse.fr/~yukna/prototypefactory/projects/proba/Loi_de_Poisson.htm","timestamp":"2014-04-17T15:45:02Z","content_type":null,"content_length":"11132","record_id":"<urn:uuid:177eb714-c5ce-4e1f-a027-0524328f71c0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
At this site we maintain a list of the 5000 Largest Known Primes which is updated hourly. This list is one of the most important databases at The Prime Pages: a collection of research, records and results
all about prime numbers. This page summarizes our information about one of these primes.
This prime's information:
field (help) value
Description: 2 · 3^1074726 + 1
Verification status (*): Proven
Official Comment:
Unofficial Comments: This prime has 1 user comment below.
Proof-code(s): (*): p199 : Broadhurst, NewPGen, OpenPFGW
Decimal Digits: 512775 (log[10] is 512774.91862984)
Rank (*): 771 (digit rank is 1)
Entrance Rank (*): 91
Currently on list? (*): short
Submitted: 2/13/2010 07:52:58 CDT
Last modified: 2/17/2010 07:50:50 CDT
Database id: 91746
Status Flags: none
Score (*): 44.5826 (normalized score 3.5441)
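The digit count shown in the Decimal Digits field above can be reproduced directly (a quick Python check, not part of the database record):

import math

# Decimal digits of 2 * 3^1074726 + 1; adding 1 cannot change the digit count,
# because 2 * 3^1074726 + 1 is odd and hence not a power of ten.
digits = math.floor(math.log10(2) + 1074726 * math.log10(3)) + 1
print(digits)   # 512775, matching the Decimal Digits field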
User comments about this prime (disclaimer):
User comments are allowed to convey mathematical information about this number, how it was proven prime.... See our guidelines and restrictions.
Verification data:
The Top 5000 Primes is a list for proven primes only. In order to maintain the integrity of this list, we seek to verify the primality of all submissions. We are currently unable to check all
proofs (ECPP, KP, ...), but we will at least trial divide and PRP check every entry before it is included in the list.
field value
prime_id 91746
person_id 9
machine Ditto P4 P4
what trial_divided
notes Command: /home/ditto/client/TrialDiv/TrialDiv -q 2 3 1074726 1 2>&1
[Elapsed time: 10.060 seconds]
modified 2011-12-27 16:48:40
created 2010-02-13 08:05:02
id 112868
field value
prime_id 91746
person_id 9
machine RedHat P4 P4
what prime
notes Command: /home/caldwell/client/pfgw -t -q"2*3^1074726+1" 2>&1
PFGW Version 20031027.x86_Dev (Beta 'caveat utilitor') [FFT v22.13 w/P4]
Primality testing 2*3^1074726+1 [N-1, Brillhart-Lehmer-Selfridge]
Running N-1 test using base 2
Using SSE2 FFT
Adjusting authentication level by 1 for PRIMALITY PROOF
Reduced from FFT(229376,19) to FFT(229376,18)
Reduced from FFT(229376,18) to FFT(229376,17)
Reduced from FFT(229376,17) to FFT(229376,16)
3406812 bit request FFT size=(229376,16)
Running N-1 test using base 3
Using SSE2 FFT
Adjusting authentication level by 1 for PRIMALITY PROOF
Reduced from FFT(229376,19) to FFT(229376,18)
Reduced from FFT(229376,18) to FFT(229376,17)
Reduced from FFT(229376,17) to FFT(229376,16)
3406812 bit request FFT size=(229376,16)
Running N-1 test using base 17
Using SSE2 FFT
Adjusting authentication level by 1 for PRIMALITY PROOF
Reduced from FFT(229376,19) to FFT(229376,18)
Reduced from FFT(229376,18) to FFT(229376,17)
Reduced from FFT(229376,17) to FFT(229376,16)
3406812 bit request FFT size=(229376,16)
Running N-1 test using base 23
Using SSE2 FFT
Adjusting authentication level by 1 for PRIMALITY PROOF
Reduced from FFT(229376,19) to FFT(229376,18)
Reduced from FFT(229376,18) to FFT(229376,17)
Reduced from FFT(229376,17) to FFT(229376,16)
3406812 bit request FFT size=(229376,16)
Running N-1 test using base 29
Using SSE2 FFT
Adjusting authentication level by 1 for PRIMALITY PROOF
Reduced from FFT(229376,19) to FFT(229376,18)
Reduced from FFT(229376,18) to FFT(229376,17)
Reduced from FFT(229376,17) to FFT(229376,16)
3406812 bit request FFT size=(229376,16)
Running N-1 test using base 31
Using SSE2 FFT
Adjusting authentication level by 1 for PRIMALITY PROOF
Reduced from FFT(229376,19) to FFT(229376,18)
Reduced from FFT(229376,18) to FFT(229376,17)
Reduced from FFT(229376,17) to FFT(229376,16)
3406812 bit request FFT size=(229376,16)
Running N-1 test using base 41
Using SSE2 FFT
Adjusting authentication level by 1 for PRIMALITY PROOF
Reduced from FFT(229376,19) to FFT(229376,18)
Reduced from FFT(229376,18) to FFT(229376,17)
Reduced from FFT(229376,17) to FFT(229376,16)
3406812 bit request FFT size=(229376,16)
Calling Brillhart-Lehmer-Selfridge with factored part 100.00%
2*3^1074726+1 is prime! (-1665.4337s+0.1400s)
[Elapsed time: 4.00 days]
modified 2010-03-13 18:56:30
created 2010-02-13 07:53:01
id 112867
Query times: 0.0009 seconds to select prime, 0.0013 seconds to seek comments. | {"url":"http://primes.utm.edu/primes/page.php?id=91746","timestamp":"2014-04-19T17:02:03Z","content_type":null,"content_length":"13670","record_id":"<urn:uuid:819f901b-bbb8-4697-b2c8-14c21268067d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |