Odds and ends: the 2010 Joint Mathematics Meeting and Euler's Gem
Posted by: Dave Richeson | January 5, 2010
I’ll be heading to the 2010 Joint Mathematics Meeting in San Francisco next week. In case any of you are interested in meeting up, here are a few of the items on my (busy) schedule. Please introduce
yourself; it would be nice to put faces with names.
• I’m giving a talk on some work with my collaborator, Jim Wiseman, entitled “Symbolic Dynamics for Nonhyperbolic Systems.” It is in the AMS special session on Dynamical Systems, Friday, January 15
at 5:15. I have no idea how I'll be able to say anything in my 10-minute time slot, but I'll do my best.
• I’m on the MAA Committee for Minicourses and will be monitoring two minicourses (one of the perks of being on the committee!):
□ Using GeoGebra to create activities and applets for visualization and exploration, by Michael K. May
□ The hitchhiker’s guide to mathematics, by Dan Kalman and Bruce F. Torrence
• I’m having a book signing for my book Euler’s Gem at the Princeton University Press booth in the exhibit hall, Friday, January 15 at 10:30. Please come by!
If you are going to JMM 2010 and are giving a talk, post it in the comments below. Also, if you’re on Twitter, the hashtag for the meeting is #jointmath. I don’t have a smart phone, so I’m not sure
how much I’ll be able to tweet. But I’ll try to contribute some.
Speaking of Euler’s Gem,… in case you are interested…
I’ll hope to see you at your book signing.
By: Sue VanHattum on January 6, 2010
at 2:00 pm
I asked my son Dan to get me a copy at your book signing but he may be too busy with interviews.
By: David Freeman on January 6, 2010
at 4:17 pm
Great, Sue, I’m looking forward to meeting you.
David, I hope your son can make it. Please wish him luck with the interviews. It will be an exhausting week for him.
By: Dave Richeson on January 6, 2010
at 11:28 pm
Posted in Math | Tags: AMS, Euler's Gem, Joint Mathematics Meeting, MAA, Princeton University Press
Math Forum Discussions
Topic: common core geometry question
Replies: 4 Last Post: Apr 6, 2012 4:42 AM
Re: common core geometry question
Posted: Jan 19, 2012 6:06 PM
I think it means the latter. There used to be a classic construction in the old geometry Regents that required a student to divide a given segment into any number of congruent parts.
In your example, the ratio is 3:4 so that there will be 7 parts. Then depending on how the question is worded, there would be two answers.
If you need further info as to how to do the construction, let me know.
Bobbi Eisenberg,
Chairperson UFT Math Teachers Committee
On Jan 19, 2012, at 5:31 PM, Lisa Clark wrote:
> Has anybody started to write curriculum for the common core High School courses?
> Here is one standard for Geometry : G.GPE.6 Find the point on a directed line segment between two given points that partitions the segment in a given ratio.
> Do you think this means find the midpoint? (that is what I found on one website)
> or do they have to find a point between two given points such that the ratio of the segments is 3:4 (for example)? What do they expect the students to do to find this out?
> Anybody???
> *******************************************************************
> * To unsubscribe from this mailing list, email the message
> * "unsubscribe nyshsmath" to majordomo@mathforum.org
> *
> * Read prior posts and download attachments from the web archives at
> * http://mathforum.org/kb/forum.jspa?forumID=671
> *******************************************************************
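For the G.GPE.6 standard discussed in this thread, the partition point itself (as opposed to the compass-and-straightedge construction) comes straight from the section formula. A quick sketch, not from the original posts (the function name is mine):

```python
def partition_point(A, B, m, n):
    """Point P on the directed segment from A to B with AP:PB = m:n.

    Section formula: P = A + (m / (m + n)) * (B - A), applied coordinatewise.
    """
    t = m / (m + n)
    return tuple(a + t * (b - a) for a, b in zip(A, B))
```

For a segment from (0, 0) to (7, 0) and ratio 3:4 this gives (3, 0); swapping the ratio to 4:3 gives the thread's second answer, (4, 0), which is why the wording of the question matters.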
Date | Subject | Author
1/19/12 | common core geometry question | L.C.
1/19/12 | Re: common core geometry question | Sharon
1/19/12 | Re: common core geometry question | Pulcini, Brittney
1/19/12 | Re: common core geometry question | Roberta M. Eisenberg
4/6/12 | Re: Use similar right triangles | Lance Sayward
pari-2.3.2 released
Karim Belabas on Wed, 28 Mar 2007 17:55:00 +0200
Dear PARI lovers,
I would like to announce the release of pari-2.3.2 (STABLE). The sources and
a Windows binary are available from
This is a BUGFIX release for the stable branch, fixing most problems
reported so far.
The one major new feature is specific to the Windows binary:
from now on, it includes the GMP kernel instead of the (slower) native one.
N.B: This binary has not been tested on Vista. We see no reason why
anything would fall apart with Vista, but one never knows. Please try
and tell us!
Many thanks to all those who reported problems, on the mailing lists or
through our Bug Tracking System. ( See http://pari.math.u-bordeaux.fr/Bugs/ )
Have fun,
P.S1: If you are running Mac OS X, please see this FAQ before reporting
problems with 'readline' (line editing):
A better fix has been proposed at
but we did not implement it yet.
P.S2: The Changelog.
Done for version 2.3.2 (released 28/03/2007):
[last column crossreferences current development release 2.4.1]
1- [Cygwin] missing -L... -lgmp when compiling with gmp. [F2]
2- ispower(522^3) -> 0 [ looked like a 7th power to is_357_power(), which
then forgot to test for cubes ] [#506] [F3]
3- when nf.disc < 0, nf.diff was an incorrect PARI ideal [#510] [F6]
4- nf.codiff was only correct up to multiplication by some rational [F7]
number (a divisor of nf.disc) [#510]
5- inaccuracy (>= 2ulp) in [cached] log(2) [#498] [F8]
6- exp, sinh, asinh, tanh, atanh were inaccurate near 0 [F9]
7- [GMP kernel] forvec(x=[[-1,0]],print(x)) --> error [#509] [F10]
[ 'resetloop' failed when passing through '0' ]
8- nfbasistoalg(nfinit(y),x) created an invalid t_POLMOD [F11]
9- incorrect result in ZX_resultant (accuracy loss computing bound) [F12]
10- [Configure] gcc-specific flags were used on linux/freebsd/cygwin,
even when __gnuc__ was unset [F14]
11- factor( pure power FqX ) --> SEGV [F15]
12- [GMP kernel] polrootsmod(f, 4) --> wrong result [ low level t_INT [F16]
manipulation not using the int_* macros ]
13- polrootspadic(f, 2, r) --> some roots would be found twice [F17]
[ due to FpX_roots(f, 4) called ] [#521]
14- ??sumalt doesn't compile: in GPHELP, treat \ref in verbatim [F18]
15- matinverseimage returned [;] when no pre-image exists. Conform to [F20]
the docs: "an empty vector or matrix", depending on the input types.
16- 3.5 % 2 --> error [ should be 0.5 ] [F22]
17- sin(1/10^100) --> 0e-28 [ also affected cos,tan,cotan ] [F23]
18- check that k >= 0 in thetanullk [#531] [F26]
19- isprime(-2,1) returned 1 [F27]
20- Fix 'Not enough precision in thue' error [F28]
BA 21- [OS X] Fix kernel detection on x86_64-darwin [F29]
BA 22- [Configure] spectacular failure to recognize gcc under some locales[F34]
23- polredabs(x^8+2*x^6-5*x^4+78*x^2+9) was incorrect [ missed
x^8+6*x^6-x^4+54*x^2+25 due to incorrect "skipfirst" ] [F35]
24- typo in resmod2n (specific to GMP kernel) [#546] [F36]
25- nfmodprinit could create FpX's which were not reduced mod p [F40]
26- O(x^3)^(1/2) was O(x^2) instead of O(x) [F41]
27- substpol(x^-2+O(x^-1),x^2,x) --> error [#555] [F43]
Karim Belabas Tel: (+33) (0)5 40 00 26 17
Universite Bordeaux 1 Fax: (+33) (0)5 40 00 69 50
351, cours de la Liberation http://www.math.u-bordeaux.fr/~belabas/
F-33405 Talence (France) http://pari.math.u-bordeaux.fr/ [PARI/GP]
Summer 2012
Seminar on Blow-up in Dynamical Systems
Prof. Dr. Bernold Fiedler, Dr. Stefan Liebscher, Hannes Stuke
July 14-18, 2012. A preliminary discussion will take place on 08.05.2012 after the lecture.
Recent PDE research has been dominated mainly by questions of existence and regularity of solutions of certain PDEs. But in applications, such as physics, there naturally arise PDEs that possess
solutions which do not stay globally bounded. In this seminar we want to study various aspects of solutions that “blow up”, that is to say, cease to exist in a certain sense.
We want to address questions like
• What is the mathematical framework to define “blow-up”?
• Are there different types? Can we categorize them?
• How does the blow-up look like? Can we quantify it?
• Can we continue solutions after the blow-up?
Central examples will be reaction-diffusion equations of the types u_t = Δu + u^p and u_t = Δu + |u|^(p-1)u.
• A. Pazy: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, 1983
• P. Quittner / P. Souplet: Superlinear Parabolic Problems - Blow-up, Global Existence and Steady States. Springer Basel, 2007
• B. Hu: Blow-up Theories for Semilinear Parabolic Equations. Springer, 2011
• M. Fila / H. Matano: Blow-up in nonlinear heat equations from the dynamical systems point of view. Handbook of Dynamical Systems 2, Chapter 14, 2002
• Y. Giga / R. Kohn: Asymptotically Self-similar Blow-up of Semilinear Heat Equations. Communications on Pure and Applied Mathematics, 1985
• S. Fillipas: On the blow-up of multidimensional semilinear heat equations. IMA Preprint Series 798, 1991
• B. Fiedler / H. Matano: Global Dynamics of Blow-up Profiles in One-dimensional Reaction Diffusion Equations. Journal of Dynamics and Differential Equations, 2007
• H. Fujita: On the blowing up of solutions of the Cauchy problem for u_t = Δu + u^(1+α). 1966
Target audience
Students of semesters 6-10, students of the BMS (talks can be given in German and/or English)
Experience in Dynamical Systems or Partial Differential Equations
Depending on prior knowledge and interests of the participants, talks will cover a selection of the following topics.
PDE basics
• References: Evans; Gilbarg-Trudinger; Jost; or other books on PDE
• Scope: Maximum principle; super- and subsolutions; Sobolev spaces; weak solutions
Local existence results
• References: Pazy
• Scope: Well-posedness; classical, distributional, mild, weak solutions; analytic semigroups; maximal time of existence
Simple blow-up [1-2 Talks]
• References: Hu, Chapter 5.1-5.3; Quittner / Souplet, Chapter II.17
• Scope: Kaplan eigenvalue method, Concavity method, Comparison principle, Starting above positive equilibrium
Critical exponent
• References: Quittner / Souplet, Chapter II.18; Fujita
• Scope: Fujita type results, Dependence of the dimension
Diffusion vs Blow-up
• References: Quittner / Souplet, Chapter II.19.3
• Scope: Diffusion eliminates blow-up, Discussion of examples
Blow-up set
• References: Quittner / Souplet, Chapter II.24; Hu, Chapter 7
• Scope: What is a blow-up set? What does it look like: discrete, compact?
Blow-up rate
• References: Hu, Chapter 7; Quittner / Souplet Chapter II.23
• Scope: Lower and upper bound of the blow-up rate based on scaling methods, Blow-up rates for examples
Shape of blow-up via energy estimates
• References: Hu, Chapter 8; Giga / Kohn
• Scope: Similarity variables, Backward self-similarity, Asymptotics of the blow-up solutions, Stationary solutions and blow-up
Shape of blow-up: center-manifold approach
• References: Fila / Matano; Fillipas
• Scope: Center manifolds, slaving principle, Center dynamics and shapes
Shape of blow-up: Prescribed shape in unstable manifolds
• References: Fiedler / Matano
• Scope: Understanding the paper
Beyond blow-up
• References: Fila / Matano; Quittner / Souplet, Chapter II.27
• Scope: Complete vs incomplete blow-up, Continuation of blow-up solutions
Estimate mean and std dev?
April 8th 2009, 11:50 AM #1
Mar 2009
A random sample of 13 observations from a population yielded $\sum x = 488.8$ and $\sum x^2 = 18950.2$. Estimate $\mu$ and $\sigma$.
I can't find this in my notes. I think $\sum x = 488.8$ would be the sum of the sample; not sure where to go from there.
Thank you
$\overline{x} = E(X) = \frac{\sum_{i=1}^n x_i}{n}$
$s_X^2 = Var(X) = \frac{\sum_{i=1}^n (x_i - \overline{x})^2}{n} = \frac{\left(\sum_{i=1}^n x_i^2\right) - 2 \overline{x} \left( \sum_{i=1}^n x_i \right) + n \overline{x}^2}{n}$.
$= \frac{\sum_{i=1}^n x_i^2}{n} - 2 \overline{x} \frac{\sum_{i=1}^n x_i}{n} + \overline{x}^2$
$= \frac{\sum_{i=1}^n x_i^2}{n} - \overline{x}^2$.
Substitute your data and do the calculations.
$\overline{x} = E(X) = \frac{\sum_{i=1}^n x_i}{n}$
$s_X^2 = Var(X) = \frac{\sum_{i=1}^n (x_i - \overline{x})^2}{n} = \frac{\left(\sum_{i=1}^n x_i^2\right) - 2 \overline{x} \left( \sum_{i=1}^n x_i \right) + n \overline{x}^2}{n}$.
$= \frac{\sum_{i=1}^n x_i^2}{n} - 2 \overline{x} \frac{\sum_{i=1}^n x_i}{n} + \overline{x}^2$
$= \frac{\sum_{i=1}^n x_i^2}{n} - \overline{x}^2$.
Substitute your data and do the calculations.
Ok, but I don't understand these formulas. What does the notation $\frac{\sum_{i=1}^n x_i}{n}$ mean, with the $n$ over the $i$?
Is the first formula for $\mu$ and the second for $\sigma$?
Could you possibly give me a head start or an example maybe?
Thanks for your help.
$\sum_{i=1}^n x_i$ is the sum of all the data.
$\sum_{i=1}^n x_i^2$ is the sum of the squares of all the data.
$n$ is the number of data points.
You have been given the value of these three things. Substitute them into the formulae I gave you.
Ok, I think I understand. Thank you for the help.
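Carrying out the substitution with the thread's numbers (n = 13, sum of x = 488.8, sum of x² = 18950.2), using the divide-by-n formulas quoted above; a sample estimate would divide the variance by n - 1 instead, giving a slightly larger answer:

```python
from math import sqrt

def estimate_from_sums(n, sum_x, sum_x2):
    """Mean and standard deviation from n, sum(x), and sum(x^2),
    using the formulas in the thread (population form, divide by n)."""
    mean = sum_x / n
    variance = sum_x2 / n - mean ** 2
    return mean, sqrt(variance)

mu, sigma = estimate_from_sums(13, 488.8, 18950.2)   # mu = 37.6, sigma is about 6.63
```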
Notes on Batteries
Duration under Load
Calculating how long a battery will last at a given rate of discharge is not as simple as "amp-hours" - battery capacity decreases as the rate of discharge increases. For this reason, battery
manufacturers prefer to rate their batteries at very low rates of discharge, as they last longer and get higher ratings that way. That is fine if you're building a low-power application, but if your
contraption really "sucks juice", you won't be getting the amp-hours you paid for.
The formula for calculating how long a battery will really last has the charming name of "Peukert's Formula". It is...
T = C / I^n
where C is theoretical capacity (in amp-hours, equal to actual capacity at one amp), I is current (in amps), T is time (in hours), and n is the Peukert number for the battery. The Peukert number
shows how well the battery holds up under high rates of discharge - most range from 1.1 to 1.3, and the closer to 1, the better. The Peukert number is determined empirically, by testing the battery
at different rates. See Uve's Battery Page for a Javascript Peukert calculator.
You can see from the graph that a battery with a Peukert's number of 1.3 has half the capacity of a battery with a Peukert's number of 1.1 at a discharge rate of 25 amps, even though they both have
the same theoretical capacity (and are probably rated the same by the manufacturers).
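A small Python stand-in for such a Peukert calculator (the function names are mine). It also reproduces the graph's comparison: at 25 amps, a battery with a Peukert number of 1.3 delivers roughly half the capacity of one with 1.1:

```python
def run_time_hours(capacity, amps, peukert):
    """Peukert's formula T = C / I**n, with C the theoretical capacity
    (equal to the actual capacity at a one-amp draw)."""
    return capacity / amps ** peukert

def effective_capacity(capacity, amps, peukert):
    """Amp-hours actually delivered at the given discharge current."""
    return amps * run_time_hours(capacity, amps, peukert)

# Same theoretical capacity, discharged at 25 A, Peukert 1.1 vs 1.3:
ratio = effective_capacity(100, 25, 1.1) / effective_capacity(100, 25, 1.3)
# ratio = 25**0.2, about 1.9 -- roughly "half the capacity"
```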
Depth of Discharge and Life Expectancy
Batteries don't last forever - their lifetimes are measured in cycles, or how many times they can be discharged and recharged before they will no longer take a full charge. The depth of discharge
(D.O.D.) has a major effect on the life expectancy of a battery - discharging only 80% of the total capacity of the battery will typically get you 25% more cycles than total discharges, and
discharging to only 20% will make the battery last essentially forever. Car batteries, however, have to be treated differently - they're not designed to discharge even 20%, and will be damaged if
they're deeply discharged. A "deep cycle" battery, on the other hand, can typically survive 400 full discharges.
Multiple Batteries
Very often, one battery won't do the trick - or more likely, you don't have the one that will do the trick, so you're stuck with multiple small batteries.
Hooking batteries in parallel will give you the same voltage as a single battery, but with a Ah and current carrying capacity equal to the sum of the capacities of all the batteries. For example,
three 12v 20 Ah batteries in parallel will give you 12v 60 Ah. If each battery could put out 200 amps max, three in parallel could put out 600 amps max.
Batteries in Parallel
Hooking the batteries in series will give you a voltage equal to the total voltage of all the batteries, but the Ah and current carrying capacity of only one. For example, three 12v 20 Ah batteries
in series will give you 36v 20 Ah. If each battery could put out 200 amps max, then three in series will put out only 200 amps max.
Batteries in Series
In other words, you can combine voltage, or capacity, but not both.
Current draw is divided up amongst the batteries the same way that capacity is combined. In parallel, each battery only need supply a fraction of the total current drawn by the load; in series, each
battery must supply the full current. Thus, a motor drawing 150 amps from three 12 volt batteries in parallel will draw 50 amps from each.
Electrical power is measured in watts, and is equal to voltage times current:
P = E × I
Notice that while you can combine voltage by putting batteries in series, or combine Ah and current carrying capacity by putting them in parallel, you cannot effect the total power required of the
batteries. 12v × 600 A = 36v × 200 A = 72,000 watts either way.
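The series/parallel rules above are easy to encode. A minimal sketch for packs of identical batteries (names are mine):

```python
def parallel(volts, amp_hours, count):
    """Identical batteries in parallel: same voltage, capacities add."""
    return volts, amp_hours * count

def series(volts, amp_hours, count):
    """Identical batteries in series: voltages add, capacity of one."""
    return volts * count, amp_hours
```

Three 12 V, 20 Ah batteries give (12, 60) in parallel and (36, 20) in series; either way the pack stores the same 720 watt-hours, which is the "can't combine both" point in another guise.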
Sample Calculation
Consider an electric car with ten (12 volt, 75 Ah theoretical capacity, 1.1 Peukert) batteries and a 20 hp electric motor that draws 155 amps at 5000 rpm at 120 volts.
As the ten batteries are in series, the current draw is not distributed among the batteries: each battery must provide the full 155 amps to feed the motor. Using the above calculator to figure the
capacity of a battery at 155 amps, we get 45.2 Ah. The 100% discharge time will be 45.2 Ah ÷ 155 amps = 0.29 hours, or 17 minutes, so the 80% discharge time will be 14 minutes. If the car is geared
to travel at 60 mph at 5000 rpm (and assuming it can reach 60 mph on only 20 hp, which it probably can), it's range will be 14 miles.
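The whole example can be checked in a few lines (numbers taken from the text above):

```python
# Ten 12 V batteries in series: each must supply the motor's full 155 A.
C, I, n = 75.0, 155.0, 1.1      # theoretical Ah, current draw, Peukert number

t_full = C / I ** n             # time to 100% discharge, in hours (~0.29 h)
capacity = I * t_full           # effective capacity at 155 A (~45 Ah)
t_80 = 0.8 * t_full             # time to 80% discharge (~14 minutes)
range_miles = 60.0 * t_80       # range at 60 mph (~14 miles)
```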
© 2003 W. E. Johns
matrix multiplication
, 1984
Cited by 87 (6 self)
Given a primitive element g of a finite field GF(q), the discrete logarithm of a nonzero element u ∈ GF(q) is that integer k, 1 ≤ k ≤ q - 1, for which u = g^k. The well-known problem of computing
discrete logarithms in finite fields has acquired additional importance in recent years due to its applicability in cryptography. Several cryptographic systems would become insecure if an efficient
discrete logarithm algorithm were discovered. This paper surveys and analyzes known algorithms in this area, with special attention devoted to algorithms for the fields GF(2^n). It appears that in
order to be safe from attacks using these algorithms, the value of n for which GF(2^n) is used in a cryptosystem has to be very large and carefully chosen. Due in large part to recent discoveries,
discrete logarithms in fields GF(2^n) are much easier to compute than in fields GF(p) with p prime. Hence the fields GF(2^n) ought to be avoided in all cryptographic applications. On the other hand, ...
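As a concrete illustration of the problem this survey studies, here is Shanks' generic baby-step giant-step method, which solves g^k = u in any group in roughly sqrt(q) operations. It is not one of the index-calculus methods the survey emphasizes; this sketch assumes Python 3.8+ for `math.isqrt` and the modular inverse via three-argument `pow` with a negative exponent:

```python
from math import isqrt

def baby_step_giant_step(g, u, q):
    """Find k with g**k == u (mod q).  Time and space are O(sqrt(q))."""
    m = isqrt(q) + 1
    baby = {pow(g, j, q): j for j in range(m)}   # baby steps: g^j
    giant = pow(g, -m, q)                        # g^(-m) mod q (Python 3.8+)
    gamma = u % q
    for i in range(m):                           # giant steps: u * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % q
    return None                                  # no solution exists
```

The returned exponent satisfies g^k = u mod q, though it need not equal any particular preselected exponent when g is not a primitive element.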
- In Proc. 44th ACM Symposium on Theory of Computation , 2012
Cited by 39 (5 self)
We develop new tools for analyzing matrix multiplication constructions similar to the Coppersmith-Winograd construction, and obtain a new improved bound on ω < 2.3727.
- SIAM J. Sci. Stat. Comput , 1988
Cited by 32 (2 self)
The Cray-2 is capable of performing matrix multiplication at very high rates. Using library routines provided by Cray Research, Inc., performance rates of 300 to 425 MFLOPS can be obtained on a single
processor, depending on system load. Considerably higher rates can be achieved with all four processors running simultaneously. This article describes how matrix multiplication can be performed even
faster, up to twice the above rates. This can be achieved by (1) employing Strassen's matrix multiplication algorithm to reduce the number of floating-point operations performed and (2) utilizing local
memory on the Cray-2 to avoid performance losses due to memory bank contention. The numerical stability and potential for parallel application of this procedure are also discussed.
- SIAM J. Comput , 1988
Cited by 31 (5 self)
A new parallel algorithm is given to evaluate a straight line program. The algorithm evaluates a program over a commutative semi-ring R of degree d and size n in time O(log n(log nd)) using M(n)
processors, where M(n) is the number of processors required for multiplying n × n matrices over the semi-ring R in O(log n) time. Appears in SIAM J. Comput., 17/4, pp. 687--695 (1988).
, 1988
Cited by 29 (2 self)
An efficient context-free parsing algorithm is presented that can parse sentences with unknown parts of unknown length. It produces in finite form all possible parses (often infinite in number)
that could account for the missing parts. The algorithm is a variation on the construction due to Earley. However, its presentation is such that it can readily be adapted to any chart parsing schema
(top-down, bottom-up, etc...).
- In FOCS ’03: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science , 2003
- Communications of the ACM , 1983
Cited by 17 (0 self)
foremost recognition of technical contributions to the computing community. The citation of Cook's achievements noted that "Dr. Cook has advanced our understanding of the complexity of
computation in a significant and profound way. His seminal paper, The Complexity of Theorem Proving Procedures, presented at the 1971 ACM SIGACT Symposium on the Theory of Computing, laid the
foundations for the theory of NP-completeness. The ensuing exploration of the boundaries and nature of the NP-complete class of problems has been one of the most active and important research
activities in computer science for the last decade. Cook is well known for his influential results in fundamental areas of computer science. He has made significant contributions to complexity
theory, to time-space tradeoffs in computation, and to logics for programming languages. His work is characterized by elegance and insights and has illuminated the very nature of computation."
During 1970-1979, Cook did extensive work under grants from the
- In Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA ’12 , 2012
Cited by 15 (13 self)
Parallel matrix multiplication is one of the most studied fundamental problems in distributed and high performance computing. We obtain a new parallel algorithm that is based on Strassen’s fast
matrix multiplication and minimizes communication. The algorithm outperforms all known parallel matrix multiplication algorithms, classical and Strassen-based, both asymptotically and in practice. A
critical bottleneck in parallelizing Strassen’s algorithm is the communication between the processors. Ballard, Demmel, Holtz, and Schwartz (SPAA’11) prove lower bounds on these communication costs,
using expansion properties of the underlying computation graph. Our algorithm matches these lower bounds, and so is communication-optimal. It exhibits perfect strong scaling within the maximum
possible range.
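For readers new to the topic, the kernel behind all the Strassen-based results above is the 2×2 identity that trades 8 multiplications for 7 (at the cost of extra additions); applied recursively to matrix blocks it yields the O(n^2.81) bound. A scalar sketch of one level, not the papers' parallel algorithms:

```python
def strassen_2x2(A, B):
    """One level of Strassen's algorithm on 2x2 matrices:
    7 multiplications instead of the classical 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Replacing the scalars with sub-blocks and recursing gives the classical fast algorithm; the papers above are about doing that recursion with minimal communication between processors.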
The College Mathematics Journal - November 1999
Contents for November 1999
Things I Have Learned at the AP Reading
by Dan Kennedy
Elementary calculus contains subtleties, which sometimes even those who construct problems for the calculus Advanced Placement test can overlook. You may be able to think of an example of a
function whose derivative approaches no limit as x approaches zero but has a derivative at zero, but you probably can't think of a function whose derivative at zero is 0 and whose derivative is
positive on either side of zero but which doesn't have an inflection point at zero.
Minimizing Aroma Loss
by Robert Barrington Leigh and Richard Travis Ng
Every time you open a coffee can, some of the aroma (which smells much better than the coffee tastes) escapes. Naturally, you would like to make the loss as small as possible. The authors,
neither of whom is out of high school yet, show how.
Recounting Fibonacci and Lucas Identities
by Arthur T. Benjamin and Jennifer J. Quinn
As we all know, there are almost infinitely many identities involving Fibonacci and Lucas numbers (only for binomial coefficients is the number larger) and they can be established in a large
number of ways--induction, the Binet formulas, generating functions, determinants, and so on. The authors give another way of looking at them, combinatorically, and show that many can be reduced
almost to proofs without words thereby.
Do Most Cubic Graphs Have Two Turning Points?
by Robert Fakler
Well, they tend to when teachers draw pictures of them on the board, but no conclusions can be drawn from that. If they could, we would conclude that most lines have positive slope and most
parabolas open up. If your intuition tells you that the answer to the question is "yes," it is operating properly. In fact, almost all cubics (in the proper sense) have relative maxima and minima.
Folding Stars
by Charles Waiveris and Yuanqian Chen
When can you fold paper to make a star? Not all of the time, nor none of the time. This paper shows when.
The Effects of a Stiffening Spring
by K. E. Clark and S. Hill
The coupled-spring problem is a staple of differential equations courses. What would happen if one of the springs got stiffer and stiffer? Intuition tells us that the system would act more and
more like one with a single spring. Once again intuition is vindicated, though, as is often the case, the verification is not trivial.
Fallacies, Flaws, and Flimflam
edited by Ed Barbeau
A proof that both Cauchy and Schwartz had the inequality sign in the Cauchy-Schwartz inequality going in the wrong direction, and other surprising results.
Classroom Capsules
From Square Roots to n-th Roots: Newton's Method in Disguise
by W. M. Priestley
An appealing way to approximate n-th roots.
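The capsule's title suggests the standard iteration: for f(x) = x^n - a, Newton's method gives x ← ((n - 1)x + a/x^(n-1))/n, which reduces to the familiar divide-and-average rule when n = 2. A sketch (the capsule's actual approach may differ):

```python
def nth_root(a, n, steps=60):
    """Approximate a**(1/n) for a > 0 by Newton's method on x**n - a."""
    x = max(a, 1.0)                  # safe starting point for a > 0
    for _ in range(steps):
        x = ((n - 1) * x + a / x ** (n - 1)) / n
    return x
```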
Amortization: An Application of Calculus
by Richard E. Klima and Robert G. Donnelly
An opportunity to construct a mathematical model of a fascinating subject, namely money.
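One standard model from such a unit is the level-payment formula P = Lr/(1 - (1 + r)^(-n)). A sketch, not necessarily the authors' treatment, which derives the model via calculus:

```python
def monthly_payment(principal, annual_rate, months):
    """Level payment amortizing `principal` at a nominal annual rate,
    compounded monthly, over `months` payments."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months
    return principal * r / (1.0 - (1.0 + r) ** -months)
```

A $1000 loan at 12% over 12 months needs about $88.85 a month; simulating the balance month by month ends at zero, which is a good self-check on the formula.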
Reexamining the Catenary
by Paul Cella
Texts commonly throw up their hands when confronted with a cable hanging from supports at different levels. They need not.
Second Order Iterations
by Joseph J. Roseman and Gideon Zwas
If our power to divide were destroyed, all would not be lost: there exist iterative procedures not involving division that calculate quotients. The second-order procedure is better than both the
first- and third-order methods.
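One classic second-order, division-free scheme is the Newton-Raphson reciprocal iteration x ← x(2 - ax), which doubles the number of correct digits per step and lets a/b be computed as a · (1/b). A sketch (the seed choice is mine and only suits moderate a):

```python
def reciprocal(a, steps=40):
    """Approximate 1/a using only multiplication and subtraction:
    x <- x * (2 - a * x).  Converges whenever |1 - a*x0| < 1."""
    x = 0.1 if a > 1 else 1.0    # crude seed, fine for roughly 0 < a < 20
    for _ in range(steps):
        x = x * (2.0 - a * x)
    return x
```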
Software Reviews
The New Mathwright Library, reviewed by Dan Kalman
Problems and Solutions
Media Highlights
Including who first found the palindrome "sex at noon taxes" and why students should major in mathematics.
Book Review
State Mathematical Standards, reviewed by Mark Saul
Lecture 2: Differential Eqns and Difference Eqns
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare offer high quality educational resources for free. To make a donation or to view
additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR STRANG: Starting with a differential equation. So key point here in this lecture is how do you start with a differential equation and end up with a discrete problem that you can solve? But
simple differential equation. It's got a second derivative and I put a minus sign for a reason that you will see. Second derivatives are essentially negative definite things so that minus sign is to
really make it positive definite. And notice we have boundary conditions that at one end the solution is zero, at the other end it's zero. So this is fixed-fixed. And it's a boundary value problem.
That's different from an initial value problem. We have x space, not time. So we're not starting from some thing and oscillating or growing or decaying in time. We have a fixed thing. Think of an
elastic bar. An elastic bar fixed at both ends, maybe hanging by its own weight. So that load f(x) could represent the weight.
Maybe the good place to sit is over there, there are tables just, how about that? It's more comfortable.
So we can solve that equation. Especially when I change f(x) to be one. As I plan to do. So I'm going to change, I'm going to make it-- it's a uniform bar because there's no variable coefficient in
there and let me make it a uniform load, just one. So it actually shows you that, I mentioned differential equations and we'll certainly get onto Laplace's equation, but essentially our differential
equations will not-- this isn't a course in how to solve ODEs or PDEs. Especially not ODEs. It's a course in how to compute solutions. So the key idea will be to replace the differential equation by
a difference equation. So there's the difference equation. And I have to talk about that. That's the sort of key point. That up here you see what I would call a second difference. Actually with a
minus sign. And on the right-hand side you see the load, still f(x).
Can I move to this board to explain differences? Because this is like, key step is given the differential equation replace it by difference equation. And the interesting point is you have many
choices. There's one differential equation but even for a first derivative, so this if you remember from calculus, how did you start with the derivative? You started by something before going to the
limit. h or delta x goes to zero in the end to get the derivative. But this was a finite difference. You moved a finite amount. And this is the one you always see in calculus courses. U(x + h)-U(x)
just how much did that step go. You divide by the delta x, the h and that's approximately the derivative, U'(x). Let me just continue with these others. I don't remember if calculus mentions a
backward difference. But you won't be surprised that another possibility, equally good more or less, would be to take the point and the point before, take the difference, divide by delta x. So again
all these approximate U'.
And now here's one that actually is really important. A center difference. It's the average of the forward and back. If I take that plus that, the U(x)'s cancel and I'm left with, I'm centering it.
This idea of centering is a good thing actually. And of course I have to divide by 2h because this step is now 2h, two delta x's. So that again is going to represent U'. But so we have a choice if we
have a first derivative. And actually that's a big issue. You know, one might be called upwind, one might be called downwind, one may be called centered. It comes up constantly in aero and mechanical
engineering, everywhere. You have these choices to make. Especially for the first difference. We don't have, I didn't allow a first derivative in that equation because I wanted to keep it symmetric
and first derivatives, first differences tend to be anti-symmetric. So if we want to get our good matrix K, and I better remember to divide by h squared because the K just has those that I introduced
last time and will repeat, the K just has the numbers -1, 2, -1.
Now, first point before we leave these guys what's up with them? How do we decide which one is better? There's something called the order of accuracy. How close is the difference to the derivative?
And the answer is the error is of size h. So I would call that first order accurate. And I can repeat here but the text does it, how you recognize what this is the, sort of local error, truncation
error, whatever, you've chopped off the exact answer and just did differences. This one is also order of h. And in fact, the h terms, the leading error, which is going to multiply the h, has opposite
sign in these two and that's the reason center differences are great. Because when you average them you center things. This is correct to order h squared. And I may come back and find out why that h
squared term is there.
Maybe I'll do that. Yeah. Why don't I just? Second differences are so important. Why don't we just see. And center differences. So let me see. How do you figure U(x + h)? This is a chance to remember
something called Taylor series. But that was in calculus. If you forgot it, you're a normal person. So but what does it say? That's the whole point of calculus, in a way. That if I move a little bit,
I start from the point x and then there's a little correction and that's given by the derivative and then there's a further correction if I want to go further and that's given by half of h squared,
you see the second order correction, times the second derivative. And then, of course, more. But that's all you ever have to remember. It's pretty rare. Second order accuracy is often the goal in
scientific computing. First order accuracy is, like, the lowest level. You start there, you write a code, you test it and so on. But if you want production, if you want accuracy, get to second order
if possible.
Now, what about this U(x - h)? Well, that's a step backwards, so that's U(x). Now the step is -h, but then when I square that step I'm back to +h squared, U''(x) and so on. Ooh! Am I going to find
even more accuracy? I could tell you what the next turn is. Plus h cubed upon six, that's three times two times one, U''' . And this would be, since this step is -h now, it would be -h cubed upon six
U'''. So what happens when I take the difference of these two? Remember, the center difference subtracts this from this. The U(x) terms cancel, the first derivative terms add up to 2hU', and the h squared terms cancel. That leaves two of these h cubed terms, so I guess that we really have an h cubed over three, U'''. And now when I divide by the 2h, can I just divide by 2h here? Oh yeah, it's coming out right, divide by 2h, divide this by 2h, that'll make it an h squared over six.
I've done what looks like a messy computation. I'm a little sad to start a good lecture, important lecture by such grungy stuff. But it makes the key point. That the center difference gives the
correct derivative with an error of order h squared, where for the one-sided first differences the error would have been of order h. And we can test it. Actually we'll test it. Okay for that? This is first
differences. And that's a big question; what do you replace the first derivative by if there is one? And you've got these three choices. And usually this is the best choice.
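Those order-of-accuracy claims are easy to check numerically. Here is a small Python sketch, not from the lecture; the test function sin and the point x = 1 are arbitrary choices:

```python
import numpy as np

# Three finite-difference approximations to U'(x)
def forward(f, x, h):  return (f(x + h) - f(x)) / h          # error O(h)
def backward(f, x, h): return (f(x) - f(x - h)) / h          # error O(h)
def centered(f, x, h): return (f(x + h) - f(x - h)) / (2*h)  # error O(h^2)

f, x, exact = np.sin, 1.0, np.cos(1.0)
for h in (0.1, 0.05, 0.025):
    errs = [abs(d(f, x, h) - exact) for d in (forward, backward, centered)]
    print(f"h={h:<6} forward {errs[0]:.2e}  backward {errs[1]:.2e}  centered {errs[2]:.2e}")
```

Halving h roughly halves the one-sided errors but quarters the centered error, which is exactly the first order versus second order distinction in the lecture.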
Now to second derivatives. Because our equation has got U'' in it. So what's a second derivative? It's the derivative of the derivative. So what's the second difference? It's the difference, first
difference of the first difference. So the second difference, the natural second difference would be-- so now let me use this space for second differences. Second differences. I could take the
forward difference of the backward difference. Or I could take the backward difference of the forward difference. Or you may say why don't I take the center difference of the center difference. All
those, in some sense it's delta squared, but which to take? Well actually those are the same and that's the good choice, that's the 1, -2, 1 choice. So let me show you that. Let me say what's the
matter with that. Because now having said how great center differences are, first differences, why don't I just repeat them for second differences?
Well the trouble is, let me say in a word without even writing, well I could even write a little, the center difference, suppose I'm at a typical mesh point here. The center difference is going to
take that value minus that value But then if I take the center difference of that I'm going to be out here. I'm going to take this value, this value, and this value. I'll get something correct. Its
accuracy will be second order, good. But it stretches too far. We want compact difference molecules. We don't want this one, minus two of this, plus one of that. So this would give us a 1, 0, -2 , 0,
1. I'm just saying this and then I'll never come back to it because I don't like this one, these guys give 1, -2, 1 without any gaps. And that's the right choice. And that's the choice made here.
So I'm not thinking you can see it in your head, the difference of the difference. But well, you almost can. If I take this, yeah. Can you sort of see this without my writing it? If I take the
forward difference and then I subtract the forward difference to the left, do you see that I'll have minus two. So there is what I started with. I subtract U(x)-U(x-h) and I get two -U(x)'s. This is
what I get. Now I'm calling that u_i. I'd better make the minus sign completely clear. The forward difference of the backward difference, what this leads to is 1, -2, 1. That's the second difference.
Very important to remember, the second difference of a function is the function, the value ahead, minus two of the center, plus one of the left. It's centered obviously, symmetric, right? Second
differences are symmetric. And because I want a minus sign I want minus the second difference and that's why you see here -1, 2, -1 . Because I wanted positive twos there. Are you ok? This is the
natural replacement for -U''. And I claim that this second difference is like the second derivative, of course.
And why don't we just check some examples to see how like the second derivative it is. So I'm going to take the second difference or some easy functions. It's very important that these come out so
well. So I'm going to take the second difference. I'm going to write it as sort of a matrix. So this is like the second difference. Yeah, because this is good. I'm inside the region, here. I'm not
worried about the boundaries now. Let me just think of myself as inside. So I have second differences and suppose I'm applying it to a vector of all ones. What answer should I get? So if I think of
calculus it's the second derivative of one, of the constant function. So what answer am I going to get? Zero. And do I get zero? Of course. I get zero. Right? All these second differences are zero.
Because I'm not worrying about the boundary yet. So that's like, check one. It passed that simple test.
Now let me move up from constant to linear. And so on. So let me apply second differences to a vector that's growing linearly. What answer do I expect to get for that? So remember I'm doing second
differences, like second derivatives, or minus second derivatives, actually. So what do second derivatives do to a linear function? If I take a straight line I take the-- sorry, second derivatives.
If I take second derivatives of a linear function I get? Zero, right. So I would hope to get zero again here and I do. Right? -1+4-3=0. Minus one, sorry, let me do it here, -2+6-4. And actually,
that's consistent with our little Taylor series stuff. The function x should come out right.
Now what about-- now comes the moment. What about x squared? So I'm going to put squares in now. Do I expect to get zeroes? I don't think so. Because let me again test it by thinking about what
second derivative. So now I'm sort of copying second derivative of x squared, which is? Second derivative of x squared is? Two, right? First derivative's 2x, second derivative is just two. So it's a
constant. And remember I put in a minus sign so I'm wondering, do I get the answer minus two? All the way down. -1+8-9. Whoops. What's that? What do I get there? What do I get from that second
difference of these squares? -1+8-9 is? Minus two, good. So can we keep going? -4+18-16. What's that? -4+18-16, so I've got -20+18. I got minus two again. -9+32-25, it's right. The second
differences of the vector of squares, you could say, is a constant vector with the right number. And that's because that second difference is second order accurate. It not only got constants right
and linears right, it got quadratics right. So that's, you're seeing second differences. We'll soon see that second differences are also on the ball when you apply them to other vectors. Like vectors
of sines or vectors of cosines or exponentials, they do well. So that's just a useful check which will help us over here.
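The three hand checks, constants, linears, and squares, can be run in a couple of lines. A Python/NumPy sketch of the computations just done on the board:

```python
import numpy as np

# Interior rows of minus-the-second-difference: -u[i-1] + 2u[i] - u[i+1]
def minus_second_diff(u):
    return -u[:-2] + 2*u[1:-1] - u[2:]

i = np.arange(1.0, 8.0)                    # 1, 2, ..., 7
print(minus_second_diff(np.ones(7)))       # constants -> all zeros
print(minus_second_diff(i))                # linears   -> all zeros
print(minus_second_diff(i**2))             # squares   -> all -2, matching -(x^2)'' = -2
```

Quadratics come out exactly right because the scheme is second order, which is the point that matters in what follows.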
Okay, can I come back to the part of the lecture now? Having prepared the way for this. Well, let's start right off by solving the differential equation. So I'm bringing you back years and years and
years, right? Solve that differential equation with these two boundary conditions. How would you do that in a systematic way? You could almost guess after a while, but systematically if I have a
linear, I notice-- What do I notice about this thing? It's linear. So what am I expecting? I'm expecting, like, a particular solution that gives the correct answer one and some null space solution or
whatever I want to call it, homogeneous solution that gives zero and has some arbitrary constants in it. Give me a particular solution. So this is going to be our answer. This'll be the general
solution to this differential equation. What functions have minus the second derivative equal one, that's all I'm asking. What are they? So what is one of them? One function that has its second
derivative as a constant and that constant is minus one. So if I want the second derivative to be a constant, what am I looking at? x squared. I'm looking at x squared. And I just want to figure out
how many x squareds to get a one. So some number of x squareds and how many do I want? -1/2, good. Good. -1/2. Because x squared would give me two but I want minus one so I need -1/2. Okay that's the
particular solution.
Now throw in all the solutions, I can add in any solution that has a zero on the right side, so what functions have second derivatives equals zero? x is good. I'm looking for two because it's a
second derivative, second order equation. What's the other guy? Constant, good. So let me put the constant first, C, say, and Dx. Two constants that I can play with and what use am I going to make of
them? I'm going to use those to satisfy the two boundary conditions. And it won't be difficult. You could say plug in the first boundary condition, get an equation for the constants, plug in the
second, got another equation, we'll have two boundary conditions, two equations, two constants. Everything's going to come out. So if I plug in U(0)=0, what do I learn? C is zero, right? If I plug
in, is that right? If I plug in zero, then that's zero already, this is zero already, so I just learned that C is zero. So C is zero. So I'm down to one constant, one unused boundary condition. Plug
that in. U(1)=0 gives -1/2 plus D equals zero. What's D? It's 1/2, right. D is 1/2. So can I close this up? There's 1/2. Dx is (1/2)x. Now it just always pays to look back. At x=0, that's obviously zero. At x=1 it's zero because
those are the same and I get zero. So -1/2x squared plus 1/2x. That's the kind of differential equation and solution that we're looking for. Not complicated nonlinear stuff.
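A quick sanity check of that solution (a sketch, not from the lecture): -U'' should be 1, and both boundary values should vanish.

```python
# U(x) = -x^2/2 + x/2, the solution just derived
U = lambda x: -x**2 / 2 + x / 2
h = 1e-4
# numerical -U'' at an interior point, using the second difference from earlier
minus_Upp = -(U(0.5 + h) - 2*U(0.5) + U(0.5 - h)) / h**2
print(minus_Upp)        # close to 1 (exact for a quadratic, up to floating-point roundoff)
print(U(0.0), U(1.0))   # 0.0 0.0
```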
So now I'm ready to move to the difference equation. So again, this is a major step. I'll draw a picture of this from zero to one. And if I graph that I think I get a parabola, right? A parabola that
has to go through here. So it's some parabola like that. That would be always good, to draw a graph of the solution. Now, what do I get here? Moving to the difference equation. So that's the
equation, and notice it's boundary conditions. Those boundary conditions just copied this one because I've chopped this up. I've got i equal one, two, three, four, five and this is one, the last
point then is 6h . h is 1/6. What's going to be the size of my matrix and my vector and my unknown u here? How many unknowns am I going to have? Let's just get the overall picture right. What are the
unknowns going to be? They're going to be u_1, u_2, u_3, u_4, u_5. Those are unknown. Those will be some values, I don't know where, maybe something like this because they'll be sort of like that
one. And this is not an unknown, u_6 , this is not an unknown, u_0 , those are the ones we know. So this is what the solution to a difference equation looks like. It gives you a discrete set of
unknowns. And then, of course MATLAB or any code could connect them up by straight lines and give you a function. But the heart of it is these five values. Okay, good. And those five values come from
these equations. I'm introducing this subscript stuff but I won't need it all the time because you'll see the picture. This equation applies for i equal one up to five. Five inside points and then
you notice how when i is one, this needs u_0 , but we know u_0. And when I is five, this needs u_6 , but we know u_6 . So it's a closed five by five system and it will be our matrix. That -1, 2, -1
is what sits on the matrix. When we close it with the two boundary conditions it chops off the zero column, you could say and chops off the six column and leaves us with a five by five problem and
I guess this is a step not to jump past because it takes a little practice. You see I've written the same thing two ways. Let me write it a third way. Let me write it out clearly. So now here I'm
going to complete this matrix with a two and a minus one and a two and a minus one and now it's five by five. And those might be u but I don't know if they are so let me put in u_1, u_2, u_3, u_4,
and u_5. Oh and divide by h squared. I'll often forget that. So I'm asking you to see something that if you haven't, after you get the hang of it it's like, automatic. But I have to remember it's not
automatic. Things aren't automatic until you've done them a couple of times. So do you see that that is a concrete statement of this? This delta x squared is the h squared. And do you see those
differences when I do that multiplication that they produce those differences? And now, what's my right-hand side? Well I've changed the right-hand side to one to make it easy. So this right-hand
side is all ones. And this is the problem that MATLAB would solve, or whatever finite difference code. I've got a linear system, five by five. Fortunately, the matrix is not singular,
there is a solution.
How does MATLAB find it? It does not find it by finding the inverse of that matrix. Monday's lecture will quickly review how to solve five equations and five unknowns. It's by elimination, I'll tell
you the key word. And that's what every code does. And sometimes you would have to exchange rows, but not for a positive definite matrix like that. It'll just go bzzz, right through. When it's
tridiagonal it'll go like with the speed of light and you'll get the answer. And those five answers will be these five heights. u_1, u_2, u_3, u_4, and u_5. And we could figure it out. Actually I
think section 1.2 gives the formula for this particular model problem for any size, and particular for five by five.
And there is something wonderful for this special case. The five points fall right on the correct parabola, they're exactly right. So for this particular case when the solution was a quadratic, the
exact solution was a quadratic, a parabola, it will turn out, and that quadratic matches these boundary conditions, it will turn out that those points are right on the money. So that's, you could
call, is like, super convergence or something. I mean that won't happen every time, otherwise life would be like, too easy. It's a good life, but it's not that good as a rule. So they fall right on
that curve. And we can say what those numbers are. Actually, we know what they are. Actually, I guess I could find them. What are those numbers then? And of course, one over h squared is-- What's one
over h squared, just to not forget? One over h squared there, h is what? 1/6. Squared is going to be a 36. So if I bring it up here, bring the h squared up here, it would be times a 36. Well let me
leave it here, 36. And I'm just saying that these numbers would come out right.
Maybe I'll just do the first one. What's the exact u_1, u_2? u_1 and u_2 would be what? The exact u_1, ooh! Oh shoot, I've got to figure it out. If I plug in x=1/6... Do we want to do this? Plug in x
=1/6? No, we don't. We don't. We've got something better to do with our lives. But if we put that number in, whatever the heck it is, in this one, we would find out came out right. The fact that it
comes out right is important.
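Rather than plugging in x = 1/6 by hand, the whole five-point claim can be checked at once. A Python sketch of the solve (the lecture's MATLAB run would do the same thing):

```python
import numpy as np

n, h = 5, 1/6
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # the -1, 2, -1 matrix
u = np.linalg.solve(K / h**2, np.ones(n))            # solve (K/h^2) u = ones(5,1)
x = h * np.arange(1, n + 1)                          # mesh points h, 2h, ..., 5h
exact = -x**2 / 2 + x / 2                            # the parabola found above
print(np.max(np.abs(u - exact)))                     # ~ machine precision: right on the parabola
```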
But I'd like to move on to a similar problem. But this one is going to be free-fixed. So if this problem was like having an elastic bar hanging under its own weight and these would be the
displacements points on the bar and fixed at the ends, now I'm freeing up the top end. I'm not making u_0, zero anymore. I better maybe use a different blackboard because that's so important that I
don't want to erase it. So let me take the same problem, uniform bar, uniform load, but I'm going to fix U at one, U(1)=0, that's fixed, but I'm going to free this end. And from a differential equation
point of view, that means I'm going to set the slope at zero to be zero. U'(0)=0. That's going to have a different solution. Changing the boundary conditions is going to change the answer. Let's find
the solution. So here's another differential equation. Same equation, different boundary conditions, so how do we go? Well I had the general solution over there. It still works, right? U(x) is still
-1/2 of x squared. The particular solution that gives me the one. Plus the Cx plus D that gives me zero, one, zero for second derivatives but gives me the possibility to satisfy the two boundary
conditions. And now again, plug in the boundary conditions to find C and D. Slope is zero at zero. What does that tell me? I have to plug that in. Here's my solution, I have to take its derivative
and set x to zero. So its derivative is a -x, which is zero. The derivative of that is C and the derivative of that is zero. What am I learning from that left, the free
boundary condition? C is zero, right? C is zero because the slope here is C and it's supposed to be zero. So C is zero.
Now the other boundary condition. Plug in x=1. I want to get the answer zero. The answer I do get is minus 1/2 at x=1 , plus D . So what is D then? What's D? Let me erase that. What do I learn about
D? It's 1/2. I need 1/2. So the answer is -1/2 of x squared plus 1/2. Not 1/2x as it was over there, but 1/2. And now let's graph it. Always pays to graph these things between x equals zero and one.
What does this look like? It starts at 1/2, right? At x=0. And it's a parabola, right? It's a parabola. And I know it goes through this point. What else do I know? Slope starts at? The slope starts
at zero. The other boundary condition, the free condition at the left-hand end, so slope starts at zero, so the parabola comes down like that. It's like half a-- where that was a symmetric bit
of a parabola, this is just half of it. The slope is zero. And so that's a graph of U(x) .
Now I'm ready to replace it by a difference equation. So what'll be the difference equation? It'll be the same equation for the -u''. No change. So -u_(i+1) plus 2u_i minus u_(i-1), all over h
squared equals. I'm taking f(x) to be one, so let's stay with one. Okay, big moment. What boundary conditions? What boundary conditions? Well, this guy is pretty clear. That says u_(n+1) is zero.
What do I do for zero slope? What do I do for a zero slope? Okay, let me suggest one possibility. It's not the greatest, but one possibility for a zero slope is (u_1-u_0)/h . That's the approximate
slope, should be zero. So that's my choice for the left-hand boundary condition. It says u_1 is u_0 . It says that u_1 is u_0 .
So now I've got again five equations for five unknowns, u_1, u_2, u_3, u_4, and u_5. I'll write down what they are. Well, you know what they are. So this thing divided by h squared is all ones, just
like before. And of course all these rows are not changed. But the first row is changed because we have a new boundary condition at the left end. And it's this. So u_1, well u_0 isn't in the picture,
but previously what happened to u_0 , when i is one, I'm in the first equation here, that's where I'm looking. i is one. It had a u_0. Gone. In this case it's not gone. u_0 comes back in, u_0 is u_1.
That might-- Ooh! Don't let me do this wrong. Ah! Don't let me do it worse! All right. There we go. Good. Okay. Please, last time I videotaped lecture 10 had to fix up lecture 9, because I don't go
in. Professor Lewin in the physics lectures, he cheats, doesn't cheat, but he goes into the lectures afterwards and fixes them. But you get exactly what it looks like. So now it's fixed, I hope. But
don't let me screw up.
So now, what's on this top row? When i is one. I have minus u_2, that's fine. I have 2u_1 as before, but now I have a minus u_1 because u_0 and u_1 are the same. So I just have a one in there. That's
our matrix that we called T. The top row is changed, the top row is free. This is the equation T * U divided by h squared is the right-hand side ones. ones(5), I would call that. Properly I would
call it ones(5,1), because the MATLAB command ones wants the dimensions, and this is a matrix with five rows, one column. But it's T, that's the important thing. And would you like to guess what the
solution looks like? In particular, is it again exactly right? Is it right on the money? Or if not, why not? The computer will tell us, of course. It will tell us whether we get agreement with this.
This is the exact solution here and this is the exact parabola starting with zero slope. So but I solved this problem. Oh, let me see, I didn't get u_1, u_2 to u_5 in there. So it didn't look right.
u_1, u_2, u_3, u_4, and u_5. And that's the right-hand side. Sorry about that. So that's T divided by h squared, T with that top row changed times U is the right-hand side.
By the way, I better just say what was the reason that we came out exactly right on this problem? Would we come out exactly right if it was some general load f(x) ? No. Finite differences can't do
miracles. They have no way to know what's happening to f(x) between the mesh points, right? If I took this to be f(x) and took this at the five points, at these five points, this wouldn't know what f
(x) is in between, couldn't be exactly right. It's exactly right in this lucky special case because, of course, it has the right ones. But also because, the reason it's exactly right is that second
differences of quadratics are exactly right. That's what we checked on this board that's underneath there. Second differences of squares came out perfectly. And that's why the second differences of
this guy give the right answer, so that guy is the answer to both the differential and the difference equation. I had to say that word about why was that exactly right. It was exactly right because
second differences of squares are exactly right.
Now, again, we have second differences of squares. So you could say exactly right or no? What are you betting? How many think, yeah, it's going to come out on the parabola? Nobody. Right. Everybody
thinks there's something going to miss here. And why? Why am I going to miss something? Yes? It's a first order approximation at the left boundary. Exactly right, exactly right. It's a first order
approximation to take this and I'm not going to get it right. That first order approximation, that error of size h is going to penetrate over the whole interval. It'll be biggest here. Actually I
think it turns out, and the book has a graph, I think it comes out wrong by 1/2h there. 1/2h, first order. And then it, of course, it's discrete and of course it's straight across because that's the
boundary condition, right? And then it starts down, it gets sort of closer, closer, closer and gets, of course, that's right at the end. But there's an error. The difference between U, the true U and
the computed U is of order h. So you could say alright, if h is small I can live with that. But as I said in the end you really want to get second order accuracy if you can. And in a simple problem
like this we should be able to do it. What I've done already covers section 1.2 but then there's a note, a worked example at the end of 1.2 that tells you how to upgrade to second order.
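That order-h penetration can be seen numerically. A sketch, assuming the first-order boundary row (u_1 - u_2)/h^2 = 1 set up above:

```python
import numpy as np

def free_fixed_error(n):
    """Max error of the first-order free-fixed scheme with n unknowns, h = 1/(n+1)."""
    h = 1 / (n + 1)
    T = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    T[0, 0] = 1                                   # first row (1, -1): the u_0 = u_1 condition
    u = np.linalg.solve(T / h**2, np.ones(n))
    x = h * np.arange(1, n + 1)
    return np.max(np.abs(u - (1 - x**2) / 2))     # exact U(x) = -x^2/2 + 1/2

for n in (5, 11, 23):                             # h = 1/6, 1/12, 1/24
    print(n, free_fixed_error(n))                 # error is about h/2: first order only
```

The worst error sits at the free end, h(1-h)/2 for this load, and it only halves when h halves.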
And maybe we've got a moment to see how would we do it. What would you suggest that I do differently? I'll get a different matrix. I'll get a different discrete problem. But that'll be ok. I can
solve that just as well. And what shall I replace that by? because that was the guilty party, as you said. That was guilty. That's only a first order approximation to zero slope at zero. A couple of
ways we could go. This is a correct second order approximation at what mesh point? That is a correct second order approximation to U'=0, but not at that point or at that point where? Halfway between.
If I was looking at a point halfway between that, that would be centered there, that would be a centered difference and it would be good. But we're not looking there. So I'm looking here. So what do
you suggest I do? Well I've got to center it. Essentially I'm going to use u_(-1), a ghost value to the left of the boundary. And let me just say what the effect is. You remember we started with the usual
second difference here, -1, 2, -1. This is what got chopped off for the fixed method. It got brought back here by our first order method. Our second order method will-- You see what's likely to
happen? That -1 is going to show up where? Over here. To center it around zero. So that guy will make this into a minus two. Now that matrix is still fine. It's not one of our special matrices.
When I say fine, it's not beautiful is it? It's got one, like, flaw, it needs what do you call it when you have your face-- cosmetic surgery or something. It needs a small improvement. So what's the
matter with it? It's not symmetric. It's not symmetric and a person isn't happy with a un-symmetric problem, approximation to a perfectly symmetric thing. So I could just divide that row by two. If I
divide that row by 2, which you won't mind if I do that, make that one, minus one and makes this 1/2. I divided the first equation by two. Look in the notes in the text if you can. And the result is
now it's right on. It's exactly on. Because again, the solution, the true solution is squares. This is now second order and we'll get it exactly right. And I say all this for two reasons. One is to
emphasize again that the boundary conditions are critical and that they penetrate into the region. The second reason for my saying this is looking forward way into October. So let me just say the
finite element method, which you may know a little about, you may have heard about, it's another-- this was finite differences. The course is starting with finite differences, because that's the most direct way. You just go for it. You've got derivatives, you replace them by differences. But another approach which turns out to be great for big codes and also turns out to be great for making, for
keeping the properties of the problem, the finite element method, you'll see it, it's weeks away, but when it comes, notice, the finite element method automatically produces that first equation.
Automatically gets it right. So that's pretty special. And so, the finite element method just has, it produces that second order accuracy that we didn't get automatically for finite differences.
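The boundary-row fix described in this lecture is easy to check with a few lines of code. The sketch below is an illustration added here (not code from the lecture): it solves -u'' = 1 on [0, 1] with the free end u'(0) = 0 and the fixed end u(1) = 0, whose true solution is the quadratic u = (1 - x^2)/2. The chopped-off first-order boundary row [1, -1 | h^2] leaves an O(h) error, while the centered ghost-point row divided by two, [1, -1 | h^2/2], reproduces the quadratic exactly.

```python
import numpy as np

def solve_free_fixed(n, second_order):
    """Solve -u'' = 1, u'(0) = 0, u(1) = 0 on n intervals."""
    h = 1.0 / n
    A = np.zeros((n, n))          # unknowns u_0 .. u_{n-1}; u_n = 0 eliminated
    b = np.full(n, h * h)         # h^2 * f with f = 1
    for i in range(1, n):
        A[i, i] = 2.0
        A[i, i - 1] = -1.0
        if i + 1 < n:
            A[i, i + 1] = -1.0
    A[0, 0], A[0, 1] = 1.0, -1.0  # boundary row for u'(0) = 0
    if second_order:
        b[0] = h * h / 2.0        # ghost-point row [2, -2 | h^2], divided by 2
    return np.linalg.solve(A, b)

n = 10
x = np.linspace(0.0, 1.0, n + 1)[:n]
exact = (1.0 - x * x) / 2.0
err1 = np.abs(solve_free_fixed(n, False) - exact).max()  # first order: h/2
err2 = np.abs(solve_free_fixed(n, True) - exact).max()   # second order: round-off
print(err1, err2)
```

With n = 10 the first-order row leaves an error of exactly h/2 = 0.05 at the free end, while the divided ghost-point row is exact to machine precision -- the "exactly right" behavior described above.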
Ok, questions on today or on the homework. So the homework is really wide open. It's really just a chance to start to see. I mean, the real homework is read those two sections of the book to capture
what these two lectures have done. So Monday I'll see. We'll do elimination. We'll solve these equations quickly and then move on to the inverse matrix. More understanding of these problems. Thanks.
Before we solve this rounding problem, we must first know what one divided by three is.
Luckily, (1/3) is a very well known fraction with a very simple and easy to find answer as well:
(1/3) = 0.3333333333….
The threes continue on forever, but for this problem we’re only concerned with the first four decimal places. Three is the number in the thousandths spot, as well as every other decimal spot:
The number in the ten-thousandths spot is also a three, and therefore the number is not rounded up to a four, making:
1/3 rounded to the nearest thousandth equal to 0.333.
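The same check can be written in a line of Python (shown only as an illustration; Python's built-in round uses round-half-to-even, which agrees with the rule above here because the dropped digit is a 3):

```python
x = 1 / 3
print(f"{x:.10f}")   # 0.3333333333
print(round(x, 3))   # 0.333 -- the ten-thousandths digit is a 3, so no round-up
```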
Queensgate, WA Geometry Tutor
Find a Queensgate, WA Geometry Tutor
...I've helped many students with their math and improved their grades. If you don't understand something or can't solve an algebra problem, I can simplify it until you get it and solve it all by
yourself. I enjoy tutoring Algebra 2, trying to make it interesting and easy to learn.
13 Subjects: including geometry, Chinese, algebra 1, algebra 2
...And perhaps most importantly of all, I love mathematics, and I have always enjoyed helping others learn to love it, too! I did quite well in all my middle and high school math classes,
including algebra II. I was often the go-to girl for helping my friends understand math concepts they were struggling with.
35 Subjects: including geometry, English, reading, writing
...I will help you with the technology aspect without charging for the time and I offer free e-mail support. Contact me today!Prealgebra is the foundation for all upper level math and as such it
is incredibly important for a student to fully understand prealgebra concepts. I enjoy helping students...
36 Subjects: including geometry, English, writing, reading
...From University of Colorado in Immunology with a translational science certificate (2013). I am currently a postdoctoral fellow at University of Washington in the Immunology Department. I have
been tutoring high school and college students since 2008 in math (pre-Algebra through Calculus), and S...
17 Subjects: including geometry, chemistry, calculus, physics
...At the elementary age, but also with older students, I learned how important it is to use visuals. We used skits, flannel stories, props, etc. It especially tickled me the way these kids, who
probably spent hours during the week playing video games with high tech, high definition graphics, were so attentively hooked on my low tech flannel stories.
14 Subjects: including geometry, reading, piano, algebra 1
PS 6 SOLUTIONS
1. True or False (3 points each – 18 total)
a. False. Under a fixed exchange rate regime, interest rates are fixed. Therefore, an increase in G will not crowd out investment, and the increase in Y will be larger than under a flexible regime.
b. True. Under a fixed exchange rate, the central bank cannot conduct monetary policy. It must instead allow the money supply (and interest rates) to fluctuate in order to support the fixed E and
keep the interest parity equation in balance.
c. True. A large expected depreciation in the future will tend to increase today's exchange rate (a depreciation of the currency.) This will boost net exports and increase production (shift of IS
to the right) until a new equilibrium is reached.
d. False. As you can see in the interest parity equation (i = i* + (E^e - E)/E), a rise in E^e with E fixed will lead to an increase in the level of interest rates and will decrease output (shift
of the LM to the left).
e. False. If money supply increases and the interest rate goes down, this will weaken domestic currency. (E goes up – a depreciation of the domestic currency.)
f. False. The term refers to budget and trade deficits.
2. Open Economy IS-LM (40 points total)
a. (3 pts) Yes. NX = (30 + 0.2Y*)E - 10 - 0.4Y. Therefore, NX is increasing in E.
b. (2 pts) Goods market equilibrium, with i and E fixed:
Y = 24 + 0.4(Y - T) + 80 - 50i + G + (30 + 0.2Y*)E - (10 + 0.4Y)
Y = 100 - 50i + 50E
c. (2 pts) The goods market multiplier is 1.
d. (2 pts) LM: 80 = Y - 20i
e. (2 pts) The interest parity relation is: i = 0.05 + (1.95 - E)/E.
f. (4 pts) In equilibrium, Y = 100, i= 1, and NX= 0.
g. (2 pts) This is derived from the interest rate parity condition. Your graph should have i on the vertical axis, E on the horizontal axis, and should show a curve with i and E negatively related.
h. (2 pts) The LM curve is drawn in i-Y space – Y is on the horizontal axis, i on the vertical. The LM is an upward sloping curve, with Y and i positively related, determined from (d).
i. (4 pts) The IS relates Y and i through a negatively sloped curve in the same plane as the LM curve above. When E is larger, the curve shifts more to the right.
j. (4 pts) G=20 implies Y=110 -50i +50E ; 80=Y - 20i ; i=0.05 +(1.95 - E)/E. Solving this system, one gets E=.95, Y=102.1, i=1.11.
k. (4 pts) If M = 200, then E=2.02, Y=200.3, i=.15
l. (4 pts) If E^e goes up to 2.5, then Y=102.8, E=1.20, and i=1.14.
m. (5 pts) In part (i), your graph should look the same as under flexible exchange rates – if E changes due to market fluctuations or due to changes in government policy, the effect on the IS via
NX is similar. For part (j), the answer is more complicated. When you had flexible exchange rates in the first part of this question, your 3 endogenous variables were i, Y, and E. Now that
exchange rates are fixed, your 3 endogenous variables are i, Y, and M^s . This means that you fix E = 1, E^e = 1.95, G = 20, and all the other variables as before, and solve the system of
equations. This gives you, in equilibrium: M^S = 90, Y = 110, E =1, NX = -4, and i = 1.
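The three-equation system in part (j) reduces by substitution to a quadratic in E, which makes a quick numerical check possible (this script is an illustration added here, not part of the original answer key):

```python
import math

# Part (j): IS: Y = 110 - 50i + 50E;  LM: 80 = Y - 20i;  parity: i = 0.05 + (1.95 - E)/E
# LM  => i = (Y - 80)/20;  into IS => 70i = 30 + 50E  => i = (30 + 50E)/70
# parity => i = 1.95/E - 0.95; equating gives 50E^2 + 96.5E - 136.5 = 0
a, b, c = 50.0, 96.5, -136.5
E = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # positive root
i = 1.95 / E - 0.95
Y = 80 + 20 * i
print(E, i, Y)   # ~0.95, ~1.11, ~102.1, matching the reported equilibrium
```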
3. Exchange Rates and Expectations (4 points per question, 12 total)
a. People expect a devaluation to a level where E is equal to 1.09.
b. If the country devalues, E will rise. This will put pressure on the interest parity equation – at the moment of devaluation (before i has had a chance to adjust), domestic bonds look more
attractive than foreign bonds. This makes foreign investors want to buy domestic bonds – they trade foreign currency for domestic at the central bank at the new exchange rate (1.09), driving up
domestic money supply. This shifts out the LM, lowering i until i = i* = 0.6. This drop in interest rates will push investment up (moving along the IS curve) and therefore output will increase.
There is also a second-order effect on NX – the new, higher E means that NX rises. This shifts out the IS curve, puts upward pressure on interest rates, and leads to further expansionary monetary accommodation. This gives an additional boost to output.
c. People will expect an even higher E for the future (E^e goes way up.) This shifts out the interest parity curve. The moment that these expectations change, the return on foreign bonds looks
more attractive than domestic bonds – see this in the arbitrage equation. This means that investors will want to move their currency out of the country, putting lots of pressure on the central
bank to pay out its foreign reserves to support the exchange rate. In turn, this reduces the money supply, shifts in the LM curve, and pushes interests rates very high. Output will decrease
from its levels in (b).
Note that as long as the country is defending its exchange rate, there is no effect on NX or the IS: NX moves when E moves, and E is still fixed here. It is possible to imagine a scenario where the
central bank runs out of reserves if E^e keeps rising, and abandons its fixed E all together.
If you assumed that the conditions in (b) and (c) were happening at the same time (instead of sequentially, as above) then the 2 changes – the devaluation and the rise in E^e – would be pulling
output in different directions. The ultimate effect on output would depend on the magnitude of the change in E^e.
James T. Kirby
Tsung-Muh Chen
MARCH, 1988
Pacific Marine Environmental Laboratory
7600 Sand Point Way N.E.
Seattle, WA 98115
Coastal and Oceanographic Engineering Department
Gainesville, Florida 32611
4. Title and Subtitle: Surface Waves on Vertically Sheared Flows: Approximate Dispersion Relations
5. Report Date: March 1988
7. Author(s): James T. Kirby, Tsung-Muh Chen
8. Performing Organization Report No.: UFL/COEL-TR/077
9. Performing Organization Name and Address: Coastal and Oceanographic Engineering Department, University of Florida, 336 Weil Hall, Gainesville, FL 32611
11. Contract or Grant No.: 40ABNR6711
12. Sponsoring Organization Name and Address: U.S. Department of Commerce, National Oceanic and Atmospheric Administration, Pacific Marine Environmental Laboratory, 7600 Sand Point Way N.E., Seattle, WA 98115
13. Type of Report: Final
16. Abstract:
Assuming linear wave theory for waves riding on a weak current of O(ε) compared to the wave phase speed, an approximate dispersion relation is developed to O(ε²) for arbitrary current U(z) in water of finite depth. The O(ε²) approximation is shown to be a significant improvement over the O(ε) result, in comparison with numerical and analytic results. Various current profiles in the full range of water depths are considered. Comments on approximate action conservation and application to depth-averaged wave models are included.
17. Originator's Key Words: Linear wave theory; Perturbation methods; Wave-current interaction
Surface Waves on Vertically Sheared Flows:
Approximate Dispersion Relations
James T. Kirby and Tsung-Muh Chen
Coastal and Oceanographic Engineering Department
University of Florida, Gainesville, FL 32611
Work supported by the National Oceanographic and Atmospheric Administration,
Pacific Marine Environmental Laboratory, through Contract 40ABNR6711
March 1988
Abstract
1 Introduction
2 Theory and Approximate Expressions for the Phase Speed
2.1 Perturbation Method
2.2 Solutions to O(ε²) for Arbitrary U(z)
3 Examples Using Known Velocity Distributions
3.1 Linear Shear Current
3.2 Cosine Profile
3.3 1/7-Power Law Profile
4 Expansion for Weak Vorticity
5 Results for Deep-Water Waves
5.1 Comparison with Analytic Results
5.2 Exponential and Linear Shear Profiles
6 Comments on Action Flux Conservation
7 Conclusions
A Action Flux Velocity for Linear Shear Current
References
Assuming linear wave theory for waves riding on a weak current of O(ε) compared to the wave phase speed, an approximate dispersion relation is developed to O(ε²) for arbitrary current U(z) in water of finite depth. The O(ε²) approximation is shown to be a significant improvement over the O(ε) result, in comparison with numerical and analytic results. Various current profiles in the full range of water depths are considered. Comments on approximate action conservation and application to depth-averaged wave models are included.
1. Introduction
The problem of describing wave propagation through regions containing
tidal, ocean or discharge currents is fundamental in describing the nearshore
wave climate. Great strides have been made in extending wave propagation
models to include the effect of irrotational, large currents (assumed to be
uniform over water depth). However, currents typically do not possess so
simple a form, but instead have variations over depth and associated
vorticity, which renders the assumption of irrotationality invalid. The
resulting problem for wave motion on arbitrarily varying currents remains
unsolved, even in the linearized, uniform domain extreme.
The purpose of this study is to describe a perturbation method for the
special case of U(z)/c << 1, where c is the absolute phase speed, which allows
for the solution of the linearized problem for arbitrarily varying current
U(z). This case is restrictive in the general context of the study of wave-
current interaction, where U/c is taken to have no restriction on size.
However, in the context of coastal wave propagation, where we typically
consider waves propagating from a generation region into regions of varying
current, current fields of practical interest typically satisfy the scaling
restriction considered here except possibly in special situations such as
strong flows in inlets.
We proceed in section 2 by establishing the problem for a linear wave in
a uniform domain with arbitrary U(z). We then outline a perturbation
expansion based on small parameter ε = O(U/c) ≪ 1, following the method employed by Stewart and Joy (1974) for deep water. We then obtain solutions to the general problem to O(ε) (reproducing the result of Skop (1987)) and to O(ε²). In section 3, we apply the method to linear, cosine, and 1/7 power current variations, and compare results to analytically or numerically obtained exact solutions.
The results of the analysis show that the solutions are valid in the regime max|U − Ū|/Ū ≪ 1, where Ū is the depth-averaged current, leading to
the conjecture that the expansions are valid for arbitrarily large currents
having weak vorticity. In section 4, we outline an expansion for weak
vorticity and obtain the results for a linear shear current, proving
equivalence of the expansions for this restricted case.
In section 5, various results for the second-order approximation in the
deep-water limit are described.
In section 6, we end with some analysis of action flux formulations and some cautionary notes on the direct use of the O(ε) depth-averaged velocity in wave propagation models.
2. Theory and Approximate Expressions for the Phase Speed
We consider here the inviscid motion of a linear wave propagating on a
stream of velocity U(z), where water depth and current speed are assumed to be
uniform in the x-direction (Figure 1). Associated with the wave-induced
motion is a stream function of the form

ψ(x,z,t) = f(z) e^{ik(x−ct)}    (2.1)
After eliminating pressure from the Euler equations and linearized free-
surface boundary conditions, and using the continuity equation, the boundary
value problem for f(z) is given by the Rayleigh or inviscid Orr-Sommerfeld equation

[c − U(z)](f″ − k²f) + U″(z) f = 0 ;  −h < z < 0    (2.2)

together with the boundary conditions

(U − c)² f′ = [g + U′(U − c)] f ;  z = 0    (2.3)

f = 0 ;  z = −h    (2.4)
constant, h is the water depth in the absence of waves and k is the wavenumber
given by 2r/X, where A is the physical wavelength. This model has been used
in a number of studies of waves on arbitrary or particular current
Figure 1. Definition sketch

distributions; reference may be made to Peregrine (1976) or Peregrine and Jonsson (1983) for a review of existing results. The goal of the present analysis is to obtain approximate expressions for c in the dispersion relation

ω = kc    (2.5)

where ω is the wave frequency with respect to a stationary observer. The approximations are based on the assumption of U(z) arbitrary but |U(z)| ≪ c. We will thus assume the current to be weak and then evaluate the extent of this restriction after obtaining the solutions.
2.1 Perturbation Method
A perturbation expansion for weak currents (U/c ≪ 1) is employed, following the analysis by Stewart and Joy (1974) for deep water. We take

U(z) → ε U(z)    (2.6)

where we introduce ε as an apparent ordering and retain dimensional variables. We then introduce the expansions

c = Σₙ εⁿ cₙ    (2.7)

f = Σₙ εⁿ fₙ    (2.8)
Equations (2.6)–(2.8) are substituted in (2.2)–(2.4) and the resulting expansions are ordered in powers of ε, which gives

fₙ″ − k² fₙ = Fₙ ;  −h < z < 0    (2.9)

c₀² fₙ′ − g fₙ = Sₙ ;  z = 0    (2.10)

fₙ = 0 ;  z = −h    (2.11)

where the Fₙ and Sₙ are inhomogeneous forcing terms involving information at lower order than n. For n ≥ 1, the homogeneous solution of (2.9)–(2.11) is the lowest order solution f₀(z), as derived below, and it is necessary to construct a solvability condition according to the Fredholm alternative. Using Green's formula on the quantities f₀ and fₙ leads to the condition

∫_{-h}^{0} f₀ Fₙ dz = f₀(0) Sₙ / c₀² ;  n ≥ 1    (2.12)
This relation is used below to solve for the phase speed corrections c₁ and c₂
due to the presence of a weak, arbitrary current profile.
2.2 Solutions to O(ε²) for Arbitrary U(z)
The perturbation problems obtained above are now solved in sequence.
We have

F₀ = S₀ = 0    (2.13)

and the homogeneous solution (with amplitude arbitrarily taken as 1) is

f₀(z) = sinh k(h+z)    (2.14)

c₀² = (g/k) tanh kh    (2.15)

This is the usual result for linear waves on a stationary domain. At this order we have

F₁ = −U″(z) f₀(z)/c₀ = −U″ sinh k(h+z)/c₀    (2.16)

S₁ = 2c₀[U(0) − c₁] f₀′(0) − c₀ U′(0) f₀(0) = 2kc₀[U(0) − c₁] cosh kh − c₀ U′(0) sinh kh    (2.17)

Substituting (2.16)–(2.17) in (2.12) gives

c₁ = [2k/sinh 2kh] ∫_{-h}^{0} U(z) cosh 2k(h+z) dz ≡ Ũ    (2.18)
This is the finite-depth extension of the result of Stewart and Joy (1974),
who obtained the result
Ũ = 2k ∫_{-∞}^{0} U(z) e^{2kz} dz    (2.19)

in the limit kh → ∞. The result (2.18) has also been obtained by Skop (1987).
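Equation (2.19) is straightforward to verify by direct quadrature. As an illustration (the exponential profile and the parameter values below are chosen arbitrarily for the check; they are not taken from the report), U(z) = Us e^{z/d} gives Ũ = 2kd Us/(1 + 2kd) exactly:

```python
import numpy as np

# Check of the deep-water weighted mean, eq. (2.19): U~ = 2k * int U(z) e^{2kz} dz
Us, d, k = 1.0, 0.5, 1.0
z = np.linspace(-15.0 / k, 0.0, 300001)   # truncation of the semi-infinite depth
integrand = 2.0 * k * Us * np.exp(z / d) * np.exp(2.0 * k * z)
# composite trapezoidal rule, written out to stay version-independent
U_num = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))
U_exact = 2.0 * k * d * Us / (1.0 + 2.0 * k * d)   # closed form for U = Us e^{z/d}
print(U_num, U_exact)   # both ~0.5
```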
To O(ε), the dispersion relation is given by

c = c₀ + Ũ    (2.20)

or, equivalently,

ω = σ + kŨ    (2.21)

where ω is the absolute frequency in stationary coordinates and σ is the frequency relative to a frame moving with velocity Ũ, the weighted-mean current:

σ² = gk tanh kh    (2.22)
The particular solution flp(z) may be obtained by the method of variation of
parameters; the entire solution fl(z) is then
fl(z) = {A1 ( U(S)sinh 2k(h+S)dS}sinh k(h+z)
+ {B + f- U''(E)[cosh 2k(h+E) -l]d(} cosh k(h+z) (2.23)
0 -h
where B1=0 in order to satisfy the bottom boundary condition and A1 may be set
to zero arbitrarily. Note that fl(z) is identically zero for a flow with
constant or zero vorticity (U''=0). For flows for which U'' is known, fl(z)
may be evaluated directly from (2.23). For general U(z), repeated integration
by parts is applied to (2.23) to obtain the expression
2kIl (z) 21 (z)
{(U(z) + U(-h)) 2k11 (z) 212(z)
f (z) = {+ } f (z) + fl(z) (2.24)
1c c 0 c 0
where

I₁(z) = ∫_{-h}^{z} U(ξ) sinh 2k(h+ξ) dξ    (2.25)

I₂(z) = ∫_{-h}^{z} U(ξ) cosh 2k(h+ξ) dξ    (2.26)
We obtain

F₂ = −U″(z) f₁(z)/c₀ − U″(z)[U(z) − Ũ] f₀(z)/c₀²    (2.27)

S₂ = 2c₀[U(0) − Ũ] f₁′(0) − c₀ U′(0) f₁(0) − {2c₀c₂ + [U(0) − Ũ]²} f₀′(0) + U′(0)[U(0) − Ũ] f₀(0)    (2.28)

Substituting (2.27)–(2.28) in (2.12) gives an expression for c₂:

c₂ = U′(0)[U(0) − Ũ] c₀/(2g) − [U(0) − Ũ]²/(2c₀) + [U(0) − Ũ] f₁′(0)/f₀′(0) − U′(0) f₁(0)/[2f₀′(0)] + [c₀/(2g f₀²(0))] ∫_{-h}^{0} U″(z){c₀ f₀(z) f₁(z) + [U(z) − Ũ] f₀²(z)} dz    (2.29)

where we have used the fact that f₀(0)/f₀′(0) = c₀²/g. As above, several options are possible here. For flows with zero or constant vorticity, we have f₁(z) = U″(z) = 0, and we obtain directly

c₂ = U′(0)[U(0) − Ũ] c₀/(2g) − [U(0) − Ũ]²/(2c₀)    (2.30)
For flows where U″ is known, we may evaluate f₁(z) from (2.23) and substitute for U″(z) in (2.29) and then solve for c₂. For general U(z) (especially for
tabulated U(z) where higher derivatives are not known), we use (2.24) in
(2.29) and integrate by parts to obtain the expression
c₂ = [Ũ/(2c₀)][4k I₁(0) − (1 + 2 cosh 2kh) Ũ] + [k²c₀/(2g f₀²(0))] ∫_{-h}^{0} U²(z)[1 + 2 cosh 2k(h+z)] dz − [2k³c₀/(g f₀²(0))] ∫_{-h}^{0} [I₂(z) I₂′(z) − I₁(z) I₁′(z)] dz    (2.31)
The remaining integrals are expressed completely in terms of U(z), and may not be tractable for a general U(z); numerical approximation will then be required.
3. Examples Using Known Velocity Distributions
We now investigate the accuracy of the approximations derived above, considering the results both to O(ε) and O(ε²). We consider three examples:

1.) Linear Shear Current

U(z) = Us(1 + αz/h)    (3.1)

where α is the normalized constant vorticity ω₀ given by

α = ω₀h/Us    (3.2)

2.) Cosine Profile

U(z) = Us cos(αz/h)    (3.3)

3.) Power-Law Profile

U(z) = Us(1 + z/h)^{1/7}    (3.4)
These examples represent a hierarchy in increasing difficulty. For the linear
shear profile, analytic solutions may be obtained for both the full problem
and the perturbation solutions. (In fact, the perturbation may be carried to
any order with no difficulty, as will be seen below.) For the cosine profile,
analytic results may be obtained in the case of c = 0 (i.e., a stationary wave). For c ≠ 0, the problem must be solved numerically. In this case, however, the approximate results Ũ and c₂ are obtained explicitly. For the third case of a power-law profile, analytic results again have been obtained only for the case c = 0 (Lighthill, 1953). Fenton (1973) has provided a numerical scheme for cases with c ≠ 0 using a shooting method; this scheme is utilized below. For the case of a power-law profile, evaluation of Ũ and c₂
also requires numerical approximation or approximation of the analytic result
by means of truncated series expansions.
3.1 Linear Shear Current
For the case of a current with uniform vorticity, the stability problem posed in (2.2)–(2.4) reduces to

f″ − k²f = 0 ;  −h < z < 0    (3.5)

(Us − c)² f′ = [g + ω₀(Us − c)] f ;  z = 0    (3.6)

f = 0 ;  z = −h    (3.7)

The solution to (3.5) and (3.7) is simply

f(z) = sinh k(h+z)    (3.8)

Substituting (3.8) in (3.6) then gives directly

c_E = Us − ω₀ tanh kh/(2k) + [g tanh kh/k + ω₀² tanh² kh/(4k²)]^{1/2}    (3.9)

which is the exact solution, which we will denote by subscript E. Turning to
the approximate solutions, we substitute (3.1) in (2.18) and obtain
Ũ = Us − ω₀ tanh kh/(2k)    (3.10)
The exact solution may then be written as

c_E = Ũ + c₀[1 + ω₀² tanh kh/(4gk)]^{1/2}    (3.11)

where c₀ is taken from (2.15). Evaluating the O(ε²) correction c₂ from (2.30) gives

c₂ = c₀ ω₀² tanh kh/(8gk)    (3.12)

and the phase speeds to O(ε) and O(ε²) are given by

c = c₀ + Ũ ;  O(ε)    (3.13)

c = c₀[1 + ω₀² tanh kh/(8gk)] + Ũ ;  O(ε²)    (3.14)
Comparing (3.11), (3.13) and (3.14), it is apparent that the O(ε) approximation is fairly weak unless normalized ω₀² is in some sense ≪ 1. The approximation to O(ε²) adds the first small term in the binomial expansion of the square root appearing in the exact solution. Referring to (2.30), it is apparent that c₂ also provides the leading-order correction for non-zero current shear.
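The gain from (3.13) to (3.14) is easy to confirm numerically; the following sketch (the parameter values are arbitrary, chosen only for illustration) evaluates (3.9), (3.13) and (3.14) directly:

```python
import math

g, h, k = 9.81, 2.0, 1.0
Us, alpha = 0.3, 1.0
w0 = alpha * Us / h                      # vorticity, from eq. (3.2)
th = math.tanh(k * h)
c0 = math.sqrt(g * th / k)               # eq. (2.15)
Ut = Us - w0 * th / (2.0 * k)            # weighted mean, eq. (3.10)
cE = Ut + math.sqrt(g * th / k + (w0 * th) ** 2 / (4.0 * k * k))   # exact, eq. (3.9)
c1 = c0 + Ut                                          # O(eps), eq. (3.13)
c2 = Ut + c0 * (1.0 + w0 ** 2 * th / (8.0 * g * k))   # O(eps^2), eq. (3.14)
print(abs(c1 - cE), abs(c2 - cE))   # second-order result is far closer
```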
The approximations obtained above are least accurate in shallow water. To inspect this limit, we normalize the phase speeds by (gh)^{1/2} and define

F ≡ Us/(gh)^{1/2} ;  tanh kh/kh = 1 + O((kh)²)

Further, the weighted mean current Ũ is given by

Ũ = Us(1 − α/2) + O((kh)²) = Ū + O((kh)²)    (3.15)

where Ū is the unweighted, depth-averaged current. (Ũ and Ū tend to converge as kh → 0 for all velocity profiles as the wave motion loses its vertical structure; this may be verified by inspection of (2.18).) Introducing α defined in (3.2) and taking the limit kh → 0, we may express the phase speeds relative to Ũ, normalized as c_R ≡ (c − Ũ)/(gh)^{1/2}:

c_RE = [1 + α²F²/4]^{1/2} ;  exact    (3.16a)

c_R1 = 1 ;  O(ε)    (3.16b)

c_R2 = 1 + α²F²/8 ;  O(ε²)    (3.16c)
We may take α > 0, where α = 1 reduces the velocity to zero at the bed and α > 1 indicates reversed flow at the bed relative to the surface. Ignoring the latter possibility, the worst case is α = 1, and we require F ≪ 2 to employ the truncated binomial expansion implied by (3.16c). The approximation to O(ε²) is thus quite good for any subcritical flow. Plots of results for various choices of α and F are given in Figure 2.
We remark that higher-order approximations to the problem posed by (3.5)
- (3.7) may be obtained quite simply using a modification of the technique of
section 2. Substituting the expansions for c in (3.6) and using (3.8) gives
(Us − Σₙ εⁿcₙ)² = (tanh kh/k)[g + ω₀(Us − Σₙ εⁿcₙ)] = c₀²[1 + (ω₀/g)(Us − Σₙ εⁿcₙ)]    (3.17)

Figure 2. Phase speed corrections (c − Ũ)/c₀ for long waves on a linear shear current: ___ exact solution; --- O(ε) approximation; −·−· O(ε²) approximation.
The next several approximations beyond the level attained in section 2 are

c₃ = 0 ;  c₄ = −(1/8)[ω₀² tanh kh/(4gk)]² c₀

c₅ = 0 ;  c₆ = (1/16)[ω₀² tanh kh/(4gk)]³ c₀    (3.18)
c4 and c6 are consistent with the next two smaller terms in the binomial
expansion of (3.11), and it is apparent that the perturbation solution will
eventually converge to the exact solution when ω₀²/gk is suitably small. (Note that in the limit ω₀ → 0, the solution to O(ε) is exact for all current velocities, including stationary waves, for which U/c → ∞.)
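Since c₂, c₄ and c₆ are just the successive terms of the binomial series for c₀(1 + x)^{1/2} with x ≡ ω₀² tanh kh/(4gk), the convergence can be illustrated by comparing partial sums against the square root directly (the values below are arbitrary, for illustration only):

```python
import math

g, h, k, w0 = 9.81, 1.0, 1.0, 1.5
x = w0 ** 2 * math.tanh(k * h) / (4.0 * g * k)   # expansion parameter
exact = math.sqrt(1.0 + x)                        # (c_E - U~)/c0, from eq. (3.11)
s2 = 1.0 + x / 2.0                                # through c2
s4 = s2 - x ** 2 / 8.0                            # through c4, eq. (3.18)
s6 = s4 + x ** 3 / 16.0                           # through c6, eq. (3.18)
errs = [abs(s - exact) for s in (s2, s4, s6)]
print(errs)   # strictly decreasing
```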
3.2 Cosine Profile
We now consider the cosine profile (3.3). Values of 0 ≤ α ≤ π/2 produce unidirectional flow, with a uniform velocity for α = 0 and velocity reduced to zero at the bed for α = π/2. Using the results of section 2.2, we obtain at O(ε)

Ũ = Us(1 + ββ*)/(1 + β²)    (3.19)

where

β = α/(2kh) ;  β* = sin α/sinh 2kh    (3.20)

The particular solution f₁(z) is given by

f₁(z) = [Us/(c₀(1 + β²))] {β[sin α + sin(αz/h)] cosh k(h+z) − β²[cos α + cos(αz/h)] sinh k(h+z)}    (3.21)

Also, the depth-mean velocity Ū is given by

Ū = Us sin α/α    (3.22)
and Ũ → Ū as kh → 0 for arbitrary α. At O(ε²), since U′(0) = 0 for this profile, (2.29) gives

c₂ = [Us²β²(β − β*)/(c₀(1 + β²)²)][β(1 − cos α) + sin α tanh kh − (β − β*)/2] + [c₀/(2g f₀²(0))] ∫_{-h}^{0} U″(z){c₀ f₀(z) f₁(z) + [U(z) − Ũ] f₀²(z)} dz    (3.23)

where the remaining integral is evaluated in closed form using (3.3) and (3.21).
The asymptotic validity of the approximate solution in the limit of weak vorticity (β, β* ≪ 1) may be investigated by comparison with the exact solution for the special case of c = 0 (stationary waves). Using this condition in (2.2)–(2.3) and then solving (2.2)–(2.4) for the cosine profile (3.3) leads to the results

f(z) = sinh[k(1 − 4β²)^{1/2}(h+z)]    (3.24)

Us² = g tanh[kh(1 − 4β²)^{1/2}]/[k(1 − 4β²)^{1/2}]    (3.25)

Expanding (3.25) for the case β² ≪ 1 leads to the expression

Us² = (g tanh kh/k)(1 − 2ββ*)/(1 − 2β²) = (g tanh kh/k)(1 + β²)²/(1 + ββ*)² + O(β⁴)    (3.26)

The second form of the right hand side is equivalent to the result of the approximate solution to O(ε), given by

c₀ + Ũ = 0  ⟹  Ũ² = c₀²    (3.27)
We see again that the approximate solution (for weak Us) converges to the exact solution under the condition of weak vorticity, with no restriction on Us, as in the constant vorticity case. In particular, U/c → ∞ for this case and the perturbation method is seemingly inapplicable.
For cases where c ≠ 0, the full solution must be obtained numerically. We have obtained solutions here using a modification of Fenton's (1973) shooting method. Referring back to (2.2)–(2.4), we define a Froude number F according to

F = c/(gh)^{1/2}    (3.28)

(Note the difference from F in section 3.1.) We non-dimensionalize z according to z′ = z/h and define the variable transformation

q(z′) = f(z)/(h f′(z))    (3.29)

The problem is then reduced to a Riccati equation

q′ = 1 − γ²q² ;  −1 < z′ < 0    (3.30)

γ² = (kh)²{1 + 4β²δ cos(αz′)/[1 − δ cos(αz′)]}    (3.31)

δ = Us/c    (3.32)

Equation (3.30) is solved over the interval −1 < z′ < 0 with initial condition

q(−1) = 0    (3.33)

and with kh, α (and hence β) and δ specified. The surface boundary condition is

F² = q(0)/(δ − 1)²    (3.34)

which determines c. Us is then determined using (3.32). The numerical scheme
may be tested for accuracy against the long wave result (kh → 0), which is determined directly from the expression

g ∫_{-h}^{0} dz/[U(z) − c]² = 1    (3.35)

(Burns, 1953) and gives

c²/gh = δ sin α/[α(1 − δ²)(1 − δ cos α)] + [2/(α(1 − δ²)^{3/2})] tan⁻¹{[(1 + δ)/(1 − δ)]^{1/2} tan(α/2)} ;  δ < 1    (3.36a)

c²/gh = δ sin α/[α(1 − δ²)(1 − δ cos α)] − [1/(α(δ² − 1)^{3/2})] ln{[(1 + δ) tan(α/2) − (δ² − 1)^{1/2}]/[(1 + δ) tan(α/2) + (δ² − 1)^{1/2}]} ;  δ > 1    (3.36b)

which is implicit in c and where δ is given by (3.32). Five decimal place accuracy was achieved straightforwardly in the numerical solution.
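A minimal version of this shooting calculation is sketched below as an illustration (this is not Fenton's code, and the step counts and tolerances are ours). It integrates the Riccati equation (3.30)–(3.31) with a fourth-order Runge–Kutta scheme at a small value of kh and compares F² = q(0)/(δ − 1)² from (3.34) with the long-wave expression (3.36a):

```python
import math

alpha, delta, kh = math.pi / 2, 0.5, 0.005   # cosine profile, subcritical delta

def gamma2(zp):
    cz = math.cos(alpha * zp)
    return kh * kh * (1.0 + 4.0 * (alpha / (2.0 * kh)) ** 2
                      * delta * cz / (1.0 - delta * cz))    # eq. (3.31)

def rhs(zp, q):
    return 1.0 - gamma2(zp) * q * q                         # eq. (3.30)

# RK4 march from z' = -1 with q(-1) = 0 (eq. 3.33) up to z' = 0
n, q, zp = 4000, 0.0, -1.0
dz = 1.0 / n
for _ in range(n):
    k1 = rhs(zp, q)
    k2 = rhs(zp + dz / 2, q + dz * k1 / 2)
    k3 = rhs(zp + dz / 2, q + dz * k2 / 2)
    k4 = rhs(zp + dz, q + dz * k3)
    q += dz * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    zp += dz

F2 = q / (delta - 1.0) ** 2    # eq. (3.34): c^2/gh from the shooting solution
burns = (delta * math.sin(alpha)
         / (alpha * (1 - delta ** 2) * (1 - delta * math.cos(alpha)))
         + 2.0 / (alpha * (1 - delta ** 2) ** 1.5)
         * math.atan(math.sqrt((1 + delta) / (1 - delta))
                     * math.tan(alpha / 2)))    # eq. (3.36a)
print(F2, burns)   # agree to O((kh)^2) as kh -> 0
```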
In Figure 3, we show the numerically determined exact dispersion relation for the cosine profile for the case α = π/2 (velocity = 0 at the bottom). The Froude number chosen is based on the surface current speed. Consideration is restricted to subcritical mean flow conditions in the long wave limit. In Figure 4, we display the absolute error |c_n − c_E| for the long wave limit and a range of α values, with subscript n denoting the order of approximation and c_n representing the perturbation solution. Figures 4a and 4b display results for the first and second order approximations, respectively. The second order approximation is seen to provide an order of magnitude reduction in error in comparison to the first-order approximation (indeed, for the range of parameters considered it proves inconvenient to plot the two results on equivalent scales).

Figures 5a and 5b show first and second order results, respectively, for |c_n − c_E| for the case of α = π/2 and a range of kh values. Here, the comparison is between perturbation solutions and numerical solutions. The dramatic increase in accuracy given by the second approximation is again apparent.
3.3 1/7-Power Law Profile
We now turn to the 1/7-power profile given by (3.4). This profile
differs from the previous examples both in analytic complexity and in the fact
that a weak-vorticity range is not available through choice of parameters, due
to the form of the profile near the bed. Using the results of section 2.2,
the expression for Ū at O(ε) is given by
Figure 3. Cosine profile. a = π/2, numerical results for dispersion relation.
Figure 4. Absolute error |cn − cE| for cosine profile. kh = 0.
a) n=1, first order approximation; b) n=2, second order approximation.
Figure 5. Absolute error |cn − cE| for cosine profile. a = π/2. a) n=1, first
order approximation; b) n=2, second order approximation.
Ū = (2kUs/sinh 2kh) ∫_{−h}^{0} (1 + z/h)^{1/7} cosh 2k(h+z) dz      (3.37)

This expression is of little direct use. Two integrations by parts yield

Ū = Us [ 1 − tanh kh/(14kh) − (6/(7kh sinh 2kh)) ∫_{0}^{1} t^{−7} sinh²(kh t⁷) dt ]    (3.38)
where t = (1 + z/h)1/7. This expression has been given previously by Hunt
(1955). The remaining integral in (3.38) vanishes in the limit kh → ∞ but
contributes significantly at finite values of kh (see below). Approximation
is thus required in order to evaluate the integral. Numerical experiments
with quadrature indicated that 32 weighting points were required in order to
obtain three decimal place accuracy for kh = 1, with the number increasing for
increasing kh. For this reason, we chose to treat the integral by taking the
Taylor series expansion for cosh2k(h+z) about h+z = 0
cosh 2k(h+z) = Σ_{n=0}^{∞} (2k)^{2n} (h+z)^{2n} / (2n)!             (3.39)
and then summing the resulting expression for the integral up to the required
number of terms. The resulting expression is given by
Ū/Us = (14kh/sinh 2kh) Σ_{n=0}^{N} (2kh)^{2n} / [(2n)! (14n + 8)]   (3.40)
Values of N required to obtain three-decimal place accuracy are given for
various kh in Table 1. The expansion procedure is most appropriate for
shallow water, with convergence being obtained more slowly as depth increases.
kh N
0.5 3
1.0 4
2.5 8
5.0 14
Table 1. Rate of convergence of Ū (3.38) for varying kh. N is the number of
required terms in the series expansion in order to obtain 3 place accuracy.
As kh → 0, we obtain the shallow water limit

Ū → (7/8) Us                                                        (3.41)
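The summation procedure can be checked directly against numerical quadrature of (3.37). The sketch below (an illustration, not the authors' code) evaluates the series form of (3.40) for Ū/Us and compares it with a midpoint-rule evaluation of the defining integral; the kh → 0 limit of 7/8 in (3.41) provides a second check.

```python
import math

def ubar_series(kh, N):
    # series form of (3.40): Ubar/Us = (14 kh/sinh 2kh) * sum_{n=0}^{N} (2kh)^{2n}/((2n)!(14n+8))
    total = sum((2*kh)**(2*n)/(math.factorial(2*n)*(14*n + 8)) for n in range(N + 1))
    return 14*kh/math.sinh(2*kh)*total

def ubar_quadrature(kh, m=20000):
    # midpoint-rule evaluation of (3.37) with s = 1 + z/h:
    # Ubar/Us = (2kh/sinh 2kh) * int_0^1 s^(1/7) cosh(2kh s) ds
    total = sum(((i + 0.5)/m)**(1/7)*math.cosh(2*kh*(i + 0.5)/m) for i in range(m))
    return 2*kh/math.sinh(2*kh)*total/m
```

The two evaluations agree to several decimal places for moderate kh, and the series recovers 7/8 as kh becomes small.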
Exact numerical results indicate that higher-order effects are not as
important in this case as in previous examples, and so the solution is not
carried to O(ε²) here. This result is due to the weak vorticity of the
current profile near the surface; only the longest waves are strongly affected
by the current shear. Reference may be made to the results of Thomas (1981),
who conducted experiments on nearly deep-water waves over a turbulent shear
flow of nearly 1/7-power form, and found the waves to respond only to the
surface current speed.
Numerical results were obtained following Fenton (1973) and the procedure
outlined in section 3.2. Using δ defined by (3.32) and q(z') defined by
(3.29), we obtain the problem (as in Fenton (1973))

q' = 1 − γ²(z')q² ;  −1 < z' < 0                                    (3.42)

q(−1) = 0                                                           (3.43)

where now

γ²(z') = (kh)² + 6δ(1 + z')^{−13/7} / [49(1 − δ(1 + z')^{1/7})]     (3.44)
The surface boundary condition yields the relation

F²(0) = q(0)[1 + F(0)F'(0)]                                         (3.45)

where F = (U − c)/(gh)^{1/2}. This result leads to the expression

c²/gh = q(0) / {(δ − 1)[(δ − 1) − δ q(0)/7]}                        (3.46)
Equations (3.42-43) are solved using specified values of kh and δ. We then
use (3.46) to determine c and then (3.32) to determine Us. Numerical results
were checked against plots given by Fenton and also against the long-wave
analytic result, given by
Us²/gh = 7[ 1/5 + (1/2)(c/Us) + (c/Us)² + 2(c/Us)³ + 5(c/Us)⁴
         + 6(c/Us)⁵ ln((c − Us)/c) + (c/Us)⁴ c/(c − Us) ]           (3.47)
Note that (3.47) corrects an error appearing in Fenton's unnumbered expression
in the logarithmic term.
Plots of normalized phase speed vs. normalized Us are given in Figure 6
as solid curves for kh values ranging from 0 to ∞ (long to short waves). Also
plotted in Figure 6 are results of the O(ε) perturbation solution given as
dashed lines. The perturbation solution is seen to be in close agreement with
the full solution except for kh small and |Us|/(gh)^{1/2} large, representing
long waves on strong currents.
Figure 6. Dispersion relation for 1/7 power profile. -- numerical results;
--- first-order approximation.
The expression (3.40) used to determine Ū was found to be useful at all
water depths tested but converges very slowly for large values of kh. An
alternate expansion procedure for (3.37) could be based on using the binomial
expansion

(1 + z/h)^{1/7} = 1 + (1/7)(z/h) − (3/49)(z/h)² + (13/343)(z/h)³ − ...    (3.48)
Substituting (3.48) in (3.37) leads to an expression for Ū which
increases in accuracy as more terms in the expansion are retained. The first
several expressions obtained in this manner are given by

Two terms:   Ū = Us( 1 − tanh kh/(14kh) )                           (3.49a)

Three terms: Ū = Us( 1 − tanh kh/(14kh)
                     + (3/(49kh)){1/sinh 2kh − 1/(2kh)} )           (3.49b)

Four terms:  Ū = Us( 1 − tanh kh/(14kh) + (81/(686kh)) 1/sinh 2kh
                     − 3/(98(kh)²) − (39/(1372(kh)³)) tanh kh )     (3.49c)
These expressions represent an ascending series in powers of (kh)⁻¹ as kh → ∞,
and convergence is rapid. As kh → 0, however, all terms beyond the first two in
a truncated expansion cancel identically, and the value Ū → 0.929 Us. Note
that the third term in (3.38) contributes all the higher-order terms in the
expansion; however, the O(1) contribution as kh → 0 is not obtained in a
truncated expansion due to the fact that the expression (1 + z/h)^{1/7} is not
differentiable at z = −h. Convergence of the truncated series is thus limited
to a range excluding the neighborhood of kh = 0.
Figure 7 shows several expressions for U. The solid curve represents the
series (3.40) with a sufficient number of terms retained to obtain
convergence. Each of the three truncated expansions (3.49a-c) are also
included, and show the increase in accuracy with each included term as well as
the failure of the series at kh=0. Finally, a plot of (3.40) truncated to 10
terms is included, and shows the extreme sensitivity of the series to the
number of retained terms as kh increases. It is noted also that the
expression for U given by (3.49a) has a maximum relative error of only 5.8% in
the shallow water limit.
Figure 7. Various estimates of U for the 1/7 power profile. full series
(3.40). ... (3.49a), --- (3.49b), -- (3.49c) --- 10 term
expansion (3.40).
4. Expansion for Weak Vorticity
The results of sections 3.1 and 3.2 have indicated that the approximate
solutions for the regime of weak currents with arbitrary vorticity are
seemingly valid for the regime of arbitrarily strong currents having weak
vorticity. This result may be expected in hindsight due to the form of the
approximate solutions. In particular, the weighted mean current U deviates
from the true mean current U by an amount proportional to the first power of
some vorticity parameter, while the expressions for c2/c0 are proportional to
some (current parameter x vorticity parameter)2. Thus, in the limit of small
vorticity, U + U and c2/cO represents a consistently small correction for
arbitrarily large U values. In order to further support these results and
claims, we provide the schematic for an expansion for weak vorticity and
examine the case of a linear shear current.
We proceed by defining a reference current U* according to

U(z) = U* + Ũ(z)                                                    (4.1)

where |Ũ/U*| ≪ 1 owing to the weak vorticity assumption. From the results of
section 3, a natural choice for U* would be Ū; however, Ū must be regarded as
undetermined in the present context since it was found for the case |U*/c| ≪ 1,
which doesn't hold here. We thus take U* = U. For the case of a linear
shear current, we obtain

U* = Us − ω₀h/2 ;  Ũ(z) = ω₀(z + h/2)                               (4.2)

from (3.1). (2.2-4) may then be solved to obtain
f(z) = sinh k(h+z)
subject to the condition

(U* + εŨ(0) − c)² = [g + (U* + εŨ(0) − c) εŨ'(0)] (tanh kh)/k

where ε ≪ 1. Introducing the expansion

c = Σ_n εⁿ cₙ
and solving sequentially for the cₙ then gives (to O(ε²))

c₀ = (g tanh kh/k)^{1/2} + U*

c₁ = ω₀h/2 − (ω₀/2k) tanh kh

c₂ = (g tanh kh/k)^{1/2} ω₀² tanh kh/(8gk)                          (4.7)

Using (4.2) and (3.10), (4.7) may be written as

c = (g tanh kh/k)^{1/2} { 1 + ω₀² tanh kh/(8gk) } + Ū + O(ε³)       (4.8)

where Ū is given by (3.10). (4.8) is identical to (3.14), and the two
expansions are seen to be equivalent.
5. Results for Deep-Water Waves
Skop (1987) has investigated the application of the first-order
approximation of Stewart and Joy (1974) in the deep-water limit to several
cases for which analytic results exist, including the general case for depth-
limited current profiles of zero and constant vorticity (Taylor, 1955) and the
case of stationary waves on these profiles as well as cosine and exponential
profiles (tabulated by Peregrine and Smith, 1975). Skop has shown that the
first-order approximation provides generally good estimates of the wave
parameters. However, the second-order approximation for deep-water waves is
particularly simple in form, and we include it here for comparison.
The deep-water formulation follows from (2.2-4) by replacing the bottom
boundary condition with a boundedness condition on f; in the remainder of the
formulation, lower limits of integration become −∞. The results for the wave
phase speed become
c = c₀ + c₁ + c₂                                                    (5.1)

c₀ = (g/k)^{1/2}                                                    (5.2)

c₁ = Ū = 2k ∫_{−∞}^{0} U(z) e^{2kz} dz                              (5.3)

c₂ = (k/c₀) ∫_{−∞}^{0} U²(z) e^{2kz} dz − c₁²/(2c₀)                 (5.4)
where c₁ was given by Stewart and Joy (1974) and c₂ is new here. These
results may also be obtained by taking the appropriate limits of (2.15),
(2.18) and (2.31) directly.
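The deep-water forms lend themselves to a quick numerical check. The sketch below evaluates (5.3) and the second-order term by truncated midpoint quadrature; for a current that is uniform over the whole depth the first-order term must return the current speed and the second-order term must vanish, since c = c₀ + Us is then exact. (The arrangement of the second-order term follows the reconstruction of (5.4) above and should be read as an assumption.)

```python
import math

def c1_deep(U, k, zmin=-20.0, m=40000):
    # (5.3): c1 = Ubar = 2k * int_{-inf}^0 U(z) e^{2kz} dz (midpoint rule, truncated at zmin)
    dz = -zmin/m
    return 2*k*dz*sum(U(zmin + (i + 0.5)*dz)*math.exp(2*k*(zmin + (i + 0.5)*dz))
                      for i in range(m))

def c2_deep(U, k, g=9.81, zmin=-20.0, m=40000):
    # (5.4): c2 = (k/c0) * int U^2 e^{2kz} dz - c1^2/(2 c0), with c0 = (g/k)^(1/2)
    c0 = math.sqrt(g/k)
    dz = -zmin/m
    I2 = dz*sum(U(zmin + (i + 0.5)*dz)**2*math.exp(2*k*(zmin + (i + 0.5)*dz))
                for i in range(m))
    c1 = c1_deep(U, k, zmin, m)
    return (k/c0)*I2 - c1*c1/(2*c0)
```

For the depth-limited uniform current of section 5.1 the same routine reproduces the closed form c₁ = −Us(1 − e^{−2K}) used in (5.8).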
5.1 Comparison with Analytic Results
We consider first the case of waves propagating on opposing depth limited
currents. The case of a uniform current is given by

U(z) = −Us ;  −d < z ≤ 0
U(z) = 0   ;  z < −d                                                (5.5)

In order to facilitate comparison with Skop's results, we introduce
dimensionless variables according to

Ω = ωd/Us ;  S = (gd)^{1/2}/Us ;  K = kd                            (5.6)

where Ω is dimensionless frequency and S is an inverse Froude number. From
Taylor (1955), the exact solution gives the dispersion relation

[(ΩE + K)⁴ − S²ΩE²K] tanh K + (ΩE + K)²(ΩE² − S²K) = 0              (5.7)

where subscript E denotes the exact value of Ω.
The first-order approximation is given by (Skop, 1987)

Ω₁ = K^{1/2}S − K(1 − e^{−2K})                                      (5.8)

while the second approximation is given by

Ω₂ = K^{1/2}S − K(1 − e^{−2K})[1 − (K^{1/2}/(2S)) e^{−2K}]          (5.9)
Skop has presented plots of ΩE and Ω₁ vs. K for a range of values of S. In
Figure 8, we provide plots of absolute error |Ωn − ΩE| (n = 1,2) vs. K for a
range of S values. For S < 2, the trend of increased accuracy for the second-
order approximation breaks down, indicating divergence of the expansion for
the case of strong currents in deep water.
For the case of stationary waves on a current, Ω = 0 and we determine the
value of S(K). The exact result is given by

SE = (K tanh K)^{1/2}                                               (5.10)

and the two approximations are given by

S₁ = K^{1/2}(1 − e^{−2K})                                           (5.11)

S₂ = (S₁/2){ 1 + [(1 − 3e^{−2K})/(1 − e^{−2K})]^{1/2} }             (5.12)
Each form is asymptotic to the limit S = K^{1/2} as K → ∞. The results here also
mirror problems with the expansion for strong currents, in that the term
inside the square root in (5.12) has a zero at positive K given by KCR =
(1/2) ln 3 = 0.549, for which the corresponding exact stopping inverse Froude
number is SE = 0.524. For K < KCR the second order solution cannot predict the
value of the stopping current. In contrast, the first-order solution predicts
the value reasonably well for the entire range of K, with increasing error as
K+0. Note that
lim_{K→0} SE = K                                                    (5.13)
Figure 8. Absolute frequency error |Ωn − ΩE|. Depth limited, opposing uniform
current in deep water. -- O(ε²) approximation (n=2); --- O(ε)
approximation (n=1). Curve labels are values of S.
lim_{K→0} S₁ = 2K^{3/2}                                             (5.14)
A plot of absolute error |Sn − SE|, n = 1,2 is given in Figure 9. The
first-order approximation is essentially accurate for K > 2 (SE > 1.39) while
accuracy in the second-order approximation is deferred to K > 5 (SE > 2.24).
This range of validity is likely to still be representative of relevant field
conditions (note that for a surface current of speed 1 m/sec, S = 2.25 implies
a depth of flow of 51.7 cm; increased depth of flow further increases S and
strengthens the validity of the second-order expansion).
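The critical values quoted above follow directly from the closed forms. The snippet below (illustrative only) recovers KCR from the zero of the radicand in (5.12), the corresponding exact stopping value from (5.10), and the flow depth implied by S = 2.25 for a 1 m/s surface current (taking g = 9.8 m/s²).

```python
import math

K_cr = 0.5*math.log(3.0)                  # zero of 1 - 3 e^{-2K} in (5.12)
S_E = math.sqrt(K_cr*math.tanh(K_cr))     # exact stopping value (5.10) at K = K_cr
d = (2.25*1.0)**2/9.8                     # S = (g d)^{1/2}/Us  =>  d = (S Us)^2/g, in metres
```

The computed depth of about 0.517 m matches the 51.7 cm figure cited in the text.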
A second case for which analytic results are available is the case of a
linearly-sheared jet
U(z) = −Us(1 + z/d) ;  −d < z ≤ 0
U(z) = 0            ;  z < −d                                       (5.15)

The exact dispersion relation (Taylor, 1955) is given by

(ΩE + K)²[2ΩE + 1 + e^{−2K}] − [KS² + ΩE + K][2ΩE + 1 − e^{−2K}] = 0    (5.16)
and first and second approximations are given by

Ω₁ = K^{1/2}S − K{1 − (1/(2K))(1 − e^{−2K})}                        (5.17)

Ω₂ = Ω₁ + (K^{1/2}/(2S)){(1/(4K))(1 − e^{−4K}) − e^{−2K}}           (5.18)
Figure 9. Absolute error |Sn − SE| for predicted stopping current. Opposing
depth limited current in deep water. -- O(ε²) approximation
(n=2); --- O(ε) approximation (n=1).
Plots of absolute error |Ωn − ΩE|; n = 1,2 are given in Figure 10 for 0 < K < 6
and 2.5 < S < 4. For this case, the improvement afforded by the second-order
approximation is dramatic. This result is most likely due to enhanced
representation of the effect of surface shear, as was noted in the results for
linear shear currents in section 3.
The prediction of stopping currents leads to the exact formula

SE = (K coth K − 1)^{1/2}                                           (5.19)

and the approximations

S₁ = K^{1/2}{1 − (1/(2K))(1 − e^{−2K})}                             (5.20)

S₂ = (S₁/2){ 1 + [1 − (2/S₁²)((1/(4K))(1 − e^{−4K}) − e^{−2K})]^{1/2} }    (5.21)
The expression under the square root in (5.21) again has a zero at KCR =
0.78633, which corresponds to a stopping current SE = 0.44506. The second-
order approximation gives no prediction of S below this value of KCR. Plots
of absolute error |SE − Sn| are included in Figure 11. For this case, the error
in predicted stopping current obtained from the second-order approximation is
reduced essentially to zero for K > 5, which corresponds to S = 2.0001 from
(5.19). The approach of the first-order prediction to the exact solution is
deferred to much higher values of K and is not as qualitatively satisfactory,
possibly due again to inadequate representation of the effects of surface
current shear.
Figure 10. As in Figure 8 for depth limited, opposing linear shear current in
deep water.
Figure 11. As in Figure 9 for depth limited, opposing linear shear current in
deep water.
5.2 Exponential and Linear Shear Profiles
Wave-current interaction assumes a role of great importance in the theory
of generation of wind waves. Waves generated by wind action interact with a
wind-driven, sheared current profile with a thickness of the order of the
wavelength. Several recent studies have shown that the dispersion properties
of the initial wavelets are not strongly dependent on the form of the current
chosen as long as the current profile reproduces the value of the current and
shear existing at the surface. (See Gastel et al, 1985, for a recent review.)
In this section, we compare the dispersion relation for an exponential
current profile

U(z) = Us e^{z/d} ;  z ≤ 0                                          (5.22)

to the dispersion relation for a depth limited profile having the same
velocity and surface shear, namely

U(z) = Us(1 + z/d) ;  −d ≤ z ≤ 0
U(z) = 0           ;  z < −d                                        (5.23)
These profiles have total mass flux rates differing by a factor of 2 but have
very similar structure close to the surface, where the linear profile neglects
terms of O(z/d)² in (5.22), where d is the e-folding length scale for decay of
the exponential profile. To the second order of approximation in the present
theory, the dispersion relations for the two profiles are given by

Ω_exp = K^{1/2}S + 2K²/(2K+1) + (K^{3/2}/(2S)){ K/(K+1) − [2K/(2K+1)]² }    (5.24)

Ω_lin = K^{1/2}S + K{1 − (1/(2K))(1 − e^{−2K})}
        + (K^{1/2}/(2S)){(1/(4K))(1 − e^{−4K}) − e^{−2K}}           (5.25)
where Us is defined positive for a following current (k>0) and where the
notation of the previous section is retained. Figure 12 shows a plot of Ω vs.
K for a range of S values. There is close agreement between the two
approximate dispersion relationships. This result suggests that a velocity
potential solution based on a depth-limited linear shear profile could be used
to some advantage in the study of initial wave growth, since three-dimensional
effects could be handled more simply than is possible when the analysis is
based on a stream function.
We remark that agreement between the first-order approximations for the
two profiles considered are also close, but that there is a general overall
deviation between the dispersion curves for the first and second
approximations, reflecting the reduced accuracy of the first-order
approximation. In reference to the discussion of the limited range of
validity of the second-order approximation, we consider the basic no-wave
state described in Figure 1 of Gastel et al. For this case, d is
approximately 5 mm with Us = 0.08 m s⁻¹, yielding a value S = 2.77, which is
well up into the range of validity of the present approximations. Capillarity
is neglected here and would significantly alter the expressions (5.24-25) at
the length scale for this particular example.
Figure 12. Comparison of dispersion relations for exponential profile (5.22)
and linear profile (5.23) having equal speed and shear at
z = 0. -- exponential; --- linear.
6. Comments on Action Flux Conservation
One of the chief applications for approximate dispersion relations for
wave-current interaction is in the construction of models for waves in slowly
varying domains. Such an application deserves a detailed analysis in its own
respect and will be the subject of further work, which in any case is
necessitated by the findings below; here, we can provide some initial results,
using the results of the O(ε) problem in the context of irrotational wave
theory. In particular, Skop (1987) suggests that the velocity Ū obtained in
section 2.2 may be used as the basis for the wave-current interaction in
propagation models, but provides no further analysis or support. Here, we
proceed using such an assumption and then analyze the results for the special
case of a linear shear current, using the results of Jonsson et al (1978) as
the basis for analytic comparisons.
We consider a linear wave riding on a flow of uniform-over-depth
velocity Ū, given by
φ = Re{ −(iga/σ) [cosh k(h+z)/cosh kh] e^{iψ} }                     (6.1)

η = Re{ a e^{iψ} }                                                  (6.2)

k = ∇_h ψ ;  ω = −ψ_t                                               (6.3)

subject to the dispersion relation

ω = σ + k·Ū ;  σ = (gk tanh kh)^{1/2}                               (6.4)
which is a simple extension of (2.21) to two dimensions. Following Kirby
(1984), (6.1) may be used as a trial function in a variational principle due
to Luke (1967), leading to a wave equation
D²φ̃/Dt² + (∇h·Ū) Dφ̃/Dt − ∇h·(CCg ∇h φ̃) + (σ² − k²CCg) φ̃ = 0        (6.5)

where

φ = φ̃ cosh k(h+z)/cosh kh

C = σ/k ;  Cg = ∂σ/∂k

D/Dt = ∂/∂t + Ū·∇h
We allow Ū and h to have slow spatial derivatives and a (the amplitude) to
vary slowly in space and time. Taking a and ψ to be real functions allows
(6.5) to be reduced to an eikonal equation for ψ and a transport equation
given by

(Ẽ/σ)_t + ∇h·[ (Ẽ/σ)(Cg + Ū) ] = 0                                  (6.8)

where Ẽ is given by the simple expression

Ẽ = (1/2) ρg a²
and where C_g = Cg k/k. The quantity Ẽ/σ is an estimate of the wave action
density, and (6.8) expresses the conservation of flux of wave action. The
question to be addressed is whether (Ẽ/σ)(Cg + Ū)
is a proper estimate of action flux to the level of approximation considered here.
Jonsson et al (1978) give an exact expression for action flux on a linear
shear current in one direction, which we write here as

Fa = (E/σa) Cga                                                     (6.11)

σa = ω − kUs                                                        (6.12)

Cga = Cg,rs + Us = ∂(kCrs)/∂k + Us                                  (6.13)

where Crs is the phase speed relative to the surface current, given by (3.9)

Crs = (g tanh kh/k)^{1/2} (1 + ω₀² tanh kh/(4gk))^{1/2} − ω₀ tanh kh/(2k)    (6.14)
Considering terms only to O(ε), it is apparent that

Crs + Us = c₀ + Ū + O(ε²)                                           (6.15)

Fa = (E/σa)(Cg,rs + Us) = (Ẽ/σ) C̃ga + O(ε²)

and hence C̃ga, given by

C̃ga = ∂(kc₀ + kŪ)/∂k = Cga + O(ε²)                                  (6.16)
is the correct advection velocity to O(ε). However, note that

C̃ga = Cg + Ū + k ∂Ū/∂k                                              (6.17)

is not equivalent to the simple estimate obtained from irrotational theory,
where Ū is entered simply as a local estimate of depth-uniform velocity, and
hence is not apparently a function of k. It is necessary to take this
dependence into account explicitly in arriving at the correct expression for
the group velocity in the absolute reference frame. Details of a comparison
of the expressions for Cga and C̃ga are given in Appendix A.
Turning to the expression for wave action, we may write the exact
expression for E for a linear shear current (from Jonsson et al) as

E = (1/2) ρg a² (1 − ω₀Crs/(2g))                                    (6.18)

where

σs = ω − kUs                                                        (6.19)

is the frequency relative to the surface current. To O(ε), we may then write
E as

E = (1/2) ρg a² (1 − ω₀c₀/(2g)) + O((ω₀h/(gh)^{1/2})²)              (6.20)

The expression for wave action density is E/σs, which then gives

E/σs = (ρg a²/(2σs)) (1 − ω₀c₀/(2g)) + O((ω₀h/(gh)^{1/2})²)         (6.21)
Examining σs, we have

σs = ω − kUs = ω − kŪ − k(Us − Ū)
   = σ − ω₀ tanh kh/2                                               (6.22)

Factoring out σ = kc₀ then gives

σs = σ(1 − ω₀c₀/(2g))                                               (6.23)
Substituting (6.22) in (6.20) finally gives

E/σs = Ẽ/σ + O((ω₀h/(gh)^{1/2})²)                                   (6.24)

and we see that Ẽ/σ is a proper estimate of action density to the required
order. The final expression for wave action flux is then given by

Fa = (Ẽ/σ){ (c₀/2)(1 + G) + Ū + (ω₀c₀²/(2g))(1 − G) } + O((ω₀h/(gh)^{1/2})²)    (6.25)

where we have used (A.1) and where G is defined in (A.2).
It is apparent that the derivation of a wave propagation model based on
irrotational theory and using U as the local uniform-over-depth velocity does
not produce a consistent model at the order of the expansion considered. The
construction of proper wave equations or evolution equations depends on
further investigation of the full rotational problem in the context of a
slowly-varying, one- or two-dimensional (in plan) domain. Direct use of Ū as
a depth-averaged velocity in existing evolution equation models based on
irrotational theory will incur an error of O(ε) in action flux conservation,
thus rendering the models invalid over accumulated distances of O(ε⁻¹).
However, the expression (6.25) (or alternate forms of (6.16) for non-constant
shear) may be used in eikonal-transport models for refraction calculations,
with consistency maintained up to O(ε²).
7. Conclusions
This study has provided approximate dispersion relations to O(ε²) for
waves propagating on weak currents U(z) = O(εc). In contrast to approximate
results for deep water, where O(ε) approximations are quite sufficient (Stewart
and Joy, 1974; Skop, 1987), the results here indicate that approximations to
O(ε²) are required for any degree of accuracy to be obtained in finite water
depth, except for very weak current conditions or for cases where vorticity is
confined near the bed and waves are relatively short. The O(ε²) results
provide the next correction to the results of Skop (1987) and provide
significant improvements for cases where vorticity is distributed more or less
evenly over the depth. Additional analysis indicates that an expansion
procedure for arbitrarily strong currents with weak vorticity yields
equivalent results to the weak current case; this conjecture is proven here
only for the case of a linear shear current.
A consideration of the formulae for action flux resulting from the O(ε)
approximation and the exact solution for a linear shear current indicates that
the use of the O(ε) average velocity Ū as an estimate of depth-averaged
velocity in existing wave models incurs an error of O(ε) in the action flux,
rendering existing models invalid for length scales of O(ε⁻¹). Correction of
this problem awaits further research on rotational waves in slowly-varying
domains.
Appendix A: Action Flux Velocity for Linear Shear Current
Based on the results in (6.14)–(6.17), we consider the equivalence of
the advection velocity Cga between the O(ε) solution and the expansion of
Jonsson et al's exact solution to that order. (Here, we take the viewpoint of
the large current, small vorticity expansion, so that ε = O(ω₀h/(gh)^{1/2})
≪ 1.) Evaluating C̃ga from (6.17) gives

C̃ga = (c₀/2)(1 + G) + Ū + (ω₀ tanh kh/(2k))(1 − G)                  (A.1)

G = 2kh/sinh 2kh                                                    (A.2)
From Jonsson et al we have

Cga = ∂(kCrs)/∂k + Us
    = (Crs/2) [ (1 + G) − G ω₀Crs/g ] / [ 1 − ω₀Crs/(2g) ] + Us     (A.3)

To the required order, we have

Crs = c₀ − ω₀c₀²/(2g) ;  Us = Ū + ω₀c₀²/(2g)                        (A.4)
Using (A.4) in (A.3) and retaining terms only to first order in ω₀ gives back
(A.1), indicating the desired result

C̃ga = Cga + O((ω₀h/(gh)^{1/2})²)                                    (A.5)
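The claimed equivalence is easy to verify numerically. The sketch below evaluates the exact advection velocity (A.3), with Crs taken from the exact linear-shear dispersion relation, and the O(ε) expression (A.1); their difference should collapse quadratically as ω₀ → 0. (Parameter values are arbitrary illustrations, not taken from the paper.)

```python
import math

g, k, h, Us = 9.8, 1.0, 1.0, 0.3
T = math.tanh(k*h)
G = 2*k*h/math.sinh(2*k*h)
c0 = math.sqrt(g*T/k)

def Cga_exact(w0):
    # (A.3): Crs solves k Crs^2 = (g - w0 Crs) tanh kh exactly
    Crs = (-w0*T + math.sqrt(w0*w0*T*T + 4*k*g*T))/(2*k)
    return 0.5*Crs*((1 + G) - G*w0*Crs/g)/(1 - w0*Crs/(2*g)) + Us

def Cga_approx(w0):
    # (A.1), with Ubar = Us - w0 tanh kh/(2k) for the linear shear profile
    Ubar = Us - w0*T/(2*k)
    return 0.5*c0*(1 + G) + Ubar + (w0*T/(2*k))*(1 - G)
```

At ω₀ = 0 the two expressions coincide identically; for small ω₀ the discrepancy scales with ω₀², consistent with (A.5).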
Burns, J.C., 1953, "Long waves in running water", Proc. Cambr. Phil. Soc. 49,
Fenton, J.D., 1973, "Some results for surface gravity waves on shear flows",
J. Inst. Maths. Applics. 12, 1-20.
Gastel, K. van, Janssen, P.A.E.M. and Komen, G.J., 1985, "On phase velocity
and growth rate of wind-induced gravity-capillary waves", J. Fluid Mech.
161, 199-216.
Hunt, J.N., 1955, "Gravity waves in flowing water", Proc. Roy. Soc. London
A231, 496-504.
Jonsson, I.G., Brink-Kjaer, 0. and Thomas, G.P., 1978, "Wave action and set-
down for waves on a shear current", J. Fluid Mech., 87, 401-416.
Kirby, J.T., 1984, "A note on linear surface wave-current interaction over
slowly varying topography", J. Geophys. Res. 89, 745-747.
Lighthill, M.J., 1953, "On the critical Froude number for turbulent flow over
a smooth bottom", Proc. Cambr. Phil. Soc. 49, 704-706.
Luke, J.C., 1967, "A variational principle for a fluid with a free surface",
J. Fluid Mech., 27, 395-397.
Peregrine, D.H., 1976, "Interaction of water waves and currents", Adv. Appl.
Mech. 16, 9-117.
Peregrine, D.H. and Smith, R., 1975, "Stationary gravity waves on non-uniform
free streams: jet-like streams", Math. Proc. Cambr. Phil. Soc. 77,
Peregrine, D.H. and Jonsson, I.G., 1983, "Interaction of waves and currents",
Misc. Report MR83-6, U.S. Army Corps of Engineers, Coastal Engineering
Research Center.
Skop, R.A., 1987, "An approximate dispersion relation for wave-current
interactions", J. Waterway, Port, Coast. and Ocean Eng. 113, 187-195.
Stewart, R.H. and Joy, J.W., 1974, "HF radio measurements of surface
currents", Deep-Sea Res., 21, 1039-1049.
Taylor, G.I., 1955, "The action of a surface current used as a breakwater",
Proc. Roy. Soc. London A 231, 466-478.
Thomas, G.P., 1981, "Wave-current interactions: an experimental and numerical
study. Part 1. Linear waves", J. Fluid Mech., 110, 457-474.
Integer Power Algorithm
One of the old chestnuts of mathematical algorithms. Problem: Find x^n where x is real and n is a positive integer, using a minimal number of multiplications. Algorithm in Scheme:
(define (square x) (* x x))
(define (power x n)
  (cond ((= n 0) 1)
        ((even? n) (power (square x) (/ n 2)))
        (else (* x (power (square x) (/ (- n 1) 2))))))
For example,
x^{10} = y^5, where y = x^2
= y z^2, where z = y^2,
= y w, where w = z^2.
This requires 4 multiplications, to find y, z, w, and finally yw. If the binary expansion of n is a_{k}a_{k-1}...a_{0}, then this takes k + l - 1 multiplications, where l is the number of one-bits in
the expansion. The expansion of 10 is 1010, and in that case k = 3, l = 2, and k + l - 1 = 4. --
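The count k + l - 1 can be read straight off n's binary representation. A small illustration (Python here, since the page already mixes languages; the function name is mine):

```python
def binary_power_mults(n):
    # k squarings plus (l - 1) combining multiplies, where
    # k = floor(log2 n) and l = number of one-bits in n
    k = n.bit_length() - 1
    l = bin(n).count("1")
    return k + l - 1
```

This reproduces the examples in the text: 4 multiplications for n = 10, and 6 for n = 15.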
A few remarks about this algorithm.
It doesn't always do the fewest possible number of multiplications. For instance, when n=15 you could do (x^3)*(x^5) = (x*x^2)*(x*x^2*x^2) requiring 4 multiplies, but the algorithm above will do x*(x
^2)^7 = ... = x * x^2 * (x^2)^2 * ((x^2)^2)^2, for 6 multiplies.
Something equivalent can be implemented iteratively rather than recursively. This isn't likely to be important unless you have either a bad language implementation or a bias against recursive code.
Although it looks like this offers a dramatic speedup (O(log n) multiplications instead of O(n)) compared to the naive one-at-a-time algorithm, life's a bit more complicated in the real world, where
multiplying larger numbers takes longer. If you implement multiplication in such a way that multiplying an m-bit number by an n-bit number takes time proportional to m*n, then the elegant algorithm
above may be no faster than just multiplying by x n times; it might even be slower.
On the other hand, if you use a more sophisticated multiplication algorithm or you're doing modular arithmetic, an algorithm like the one above can be a very big win. (You need modular exponentiation
for doing RSA cryptography and primality testing, for instance.) Yes, I could have also added an auxiliary variable. In C, iterative style:
float power(float x, unsigned int n) {
    float aux = 1.0;
    while (n > 0) {
        if (n & 1) {            /* odd? */
            aux *= x;
            if (n == 1) return aux;
        }
        x *= x;
        n /= 2;
    }
    return aux;
}
or in Scheme, functional style (tail-recursive):
(define (multiply-with-power aux x n)
  (cond ((= n 0) aux)
        ((= n 1) (* aux x))
        ((even? n) (multiply-with-power aux (square x) (quotient n 2)))
        (else (multiply-with-power (* aux x) (square x) (quotient n 2)))))
(define (power x n)
(multiply-with-power 1 x n))
Your x^15 algorithm was wrong; you had (x^3)*(x^5) when you needed (x^3)^5. To compute x^3 from x requires 2 multiplications, while to compute y^5 from y (= x^3) needs 3 multiplications. The total is
5. You are right, however. The squaring method is not necessarily the best sequence. More striking would be x^27, which would need 8 multiplications by squaring, but 6 from cubing. It's actually an
interesting, and so far as I know unsolved, problem: given a positive integer n, let f(n) be the smallest k such that there is a sequence
1 = a_0, a_1, ..., a_k = n,
such that each element other than a_0 is the sum of two earlier elements in the series. Is there a good algorithm for f(n)? What are its arithmetic properties? Neil Sloane's
On-Line Encyclopedia of Integer Sequences references this as entry A003313, and cites
Knuth, The Art of Computer Programming, vol. 2, p. 446. The sequence is given as:
1--10: 0,1,2,2,3,3,4,3,4,4,
11--20: 5,4,5,5,5,4,5,5,6,5,
21--30: 6,6,6,5,6,6,6,6,7,6,
31--40: 7,5,6,6,7,6,7,7,7,6,
41--50: 7,7,7,7,7,7,8,6,7,7,
51--60: 7,7,8,7,8,7,8,8,8,7,
61--70: 8,8,8,6,7,7,8,7,8,8,...
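For small n, f(n) can be computed by brute force: iteratively deepen a depth-first search over strictly increasing addition chains (any minimal chain can be taken strictly increasing). The sketch below is one such search, with a simple doubling bound for pruning; it reproduces the table values above.

```python
def shortest_addition_chain(n):
    """Length f(n) of a shortest addition chain for n (number of additions)."""
    if n == 1:
        return 0
    limit = n.bit_length() - 1          # lower bound: at least ceil(log2 n) additions
    while not _search([1], n, limit):
        limit += 1
    return limit

def _search(chain, n, limit):
    last = chain[-1]
    if last == n:
        return True
    steps_left = limit - (len(chain) - 1)
    if steps_left <= 0 or last << steps_left < n:
        return False                    # even doubling every remaining step cannot reach n
    tried = set()
    for i in range(len(chain) - 1, -1, -1):
        for j in range(i, -1, -1):
            s = chain[i] + chain[j]
            if s <= last or s > n or s in tried:
                continue
            tried.add(s)
            chain.append(s)
            if _search(chain, n, limit):
                return True
            chain.pop()
    return False
```

This is only practical for small n; the combinatorics blow up quickly, which is part of why the general problem is hard.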
Though the above may make it seem that using such a system improves the performance of raising powers, quite the contrary is true. I have a theory, and some lemmas, but I don't have a mathematical
proof yet. Perhaps someone can help me with it; I'll give the basis here:
the point is that finding how to multiply will be of the order O(sqrt(N)), while using the power system used at the beginning of the page is of order O(log(N)). Clearly for the general case, the
system used at the beginning of this page is better. -- ChristophePoucet
Yes, multiplication is a curious task in programming. We assume that it's all done in hardware and we ignore its complexities. You are right; a naive multiplication algorithm that multiplies two
N-bit numbers in O(N^2) time would be slow. However, modern algorithms use FFT-like methods, and require O(N log N) time. It's tricky; the constants the O-notation hides are crucial. --
Another troubleshooting challenge [Archive] - LawnSite.com™ - Lawn Care & Landscaping Business Forum, Discuss News & Reviews
08-25-2010, 12:09 PM
Have you ever wondered how changes in transformer input voltage affect the total amperage draw? This is good to know so you can evaluate changes in amperage readings.
Say, for example, you install a system and make the following intial measurements:
Voltage at the GFCI (under full load): 115V
Primary transformer amperage: 8.0 amps
You return to the site a year later and take new measurements:
Voltage at the GFCI (under full load): 126V (about 10% higher)
Primary transformer amperage: ???? amps
What would you predict (if only the voltage has changed)? Would the amperage be higher, lower, or stay the same? If it does change, by how much?
Tips: Amps=Watts/Volts but this will not give the correct answer, unless you consider what's happening with the lamps.
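One way to frame the puzzle: the answer depends on how the lamp load responds to voltage. A constant-resistance load draws more current at higher voltage, a constant-power load draws less, and an incandescent filament sits in between — its power scales roughly as V^1.6 (a common rule of thumb, not a figure from this thread). A rough sketch comparing the three models with the thread's numbers:

```python
def primary_amps(i0, v0, v1, exponent):
    """Scale a primary current reading when input voltage changes from v0 to v1.

    Lamp power is modeled as P proportional to V**exponent, so the current
    I = P/V scales as V**(exponent - 1).
    exponent=2.0: constant-resistance load; exponent=0.0: constant-power load;
    exponent~1.6: rule-of-thumb incandescent filament behavior.
    """
    return i0 * (v1 / v0) ** (exponent - 1)

v0, v1, i0 = 115.0, 126.0, 8.0
for label, k in [("constant resistance", 2.0),
                 ("incandescent (~V^1.6)", 1.6),
                 ("constant power", 0.0)]:
    print(f"{label}: {primary_amps(i0, v0, v1, k):.2f} A")
```

Under the filament model the predicted amperage rises, but by less than the 10% voltage increase; this is only an illustration of the reasoning, not the thread's official answer.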
Which Spheres are Complex Manifolds?
Possible Duplicate:
complex structure on S^n
The two sphere $S^2$ is a real manifold of dimension $2$, while the three sphere $S^3$ is a real manifold of dimension $3$. Now $S^2$ is a complex manifold, while $S^3$ being odd dimensional is not.
Is it true that all spheres of the form $S^{2N}$ are complex manifolds?
dg.differential-geometry complex-geometry
marked as duplicate by Kevin H. Lin, Qiaochu Yuan, S. Carnahan♦ Jul 1 '10 at 15:18
2 Answers
That's VERY much false. This question is looking for a reference to the fact that $S^2$ and $S^6$ are the only spheres that admit even almost complex structures, and it's open if $S^6$ admits
a complex structure (the known almost complex structure is not integrable).
Edit: Other questions on complex structures on spheres are here and here.
It's a nice exercise in characteristic classes to show that $S^{4k}$ for all $k$ are NOT complex manifolds.
EDIT: I will answer Charlie's comment here and provide a sketch of the proof.
Let $\omega=TS^{4k}$ be the tangent space to the $4k$-sphere. If $S^{4k}$ was actually a complex manifold then $\omega$ would be a complex vector bundle. In this case the complexification
of the underlying real vector bundle $\omega_{\mathbb{R}}$ would be canonically isomorphic to the Whitney sum $\omega\oplus \bar{\omega}$ (Milnor&Stasheff page 176). Now by corollary 15.5
in Milnor&Stasheff $$p_k(\omega_{\mathbb{R}})=c_k^2(\omega)-2c_{k-1}c_{k+1}(\omega)+\cdots\mp 2c_{2k}(\omega)$$
This then shows that the top Pontrjagin number $$< p_k,[S^{4k}]>=<\mp 2c_{2k},[S^{4k}]>=\mp 4$$ but we also know that spheres are boundaries of oriented manifolds and thus have all Pontrjagin numbers 0. Contradiction.
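For concreteness, the smallest case $k=1$ (the four-sphere) spelled out — this merely specializes the argument above and adds nothing beyond the cited sources: if $S^4$ were complex, then $\omega = TS^4$ would be a rank-2 complex bundle and $$p_1(\omega_{\mathbb{R}}) = c_1^2(\omega) - 2c_2(\omega).$$ Since $H^2(S^4)=0$ forces $c_1(\omega)=0$, and the top Chern class is the Euler class so $< c_2(\omega),[S^4]> = \chi(S^4) = 2$, we get $< p_1,[S^4]> = -4$; but $S^4 = \partial D^5$ bounds, so its Pontrjagin numbers all vanish.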
On another note, according to C.C. Hsiung's book Almost Complex and Complex Structures on page 233 he says "In fact, the absence of an almost complex structure on $S^{4k}$ for $k\geq 1$ and
$S^{2n}$ for $n\geq 4$ was proved by Wu and jointly Borel and Serre respectively."
2 Don't characteristic classes in fact prove that $S^{2k}$ for $k\geq 3$ aren't? I thought it boiled down to $(k-1)!$ needs to divide 2, or else the characteristic classes obstruct,
leaving $S^2$, $S^4$ and $S^6$, and $S^4$ needs to be ruled out by hand. – Charles Siegel Jun 30 '10 at 6:52
@Charlie, I think you mean $k>3$. – Justin Curry Jun 30 '10 at 16:02
Ah, of course, $>$ not $\geq$. – Charles Siegel Jul 1 '10 at 14:39
As an almost complex structure on $X$ endows $TX$ with the structure of a complex vector bundle, doesn't your argument show that these spheres don't even admit almost complex structures,
let alone integrable ones? – Michael Albanese Dec 25 '12 at 7:01
On the Unique Games Conjecture (part 2)
[I am preparing a survey talk on Unique Games for a mathematics conference, and writing a survey paper for a booklet that will be distributed at the conference. My first instinct was to write a
one-line paper that would simply refer to Subhash's own excellent survey paper. Fearing that I might come off as lazy, I am instead writing my own paper. Here are some fragments. Comments are very welcome.]
1. Why is the Unique Games Conjecture Useful
In a previous post we stated the Unique Games Conjecture and made the following informal claim, here rephrased in abbreviated form:
To reduce Label Cover to a graph optimization problem like Max Cut, we map variables to collections of vertices and we map equations to collections of edges; then we show how to “encode” assignments
to variables as 2-colorings of vertices which cut a ${\geq c_1}$ fraction of edges, and finally (this is the hardest part of the argument) we show that given a 2-coloring that cuts a ${\geq c_2}$
fraction of edges, then
1. the given 2-coloring must be somewhat “close” to a 2-coloring coming from the encoding of an assignment and
2. if we “decode” the given 2-coloring to an assignment to the variables, such an assignment satisfies a noticeable fraction of equations.
Starting our reduction from a Unique Game instead of a Label Cover problem, we only need to prove (1) above, and (2) more or less follows for free.
To verify this claim, we “axiomatize” the properties of a reduction that only achieves (1): we describe a reduction mapping a single variable to a graph, such that assignments to the variable are
mapped to good cuts, and somewhat good cuts can be mapped back to assignments to the variable. The reader can then go back to our analysis of the Max Cut inapproximability proof in the previous post,
and see that the properties below are sufficient to implement the reduction.
Definition 1 (${(c_1,c_2)}$-Graph Family) A ${(c_1,c_2)}$ graph family is a collection of graphs ${G_m = (V_m,E_m)}$, for each positive integer ${m}$, together with an encoding function ${Enc_m :
\{1,\ldots,m\} \rightarrow 2^{V_m}}$ and a randomized decoding process ${Dec_m : 2^{V_m} \rightarrow \{1,\ldots,m\}}$ such that
□ For every ${m}$ and every ${i\in \{1,\ldots,m\}}$, let ${S_i := Enc_m(i)}$. Then the partition ${(S_i,V_m-S_i)}$ cuts at least a ${c_1}$ fraction of the edges of ${G_m}$;
□ If ${(S,V_m-S)}$ is a partition of the vertices of ${G_m}$ that cuts at least a ${c_2 + \delta}$ fraction of the edges, then there is an index ${i\in \{1,\ldots,m\}}$ such that
$\displaystyle \mathop{\mathbb P} [ Dec_m (S) = i ] \geq p(\delta) >0$
where ${p(\delta)}$ is a positive quantity independent of ${m}$;
□ The encoding and decoding procedures are symmetric. That is, it is possible to define an action of the symmetric group of ${\{1,\ldots,m\}}$ on ${V_m}$, by which we mean that for every ${x\in V_m}$ and every bijection ${\pi : \{1,\ldots,m \} \rightarrow \{1,\ldots,m\}}$ we define an element ${x\circ \pi \in V_m}$ (for a set ${S}$, we define ${\pi(S)}$ to be ${\{ x\circ \pi : x\in S \}}$) such that for every ${i\in \{1,\ldots,m\}}$ and every bijection ${\pi: \{1,\ldots,m\} \rightarrow \{1,\ldots, m \}}$ we have
$\displaystyle Enc_m ({\pi(i)}) = \pi (Enc_m ({i} ))$
$\displaystyle Dec_m (\pi (S)) \approx \pi ( Dec_m (S))$
where ${D_1 \approx D_2}$ means that ${D_1}$ and ${D_2}$ have the same distribution.
We claim that, in the previous post, we defined a ${(1-\epsilon, 1 - \frac 2\pi \sqrt{\epsilon})}$ graph family, and that this was sufficient to prove the intractability of approximating Max Cut under the Unique Games Intractability Conjecture.
The graph family is the following. For a given ${m}$:
1. The vertex set is ${V_m := \{ 0,1 \}^m}$;
2. The graph is a weighted complete graph with edges of total weight ${1}$. The weight of edge ${(x,y)}$ is the probability of generating the pair ${(x,y)}$ by sampling ${x}$ at random and sampling ${y}$ from the distribution ${N_{1-\epsilon}(x)}$;
3. ${Enc_m(i)}$ defines the cut ${(S_i,V_m-S_i)}$ in which ${S_i}$ is the set of all vertices ${x}$ such that ${x_i = 1}$
4. ${Dec_m(S)}$ proceeds as follows. Define ${f(x) := -1}$ if ${x\in S}$ and ${f(x) := 1}$ if ${x \not\in S}$. Compute the Fourier expansion
$\displaystyle f(x) = \sum_R \hat f(R) (-1)^{\sum_{i \in R} x_i}$
Sample a set ${R}$ with probability proportional to ${\hat f^2(R)}$, and then output a random element of ${R}$.
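The decoding procedure ${Dec_m}$ can be sketched by brute force for small ${m}$ (exponential in ${m}$, so purely illustrative; the function names are ours):

```python
import itertools
import random

def fourier_coeffs(f_vals, m):
    """Fourier coefficients hat f(R) of f: {0,1}^m -> {-1,1}, by brute force.

    hat f(R) = E_x [ f(x) * (-1)^{sum_{i in R} x_i} ].
    Returns a dict mapping frozenset R -> hat f(R).
    """
    points = list(itertools.product([0, 1], repeat=m))
    coeffs = {}
    for r_bits in itertools.product([0, 1], repeat=m):
        R = frozenset(i for i in range(m) if r_bits[i])
        chi = lambda x: (-1) ** sum(x[i] for i in R)
        coeffs[R] = sum(f_vals[x] * chi(x) for x in points) / len(points)
    return coeffs

def decode(f_vals, m, rng=random):
    """Sample R with probability hat f(R)^2 (by Parseval these weights sum to 1
    for a Boolean-valued f), then output a uniformly random element of R."""
    coeffs = fourier_coeffs(f_vals, m)
    sets, weights = zip(*((R, w * w) for R, w in coeffs.items()))
    R = rng.choices(sets, weights=weights)[0]
    return rng.choice(sorted(R)) if R else None

# A "dictator" cut S = {x : x_2 = 1} has all Fourier weight on R = {2},
# so decoding recovers coordinate 2 with certainty.
m = 4
f_vals = {x: -1 if x[2] == 1 else 1 for x in itertools.product([0, 1], repeat=m)}
assert decode(f_vals, m) == 2
```

For a cut that is merely close to a dictator, the squared-Fourier-weight sampling still returns the influential coordinate with noticeable probability, which is exactly the property the definition asks for.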
2. Semidefinite Programming
Solving an instance of a combinatorial optimization problem of maximization type is a task of the form
$\displaystyle \begin{array}{l} \max cost (z)\\ \mbox{subject to}\\ z\in Sol \end{array} \ \ \ \ \ (1)$
where ${Sol}$ is the set of admissible solutions and ${cost(z)}$ is the cost of solution ${z}$. For example the problem of finding the maximum cut in a graph ${G=(V,E)}$ is a problem of the above
type where ${Sol}$ is the collection of all subsets ${S\subseteq V}$, and ${cost(S)}$ is the number of edges cut by the vertex partition ${(S,V-S)}$.
If ${Sol \subseteq Rel}$, and ${cost' : Rel \rightarrow {\mathbb R}}$ is a function that agrees with ${cost()}$ on ${Sol}$, then we call the problem
$\displaystyle \begin{array}{l} \max cost'(z)\\ \mbox{subject to}\\ z\in Rel\end{array} \ \ \ \ \ (2)$
a relaxation of the problem in (1). The interest in this notion is that combinatorial optimization problems in which the solution space is discrete are often NP-hard, while there are general classes
of optimization problems defined over a continuous convex solution space that can be solved in polynomial time. A fruitful approach to approximating combinatorial optimization problems is thus to
consider relaxations to tractable convex optimization problems, and then argue that the optimum of the relaxation is close to the optimum of the original discrete problem.
The Unique Games Intractability Conjecture appears to be deeply related to the approximation quality of Semidefinite Programming relaxations of combinatorial optimization problems.
2.1. Semidefinite Programming
A symmetric matrix ${A}$ is positive semidefinite, written ${A \succeq {\bf 0}}$, if all its eigenvalues are non-negative. We write ${A \succeq B}$ if ${A-B}$ is positive semidefinite. We quote without proof the following facts:
• A matrix ${A\in {\mathbb R}^{n \times n}}$ is positive semidefinite if and only if there are vectors ${v^{1},\ldots,v^n \in {\mathbb R}^m}$ such that for every ${i,j}$ we have ${A_{ij} = \langle
v^i,v^j \rangle}$. Furthermore, there is an algorithm of running time polynomial in ${n}$ that, given a matrix ${A}$, tests whether ${A}$ is positive semidefinite and, if so, finds vectors ${v^1,
\ldots,v^n}$ as above.
• The set of positive semidefinite matrices is a convex subset of ${{\mathbb R}^{n\times n}}$. In fact, it is a convex cone, that is, for every two positive semidefinite matrices ${A,B}$ and
non-negative scalars ${\alpha,\beta}$, the matrix ${\alpha A + \beta B}$ is positive semidefinite.
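The first fact above (the Gram-vector factorization) is easy to exercise numerically. In the sketch below an eigendecomposition stands in for the polynomial-time test, and the tolerance is our own implementation choice:

```python
import numpy as np

def psd_vectors(A, tol=1e-9):
    """If symmetric A is PSD, return a matrix V whose columns v^1..v^n satisfy
    A_ij = <v^i, v^j>; otherwise return None.

    Uses A = Q diag(w) Q^T, so V = diag(sqrt(w)) Q^T gives A = V^T V.
    """
    w, Q = np.linalg.eigh(A)
    if w.min() < -tol:
        return None                                   # negative eigenvalue: not PSD
    V = np.sqrt(np.clip(w, 0.0, None))[:, None] * Q.T  # scale each row of Q^T
    return V

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3, hence PSD
V = psd_vectors(A)
assert np.allclose(V.T @ V, A)                         # Gram vectors recover A
assert psd_vectors(np.array([[0.0, 1.0], [1.0, 0.0]])) is None  # eigenvalues ±1
```

This is the factorization that turns the matrix form (3) of a semidefinite program into the vector form (4).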
It is often the case that optimizing a linear function over a convex subset of ${{\mathbb R}^N}$ is a polynomial time solvable problem, and indeed there are polynomial time algorithms (up to any fixed accuracy, and under suitable boundedness assumptions on the feasible region) for the following problem:
Definition 2 (Semidefinite Programming) The Semidefinite Programming problem is the following computational problem: given matrices ${C,A^1,\ldots,A^m \in {\mathbb R}^{n \times n}}$ and scalars ${b_1,\ldots,b_m \in {\mathbb R}}$, find a matrix ${X}$ that solves the following optimization problem (called a semidefinite program):
$\displaystyle \begin{array}{l} \max C \bullet X\\ \mbox{subject to}\\ A^1 \bullet X \leq b_1\\ A^2 \bullet X \leq b_2\\ \cdots\\ A^m \bullet X \leq b_m\\ X \succeq {\bf 0} \end{array} \ \ \ \ \ (3)$
where we use the notation ${A \bullet B := \sum_{ij} A_{ij} \cdot B_{ij}}$.
In light of the characterization of positive semidefinite matrices described above, the semidefinite program (3) can be equivalently written as
$\displaystyle \begin{array}{l} \max \sum_{ij} C_{ij} \cdot \langle v^i ,v^j \rangle \\ \mbox{subject to}\\ \sum_{ij} A^1_{ij} \cdot \langle v^i,v^j \rangle \leq b_1\\ \sum_{ij} A^2_{ij} \cdot \langle v^i,v^j \rangle \leq b_2\\ \cdots\\ \sum_{ij} A^m_{ij} \cdot \langle v^i,v^j \rangle \leq b_m\\ v^1,\ldots,v^n \in {\mathbb R}^n \end{array} \ \ \ \ \ (4)$
That is, as an optimization problem in which we are looking for a collection ${v^1,\ldots,v^n}$ of vectors that optimize a linear function of their inner products subject to linear inequalities about
their inner products.
2.2. Semidefinite Programming and Approximation Algorithms
A quadratic program is an optimization problem in which we are looking for reals ${x_1,\ldots,x_n}$ that optimize a quadratic form subject to quadratic inequalities, that is an optimization problem
that can be written as
$\displaystyle \begin{array}{l} \max \sum_{ij} C_{ij} \cdot x_i \cdot x_j \\ \mbox{subject to}\\ \sum_{ij} A^1_{ij} \cdot x_i \cdot x_j \leq b_1\\ \sum_{ij} A^2_{ij} \cdot x_i \cdot x_j \leq b_2\\ \cdots\\ \sum_{ij} A^m_{ij} x_i \cdot x_j \leq b_m\\ x_1,\ldots,x_n \in {\mathbb R} \end{array} \ \ \ \ \ (5)$
Since the quadratic condition ${x\cdot x = 1}$ can only be satisfied if ${x\in \{ -1,1 \}}$, quadratic programs can express discrete optimization problems. For example, the Max Cut problem in a graph
${G=(V,E)}$, where ${V=\{ 1,\ldots,n\}}$ can be written as a quadratic program in the following way
$\displaystyle \begin{array}{l} \max \sum_{ij \in E} \frac 12 - \frac 12 x_i \cdot x_j\\ \mbox{subject to}\\ x_1^2 = 1\\ \cdots\\ x_n^2 = 1\\ x_1,\ldots,x_n \in {\mathbb R} \end{array} \ \ \ \ \ (6)$
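The objective in (6) is just the cut size: each edge contributes ${\frac 12 - \frac 12 x_i x_j}$, which is 1 when its endpoints get opposite signs and 0 otherwise. A quick illustrative check on a 4-cycle:

```python
def cut_value(edges, x):
    """Evaluate the quadratic objective sum_{ij in E} (1 - x_i * x_j) / 2
    for a ±1 assignment x (a dict from vertex to ±1)."""
    return sum((1 - x[i] * x[j]) / 2 for i, j in edges)

# 4-cycle 1-2-3-4-1: alternating signs cut all 4 edges (the maximum cut),
# while a constant assignment cuts nothing.
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
assert cut_value(edges, {1: 1, 2: -1, 3: 1, 4: -1}) == 4
assert cut_value(edges, {1: 1, 2: 1, 3: 1, 4: 1}) == 0
```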
Every quadratic program has a natural Semidefinite Programming relaxation in which we replace reals ${x_i}$ with vectors ${v^i}$ and we replace products ${x_i \cdot x_j}$ with inner products ${\langle v^i,v^j \rangle}$. Applying this generic transformation to the quadratic programming formulation of Max Cut we obtain the following semidefinite programming formulation of Max Cut
$\displaystyle \begin{array}{l} \max \sum_{ij \in E} \frac 12 - \frac 12\langle v^i,v^j \rangle\\ \mbox{subject to}\\ \langle v^1,v^1 \rangle = 1\\ \cdots\\ \langle v^n,v^n \rangle = 1\\ v^1,\ldots,v^n \in {\mathbb R}^n \end{array} \ \ \ \ \ (7)$
The Max Cut relaxation (7) is the one used by Goemans and Williamson.
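Goemans and Williamson round an optimal solution of (7) by picking a uniformly random hyperplane through the origin and splitting the vectors by side. A sketch of just the rounding step (the SDP solve itself is omitted; we assume unit vectors are given):

```python
import numpy as np

def round_hyperplane(vectors, rng):
    """Given unit vectors v^1..v^n as rows, return a ±1 assignment by the sign
    of the inner product with a random Gaussian direction (i.e., which side of
    a uniformly random hyperplane each vector falls on)."""
    g = rng.standard_normal(vectors.shape[1])
    return np.where(vectors @ g >= 0, 1, -1)

def cut_fraction(edges, signs):
    """Fraction of edges whose endpoints received opposite signs."""
    return sum(signs[i] != signs[j] for i, j in edges) / len(edges)

# Two antipodal unit vectors are separated by a random hyperplane with
# probability 1, so the single edge between them is always cut.
rng = np.random.default_rng(0)
vecs = np.array([[1.0, 0.0], [-1.0, 0.0]])
signs = round_hyperplane(vecs, rng)
assert cut_fraction([(0, 1)], signs) == 1.0
```

In general an edge with vectors at angle ${\theta}$ is cut with probability ${\theta/\pi}$, which is where the ${.8785\cdots}$ guarantee discussed later comes from.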
Algorithms based on semidefinite programming provide the best known polynomial-time approximation guarantees for a number of other graph optimization problems and constraint satisfaction problems.
2.3. Semidefinite Programming and Unique Games
The quality of the approximation of Relaxation (7) for the Max Cut problem exactly matches the intractability results proved assuming the Unique Games Intractability Assumptions. This has been true
for a number of other optimization problems.
Remarkably, Prasad Raghavendra has shown that for a class of problems (which includes Max Cut as well as boolean and non-boolean constraint satisfaction problems), there is a semidefinite programming relaxation such that, assuming the Unique Games Intractability Conjecture, no other polynomial time algorithm can provide a better approximation than that relaxation.
If one believes the conjecture, this means that the approximability of all such problems has been resolved, and a best-possible polynomial time approximation algorithm has been identified for each
such problem. An alternative view is that, in order to contradict the Unique Games Intractability Conjecture, it is enough to find a new algorithmic approximation technique that works better than semidefinite programming for any of the problems that fall into Raghavendra's framework, or maybe find a different semidefinite programming relaxation that works better than the one considered in Raghavendra's work.
2.4. Sparsest Cut, Semidefinite Programming, and Metric Embeddings
If, at some point in the future, the Unique Games Intractability Conjecture will be refuted, then some of the theorems that we have discussed will become vacuous. There are, however, a number of
unconditional results that have been discovered because of the research program that originated from the conjecture, and that would survive a refutation.
First of all, the analytic techniques developed to study reductions from Unique Games could become part of future reductions from Label Cover or from other variants of the PCP Theorem. As discussed
above, reductions from Unique Games give ways of encoding values of variables of a Label Cover instance as good feasible solutions in the target optimization problems, and ways of decoding good
feasible solutions in the target optimization problems as values for the variables of the Label Cover instance.
It is also worth noting that some of the analytic techniques developed within the research program of Unique Games have broader applicability. For example the impetus to prove the Invariance Theorem
of Mossel, O’Donnell and Oleszkiewicz came from its implications for conditional inapproximability results, but it settles a number of open questions in social choice theory.
Perhaps the most remarkable unconditional theorems motivated by Unique Games regard integrality gaps of Semidefinite Programming relaxations. The integrality gap of a relaxation of a combinatorial
optimization problem is the worst-case (over all instances) ratio between the optimum of the combinatorial problem and the optimum of the relaxation. The integrality gap measures how good the optimum of the relaxation is as a numerical approximation of the true optimum, and it is usually a bottleneck to the quality of approximation algorithms that are based on the relaxation.
The integrality gap of Relaxation (7) is ${.8785\cdots}$, the same as the hardness of approximation result proved assuming the Unique Games Intractability Conjecture. Indeed, the graph that exhibits
the ${.8785\cdots}$ gap is related to the graph used in the reduction from Unique Games to Max Cut. This is part of the larger pattern discovered by Raghavendra (cited above), who shows that, for a
certain class of optimization problems, every integrality gap instance for certain semidefinite programming relaxations can be turned into a conditional inapproximability result assuming the Unique
Games Intractability Assumption. The Sparsest Cut problem, described in the previous post, has a Semidefinite Programming relaxation, first studied by Goemans and Linial, whose analysis is of
interest even outside of the area of approximation algorithms. A metric space ${(X,d)}$ is of negative type if ${(X,\sqrt d)}$ is also a metric space and is isometrically embeddable in Euclidean
space. If every ${n}$-point metric space of negative type can be embedded into ${L1}$ with distortion at most ${c(n)}$, then the Semidefinite Programming relaxation of Goemans and Linial can be used
to provide a ${c(n)}$-approximate algorithm for sparsest cut, where ${n}$ is the number of vertices, and the integrality gap of the relaxation is at most ${c(n)}$. Equivalently, if there is an ${n}$
-vertex instance of Sparsest Cut exhibiting an integrality gap at least ${c(n)}$, then there is an ${n}$-point negative-type metric space that cannot be embedded into ${L1}$ without incurring
distortion at least ${c(n)}$.
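The negative-type condition can be tested numerically for a small finite metric: by the classical Schoenberg/MDS criterion, ${(X,\sqrt d)}$ embeds isometrically in Euclidean space exactly when the doubly centered matrix built from the entries ${d(i,j)}$ (the squared distances of the would-be embedding) is positive semidefinite. A sketch, with our own choice of tolerance:

```python
import numpy as np

def is_negative_type(D, tol=1e-9):
    """Test whether the metric d (given as matrix D) is of negative type,
    i.e. (X, sqrt(d)) embeds isometrically in Euclidean space.

    Classical MDS / Schoenberg: treating D as the matrix of squared distances
    of the sqrt(d) embedding, the condition is that -1/2 * J D J is PSD,
    where J = I - (1/n) * ones is the centering matrix.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ D @ J          # candidate Gram matrix of the embedding
    return np.linalg.eigvalsh(G).min() > -tol

# Shortest-path metric of a 3-point path (points 0, 1, 2 on a line); line
# metrics are L1 metrics, hence of negative type.
D = np.array([[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]])
assert is_negative_type(D)
```

Since every ${L1}$ metric is of negative type, the Khot-Vishnoi construction necessarily produces negative-type metrics that are far from all ${L1}$ metrics, which is what makes it a counterexample.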
Interestingly, there is a generalization of the Sparsest Cut problem, the Non-uniform Sparsest Cut problem, for which the converse is also true, that is, the integrality gap of the Goemans-Linial
Semidefinite Programming relaxation of the Non-uniform Sparsest Cut problem for graphs with ${n}$ vertices is ${\leq c(n)}$ if and only if every ${n}$-point negative-type metric space can be embedded
into ${L1}$ with distortion at most ${c(n)}$.
It had been conjectured by Goemans and Linial that the integrality gap of the semidefinite relaxations of Sparsest Cut and Non-Uniform Sparsest Cut was at most a constant. Arora, Rao and Vazirani
proved in 2004 that the Sparsest Cut relaxation had integrality gap ${O(\sqrt {\log n})}$, and Arora, Lee and Naor proved in 2005 that the Non-Uniform Sparsest Cut relaxation had integrality gap ${O(\sqrt{\log n} \cdot \log\log n)}$, results that were considered partial progress toward the Goemans-Linial conjecture.
Later in 2005, however, Khot and Vishnoi proved that the relaxation of Non-Uniform Sparsest Cut has an integrality gap ${(\log\log n)^{\Omega(1)}}$ that goes to infinity with ${n}$. Their approach
was to:
1. Prove that the Non-Uniform Sparsest Cut problem does not have a constant-factor approximation, assuming the Unique Games Intractability Conjecture, via a reduction from unique games to
non-uniform sparsest cut;
2. Prove that a natural Semidefinite Programming relaxation of Unique Games has integrality gap ${(\log\log n)^{\Omega(1)}}$;
3. Show that applying the reduction in (1) to the Unique Games instance in (2) produces an integrality gap instance for the Goemans-Linial Semidefinite Programming relaxation of Non-Uniform Sparsest Cut.
In particular, Khot and Vishnoi exhibit an ${n}$-point negative-type metric space that requires distortion ${(\log\log n)^{\Omega(1)}}$ to be embedded into ${L1}$. This has been a rather unique
approach to the construction of counterexamples in metric geometry. The lower bound was improved to ${\Omega(\log\log n)}$ by Krauthgamer and Rabani, and the following year, Devanur, Khot, Saket and
Vishnoi showed that even the Sparsest Cut relaxation has an integrality gap ${\Omega(\log\log n)}$.
Cheeger, Kleiner and Naor have recently exhibited a ${(\log n)^{\Omega(1)}}$ integrality gap for Non-Uniform Sparsest Cut, via very different techniques.
3. Algorithms for Unique Games
When Khot introduced the Unique Games Conjecture, he also introduced a Semidefinite Programming relaxation. Charikar, Makarychev and Makarychev provide a tight analysis of the approximation guarantee
of that Semidefinite Program, showing that, given a unique game with range ${\Sigma}$ in which a ${1-\epsilon}$ fraction of the equations can be satisfied, it is possible to find in polynomial time a
solution that satisfies at least a ${1/\Sigma^{O(\epsilon)}}$ fraction of constraints.
This is about as good as can be expected, because earlier work had shown that if the Unique Games Intractability Conjecture holds, then there is no polynomial time algorithm able to satisfy a ${1/\Sigma^{o_\Sigma(\epsilon)}}$ fraction of constraints in a unique game with range ${\Sigma}$ in which a ${(1-\epsilon)}$ fraction of equations is satisfiable. Furthermore, the analysis of Charikar,
Makarychev and Makarychev is (unconditionally) known to be tight for the specific Semidefinite Programming relaxation used in their algorithm because of the integrality gap result of Khot and Vishnoi
discussed in the previous section.
Recently, Arora, Barak and Steurer have devised an algorithm that satisfies in time ${2^{n^{O(\epsilon)}}}$ a constant fraction of the equations in an instance of unique games in which it is possible
to satisfy a ${1-\epsilon}$ fraction of equations. Although this result is far from refuting the Unique Games Intractability Conjecture, it casts some doubts on the Unique Games NP-hardness
Conjecture. The following stronger form of the ${P \neq NP}$ conjecture is generally considered to be very likely: that for every NP-hard problem there is a ${c>0}$ such that the problem cannot be solved with worst-case running time faster than ${2^{n^c}}$, where ${n}$ is the size of the input. This means that if the running time of the Arora-Barak-Steurer algorithm could be improved to ${2^{n^{o(1)}}}$ for a fixed ${\epsilon}$, the Unique Games NP-hardness Conjecture would be in disagreement with the above conjecture about NP-hard problems, and would have to be considered unlikely.
3 comments
In program (1), there should be “MIN” instead of “MAX”.
Thank you for sharing this “talk” on UGC for the non expert!
“indeed there are polynomial time algorithms for the following problem:
Definition 2 (Semidefinite Programming)”
I don’t think this is correct as stated. The ellipsoid algorithm for SDP requires an extra bound on the feasible region, where it looks for solutions. (And interior point methods require much more,
as well as they are analyzed in a weird complexity model, to say the least…)
Further evidence against such a claim:
there are examples due to Khachiyan and Porkolab that show that the bitsize of any feasible SDP point might be exponential.
there is a paper by S. Tarasov and M. Vyalyi (portal.acm.org/citation.cfm?id=1393773) that shows that (exactly) solving SDPs in polynomial time would allow efficient arithmetic on rational numbers given by arithmetic circuits.
Although in combinatorial optimization one usually has the boundedness necessary for the ellipsoid method to run in polynomial time.
It seems you kept referring to “post” in the Bulletin adaptation of this post on the UGC.
East Lake, CO Algebra Tutor
Find an East Lake, CO Algebra Tutor
...I love teaching and tutor in a way that pin-points the root cause. If your child is reading at a level a few grades behind, we will start from the level and slowly work up to their expected
level. I am an extremely patient and understanding person.
13 Subjects: including algebra 1, reading, English, grammar
...I am looking for dedicated, hard working students who have passions for the math and sciences and are looking to further their knowledge in these sometimes frustrating but well rewarding
disciplines. I am excited to have the opportunity to help you succeed. Please feel free to reach out to me.
20 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I am currently taking a break from school, but do not want to get away from teaching and tutoring mathematics. Working one-on-one with my students has always been my favorite part about
teaching, and I am hoping to continue that with tutoring. Honestly, I enjoy tutoring so much that I have even been known to volunteer tutor.
13 Subjects: including algebra 1, algebra 2, calculus, statistics
...I have done two years of population genetics research, where I applied concepts of genetics and biotechnology to my research daily. I was a TA for Bio 1 at CU Boulder, and I conducted review
sections for the genetics section of the course. I have also helped a graduate student's genetics project in the ecology department, where I guided him on the genetics aspects of his research.
39 Subjects: including algebra 2, algebra 1, chemistry, reading
...My experience as a tutor began in high school and has followed me through my adult life. I have not only tutored in academics but have lead several people to fitness excellence. In high
school, I tutored several students in geometry, algebra and biology.
13 Subjects: including algebra 1, geometry, biology, Chinese
Related East Lake, CO Tutors
East Lake, CO Accounting Tutors
East Lake, CO ACT Tutors
East Lake, CO Algebra Tutors
East Lake, CO Algebra 2 Tutors
East Lake, CO Calculus Tutors
East Lake, CO Geometry Tutors
East Lake, CO Math Tutors
East Lake, CO Prealgebra Tutors
East Lake, CO Precalculus Tutors
East Lake, CO SAT Tutors
East Lake, CO SAT Math Tutors
East Lake, CO Science Tutors
East Lake, CO Statistics Tutors
East Lake, CO Trigonometry Tutors
Nearby Cities With algebra Tutor
Bow Mar, CO algebra Tutors
Columbine Valley, CO algebra Tutors
Commerce City algebra Tutors
Dacono algebra Tutors
Eastlake, CO algebra Tutors
Edgewater, CO algebra Tutors
Erie, CO algebra Tutors
Federal Heights, CO algebra Tutors
Firestone algebra Tutors
Henderson, CO algebra Tutors
Lafayette, CO algebra Tutors
Lakeside, CO algebra Tutors
Northglenn, CO algebra Tutors
Thornton, CO algebra Tutors
Westminster, CO algebra Tutors
Mathematics 256
Mathematics 256 - Fall 1997
Differential equations for engineers
Section 101: Dr. Casselman, Math Annex 1100, 8:30 MWF
Section 102: Dr. Ward, Math Annex 1100, 1:30 MWF
Dr. Casselman and Dr. Feldman are just now beginning to prepare the text and lab notes for the fall term. In the meantime, you might take a look at last year's material. There will be a small
number of substantial changes from last year, but the general outline will be the same.
Some may find the textbook by Boyce & DiPrima useful, although last year nobody used it.
The laboratories will be based on software written by us in Java (which is, for these purposes, not so different from C). They will likely involve writing short pieces of code, as well as using tools
developed by us to handle standard numerical routines.
The fall sections of this course are intended exclusively for electrical engineering students. Mechanical engineering students are intended to take the course in the spring.
Roughly speaking, this course differs from other differential equations courses in that (1) we introduce more physical examples; (2) the laboratories allow us to deal with more interesting stuff; (3)
it covers about 60% of the material in Math 255 and Math 257, but in a single term.
Course notes
Part I: First order equations
Chapter 1. Newton's law of cooling
Chapter 2. Complex numbers
Chapter 3. Periodic functions and flickering lights
Chapter 4. Euler's method for numerical approximation of solutions
Chapter 5. Efficient numerical methods
Homework assignments
First homework
Course applets
A falling object
Cooling with square wave variation
Euler's method: text example y' = y - t
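The applet example y' = y - t lends itself to a short sketch of the forward Euler method covered in Chapter 4. The step size, interval, and initial condition below are our own illustrative choices, not values from the course notes:

```python
import math

def euler(f, t0, y0, h, steps):
    """Forward Euler for y' = f(t, y): repeatedly set y <- y + h * f(t, y)."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# For y' = y - t with y(0) = 2 the exact solution is y(t) = 1 + t + e^t,
# so y(1) = 2 + e.  Euler with a small step should land close to it.
approx = euler(lambda t, y: y - t, 0.0, 2.0, 0.001, 1000)  # integrate to t = 1
exact = 2 + math.e
assert abs(approx - exact) < 0.01
```

Halving the step size roughly halves the error, the first-order behavior that motivates the "efficient numerical methods" of Chapter 5.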
Earthquake Input

EARTHQUAKE EXCITATION CONCEPTS

In evaluating the earthquake performance of concrete dams, it is evident that descriptions of the earthquake ground motions and the manner in which
these motions excite dynamic response are of paramount importance. The procedure that leads to the selection of the seismic input for concrete dams is similar, in general, to that for other large
important structures such as nuclear power plants and long-span bridges. It involves the study of the regional geologic setting, the history of seismic activity in the area, the geologic structure
along the path from source to site, and the local geotechnical conditions. Ground motion parameters that may be utilized in characterizing earthquake motions include peak acceleration (or effective
peak acceleration), duration, and frequency content of the accelerogram. However, the methods of selecting such seismic parameters for the purpose of generating input ground motions or response
spectra are well documented (2-1) and are not repeated here. The focus of this chapter is on the specification of earthquake input motions to be used in the analysis of concrete
dam-reservoir-foundation systems. Obviously, the level of sophistication to be used in defining the seismic input is closely related to the degree of understanding of the dynamic behavior of the dam
system and to capabilities for modeling such behavior. Thus, progress in defining seismic input has followed a long evolutionary process parallel to that of the dynamic analysis capability. It is not
surprising, therefore, that the earliest method of defining earthquake input to concrete dams was merely to apply a distributed horizontal force amounting to a uniform specified fraction (typically
10 percent) of the weight of the dam body. This force was intended to represent the inertial resistance of a rigid dam subjected to the horizontal motion of a rigid foundation. The procedure was easily extended in an approximate sense to include the hydrodynamic pressure effects of the reservoir water by
invoking the added mass concept (i.e., assuming that a portion of the reservoir water would move together with the dam body). Major improvements over the rigid dam approach resulted when the dynamic
effects of dam deformability, i.e., the free vibration behavior, were recognized. The first improvement was to convert the equivalent static force from a uniform distribution to a form related to the
dam fundamental vibration mode shape. The second improvement was to account for the amplification of the base input motions in the response of the dam. Representing these frequency-dependent
amplification effects by means of the earthquake response spectrum provided an appropriate amplitude of equivalent static load distribution to be used in the response analysis. The basic assumption
of all these methods of analysis is that the foundation rock supporting the dam is rigid, so the specified earthquake motions are applied uniformly over the entire dam-foundation interface. However,
as the methods of response analysis improved, it became apparent that the rigid base earthquake input no longer was appropriate. Because of the great extent of the dam, and recognizing the wave
propagation mechanisms by which earthquake motions are propagated through the foundation rock, it is important to account for spatial variation of the earthquake motions at the dam-foundation
interface; these spatial variations also may result from "scattering" of the propagating earthquake waves by the topography near the dam site. A brief discussion of basic procedures for defining
seismic input is presented in a recent report (2-2); the essential concepts contained in that report are summarized in the following paragraphs.

Standard Base Input Model

The dam is assumed to be supported by a large region of deformable rock, which in turn is supported by a rigid base boundary, as shown in Figure 2-1a. The seismic input is defined as a history of motion of this rigid base, but it is important to note that the motions at this depth in the foundation rock are not the same as the free-field motions recorded at ground surface.

Massless Foundation Rock Model

An improved
version of the preceding model is obtained by neglecting the mass of the rock in the deformable foundation region. This has the effect of eliminating wave propagation mechanisms in the deformable
rock, so that motions prescribed at the rigid base are transmitted directly to the dam interface. With this assumption it is reasonable to prescribe recorded free-field surface motions as the rigid base input.

FIGURE 2-1 Proposed seismic input models for concrete dams (2-2). (a) Standard rigid base input model; mass of foundation rock either included or neglected. (b) Deconvolution of free-field surface motions to determine rigid base motions. (c) Analysis of two-dimensional free-field canyon motions using deconvolved rigid base motions as input. (d) Analysis of three-dimensional dam-foundation system response using two-dimensional free-field canyon motions as input.

Deconvolution Base Rock Input Model

In this approach, as illustrated
in Figure 2-1b, a deconvolution analysis is performed on a horizontally uniform layer of deformable rock to determine motions at the rigid base boundary that are consistent with the recorded
free-field surface motions. The resulting rigid base motion is then used in the standard base input model. This procedure tends to be computationally expensive because the mathematical model includes
a large volume of foundation rock in addition to the concrete dam.

Free-Field Input Model

A variation of the preceding procedure is to apply the deconvolved rigid base motion to a model of the
deformable foundation rock without the dam in place, in order to determine the free-field motions at the interface positions where the dam is to be located. These calculated interface free-field
motions account for the scattering effects of the canyon topography on the earthquake waves and are used as input to the combined dam-foundation rock system. In many cases of the form shown in Figure
2-1(c), it may be reasonable to assume two-dimensional behavior in modeling the scattering effects and then to apply these two-dimensional free-field motions as input to a three-dimensional dam-canyon system, as indicated by Figure 2-1(d). In the above-mentioned report (2-2) these seismic input models are discussed in the context of arch dam analysis, but they are equally applicable for
gravity dams when the foundation topography warrants a three-dimensional analysis. If the dam is relatively long and uniform, so that its response may be considered to be two-dimensional, the canyon
scattering effect need not be considered, but the seismic input still may vary spatially due to traveling-wave effects. It is important to note that all four of the input models listed above include
a rigid boundary at the base of the deformable foundation rock; thus, vibration energy is not permitted to radiate from the model. Elimination of this constraint is one of the key issues in present
research on seismic input procedures. Of these four input models the free-field model usually is the most reasonable, so a key element in the input definition is the determination of appropriate
free-field motions. Research progress in this area is discussed in the next section.
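Under the standard base input model, a discretized dam reduces to the familiar rigid-base equations of motion, M u'' + C u' + K u = -M r a_g(t), with every support point driven by the same accelerogram. The sketch below integrates such a system with the average-acceleration Newmark method; the two-degree-of-freedom mass, stiffness, damping, and base-motion values are illustrative assumptions, not quantities from this chapter.

```python
import numpy as np

# Rigid-base input: M u'' + C u' + K u = -M r a_g(t),
# integrated with the average-acceleration Newmark method.
# The 2-DOF "dam" properties below are purely illustrative.
M = np.diag([2.0e6, 1.0e6])                 # kg
K = np.array([[3.0e9, -1.0e9],
              [-1.0e9, 1.0e9]])             # N/m
C = 1.0e-3 * K                              # stiffness-proportional damping
r = np.ones(2)                              # influence vector (uniform base motion)

dt, n = 0.01, 1000
t = dt * np.arange(n)
a_g = 0.1 * 9.81 * np.sin(2 * np.pi * 2.0 * t)   # 0.1 g, 2 Hz base motion

u = np.zeros(2)
v = np.zeros(2)
a = np.linalg.solve(M, -M @ r * a_g[0] - C @ v - K @ u)
Keff = K + 2.0 / dt * C + 4.0 / dt**2 * M
umax = 0.0
for i in range(1, n):
    # effective load for the average-acceleration (gamma=1/2, beta=1/4) scheme
    p = (-M @ r * a_g[i]
         + M @ (4.0 / dt**2 * u + 4.0 / dt * v + a)
         + C @ (2.0 / dt * u + v))
    u_new = np.linalg.solve(Keff, p)
    v_new = 2.0 / dt * (u_new - u) - v
    a_new = 4.0 / dt**2 * (u_new - u) - 4.0 / dt * v - a
    u, v, a = u_new, v_new, a_new
    umax = max(umax, abs(u[1]))
print(f"peak crest displacement relative to base: {umax:.4f} m")
```

Because the base motion enters only through the product M r a_g(t), the model cannot represent spatially varying support motion; that limitation is exactly what the remainder of the chapter addresses.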
PRESENT STATUS OF KNOWLEDGE

Prediction of Free-Field Motion

Definition of seismic input is very closely related to the way the dam-reservoir-foundation system is modeled. Although existing finite
element programs for the dynamic analysis of concrete dams (2-3, 2-4, 2-5) use uniform base motion as input, these programs can be modified to accept nonuniform earthquake excitation at the interface
between dam and canyon wall. In this case the free-field motion is defined as the motion of the dam-foundation contact surface due to seismic excitation without the presence of the dam.
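The deconvolution step described earlier can be illustrated in its simplest setting: for vertically propagating SH waves in a uniform viscoelastic layer of depth H on a rigid base, the within-layer motion satisfies u(z) = u_surface cos(omega z / Vs*), so the rigid-base motion is the surface record filtered by cos(omega H / Vs*). The layer properties and the synthetic surface record below are assumptions for illustration only.

```python
import numpy as np

# Deconvolution through a uniform viscoelastic rock layer on a rigid base.
# For vertically propagating SH waves the surface/base transfer function is
# 1 / cos(omega * H / Vs*), so base motion = surface motion * cos(omega H / Vs*).
# Layer depth, shear-wave velocity, and damping below are illustrative only.
H, Vs, xi = 100.0, 1500.0, 0.05          # m, m/s, damping ratio
dt, n = 0.01, 2048
t = dt * np.arange(n)
surface = np.sin(2 * np.pi * 3.0 * t) * np.exp(-0.5 * t)   # synthetic record

w = 2 * np.pi * np.fft.rfftfreq(n, dt)   # angular frequencies
Vs_c = Vs * (1.0 + 1j * xi)              # complex velocity (hysteretic damping)
Hf = np.cos(w * H / Vs_c)                # base/surface transfer function
base = np.fft.irfft(np.fft.rfft(surface) * Hf, n)

print(f"peak surface {np.max(np.abs(surface)):.3f}, "
      f"peak base {np.max(np.abs(base)):.3f}")
```

With these illustrative values the input frequency sits near the layer's fundamental frequency Vs/4H, so the deconvolved base motion is substantially smaller than the surface record, which is the qualitative behavior the deconvolution model relies on.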

Two-Dimensional Case Method

If the canyon where the dam is to be located has an essentially constant cross section for some distance upstream and downstream, it may be treated as a linearly elastic
half-plane, and the problem of evaluating the earthquake free-field motion can be formulated as a wave-scattering problem with the canyon as the scatterer. Various approaches have been used to obtain
solutions to the problem. The case involving earthquake SH waves (i.e., horizontal shear waves) is somewhat simpler in the sense that only out-of-plane displacements occur. Closed-form solutions have
been obtained for semicircular canyons (2-6) and for semielliptical canyons (2-7). For cases with more general geometry, further assumptions must be made in the formulation or the numerical solution techniques. By using the method of images and an integral equation formulation, results have been obtained for SH wave scattering due to arbitrarily shaped canyons by solving the integral equation numerically (2-8). The same problem also has been solved using a different integral equation formulation (2-9); in this approach the free boundary condition at the canyon wall is satisfied in the least squares sense. An integral equation approach that imposes an approximately satisfied boundary condition also has been used in the solution of problems involving P and SV waves (2-10) (i.e., compression and vertical shear waves). Similar procedures have been used by others in solving P, SV, and Rayleigh wave problems (2-11, 2-12). A solution for incident SH, P, and SV waves also has been obtained by assuming periodicity in the surface topography and a downward-only scattered wave (2-13). Direct numerical solution using a finite difference formulation has been employed to evaluate the scattering effects of various surface irregularities; for example, a ridge with incident SH wave (2-14), vertically incident SV and P waves on a step change in surface (2-15), and the use of nonreflecting boundary conditions with other surface irregularities (2-16).
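The nonreflecting boundary conditions mentioned above can be demonstrated on the simplest possible problem: a one-dimensional SH pulse absorbed at a grid edge by a first-order one-way wave condition, which is exact in one dimension at a Courant number of 1. The grid and pulse parameters are illustrative.

```python
import numpy as np

# 1-D SH pulse propagating toward an absorbing ("nonreflecting") boundary.
# Leapfrog interior update; the one-way wave condition u_t + c u_x = 0 at the
# right edge becomes u_next[-1] = u[-2] when c*dt/dx = 1, which is exact in 1-D.
# All numbers are illustrative.
nx, c, dx = 400, 1.0, 1.0
dt = dx / c                                   # Courant number 1
x = dx * np.arange(nx)

pulse = lambda xx: np.exp(-((xx - 100.0) / 10.0) ** 2)
u = pulse(x)                 # u at t = 0
u_prev = pulse(x + c * dt)   # u at t = -dt for a right-moving pulse

for _ in range(600):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = 0.0              # fixed far end (the pulse never reaches it)
    u_next[-1] = u[-2]           # absorbing edge: outgoing wave exits unchanged
    u_prev, u = u, u_next

residual = np.max(np.abs(u))
print(f"max residual amplitude after the pulse exits: {residual:.2e}")
```

In two or three dimensions such conditions absorb only approximately, especially for waves arriving at grazing incidence, which is why the studies cited above devote so much attention to boundary treatment.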
Direct finite element solutions also have been used to solve scattering problems of P and SV waves incident on a mountain and on an alluvium-filled canyon (2-17), using more than one solution to obtain the canceling effect on nonreflecting boundaries (2-18). Standard plane-strain soil dynamics finite element programs with special treatment at nonreflecting boundaries (2-19, 2-20, 2-21) reportedly have been used for P, SV, and SH waves with a simple modification (2-22). A particle model combined with finite element modeling to account for an irregular surface has been used for SV waves incident to cliff topography (2-23). The free-field motions at V-shaped and close-to-V-shaped canyons also have been studied using a combination of finite and infinite elements in a model with finite depth that extends to infinity horizontally (2-24). In this case earthquake motions prescribed at the rigid base of the foundation are taken as input to the system.

Two-Dimensional Case Results

Results for a semicircular canyon are used as the basis of discussion here because the most information is available for this simple geometry; some reference also is made to cases involving
other geometries, as depicted in Figure 2-2. Results expressed in the frequency domain are described in many cases to indicate the effects of wave frequency. Motions at the canyon walls generally are
found to be dependent on the ratio of canyon width to wave length (wave frequency), on the angle of wave incidence, and on the wave type. The effect of scattering is more significant when the wave
length is of the same order as or smaller than the canyon width. In comparison with the free-field motion without any canyon, the free-field motion at the canyon surface can be either amplified or
reduced depending on the location of the observation point, as shown in Figure 2-3. In general, motions near the upper corner of the canyon facing the incident wave are amplified; the amplification
increases as the wave length decreases and as the direction of incidence tends toward the horizontal. For incident SH, P, SV, and Rayleigh waves, the maximum amplification is found to be 2 for semicircular and semielliptical canyons (2-6, 2-7, 2-12). However, this amplification factor can be higher if the canyon surface has local convex regions, which tend to trap energy (2-~. Motion from
SH and Rayleigh waves generally is reduced near the bottom of the canyon. For Rayleigh waves and close-to-horizontally incident SH waves, the motion at the back side of the canyon also is often
reduced, but this shielding effect disappears for SV and P waves. For vertically incident SH waves, the wall slope of a triangular canyon has significant effects on the motion at the wall surface
(2-9); steeper slopes lead to greater reductions in motion near the bottom of the canyon. The amplification of motion at some locations and the attenuation of motion at others results in a large
frequency-dependent spatial variation of motion along the canyon walls. This spatial variation is more abrupt when the canyon-width-to-wave-length ratio is larger than 1 (higher frequency) for all types of waves.

FIGURE 2-2 Valley shapes and input wave types for which two-dimensional analyses of wave-scattering effects have been reported.

Calculated
relative motion ratios of 2 to 3 are common in many cases of differing incident angles and wave types. For the more irregular geometry of a real canyon, a calculated relative motion ratio as high as
6 was reported for Pacoima Dam, California (2-~. The relative phase of motions along the canyon walls has been reported for the case of SH waves incident to a semicircular canyon (2-6). It seems that
the phase variation is close to what can be predicted from simple traveling-wave considerations for most of the canyon wall. Near the upper corners of the canyon, however, more abrupt variations of
phase angle appear. From the above brief description of theoretical results, it is clear that the spatial variation of free-field motion is very complex and frequency dependent. In an effort to
obtain an averaged index of motion intensity, a Topographical Effects Index was defined using the Arias Intensity Concept (2-25); however, this index is still dependent on location and angle of incidence.

Three-Dimensional Case

Analytical solutions for three-dimensional canyon topography are much more difficult to obtain. For the simple case of a hemispherical cavity at the surface of an
elastic half-space, series solutions have been obtained for
incident P and S waves (2-26).

FIGURE 2-3 Calculated amplification of incident plane SH waves by a semicylindrical canyon surface (2-6). A flat free-field surface gives a displacement amplitude of 2; β represents the SH wave velocity.

A boundary element method that satisfies the free surface condition at and near the canyon walls in a least squares sense has also been applied to axisymmetric
cavity problems (2-27), although results are given only for a hemispherical cavity with a vertically incident P wave. Perhaps the most relevant solution for three-dimensional canyon topography is
that obtained by finite element analysis of Pacoima Dam and its adjacent canyon (2-28), shown in Figure 2-4. Part of the foundation rock was included with the finite element model of the dam, taking
account of the variations in the rock properties. Three-dimensional modeling was considered necessary because of the complex topography, which is apparent near the dam, consisting of a thin, spiny
ridge at the left abutment and a broad, massive right abutment. A rigid base motion was assumed at the finite element base boundary, and its three components of motion were calculated by a process of
deconvolution from the three components of earthquake motion recorded on the ridge above the left abutment. The strong-motion accelerograph was located at the crest of the ridge, as indicated in the
photograph. The peak acceleration of the filtered left abutment record was 1.15 g, and that of the calculated base motion was 0.40 g. This result indicates that the amplification may be larger than
expected because of the assumed rigid energy-trapping boundary at the base; also, the assumption of uniform motion at the base may have contributed to the conservative results.

Applicability of the Results

The theoretical free-field motion results have limitations in their application due to the various simplifying assumptions made in their derivation. In the two-dimensional analyses it was
assumed that the change of topography along the upstream-downstream direction was negligible; therefore, the results are valid only for prismatically shaped canyons. Moreover, the results apply only
to specific wave types, and the amplification effect of wave scattering is very much dependent on the type of incident wave. Unless the composition of an actual incident earthquake wave is known in
terms of its wave types, such results are not directly applicable. Often a complicated canyon geometry requires that free-field motion varying in three dimensions be considered, and it is doubtful
that any method other than a numerical one can be expected to produce realistic results for such cases. Even with a numerical approach the various assumptions made in treating the finite boundary and
in modeling the inhomogeneous media may introduce errors; thus, both two- and three-dimensional results need to be compared with actual free-field earthquake records to assess their applicability
(2-29).

FIGURE 2-4 Pacoima Dam, California, was subjected to the 1971 San Fernando earthquake; the seismic motions were recorded by a seismograph on the narrow rock ridge above the left abutment, at the point indicated (2-28). (Courtesy of George W. Housner)

Because of the many uncertainties involved in modeling the geometry, the foundation material properties, and the incident earthquake motion, it is probable that a stochastic approach to defining the free-field
motions will be needed in addition to the deterministic procedures reviewed in this report. Random field theory (2-30, 2-31) is quite relevant to the problem of spatial variation of earthquake input
motion. Using the stochastic approach, some work already has been done for free-field motions over a flat, open surface (2-32, 2-33, 2-34, 2-35). To date, no results have been reported on the
probabilistic nature of seismic motions along a canyon wall.
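A minimal version of the stochastic approach generates a pair of support motions whose coherency decays with frequency and separation and that are offset by a wave-passage delay. The exponential coherency form and every parameter value below are assumptions chosen for illustration, not values from the cited work.

```python
import numpy as np

# Sketch of the stochastic view: simulate two support motions whose lagged
# coherency gamma = exp(-(alpha*w*d/vs)^2) decays with frequency, plus a
# deterministic wave-passage phase shift. All parameters are illustrative.
rng = np.random.default_rng(0)
n, dt, d, vs, alpha, c_app = 2048, 0.01, 100.0, 2000.0, 0.3, 2500.0
f = np.fft.rfftfreq(n, dt)
w = 2 * np.pi * f

amp = f * np.exp(-f / 3.0)                      # crude band-limited spectrum shape
ph1 = rng.uniform(0, 2 * np.pi, f.size)
ph2 = rng.uniform(0, 2 * np.pi, f.size)
gamma = np.exp(-(alpha * w * d / vs) ** 2)      # lagged coherency
delay = np.exp(-1j * w * d / c_app)             # wave-passage phase shift

U1 = amp * np.exp(1j * ph1)
U2 = (gamma * U1 + np.sqrt(1 - gamma**2) * amp * np.exp(1j * ph2)) * delay
u1 = np.fft.irfft(U1, n)
u2 = np.fft.irfft(U2, n)
corr = np.corrcoef(u1, u2)[0, 1]
print(f"time-domain correlation between the two support motions: {corr:.2f}")
```

The two simulated records agree at low frequencies and decorrelate at high frequencies, which is the qualitative character a random-field description of canyon-wall motion would need to capture.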
Measured Motions of Foundation Rock

Reports of actual earthquake motion recorded at the walls of a canyon are very scarce; however, the importance of differential input motion at a dam site is well recognized, and a few such reports do exist, mostly for abutment motion of existing dams. As early as 1964, differential motion at the two abutments of the Tonoyama (arch) Dam in Japan was reported (2-36, 2-37). The dam is 65 m high, with a cross-canyon width at the crest level of about 150 m. The maximum recorded acceleration at the center of the dam crest during an earthquake in
1960 was 0.018 g, and those at the two abutments were less than 0.010 g. In general, the records at the two abutments appeared to be quite similar in magnitude and phase. However, Fourier analysis
revealed that the amplitude of motion at the right abutment was two to three times that at the left abutment for frequencies greater than 4 Hz. Other observations made in Japan at the Tagokura (gravity) Dam (2-38) and at the Kurobe (arch) Dam (2-39) also indicate differing motions at the opposite abutments; however, in both of these cases there was amplification of motion over the height of the abutments. Eight aftershock measurements were made in the vicinity of Pacoima Dam after the San Fernando earthquake (2-40). Comparison of the records obtained at the south abutment near the
original strong-motion station with those obtained at a downstream canyon floor location some distance from the dam revealed an average amplification of about 1.5 for horizontal motions at the top of
the ridge. The amplification was about 4.2 near a frequency of 5 Hz but decreased to a ratio near 1 for lower frequencies. In a separate study four aftershocks of the San Fernando earthquake were
recorded at two stations, one near the dam base and the other near the top of Kagel Mountain (2-41). The two stations were approximately 3,000 ft apart and were selected to represent the free-field
motions at the base and top of the mountain. The highest time-domain horizontal acceleration ratio (top-to-base) was about 1.75, but the frequency-domain ratio, as measured by the pseudorelative
velocity spectra, was as high as 30 at a frequency of about 2 Hz. A correlation study on ground motion intensity and site elevation was carried out for the general area of Kagel Mountain and Pacoima
Dam using the San Fernando earthquake data (2-42). On a larger scale it was found that there was an almost linear relationship between the peak recorded motion of a site and the recorder elevation, as the profile rises from the lower San Fernando dam site (approximately 1,200 ft) to the Pacoima dam site (approximately 2,000 ft) to the peak of Kagel Mountain (approximately 3,500 ft). Based on this
linear relationship, it was calculated that the base rock acceleration at the Pacoima dam site was about 0.99 g, but it is evident that this calculation ignores local topographic features such as the
abutment ridge. In a somewhat similar case, aftershocks of the 1976 Friuli earthquake in
Italy were measured at the Ambiesta (arch) Dam (2-43). The dam is 60 m high, and the canyon width at the crest level is about 140 m. Records were obtained at three locations along the
dam-foundation interface, two at crest level at the abutment and one at the base of the dam. The average ratio of horizontal peak velocity at the crest level to that at the base of the dam ranged
from 3.11 to 1.88. The predominant frequency of motion at the base of the dam was about 4 Hz, based on more than 35 records having peak velocities greater than 0.002 cm/sec. Observed motion at the
Chirkey (arch) Dam in the Soviet Union was reported in a translated paper (2-44). The motion at the left abutment was recorded at three heights during a magnitude 3.5 earthquake that occurred on 4 February 1971 at an epicentral distance of 46 km. The peak velocities at heights of 160, 220, and 265 m were 0.4, 0.63, and 0.62 cm/sec, respectively. In the frequency domain it was found that the
maximum spectral value increased by a factor of 2.5 when the height of the observation point increased by 100 m. The Whittier, California, earthquake of 1 October 1987 triggered all 16 of the
accelerometers that had been installed on Pacoima Dam. Preliminary reports indicated that the accelerations at the dam base were on the order of 0.001 g, while those at about 80 percent height of the
dam on the dam-abutment interface were on the order of 0.002 g (2-45). All of the above-mentioned records were of small amplitude due to low-intensity shaking. A larger-amplitude record was obtained at the Techi dam site in Taiwan during an earthquake on 15 November 1986 (2-46). This arch dam is 180 m high, and the canyon width is about 250 m at crest level. The peak acceleration
recorded at the center of the crest in the upstream-downstream direction was 0.170 g. Three strong-motion accelerographs had been installed along the dam-foundation interface, one at the base of the
dam and the others at about midheight on the opposite abutments. Unfortunately, one of the midheight instruments malfunctioned, leaving only one operational. The peak acceleration obtained at the dam
base in the upstream-downstream direction was 0.014 g, while that at midheight of the dam-abutment interface was 0.022 g. In the cross-canyon direction the peak acceleration at the dam base was 0.012
g, versus 0.017 g at the midheight abutment location. These records clearly demonstrate a large spatial variation of motion along the foundation interface, but it is probable that dam interaction
contributed significantly to the recorded motion. Consequently, these data are not representative of free-field canyon wall motions. A 1984 earthquake of amplitude comparable to the Techi event was
reported recently for the Nagawado (arch) Dam in Japan (2-47, 2-48). The dam is 155 m high with a crest length of 355.5 m. The peak recorded radial accelerations of the dam crest were 0.197 g at
midspan and 0.245 g at the quarter point from the left abutment. The recorded peak accelerations in
the foundation rock 17 m below the base of the dam were 0.016 g in the N-S direction and 0.029 g in the E-W direction; the dam axis lies approximately in the N-S direction. At a level about 25 m
above the base of the dam, an accelerograph installed deep in the right abutment rock away from the dam recorded accelerations of 0.018 g (N-S) and 0.021 g (E-W). Almost directly above this
accelerograph, also deep in the right abutment, an instrument at crest level recorded peak accelerations of 0.031 g (N-S) and 0.026 g (E-W). Across the canyon at crest level deep in the left
abutment, another recorder indicated peak accelerations of 0.026 g (N-S) and 0.021 g (E-W). The spatial variation revealed by these data is quite indicative of the lack of uniformity in the
earthquake motions of the rock supporting the dam. In a translated paper (2-49) the findings of a model test of Toktogul Dam are reported. The model had a length scale of 1:4,000 and it simulated the
topography of the Toktogul dam site, which covers an area of 6 x 6 km and was 4 km deep. The model was subjected to excitation initiated at different points on the model, and the motion along the
canyon wall was recorded up to a height from the bottom of the canyon equal to twice that of the dam. Three configurations were tested: without the dam, with the dam but without water, and with both
dam and water; generally the greatest motion occurred for the empty canyon case. More recently model test results were reported for an existing arch dam and for a proposed arch dam, both in China
(2-50). The model scales were 1:600 and 1:2,000, respectively, and input to the models was both random excitation and impact. The model test results were found to be consistent with those from finite
element analyses and from ambient vibration surveys. An amplification factor of between 2 and 3 was observed for abutment motion at the crest level relative to motion at the bottom of the canyon.
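The abutment comparisons quoted above amount to ratios of Fourier amplitude spectra between station pairs. A sketch of that calculation on synthetic "records" follows; the signals are fabricated so that the right-abutment trace carries extra high-frequency energy, loosely mimicking the Tonoyama observation, and only the analysis procedure is meant to be representative.

```python
import numpy as np

# Fourier amplitude ratio between two abutment "records". Both signals are
# synthetic: the right-abutment trace is given an added 6 Hz component so the
# band-limited spectral ratio, not the data, is the point of the example.
dt, n = 0.01, 4096
t = dt * np.arange(n)
env = np.exp(-((t - 10.0) / 5.0) ** 2)               # common envelope
left = env * np.sin(2 * np.pi * 2.0 * t)
right = env * (np.sin(2 * np.pi * 2.0 * t) + 2.5 * np.sin(2 * np.pi * 6.0 * t))

f = np.fft.rfftfreq(n, dt)
A_left = np.abs(np.fft.rfft(left))
A_right = np.abs(np.fft.rfft(right))

band_hi = (f > 4.0) & (f < 10.0)
band_lo = (f > 0.5) & (f < 4.0)
ratio_hi = A_right[band_hi].sum() / A_left[band_hi].sum()
ratio_lo = A_right[band_lo].sum() / A_left[band_lo].sum()
print(f"right/left Fourier amplitude ratio: {ratio_lo:.2f} below 4 Hz, "
      f"{ratio_hi:.2f} above")
```

As in the Tonoyama comparison, the two traces look similar overall, and the difference only emerges when the spectral ratio is examined band by band.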

Predicted Response to Spatially Varying Input

Dynamic Excitation

Direct application of measured earthquake motions to predict dam response has been reported for the Ambiesta (arch) Dam (2-51). Three
records are available, one at the base of the dam and two at the crest level near opposite dam-abutment interfaces. In the analysis the interface was divided into three zones by drawing a horizontal
line near midheight on an elevation view of the downstream face. Within each zone a uniform boundary motion identical to what was recorded in that zone was used as interface input. It was reported
that agreement between the calculated accelerations along the crest of the dam and the corresponding measured quantities was startlingly good, while poor agreement resulted if uniform input motion
was used along the entire interface.
Using prescribed input motions at the foundation rock boundary, the effects of differential input motion on the responses of a gravity dam and an arch dam have been studied by finite element analysis (2-52). A two-dimensional plane-strain analysis was performed of the gravity dam and its supporting block of foundation rock, assuming a traveling-wave input along the horizontal foundation
base boundary. To reduce the amount of computation, the boundary was divided into four regions, with uniform motion assumed in each. The dam was 46 m high, and the length of the horizontal base
boundary was approximately twice the height of the dam. The input wave form was that of the S16E component of the 1971 San Fernando earthquake recorded at Pacoima Dam, and three wave speeds were
used: 2,000 m/sec, 4,000 m/sec, and infinite. Stress analysis results indicated that as the wave speed was reduced the stresses in the dam increased. In the case of the 110-m-high arch dam, which had
a crest length of 528 m, a three-dimensional analysis was performed. The left half of the foundation boundary was assumed to move uniformly according to the prescribed San Fernando earthquake record,
while the right half was held fixed. It was reported that different stress patterns in the dam were obtained for the variable base input as compared with the uniform base input. In a recent study on
traveling-wave effects, a small portion of the foundation rock was treated as an extension of the dam body, and shell equations were used to model the extended arch dam (2-53). Cross-canyon traveling
waves in the form of harmonic motion or earthquake motion were then assigned to the periphery of the shell; reservoir effects were neglected. Results indicated that stresses in the dam increased when
the period of the input harmonic wave approached the fundamental period of the shell. In a separate study the effects of traveling waves on arch dams were examined using a finite element approach
(2-54). The model was similar to the free-field input model described above, but the free-field motion was taken as a prescribed traveling earthquake wave. It was found that the effects of a wave
traveling in the upstream direction were not significant when compared with the rigid base input. However, a traveling wave in the cross-canyon direction caused an average stress increase of 40 to 50
percent and a doubling of the maximum computed stress. Traveling-wave effects also have been studied, with emphasis on the energy input to the reservoir water. A two-dimensional solution was reported
for the problem of a rigid gravity dam with infinite reservoir excited by a vertical traveling ground motion (2-55). Three cases were studied: infinite wave speed, wave leaving the dam moving upstream, and wave approaching the dam from upstream. The vertical component of the El Centro 1940 earthquake was used, and in the latter two cases the wave propagation speed was taken to be three
times the speed of sound in water. It was found that maximum pressure on the dam occurred when the wave approached from
upstream, and it was lowest when the wave propagated away from the dam. In terms of maximum total force or overturning moment, the larger traveling-wave response was almost twice that resulting
from the infinite wave speed. A similar conclusion, but with much less difference between the cases for infinite and finite wave speeds, was obtained by a finite difference study of a flexible-dam
finite-reservoir model (2-56). Later a finite difference solution scheme was applied to an improved model that included energy-transmitting boundaries and elastic foundation (2-57). It was reported,
however, that traveling waves did not produce more critical stress conditions in the gravity dam than did the wave with infinite propagation speed. Applications of an energy-transmitting boundary
approach to a free-field input model were reported recently (2-58). A numerical example was given to illustrate the computation procedure for a uniform free-field input motion assumed along the dam-foundation interface. In another study a new input procedure that included the influence of the infinite foundation domain was developed (2-59). The analysis procedure was divided into two
stages: first, the stresses were computed on a fictitious fixed boundary facing the incident wave; these stresses were then released in the second stage, when the complete domain was modeled by
finite elements near the canyon and by infinite elements away from the canyon. Numerical results were obtained for a three-dimensional dam topography, considering an SH wave propagating across the
canyon. The displacement amplitudes at the crest and on the crown cantilever were found to be much reduced from those obtained using a uniform base input. The presence of the dam body was found to
have the general effect of reducing the motion at the canyon wall, as compared with the free-field values at the same locations. A recent study (2-60) of an arch dam used free-field motions for the
seismic input that were computed for a canyon embedded in a two-dimensional half space and subjected to incident SH, SV, and P waves. These free-field motions were applied to a three-dimensional
finite element model containing the dam, a massless foundation region, and an infinite reservoir of compressible water. Frequency-domain responses were converted into the time domain in the form of
standard deviations of the response to a random input with an earthquakelike frequency content. As shown by an analysis of Pacoima Dam, inclusion of nonuniformity in the stream component of the
excitation reduces the dam response, while the effect of nonuniformity in the cross-stream and vertical components varies, with the potential for some increase. For various cases of nonuniform
input, the average arch stress along the crest ranged from 62 to 122 percent of that for uniform input.
Fault Displacement

All the above-mentioned analyses were for vibratory seismic input motions. The case of a fault-displacement offset occurring directly beneath the base of a concrete gravity dam
also has been studied. In a two-dimensional nonlinear analysis of a dam-foundation system (2-61), a reverse fault was simulated by applying concentrated forces along an assumed fault zone that
extended from the finite element boundary of the modeled foundation rock to the base of the dam body. Results for the particular case studied indicated that the dam did not crack as a result of fault
displacement but partially separated from the foundation. It was recommended that the combined effect of fault displacement and vibratory seismic input could be accounted for in preliminary studies
by performing a dynamic-response analysis with a linear finite element model, using a softened foundation. More recently a model test study of a proposed 185-m-high arch dam in Greece was carried out
to determine the effects of fault displacements occurring directly beneath the base of the dam (2-62). Selection of the dam type and location was based on economic considerations. A thorough
seismotectonic investigation concluded that fault displacement at the dam base of as much as several decimeters could not be ruled out; consequently, design measures were taken to accommodate such
possible movement. Among these was a sophisticated joint system to alleviate the adverse effects of the fault displacement. A 1:250 scale model was built and tested at the Laboratorio Nacional de
Engenharia Civil (LNEC) in Lisbon. The horizontal movement of the fault was simulated by imparting to the left abutment a gradual displacement upstream to compress the arch. Results of the test
indicated that the joint system worked very well in protecting the dam from collapse for a displacement of up to 1 m in prototype scale. It was also concluded that the joint system would enable the
dam body to withstand a fault displacement of the order of 5 to 10 cm without damage. There have been two cases where concrete gravity dams have actually been constructed in which the plane of an
underlying fault has been extended through the entire dam section in the form of a sliding joint to accommodate possible fault movement (2-63). These are Morris Dam (2-64) in California, which was
completed in 1934, and the recently finished Clyde Dam (2-65) in New Zealand. In both cases the fault ran along the river channel perpendicular to the axis of the dam and dipped about 60 degrees off
horizontal. Figure 2-5 is a photograph of Morris Dam. The joint is located near the gallery entrance, which can be seen on the downstream face. The sliding joints in both dams were oriented
vertically and were designed for displacements on the order of 2 m. This required an interesting geometric solution, details of which were quite different in the two cases. To date, no movement has
occurred on either fault.
Other methods of defensive design of concrete gravity dams for fault displacement include the placement of a zoned self-healing berm of embankment material at the heel of the dam and a buttressing berm of free-draining granular fill against the downstream face (2-63). Reference 2-63 also states that an acceptable defense may not exist for thin arch dams against fault displacements.

STATUS OF STRONG-MOTION INSTRUMENT NETWORKS

The topic of strong-motion instrumentation placed at concrete dam sites for the purpose of studying the spatial variation of ground motion has not received
sufficient attention. Traditionally, for recording the input motion it was considered adequate to have one strong-motion recorder at either the toe of the dam or one of the abutments. As early as
1975, however, it was recommended that there be a minimum of two accelerographs located at the dam site to "record earthquake motions in the foundation" (2-66, p. 1,099). The purpose of requiring
two instruments was "to give some indication of the uniformity of conditions, and to ensure some useful information in the event of an instrument malfunction." In the 1978 International Workshop on
Strong-Motion Instrument Arrays, various aspects of instrumentation were discussed, and useful suggestions were made specifically for study of the spatial variation of seismic ground motions (2-67).
One of the array types suggested was the "local effect array" that could be used to study the "variation of ground motions across valleys." But in that suggestion the emphasis was clearly on the
motion of the overburden soil in a valley rather than that along a canyon wall. In a follow-up meeting of U.S. researchers in 1981 (2-68, p. 8), the following recommendation was made: "Lifeline and
other systems should be instrumented along with building structures. These should include highway bridges and overpasses, dams, and other utility system facilities. The degree of instrumentation
should be sufficient to obtain information equivalent to that for building structures." However, few if any concrete dams in the United States are currently instrumented to the extent needed to study
the seismic input problem. Of the 45 concrete dams listed in a survey report (2-69), only Pacoima Dam has strong-motion instruments installed at both the toe and two abutments (as well as at other
locations on the dam). Although the survey list is not complete, very few other concrete dams have seismographs installed near the toe. Even though the measuring of free-field and interface input
motion has been recognized to be as important as that of the dam response (2-70), current strong-motion instrumentation for concrete dams in the United States is inadequate for the purpose of
defining seismic input.
RESEARCH NEEDS

Various theoretical models have been developed for prediction of free-field motion at the surface of a valley or canyon to be used as input to a dam system; however, no verification of such input predictions has yet been achieved by comparison with actually recorded earthquake motions. The existing strong-motion instrumentation at concrete dams is not designed to provide such essential data. It is clear, therefore, that an improved instrumentation program for observation of earthquake motions at sites of existing or proposed dams is needed. Similarly, further theoretical work is needed on the deterministic and stochastic modeling of input motion to provide the basis for realistically modeling seismic input to concrete dams. Specific recommendations for research on earthquake input to concrete dams follow:

1. Deployment of Strong-Motion Instrumentation. (a) Arrays of strong-motion instruments should be deployed at selected dam sites. The locations of the instruments at each site should include at least three elevations along the abutment interfaces and selected locations within the abutment rock, the face of the canyon downstream of the dam, and several positions along the reservoir bottom. Triggering of these instruments should be synchronized so that traveling-wave effects can be detected. The seismographs should be made part of an overall instrumentation system that includes pressure transducers at the dam surface in the reservoir and accelerographs at selected locations within and on the dam body. (b) An array of strong-motion instruments should be deployed at sites being considered for construction of concrete dams to obtain the free-field motions at a canyon location without the interference of an existing dam.

2. Strong-Motion Instrumentation Program. (a) The necessity of obtaining actual records and the high cost of instrumentation point to the need for a concerted joint effort between the dam owner/operator and the research community. A permanent or semipermanent instrumentation program should be established after a careful study of potential sites for instrumentation. (b) A joint program of strong-motion instrumentation for concrete dams in areas of high seismicity should be developed in cooperation with other countries having similar hazards.

3. Use of Recorded Seismic Motion Records. Seismic motions actually recorded at a dam-foundation interface should be utilized in analyses intended to verify the various input methods. Because the recorded motions would be affected by dam-foundation interaction, a system identification approach may be needed to determine the free-field input motion.

4. Enhancement of Two-Dimensional Analyses. Currently available two-dimensional theoretical free-field canyon or valley wall motions are presented in terms of the incident wave angle and wave type and are in a frequency-dependent form. Even though they are of limited applicability because of assumptions regarding two-dimensional geometry and homogeneity of the medium, these results should be synthesized to provide guidelines for defining realistic input for concrete dams.

5. Enhancement of Three-Dimensional Models. The deterministic prediction of free-field motion at a dam site with three-dimensional topography can be performed by numerical methods such as finite elements, boundary elements, finite differences, or some combination of the three. The development of nonreflective boundaries for such three-dimensional models remains a high-priority requirement.

6. Stochastic Approach. In view of the many uncertainties involved, the simulation of spatially varying free-field motion may require the application of stochastic theory; therefore, methods should be developed for simulating stochastic inputs for valley and canyon topographies.

7. Effects of Fault Displacement. Further studies should be carried out focusing on the effects of fault displacements on the safety of concrete dams, using both numerical simulation procedures and physical model testing. The effectiveness of joint systems in a dam at the location of the fault break should be included in these studies.
Reactance Sentence Examples
• Reactance of a capacitor and the phase relationship between p.d. and current.
• Each of the elements has a variable reactance which must be altered to produce the desired performance.
• However, there is enough capacitive reactance to cause a significant phase shift in the current that flows through the load.
• Reactance matrices is proposed.
• This radiation reactance needs to be tuned out, to get power transfer from the generator to the radiated field.
• Reactance of the matching circuit to become dominant as the frequency increases.
• It also has a small inductive reactance, of about 11 ohms.
• Reactance Xc which has units of Ohms.
• Stubs are shorted or open circuit lengths of transmission line which produce a pure reactance at the attachment point.
• The capacitive reactance Xc becomes very large and the inductive reactance XL becomes insignificant.
• One often sees short monopoles with a coil at the foot, to provide inductive tuning for this capacitative reactance.
• Reactance x s is to a good approximation proportional to f.
• At this frequency a parallel resonant circuit is formed between the primary inductance and the net capacitive reactance reflected back from the secondary.
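The quantities these sentences mention follow two standard formulas: capacitive reactance X_C = 1/(2πfC) and inductive reactance X_L = 2πfL, both in ohms. A minimal sketch (the component values below are illustrative, not taken from the sentences):

```python
import math

def capacitive_reactance(f_hz, c_farads):
    """X_C = 1 / (2*pi*f*C), in ohms; it falls as frequency rises."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

def inductive_reactance(f_hz, l_henries):
    """X_L = 2*pi*f*L, in ohms; it grows as frequency rises."""
    return 2 * math.pi * f_hz * l_henries

print(capacitive_reactance(50, 100e-6))  # a 100 uF capacitor at 50 Hz: ~31.8 ohms
print(inductive_reactance(50, 0.035))    # a 35 mH inductor at 50 Hz: ~11 ohms
```

With these illustrative values, a 35 mH inductor at 50 Hz comes out near the "11 ohms" figure quoted in the examples above.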
Lagrangian Mechanics
Lagrangian Mechanics is the reformulation of Newtonian Mechanics that utilizes the Lagrangian defined by
L = T - U
T= total kinetic energy of a system of particles
U= sum of the potential energy functions of a system of particles
In other branches of physics, the Lagrangian is defined as the function
L: TQ--> R
Where Q is the configuration space, a subset of R^3N
such that the action, defined as the functional,
A(q)= int(L) dt
when it reaches a stationary value at {q(t)}, will make {q(t)} the equations of motion.
Note, int() means the integration notation with the limits of integration being positive and negative infinity, respectively.
It is shown, using the Calculus of Variations, that the equations of motion are in such a way that they satisfy Lagrange's Equations of motion:

D(dL/dv) - dL/dq = 0

Where D is the differential operator with respect to time, d/dq and d/dv stand for partial differentiation, and v is the generalized velocity.
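As a concrete illustration of what these equations produce (a sketch, not part of the definition above; the oscillator Lagrangian L = (1/2)mv^2 - (1/2)kx^2 is a standard textbook choice), Lagrange's equation reduces to m·x'' = -k·x, which a few lines of Python can integrate:

```python
import math

def simulate_oscillator(m=1.0, k=1.0, x0=1.0, v0=0.0, dt=1e-3, steps=10_000):
    """Integrate m*x'' = -k*x, the Euler-Lagrange equation for
    L = (1/2)*m*v**2 - (1/2)*k*x**2, with semi-implicit Euler."""
    x, v = x0, v0
    for _ in range(steps):
        v += -(k / m) * x * dt   # velocity update from the force -k*x
        x += v * dt              # position update using the new velocity
    return x, v

x, v = simulate_oscillator()
# With m = k = 1 the exact solution is x(t) = cos t, so x(10) should be near cos(10)
print(x, math.cos(10.0))
```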
Bob: Why is general relativity so tough to learn?!
Doug: Cause' you don't know enough Lagrangian Mechanics!
November 11, 2009
Billerica SAT Math Tutor
Find a Billerica SAT Math Tutor
My tutoring experience over the last 10+ years has been vast. I have covered several core subjects, with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses.
36 Subjects: including SAT math, English, reading, calculus
...I've worked with students specifically in the following subjects: Pre-Algebra, Algebra 1, Geometry, U.S. History, World History, and European History. I am HQT in English and can work with
students on American and British Literature.
29 Subjects: including SAT math, reading, English, writing
...The prerequisites were algebra I, geometry, and algebra II. Of course I used my precalculus knowledge and skills in learning calculus and engineering at Cornell University and the United States
Air Force Academy. During my first seven years as a math teacher at Lowell High School, MA, I taught various levels of algebra I, II, and geometry.
9 Subjects: including SAT math, calculus, geometry, algebra 1
...I am happy to help students reach their full potential and become better at public speaking. I have three diplomas: one undergraduate and two graduate degrees (BA, JD, and MBA). I have attended various educational institutions and studied various disciplines. I have worked in numerous corporation...
67 Subjects: including SAT math, English, calculus, reading
...From 1996-1999, I studied Symbolic Logic and Many Valued Logic at Westfield State College. I also served as a volunteer teaching assistant. With the tax deadline fast approaching, I am here to
assist any last-minute, anxious people in the use of free online tax software to complete their taxes accurately and efficiently.
30 Subjects: including SAT math, English, writing, reading
Simplify the integral (is my version correct?)
April 5th 2011, 12:01 PM #1
Simplify the integral (is my version correct?)
Basically I am asked to simplify the integral by going to polar coordinates. But the professor's summary is not available at the moment, so I'm afraid I might be doing something wrong, because this material is very new to me.
$\displaystyle \[I = \int\limits_0^{\frac{R}{{\sqrt {1 + {R^2}} }}} {\left( {\int\limits_0^{Rx} {f\left( {\frac{y}{x}} \right)dy} } \right)} dx + \int\limits_{\frac{R}{{\sqrt {1 + {R^2}} }}}^R {\
left( {\int\limits_0^{\sqrt {{R^2} - {x^2}} } {f\left( {\frac{y}{x}} \right)dy} } \right)} dx\]$
I assume the region to be a sector of a circle. The angle increases until it reaches the line y = Rx (that's why I wrote the first upper limit as arctan(R)). Then for every angle, rho runs from 0 to R as well. Only then do I apply the usual polar substitutions.
$\displaystyle \[I = \int\limits_0^{\arctan (R)} {\left( {\int\limits_0^R {\left( {f\left( {\frac{{y(\rho ,\varphi )}}{{x(\rho ,\varphi )}}} \right) \cdot \rho } \right)d\rho } } \right)} d\
varphi = \int\limits_0^{\arctan (R)} {\left( {\int\limits_0^R {\left( {f\left( {\tan (\varphi )} \right) \cdot \rho } \right)d\rho } } \right)} d\varphi = \]$
$\displaystyle \[ = \frac{{{R^2}}}{2}\int\limits_0^{\arctan (R)} {f\left( {\tan (\varphi )} \right)} d\varphi \]$
Does it seem like a reasonable enough idea? What logic should I use to check if the outcome is likely to be good?
P.S. That was based on my previous, and first overall, solution of this kind (I don't yet know if it's correct):
$\displaystyle \[I = \iint\limits_D {2x \cdot dS}\]$
where D is area within
$\displaystyle \[{x^2} + {y^2} = 2x \Rightarrow {(x - 1)^2} + {y^2} = 1\]$
I came up with
$\displaystyle \[I = 2 \cdot \int\limits_0^{\frac{\pi }{2}} {\left( {\int\limits_0^{2\cos (\varphi )} {2{\rho ^2} \cdot \cos (\varphi )d\rho } } \right)} d\varphi = 4\int\limits_0^{\frac{\pi }
{2}} {\frac{8}{3}} {\left( {\cos (\varphi )} \right)^4}d\varphi = \frac{{32}}{3} \cdot \frac{3}{8} \cdot \frac{\pi }{2} = 2\pi \]$
But how should I check the answer (using WolframAlpha or similar tools)? I mean, how do I calculate the answer using purely the first given form?

Thanks a lot, and sorry for possibly quite bad English.
April 6th 2011, 08:45 AM #2
MHF Contributor
Nov 2008
Everything you have written is OK for me.
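One way to answer the "how should I check" question in post #1 is a brute-force numeric comparison of the two forms (a sketch; the choices f(t) = t and R = 1 are illustrative assumptions, and for that f both sides equal ln(2)/4):

```python
import math

# Illustrative assumptions: f(t) = t and R = 1.
F = lambda t: t
R = 1.0

def lhs(n=400):
    """Midpoint-rule evaluation of the original Cartesian double integral."""
    def inner(x, ymax):
        hy = ymax / n
        return sum(F((j + 0.5) * hy / x) for j in range(n)) * hy
    a = R / math.sqrt(1 + R * R)
    total, hx = 0.0, a / n
    for i in range(n):                 # piece 1: 0 <= x <= R/sqrt(1+R^2), y up to Rx
        x = (i + 0.5) * hx
        total += inner(x, R * x) * hx
    hx = (R - a) / n
    for i in range(n):                 # piece 2: y up to the circle x^2 + y^2 = R^2
        x = a + (i + 0.5) * hx
        total += inner(x, math.sqrt(R * R - x * x)) * hx
    return total

def rhs(n=400):
    """Midpoint-rule evaluation of (R^2/2) * integral of f(tan(phi)), 0..arctan(R)."""
    h = math.atan(R) / n
    return R * R / 2 * sum(F(math.tan((i + 0.5) * h)) for i in range(n)) * h

print(lhs(), rhs())   # both should come out close to ln(2)/4 ≈ 0.17329
```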
Series help
June 21st 2006, 04:40 AM #1
Junior Member
May 2006
Sydney, Australia
Series help
I have this formula:
${c_n}^2 = \frac{1}{2}{c_{n-1}}^2$
How do I convert this to a series with $\sum$
From n = 1 to n = 10
I have this formula:
${c_n}^2 = \frac{1}{2}{c_{n-1}}^2$
How do I convert this to a series with $\sum$
From n = 1 to n = 10
Not really sure what you are asking.
Hello, Malay!
Edit: You're right! . . . *blush*
. . . . I'll correct it now.
I have this formula: $(c_n)^2 \:= \:\frac{1}{2}(c_{n-1})^2$
How do I convert this to a series with $\sum^{10}_{n=1}$
We have: . $c_n\;=\;\frac{1}{\sqrt{2}}c_{n-1}$
We have a geometric series with first term $c_1 = a$ and common ratio $r = \frac{1}{\sqrt{2}}$
The $n^{th}$ term is: . $c_n\:=\:\frac{a}{2^{\frac{n-1}{2}}}$

Then: . $\sum^{10}_{n=1}\frac{a}{2^{\frac{n-1}{2}}} \;=\;a \cdot \frac{1 - \frac{1}{(\sqrt{2})^{10}}} {1 - \frac{1}{\sqrt{2}}}$
Last edited by Soroban; June 21st 2006 at 12:11 PM.
Hello, chancey!
I had to baby-talk my way through it . . .
Let $c_1 = a$, the first term.
Then crank out the first few terms:
$c_1 = a,\;\;c_2 = \frac{a}{\sqrt{2}},\;\;c_3 = \frac{a}{2},\;\;c_4 = \frac{a}{2\sqrt{2}},\;\;\hdots$
But someone check my work . . . please!
Hello Soroban!
It should be
Keep Smiling
I have this formula:
${c_n}^2 = \frac{1}{2}{c_{n-1}}^2$
How do I convert this to a series with $\sum$
From n = 1 to n = 10
${c_n}^2 = \frac{1}{2}{c_{n-1}}^2$
I observed that
$c_n^2 = \frac{1}{2}c_{n-1}^2 \;\Rightarrow\; c_n = \frac{1}{\sqrt{2}}\,c_{n-1}$
Hence the given sequence is a G.P. with common ratio $\frac{1}{\sqrt{2}}$
Keep Smiling
I need the series to do this:
Example (not the actual formula, changed it so the numbers were easier)
$c_1 = 1$
$c_2 = \frac{1}{2} 1 = 0.5$
$c_3 = \frac{1}{2} 0.5 = 0.25$
$c_4 = \frac{1}{2} 0.25 = 0.125$
$c_n = ?$
Where each number in the series has to be calculated with the previous number. My question is, can that be put into a series so that I can run a formula once to find $n_{10}$, or do I have to do it 10 times?
As explained in the previous discussions this sequence is geometric, thus,

$c_n = a\left(\frac{1}{\sqrt{2}}\right)^{n-1}$

Where $a$ is the initial term.
Note, it is not possible to find $a$; it can be any number.
Hello, chancey!
I need the series to do this:
Example (not the actual formula, changed it so the numbers were easier)
$c_1 = 1$
$c_2 = \frac{1}{2}\cdot 1 = \frac{1}{2}$ . Don't use decimals!
$c_3 = \frac{1}{2}\cdot\frac{1}{2} = \frac{1}{4}$
$c_4 = \frac{1}{2}\cdot\frac{1}{4} = \frac{1}{8}$
$c_n = \:?$
Where each number in the series has to be calculated with the previous number.
My question is, can that be put into a series so that I can run a formula once to find $n_{10}$
or do I have to do it 10 times
You're expected to be familiar with Arithmetic Series and Geometric Series by now
. . and to be able to "eyeball" a sequence and determine its general form.
This sequence is: . $1,\;\frac{1}{2},\;\frac{1}{4},\;\frac{1}{8},\; \frac{1}{16},\;\hdots$
How long does it take for you to see that the denominators are doubled each time?
With a little thought, we see that the $n^{th}$ denominator is: $2^{n-1}$
So the general term is: . $c_n\:= \:\frac{1}{2^{n-1}}$
Hello, chancey!
You're expected to be familiar with Arithmetic Series and Geometric Series by now
. . and to be able to "eyeball" a sequence and determine its general form.
This sequence is: . $1,\;\frac{1}{2},\;\frac{1}{4},\;\frac{1}{8},\; \frac{1}{16},\;\hdots$
How long does it take for you to see that the denominators are doubled each time?
With a little thought, we see that the $n^{th}$ denominator is: $2^{n-1}$
So the general term is: . $c_n\:= \:\frac{1}{2^{n-1}}$
Of course I can see that, that was just an example. I'm not really that interested in the answer, but the process is what I want to know.
${c_n}^2 = \frac{1}{2} {c_{n-1}}^2 - 2$
${c_n} = \sqrt{\frac{1}{2} {c_{n-1}}^2 - 2}$
${c_n} = (\sqrt{\frac{1}{2} a^2 - 2} )^{n-1}$
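A quick numeric check of the thread's closed form (a sketch, with a = 1 assumed; note it applies to the original recurrence only — the modified recurrence with the extra −2 term is no longer geometric, so no such closed form holds for it):

```python
import math

def crank_terms(a=1.0, n=10):
    """Apply the recurrence c_k = c_{k-1} / sqrt(2), starting from c_1 = a."""
    c, terms = a, []
    for _ in range(n):
        terms.append(c)
        c /= math.sqrt(2)
    return terms

terms = crank_terms()                                    # a = 1 assumed
closed = [1 / 2 ** ((k - 1) / 2) for k in range(1, 11)]  # c_n = a / 2^{(n-1)/2}
r = 1 / math.sqrt(2)
geometric_sum = (1 - r ** 10) / (1 - r)                  # a (1 - r^10) / (1 - r)
print(max(abs(t - c) for t, c in zip(terms, closed)))    # ~ 0
print(abs(sum(terms) - geometric_sum))                   # ~ 0
```

So one evaluation of the closed form (or of the geometric-sum formula) replaces the 10 iterations.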
June 21st 2006, 05:35 AM #2
Global Moderator
Nov 2005
New York City
June 21st 2006, 05:45 AM #3
Super Member
May 2006
Lexington, MA (USA)
June 21st 2006, 06:20 AM #4
June 21st 2006, 06:24 AM #5
June 21st 2006, 12:58 PM #6
Junior Member
May 2006
Sydney, Australia
June 21st 2006, 01:19 PM #7
Global Moderator
Nov 2005
New York City
June 21st 2006, 01:27 PM #8
Super Member
May 2006
Lexington, MA (USA)
June 21st 2006, 10:30 PM #9
Junior Member
May 2006
Sydney, Australia
The Rapid Evaluation of Potential Fields in Particle Systems
Results 1 - 10 of 261
- INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE , 1995
Cited by 1091 (13 self)
The SPLASH-2 suite of parallel applications has recently been released to facilitate the study of centralized and distributed shared-address-space multiprocessors. In this context, this paper has two
goals. One is to quantitatively characterize the SPLASH-2 programs in terms of fundamental properties and architectural interactions that are important to understand them well. The properties we
study include the computational load balance, communication to computation ratio and traffic needs, important working set sizes, and issues related to spatial locality, as well as how these
properties scale with problem size and the number of processors. The other, related goal is methodological: to assist people who will use the programs in architectural evaluations to prune the space
of application and machine parameters in an informed and meaningful way. For example, by characterizing the working sets of the applications, we describe which operating points in terms of cache size
and problem size are representative of realistic situations, which are not, and which are redundant. Using SPLASH-2 as an example, we hope to convey the importance of understanding the interplay of
problem size, number of processors, and working sets in designing experiments and interpreting their results.
, 1991
Cited by 431 (0 self)
this paper, we introduce an algorithm that attempts to produce aesthetically-pleasing, two-dimensional pictures of graphs by doing simplified simulations of physical systems. We are concerned with
drawing undirected graphs according to some generally accepted aesthetic criteria: 1. Distribute the vertices evenly in the frame. 2. Minimize edge crossings. 3. Make edge lengths uniform. 4. Reflect
inherent symmetry. 5. Conform to the frame. Our algorithm does not explicitly strive for these goals, but does well at distributing vertices evenly, making edge lengths uniform, and reflecting
symmetry. Our goals for the implementation are speed and simplicity. PREVIOUS WORK Our algorithm for drawing undirected graphs is based on the work of Eades which, in turn, evolved from a VLSI
technique called force-directed placement
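The force-directed idea this abstract summarizes can be sketched in a few lines (a simplified scheme in its spirit — spring-like attraction along edges, inverse-square repulsion between all pairs — not the paper's exact update rule; all constants are illustrative):

```python
import math

def layout(nodes, edges, steps=500, k=1.0, lr=0.02):
    """A simplified force-directed layout: attraction d^2/k along edges and
    repulsion k^2/d^2 between every pair of vertices, applied in small steps."""
    pos = {v: (math.cos(i), math.sin(i)) for i, v in enumerate(nodes)}
    for _ in range(steps):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                       # repulsion between every pair
            for w in nodes:
                if u == w:
                    continue
                dx, dy = pos[u][0] - pos[w][0], pos[u][1] - pos[w][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / (d * d)
                disp[u][0] += f * dx / d
                disp[u][1] += f * dy / d
        for u, w in edges:                    # attraction along each edge
            dx, dy = pos[u][0] - pos[w][0], pos[u][1] - pos[w][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
            disp[w][0] += f * dx / d; disp[w][1] += f * dy / d
        for v in nodes:                       # take a small step per vertex
            pos[v] = (pos[v][0] + lr * disp[v][0], pos[v][1] + lr * disp[v][1])
    return pos

pos = layout([0, 1, 2], [(0, 1), (1, 2)])     # a 3-node path graph
```

On the path graph above, the two adjacent pairs settle at roughly equal spacing while the non-adjacent endpoints end up farther apart, illustrating the "uniform edge lengths" aesthetic.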
Cited by 256 (40 self)
... process a set S of points so that the points of S lying inside a query region R can be reported or counted quickly. We survey the known techniques and data structures for range searching and describe their application to other related searching problems.
- J. ACM , 1992
Cited by 244 (4 self)
We define the notion of a well-separated pair decomposition of points in d-dimensional space. We then develop efficient sequential and parallel algorithms for computing such a decomposition. We apply
the resulting decomposition to the efficient computation of k-nearest neighbors and n-body potential fields.
, 1994
Cited by 173 (39 self)
tion based on mesh analysis can be combined with a GMRES-style iterative matrix solution technique to make a reasonably fast 3-D frequency-dependent inductance and resistance extraction algorithm. Unfortunately, both the computation time and memory required for that approach grow faster than n^2, where n is the number of volume-filaments. In this paper, we show that it is possible to use
multipole-acceleration to reduce both required memory and computation time to nearly order n. Results from examples are given to demonstrate that the multipole acceleration can reduce required
computation time and memory by more than an order of magnitude for realistic packaging problems.
, 1994
Cited by 130 (5 self)
Recent trends in realistic image synthesis have been towards a separation of the rendering process into two or more stages [10, 2, 9]. One of these stages solves for the global energy equilibrium throughout the environment. This process can be very expensive and its complexity grows rapidly with the number of objects in the environment. These computational demands generally limit the level of detail of environments that can be simulated. Furthermore, a solution to this problem must be computed before anything useful can be displayed.
, 1999
Cited by 124 (1 self)
A Very Large Scale Robotic (VLSR) system may consist of from hundreds to perhaps tens of thousands or more autonomous robots. The costs of robots are going down, and the robots are getting more
compact, more capable, and more flexible. Hence, in the near future, we expect to see many industrial and military applications of VLSR systems in tasks such as assembling, transporting, hazardous
inspection, patrolling, guarding and attacking. In this paper, we propose a new approach for distributed autonomous control of VLSR systems. We define simple artificial force laws between pairs of
robots or robot groups. The force laws are inverse-power force laws, incorporating both attraction and repulsion. The force laws can be distinct and to some degree they reflect the 'social relations'
among robots. Therefore we call our method social potential fields. An individual robot's motion is controlled by the resultant artificial force imposed by other robots and other components of the
system. The approach is distributed in that the force calculations and motion control can be done in an asynchronous and distributed manner. We also extend the social potential fields model to use
spring laws as force laws. This paper presents the first and a preliminary study on applying potential fields to distributed autonomous multi-robot control. We describe the generic framework of our
social potential fields method. We show with computer simulations that the method can yield interesting and useful behaviors among robots, and we give examples of possible industrial and military
applications. We also identify theoretical problems for future studies. 1999 Published by Elsevier Science B.V. All rights reserved.
- In Proceedings of the 1994 ACM Conference on LISP and Functional Programming, 1994
"... Soft type systems provide the benefits of static type checking for dynamically typed languages without rejecting untypable programs. A soft type checker infers types for variables and
expressions and inserts explicit run-time checks to transform untypable programs to typable form. We describe a prac ..."
Cited by 108 (4 self)
Add to MetaCart
Soft type systems provide the benefits of static type checking for dynamically typed languages without rejecting untypable programs. A soft type checker infers types for variables and expressions and
inserts explicit run-time checks to transform untypable programs to typable form. We describe a practical soft type system for R4RS Scheme. Our type checker uses a representation for types that is
expressive, easy to interpret, and supports efficient type inference. Soft Scheme supports all of R4RS Scheme, including procedures of fixed and variable arity, assignment, continuations, and
top-level definitions. Our implementation is available by anonymous FTP. The first author was supported in part by the United States Department of Defense under a National Defense Science and Engineering Graduate Fellowship. † The second author was supported by NSF grant CCR-9122518 and the Texas Advanced Technology Program under grant 003604-014. 1 Introduction Dynamically typed languages like Scheme...
, 1997
"... The classical problem of solving an nth degree polynomial equation has substantially influenced the development of mathematics throughout the centuries and still has several important
applications to the theory and practice of present-day computing. We briefly recall the history of the algorithmic a ..."
Cited by 85 (16 self)
Add to MetaCart
The classical problem of solving an nth degree polynomial equation has substantially influenced the development of mathematics throughout the centuries and still has several important applications to
the theory and practice of present-day computing. We briefly recall the history of the algorithmic approach to this problem and then review some successful solution algorithms. We end by outlining
some algorithms of 1995 that solve this problem at a surprisingly low computational cost.
- Lectures on Parallel Computation, 1993
"... A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of
various aspects of this work. A long, but by no means complete, bibliography is given. 1. Introduction Turing ..."
Cited by 77 (5 self)
Add to MetaCart
A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of various
aspects of this work. A long, but by no means complete, bibliography is given. 1. Introduction Turing [365] demonstrated that, in principle, a single general purpose sequential machine could be
designed which would be capable of efficiently performing any computation which could be performed by a special purpose sequential machine. The importance of this universality result for subsequent
practical developments in computing cannot be overstated. It showed that, for a given computational problem, the additional efficiency advantages which could be gained by designing a special purpose
sequential machine for that problem would not be great. Around 1944, von Neumann produced a proposal [66, 389] for a general purpose storedprogram sequential computer which captured the fundamental
principles of...
In my previous post, I explained the basic concept of multi_type_vector – one of the two new data structures added to mdds in the 0.6.0 release. In this post, I'd like to explain a bit more about multi_type_matrix – the other new structure added in that release. It is also important to note that the addition of multi_type_matrix deprecates mixed_type_matrix, which is now subject to removal in a future release.
In short, multi_type_matrix is a matrix data structure designed to allow storage of four different element types: numeric value (double), boolean value (bool), empty value, and string value. The string value type can be either std::string or one provided by the user. Internally, multi_type_matrix is just a wrapper around multi_type_vector, which does most of the hard work. All multi_type_matrix does is translate logical element positions in 2-dimensional space into one-dimensional positions and pass them on to the vector. Using multi_type_vector has many advantages over the previous matrix class mixed_type_matrix, both in terms of ease of use and performance.
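That 2D-to-1D translation can be sketched in a few lines. This is an illustrative helper only, not mdds's actual internals; column-major order is assumed here because the post later fills the matrix column by column, and the function names are invented for this sketch.

```cpp
#include <cstddef>

// Illustrative sketch (not mdds code) of translating a logical
// (row, column) position into a one-dimensional position, assuming
// column-major order: all elements of one column sit next to each
// other in the underlying one-dimensional storage.
inline std::size_t to_linear(std::size_t row, std::size_t col, std::size_t row_size)
{
    return col * row_size + row;
}

// And the reverse mapping, from a linear position back to (row, column).
inline std::size_t linear_to_row(std::size_t pos, std::size_t row_size)
{
    return pos % row_size;
}

inline std::size_t linear_to_col(std::size_t pos, std::size_t row_size)
{
    return pos / row_size;
}
```

With a mapping like this, a 5-by-5 matrix needs nothing more than a 25-element vector underneath.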
One benefit of using multi_type_vector as the backend storage is that we no longer have to differentiate between densely populated and sparsely populated matrix density types. In mixed_type_matrix, the user had to manually specify which backend type to use when creating an instance, and once created, it wasn't possible to switch from one to the other without copying the matrix wholesale. In multi_type_matrix, on the other hand, the user no longer has to specify the density type, since the new storage handles either density type well.
Another benefit is the reduced storage cost and improved latency in memory access especially when accessing a sequence of element values at once. This is inherent in the use of multi_type_vector
which I explained in detail in my previous post. I will expand on the storage cost of multi_type_matrix in the next section.
Storage cost
The new multi_type_matrix structure generally provides better storage efficiency in most cases. I'll illustrate this using the two opposite extremes of the density spectrum.
First, let's assume we have a 5-by-5 matrix that's fully populated with numeric values. The following picture illustrates how the element values of such a numeric matrix are stored.
In mixed_type_matrix with its filled-storage backend, the element values are either 1) stored in heap-allocated element objects and their pointers are stored in a separate array (middle right), or 2)
stored directly in one-dimensional array (lower right). Those initialized with empty elements employ the first variant, whereas those initialized with zero elements employ the second variant. The
rationale behind using these two different storage schemes was the assertion that, in a matrix initialized with empty elements, most elements likely remain empty throughout its life time whereas a
matrix initialized with zero elements likely get numeric values assigned to most of the elements for subsequent computations.
Also, each element in mixed_type_matrix stores its type as an enum value. Let’s assume that the size of a pointer is 8 bytes (the world is moving toward 64-bit systems these days), that of a double
is 8 bytes, and that of an enum is 4 bytes. The total storage cost of a 5-by-5 matrix will be 8 x 25 + (8 + 4) x 25 = 500 bytes for empty-initialized matrix, and (8 + 4) x 25 = 300 bytes for
zero-initialized matrix.
In contrast, multi_type_matrix (upper right) stores the same data using a single array of double’s, whose memory address is stored in a separate block array. This block array also stores the type of
each block (int) and its size (size_t). Since we only have one numeric block, it only stores one int value, one size_t value, and one pointer value for the whole block. With that, the total storage
cost of a 5-by-5 matrix will be 8 x 25 + 4 + 8 + 8 = 220 bytes. Suffice it to say that it’s less than half the storage cost of empty-initialized mixed_type_matrix, and roughly 26% less than that of
zero-initialized mixed_type_matrix.
Now let’s a look at the other end of the density spectrum. Say, we have a very sparsely-populated 5-by-5 matrix, and only the top-left and bottom-right elements are non-empty like the following
illustration shows:
In mixed_type_matrix with its sparse-storage backend (lower right), the element values are stored in heap-allocated element objects which are in turn stored in nested balanced-binary trees. The space requirement of the sparse-storage backend varies depending on how the elements are spread out, but in this particular example, it takes one 5-node tree, one 2-node tree, four single-node trees, and six element instances. Let's assume that each node in each of these trees stores 3 pointers (pointers to the left node, the right node, and the value), which makes up 24 bytes of storage per node. Multiplying that by 11 nodes makes 24 x 11 = 264 bytes of storage. With each element instance requiring 12 bytes of storage, the total storage cost comes to 24 x 11 + 12 x 6 = 336 bytes.
In multi_type_matrix (upper right), the primary array stores three element blocks, each of which makes up 20 bytes of storage (one pointer, one size_t and one int). Combine that with one 2-element array (16 bytes) and one 4-element array (32 bytes), and the total storage comes to 20 x 3 + 8 x (2 + 4) = 108 bytes. This clearly shows that, even in this extremely sparse case, multi_type_matrix provides better storage efficiency than mixed_type_matrix.
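The byte counts above can be double-checked with a few lines of arithmetic. The functions below simply restate the formulas from the text under the stated size assumptions (8-byte pointer, double and size_t; 4-byte enum/int; 24-byte tree node; 12-byte element instance); they are not part of either matrix class.

```cpp
#include <cstddef>

// Storage-cost formulas restated from the text, for an n-by-n matrix.
const std::size_t ptr = 8, dbl = 8, enm = 4, szt = 8;

// mixed_type_matrix, empty-initialized: pointer array + element objects.
std::size_t mixed_empty_init(std::size_t n) { return ptr * n * n + (dbl + enm) * n * n; }

// mixed_type_matrix, zero-initialized: elements stored directly with type tags.
std::size_t mixed_zero_init(std::size_t n) { return (dbl + enm) * n * n; }

// multi_type_matrix, fully numeric: one double array + one block record
// (type tag, size, pointer).
std::size_t multi_dense(std::size_t n) { return dbl * n * n + enm + szt + ptr; }

// Sparse 5-by-5 example: 11 tree nodes and 6 element instances for
// mixed_type_matrix, vs. 3 block records plus a 2-element and a
// 4-element double array for multi_type_matrix.
std::size_t mixed_sparse_5x5() { return 24 * 11 + 12 * 6; }
std::size_t multi_sparse_5x5() { return (ptr + szt + enm) * 3 + dbl * (2 + 4); }
```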
I hope these two examples are evidence enough that multi_type_matrix provides reasonable efficiency for both densely populated and sparsely populated matrices. The fact that one storage scheme can handle either extreme also gives us more flexibility: even when a matrix object starts out sparsely populated and later becomes completely filled, there is no need to manually switch the storage structure as was necessary with mixed_type_matrix.
Run-time performance
Better storage efficiency with multi_type_matrix over mixed_type_matrix is one thing, but what’s equally important is how well it performs run-time. Unfortunately, the actual run-time performance
largely depends on how it is used, and while it should provide good overall performance if used in ways that take advantage of its structure, it may perform poorly if used incorrectly.
In this section, I will provide performance comparisons between multi_type_matrix and mixed_type_matrix in several different scenarios, along with the actual source code used to measure their performance.
All performance comparisons are done in terms of total elapsed time in seconds required to perform each task. All elapsed times were measured in CPU time, and all benchmark codes were compiled on
openSUSE 12.1 64-bit using gcc 4.6.2 with -Os compiler flag.
For the sake of brevity and consistency, the following typedef’s are used throughout the performance test code.
typedef mdds::mixed_type_matrix<std::string, bool> mixed_mx_type;
typedef mdds::multi_type_matrix<mdds::mtm::std_string_trait> multi_mx_type;
The first scenario is the instantiation of matrix objects. In this test, six matrix object instantiation scenarios are measured. In each scenario, a matrix object of 20000 rows by 8000 columns is
instantiated, and the time it takes for the object to get fully instantiated is measured.
The first three scenarios instantiate matrix objects with zero element values. The first scenario instantiates mixed_type_matrix with the filled storage backend, with all elements initialized to zero.
mixed_mx_type mx(20000, 8000, mdds::matrix_density_filled_zero);
Internally, this allocates a one-dimensional array and fills it with zero element instances.
The second case is just like the first one, the only difference being that it uses the sparse storage backend.
mixed_mx_type mx(20000, 8000, mdds::matrix_density_sparse_zero);
With the sparse storage backend, all this does is allocate a single element instance to serve as the zero value and set the internal size to the specified dimensions. No allocation for the storage of any other elements occurs at this point. Thus, instantiating a mixed_type_matrix with sparse storage is a fairly cheap, constant-time process.
The third scenario instantiates multi_type_matrix with all elements initialized to zero.
multi_mx_type mx(20000, 8000, 0.0);
This internally allocates one numeric block containing a one-dimensional array of length 20000 x 8000 = 160 million, and fills it with 0.0 values. This process is very similar to that of the first scenario except that, unlike the first one, the array stores the element values only, without the extra per-element type tags.
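To put the 160-million-element figure in perspective, here is the raw allocation size implied by each of the two dense zero-initialized schemes, under the same 8-byte double and 4-byte type tag assumptions used earlier. This is back-of-the-envelope arithmetic only, not a measurement.

```cpp
#include <cstddef>

const std::size_t rows = 20000, cols = 8000;
const std::size_t elems = rows * cols; // 160 million logical elements

// multi_type_matrix, zero-initialized: a bare array of doubles.
std::size_t multi_zero_bytes() { return elems * 8; }       // ~1.28 GB

// mixed_type_matrix, zero-initialized: each element also carries a type tag.
std::size_t mixed_zero_bytes() { return elems * (8 + 4); } // ~1.92 GB
```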
The next three scenarios instantiate matrix objects with all empty elements. Other than that, they are identical to the first three.
The first scenario is mixed_type_matrix with filled storage.
mixed_mx_type mx(20000, 8000, mdds::matrix_density_filled_empty);
Unlike the zero-element counterpart, this version allocates one empty element instance and a one-dimensional array filled with identical pointers to that empty element instance.
The second one is mixed_type_matrix with sparse storage.
mixed_mx_type mx(20000, 8000, mdds::matrix_density_sparse_empty);
And the third one is multi_type_matrix initialized with all empty elements.
multi_mx_type mx(20000, 8000);
This is also very similar to the initialization with all zero elements, except that it creates one empty element block which has no memory allocated for a data array. As such, this process is cheaper than the zero-element counterpart because there is no overhead associated with creating an extra data array.
The most expensive one turns out to be the zero-initialized mixed_type_matrix, which allocates an array of 160 million zero element objects upon construction. What follows is a tie between the empty-initialized mixed_type_matrix and the zero-initialized multi_type_matrix. Both structures allocate an array of 160 million primitive values (one with pointer values and one with double values). The sparse mixed_type_matrix variants are very cheap to instantiate, since all they need to do is set their internal size without any additional storage allocation. The empty multi_type_matrix is also cheap for the same reason. The last three types can be instantiated in constant time regardless of the logical size of the matrix.
Assigning values to elements
The next test is assigning numeric values to elements inside a matrix. For the remainder of the tests, I will only measure the zero-initialized mixed_type_matrix, since the empty-initialized one is not optimized to be filled with a large number of non-empty elements.
We measure six different scenarios in this test. One is for mixed_type_matrix, and the rest are all for multi_type_matrix, as multi_type_matrix supports several different ways to assign values. In
contrast, mixed_type_matrix only supports one way to assign values.
The first scenario involves assigning values to elements in mixed_type_matrix. Values are assigned individually inside nested for loops.
size_t row_size = 10000, col_size = 1000;
mixed_mx_type mx(row_size, col_size, mdds::matrix_density_filled_zero);
double val = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
    {
        mx.set(row, col, val);
        val += 0.00001; // different value for each element
    }
}
The second scenario is almost identical to the first one, except that it’s multi_type_matrix initialized with empty elements.
size_t row_size = 10000, col_size = 1000;
multi_mx_type mx(row_size, col_size);
double val = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
    {
        mx.set(row, col, val);
        val += 0.00001; // different value for each element
    }
}
Because the matrix is initialized with just one empty block and no allocated data array, the very first value assignment allocates a data array for just one element, and each subsequent assignment grows the data array by one element at a time. Each value assignment therefore runs the risk of the data array getting reallocated, since the growth internally relies on std::vector's capacity growth policy, which in most STL implementations doubles the capacity on every reallocation.
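The effect of that capacity growth policy is easy to observe directly. The snippet below counts reallocations while growing a std::vector one push_back at a time; note that the exact growth factor is implementation-defined (commonly 2x in libstdc++ and libc++, 1.5x in MSVC), so only the logarithmic trend is guaranteed.

```cpp
#include <vector>
#include <cstddef>

// Count how many times a std::vector reallocates while growing one
// element at a time -- the same pattern the empty-initialized matrix
// follows in the scenario above.
std::size_t count_reallocations(std::size_t n)
{
    std::vector<double> v;
    std::size_t reallocs = 0;
    for (std::size_t i = 0; i < n; ++i)
    {
        bool at_capacity = (v.size() == v.capacity());
        v.push_back(0.0);
        if (at_capacity)
            ++reallocs; // push_back had to allocate a larger buffer
    }
    return reallocs;
}
```

Even for a million elements the vector reallocates only a few dozen times, but each reallocation of a large buffer still copies every element already stored, which is where the cost of per-element assignment comes from.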
The third scenario is identical to the previous one. The only difference is that the matrix is initialized with zero elements.
size_t row_size = 10000, col_size = 1000;
multi_mx_type mx(row_size, col_size, 0.0);
double val = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
    {
        mx.set(row, col, val);
        val += 0.00001; // different value for each element
    }
}
But this seemingly subtle difference makes a huge difference. Because the matrix is already initialized with a data array to the full matrix size, none of the subsequent assignments reallocate the
array. This cuts the repetitive reallocation overhead significantly.
The next case involves multi_type_matrix initialized with empty elements. The values are first stored in an extra array, then the whole array gets assigned to the matrix in one call.
size_t row_size = 10000, col_size = 1000;
multi_mx_type mx(row_size, col_size);

// Prepare a value array first.
std::vector<double> vals;
double val = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
    {
        vals.push_back(val);
        val += 0.00001;
    }
}

// Assign the whole element values in one step.
mx.set(0, 0, vals.begin(), vals.end());
An operation like this is something that mixed_type_matrix doesn't support. What the set() method on the last line does is assign the values to all elements in the matrix in one single call; it starts from the top-left (0,0) element position and keeps wrapping values into the subsequent columns until it reaches the last element in the last column.
Generally speaking, with multi_type_matrix, assigning a large number of values in this fashion is significantly faster than assigning them individually, and even with the overhead of the initial data
array creation, it is normally faster than individual value assignments. In this test, we measure the time it takes to set values with and without the initial data array creation.
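The wrap-around behavior of that range set() call can be modeled against a plain column-major buffer. This is a hypothetical sketch of the observable semantics, not mdds code; set_range and its parameters are names invented for this illustration.

```cpp
#include <vector>
#include <cstddef>

// Hypothetical model of the range-set semantics described above:
// values are written starting at (row, col) in column-major order and
// wrap into the next column once the bottom of a column is reached.
template<typename Iter>
void set_range(std::vector<double>& col_major, std::size_t row_size,
               std::size_t row, std::size_t col, Iter first, Iter last)
{
    std::size_t pos = col * row_size + row; // starting linear position
    for (; first != last; ++first, ++pos)
        col_major[pos] = *first;
}
```

Because consecutive positions in a column-major buffer simply run down one column and into the next, "wrapping" needs no special handling at all; it falls out of the linear layout.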
The last scenario is identical to the previous one, the only difference being that the initial elements are zero instead of empty.
size_t row_size = 10000, col_size = 1000;
multi_mx_type mx(row_size, col_size, 0.0);

// Prepare a value array first.
std::vector<double> vals;
double val = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
    {
        vals.push_back(val);
        val += 0.00001;
    }
}

// Assign the whole element values in one step.
mx.set(0, 0, vals.begin(), vals.end());
The only significant thing this code does differently from the last one is that it assigns values to an existing numeric data array, whereas the code in the previous scenario allocates a new array before assigning values. In practice, this should make no noticeable difference performance-wise.
Now, let’s a take a look at the results.
The top orange bar is the only result from mixed_type_matrix, and the rest of the blue bars are from multi_type_matrix, using different assignment techniques.
The top three bars are the results from the individual value assignments inside loops (hence the label “loop”). The first thing that jumps out of this chart is that individually assigning values to an empty-initialized multi_type_matrix is prohibitively expensive, so it should be done with extra caution (if you really have to do it at all). When the matrix is initialized with zero elements, however, it performs reasonably well, though it's still slightly slower than the mixed_type_matrix case.
The bottom four bars are the results from the array assignments to multi_type_matrix, covering initialization with empty elements and with zero elements, each measured with and without the initial data array creation. The difference between the two initialization cases is minor enough to be barely noticeable in real life.
Performance of an array assignment is roughly on par with that of mixed_type_matrix if you include the cost of the extra array creation. But if you take away that overhead, that is, if the data array is already present and doesn't need to be created prior to the assignment, the array assignment becomes nearly 3 times faster than mixed_type_matrix's individual value assignment.
Adding all numeric elements
The next benchmark test consists of fetching all numerical values from a matrix and adding them together. This requires accessing the stored elements inside the matrix after it has been fully populated.
With mixed_type_matrix, the following two ways of accessing element values are tested: 1) access via individual get_numeric() calls, and 2) access via const_iterator. With multi_type_matrix, the tested access methods are: 1) access via individual get_numeric() calls, and 2) access via the walk() method, which walks all element blocks sequentially and calls a caller-provided function object for each element block.
In each of the above testing scenarios, two different element distribution types are tested: one that consists of all numeric elements (homogeneous matrix), and one that consists of a mixture of
numeric and empty elements (heterogeneous matrix). In the tests with heterogeneous matrices, one out of every three columns is set empty while the remainder of the columns are filled with numeric
elements. The size of a matrix object is fixed to 10000 rows by 1000 columns in each tested scenario.
The first case involves populating a mixed_type_matrix instance with all numeric elements (homogeneous matrix), then reading all the values to calculate their sum.
size_t row_size = 10000, col_size = 1000;
mixed_mx_type mx(row_size, col_size, mdds::matrix_density_filled_zero);

// Populate the matrix with all numeric values.
double val = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
    {
        mx.set(row, col, val);
        val += 0.00001;
    }
}

// Sum all numeric values.
double sum = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
        sum += mx.get_numeric(row, col);
}
The test only measures the second nested for loops where the values are read and added. The first block where the matrix is populated is excluded from the measurement.
In the heterogeneous matrix variant, only the first block is different:
// Populate the matrix with numeric and empty values.
double val = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
    {
        if ((col % 3) == 0)
        {
            mx.set_empty(row, col);
            continue;
        }
        mx.set(row, col, val);
        val += 0.00001;
    }
}
while the second block remains intact. Note that the get_numeric() method returns 0.0 when the element type is empty (this is true with both mixed_type_matrix and multi_type_matrix), so calling this
method on empty elements has no effect on the total sum of all numeric values.
When measuring the performance of element access via iterator, the second block is replaced with the following code:
// Sum all numeric values via iterator.
double sum = 0.0;
mixed_mx_type::const_iterator it = mx.begin(), it_end = mx.end();
for (; it != it_end; ++it)
{
    if (it->m_type == mdds::element_numeric)
        sum += it->m_numeric;
}
Four separate tests are performed with multi_type_matrix. The first variant consists of a homogeneous matrix with all numeric values, where the element values are read and added via manual loop.
size_t row_size = 10000, col_size = 1000;
multi_mx_type mx(row_size, col_size, 0.0);

// Populate the matrix with all numeric values.
double val = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
    {
        mx.set(row, col, val);
        val += 0.00001;
    }
}

// Sum all numeric values.
double sum = 0.0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
        sum += mx.get_numeric(row, col);
}
This code is identical to the very first scenario with mixed_type_matrix, the only difference being that it uses multi_type_matrix initialized with zero elements.
In the heterogeneous matrix variant, the first block is replaced with the following:
multi_mx_type mx(row_size, col_size); // initialized with empty elements.
double val = 0.0;
vector<double> vals;
for (size_t col = 0; col < col_size; ++col)
{
    if ((col % 3) == 0)
        // Leave this column empty.
        continue;

    vals.clear();
    for (size_t row = 0; row < row_size; ++row)
    {
        vals.push_back(val);
        val += 0.00001;
    }
    mx.set(0, col, vals.begin(), vals.end());
}
which essentially fills the matrix with numeric values while leaving every third column empty. It's important to note that, because a heterogeneous multi_type_matrix instance consists of multiple element blocks, making every third column empty creates over 300 element blocks in a matrix of 1000 columns. This severely affects the performance of element block lookup, especially for elements that are not positioned in the first few blocks.
The walk() method was added to multi_type_matrix precisely to alleviate this sort of poor lookup performance in such heavily partitioned matrices. It allows the caller to walk through all element blocks sequentially, thereby removing the need to restart the block search on every element access. The last tested scenario measures the performance of this walk() method by replacing the second block with:
sum_all_values func;
func = mx.walk(func);
double sum = func.get();
where the sum_all_values function object is defined as:
class sum_all_values : public std::unary_function<multi_mx_type::element_block_node_type, void>
{
    double m_sum;
public:
    sum_all_values() : m_sum(0.0) {}

    void operator() (const multi_mx_type::element_block_node_type& blk)
    {
        if (!blk.data)
            // Skip the empty blocks.
            return;

        if (mdds::mtv::get_block_type(*blk.data) != mdds::mtv::element_type_numeric)
            // Block is not of numeric type. Skip it.
            return;

        using mdds::mtv::numeric_element_block;

        // Access individual elements in this block, and add them up.
        numeric_element_block::const_iterator it = numeric_element_block::begin(*blk.data);
        numeric_element_block::const_iterator it_end = numeric_element_block::end(*blk.data);
        for (; it != it_end; ++it)
            m_sum += *it;
    }

    double get() const { return m_sum; }
};
Without further ado, here are the results:
It is somewhat surprising that mixed_type_matrix shows poorer performance with iterator access than with access via get_numeric(). There is no noticeable difference between the homogeneous and heterogeneous matrix scenarios with mixed_type_matrix, which makes sense given how mixed_type_matrix stores its element values.
On the multi_type_matrix front, element access via individual get_numeric() calls turns out to be very slow, which is expected. This poor performance is especially visible with the heterogeneous matrix consisting of over 300 element blocks. Access via the walk() method, on the other hand, shows much better performance, and is in fact the fastest among all tested scenarios. Access via walk() is faster with the heterogeneous matrix, which is likely because the empty element blocks are skipped, reducing the total number of element values to read.
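The gap between get_numeric() and walk() comes down to the two access patterns sketched below against a toy block-partitioned store. This illustrates the principle only; the block struct and function names here are invented for the example and bear no relation to mdds's actual types.

```cpp
#include <vector>
#include <cstddef>

// Toy block-partitioned store: each block is either numeric (with a
// data array) or empty (size only).
struct block
{
    bool numeric;
    std::vector<double> data;
    std::size_t size;
};

// Positional lookup: every call scans the block list from the start
// to find the block containing `pos` -- O(blocks) work per element.
double get_numeric(const std::vector<block>& blocks, std::size_t pos)
{
    std::size_t offset = 0;
    for (std::size_t i = 0; i < blocks.size(); ++i)
    {
        const block& b = blocks[i];
        if (pos < offset + b.size)
            return b.numeric ? b.data[pos - offset] : 0.0;
        offset += b.size;
    }
    return 0.0;
}

// walk()-style traversal: a single sequential pass over the blocks,
// skipping the empty ones entirely.
double sum_walk(const std::vector<block>& blocks)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < blocks.size(); ++i)
    {
        const block& b = blocks[i];
        if (!b.numeric)
            continue;
        for (std::size_t j = 0; j < b.data.size(); ++j)
            sum += b.data[j];
    }
    return sum;
}
```

With hundreds of blocks, the positional lookup pays the block-list scan on every single element, while the walk pays it exactly once in total.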
Counting all numeric elements
In this test, we measure the time it takes to count the total number of numeric elements stored in a matrix. As with the previous test, we use both homogeneous and heterogeneous 10000-by-1000 matrix objects initialized in exactly the same manner. In this test, however, we don't measure the individual element access performance of multi_type_matrix, since we all know by now that doing so would result in very poor performance.
With mixed_type_matrix, we measure counting both via individual element access and via iterators. I will not show the code to initialize the element values here since that remains unchanged from the
previous test. The code that does the counting is as follows:
// Count all numeric elements.
long count = 0;
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
    {
        if (mx.get_type(row, col) == mdds::element_numeric)
            ++count;
    }
}
It is pretty straightforward and hopefully needs no explanation. Likewise, the code that does the counting via iterator is as follows:
// Count all numeric elements via iterator.
long count = 0;
mixed_mx_type::const_iterator it = mx.begin(), it_end = mx.end();
for (; it != it_end; ++it)
{
    if (it->m_type == mdds::element_numeric)
        ++count;
}
Again, pretty straightforward code.
Now, testing this scenario with multi_type_matrix is interesting because it can take advantage of multi_type_matrix's block-based element value storage. Because the elements are partitioned into multiple blocks, and each block stores its size separately from the data array, we can simply tally the sizes of all numeric element blocks to calculate the total count without even touching the individual elements stored in those blocks. This algorithm scales with the number of element blocks, which is far smaller than the number of elements in most average use cases.
With that in mind, the code to count numeric elements becomes:
count_all_values func;
func = mx.walk(func);
long count = func.get();
where the count_all_values function object is defined as:
class count_all_values : public std::unary_function<multi_mx_type::element_block_node_type, void>
{
    long m_count;
public:
    count_all_values() : m_count(0) {}

    void operator() (const multi_mx_type::element_block_node_type& blk)
    {
        if (!blk.data)
            // Empty block.
            return;

        if (mdds::mtv::get_block_type(*blk.data) != mdds::mtv::element_type_numeric)
            // Block is not numeric.
            return;

        m_count += blk.size; // Just use the separate block size.
    }

    long get() const { return m_count; }
};
With mixed_type_matrix, you are forced to inspect every element in order to count elements of a certain type, regardless of which type you are counting. That algorithm scales with the number of elements, a much worse proposition than scaling with the number of element blocks.
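The scaling argument can be made concrete in a few lines. Here a block is reduced to just an (is_numeric, size) pair, because counting never touches element values at all; the representation and names are invented for this sketch.

```cpp
#include <vector>
#include <utility>
#include <cstddef>

// Count numeric elements by tallying block sizes: the cost is
// proportional to the number of blocks, not the number of elements.
std::size_t count_numeric(const std::vector<std::pair<bool, std::size_t> >& blocks)
{
    std::size_t count = 0;
    for (std::size_t i = 0; i < blocks.size(); ++i)
    {
        if (blocks[i].first)
            count += blocks[i].second; // the whole block at once
    }
    return count;
}
```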
Now that the code has been presented, let's move on to the results:
The performance of mixed_type_matrix, in both the manual loop and iterator cases, is comparable to that of the previous test. What's remarkable is the performance of multi_type_matrix via its walk() method; the numbers are so small that they don't even register in the chart! As I mentioned previously, the storage structure of multi_type_matrix turns the problem of counting elements into a new problem of counting element blocks, thereby significantly reducing the scale factor with respect to the number of elements in most average use cases.
Initializing matrix with identical values
Here is another scenario where you can take advantage of multi_type_matrix over mixed_type_matrix. Say, you want to instantiate a new matrix and assign 12.3 to all of its elements. With
mixed_type_matrix, the only way you can achieve that is to assign that value to each element in a loop after it’s been constructed. So you would write code like this:
size_t row_size = 10000, col_size = 2000;
mixed_mx_type mx(row_size, col_size, mdds::matrix_density_filled_zero);
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
        mx.set(row, col, 12.3);
}
With multi_type_matrix, you can achieve the same result by simply passing an initial value to the constructor, and that value gets assigned to all its elements upon construction. So, instead of
assigning it to every element individually, you can simply write:
multi_mx_type mx(row_size, col_size, 12.3);
Just for the sake of comparison, I'll add two more cases for multi_type_matrix. The first one involves instantiation with a numeric block of zeros, followed by individually assigning the value to each element afterward, like so:
multi_mx_type mx(row_size, col_size, 0.0);
for (size_t row = 0; row < row_size; ++row)
{
    for (size_t col = 0; col < col_size; ++col)
        mx.set(row, col, 12.3);
}
which is algorithmically similar to the mixed_type_matrix case.
Now, the second one involves instantiating the matrix, creating an array of the same element count initialized with the desired initial value, then assigning it to the matrix in one step:
multi_mx_type mx(row_size, col_size);
vector<double> vals(row_size*col_size, 12.3);
mx.set(0, 0, vals.begin(), vals.end());
The results are:
The performance of assigning the initial value to individual elements is comparable between mixed_type_matrix and multi_type_matrix, though it is also the slowest of all the cases. Creating an array of initial values and assigning it to the matrix takes less than half the time of individual assignment, even with the overhead of creating the extra array upfront. Passing an initial value to the constructor is the fastest of all; it takes only roughly 1/8th of the time required for the individual assignment, and 1/3rd of the array assignment.
I hope I have presented enough evidence to convince you that multi_type_matrix offers better overall performance than mixed_type_matrix in a wide variety of use cases. Its structure is much simpler than that of mixed_type_matrix in that it uses only one element storage backend, as opposed to three in mixed_type_matrix. This greatly improves not only the cost of maintenance but also the predictability of the container's behavior from the user's point of view. The fact that you don't have to clone the matrix just to transfer it into another storage backend should make this new matrix container a lot simpler to use.
Having said this, you should also be aware that, in order to take full advantage of multi_type_matrix and achieve good run-time performance, you need to
• try to limit single-value assignments and prefer value array assignment,
• construct the matrix with a proper initial value, which determines the type of the initial element block and in turn affects the performance of subsequent value assignments, and
• use the walk() method when iterating through all elements in the matrix.
That’s all, ladies and gentlemen.
Windows clipboard dumper
Inspired by this bug report, I just wrote a small, quick-and-dirty utility to dump the current clipboard content on Windows. Windows development is still pretty much uncharted territory to me, so even a utility as simple as this took me some time. Anyway, you can download the binary from here: clipdump.exe. Note that this is a console utility, so you need to run it from a console window.
Here is the source code.
#include <Windows.h>
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>

using namespace std;

size_t char_per_line = 16;
typedef vector<WORD> line_store_type;

void dump_line(const line_store_type& line)
{
    if (line.empty())
        return;

    size_t fill_size = char_per_line - line.size();

    line_store_type::const_iterator i = line.begin(), iend = line.end();
    for (; i != iend; ++i)
        printf("%04X ", *i);

    while (fill_size--)
        cout << "     "; // 5 chars, matching the width of one hex entry.

    cout << ' ';

    i = line.begin();
    for (; i != iend; ++i)
    {
        WORD c = *i;
        if (32 <= c && c <= 126)
            // ASCII printable range
            cout << static_cast<char>(c);
        else
            // non-printable range
            cout << '.';
    }
    cout << endl;
}

void dump_clip(HANDLE hdl)
{
    if (!hdl)
        return;

    LPTSTR buf = static_cast<LPTSTR>(GlobalLock(hdl));
    if (!buf)
        return;

    line_store_type line;
    for (size_t i = 0, n = GlobalSize(hdl); i < n; ++i)
    {
        line.push_back(buf[i]);
        if (line.size() == char_per_line)
        {
            dump_line(line);
            line.clear();
        }
    }
    dump_line(line); // dump the remainder, if any.

    GlobalUnlock(hdl);
}

int main()
{
    if (!OpenClipboard(NULL))
        return EXIT_FAILURE;

    UINT fmt = 0;
    for (fmt = EnumClipboardFormats(fmt); fmt; fmt = EnumClipboardFormats(fmt))
    {
        char name[100];
        int len = GetClipboardFormatName(fmt, name, 100);
        if (!len)
            // Predefined formats have no registered names.
            name[0] = '\0';

        cout << "---" << endl;
        cout << "format code: " << fmt << endl;
        cout << "name: " << name << endl << endl;

        HANDLE hdl = GetClipboardData(fmt);
        dump_clip(hdl);
    }

    CloseClipboard();
    return EXIT_SUCCESS;
}
It’s nothing sophisticated, and it could probably use more polishing and perhaps some GUI (since it’s a Windows app). But for now it serves the purpose for me.
Tor has submitted his version in the comment section. It is much more sophisticated than mine (and it's C, not C++).
Working with a branch using git-new-workdir
The git package contains a script named git-new-workdir, which allows you to work on a branch in a separate directory on the file system. This differs from cloning a repository in that git-new-workdir doesn't duplicate the git history from the original repository but shares it instead, and when you commit something to the branch, that commit goes directly into the history of the original repository without an explicit push. On top of that, creating a new branch work directory happens almost instantly. It's fast, and it's efficient. It's an absolute time saver for those of us who work on many branches at any given moment, and it doesn't bloat your disk space.
As wonderful as this script can be, not all distros ship it with their git package. If yours doesn't, you can always download the git source packages and find the script there, under the contrib directory. Also, if you have the build repository of libreoffice cloned, you can find it in bin/git-new-workdir too.
Now, I’m going to talk about how I make use of this script to work on the 3.3 branch of LibreOffice.
Creating a branch work directory
If you’ve followed this page to build the master branch of libreoffice, then you should have in your clone of the build repository a directory named clone. Under this directory are your local clones
of the 19 repositories comprising the whole libreoffice source tree. If you are like me, you have followed the above page and built your libreoffice build in the rawbuild directory.
The next step is to create a separate directory just for the 3-3 branch, named libreoffice-3-3, and set things up so that you can build it normally as you did in rawbuild. I've written the following bash script (named create-branch-build.sh) to do this in one single step.
#!/usr/bin/env bash

# Point this to the location of the git-new-workdir script on your file system.
GIT_NEW_WORKDIR=

REPOS=_repos

BOOTSTRAP_DIR="$1"
DEST_DIR="$2"
BRANCH="$3"

print_help() {
    echo Usage: $1 [bootstrap dir] [dest dir] [branch name]
}

die() {
    echo $1
    exit 1
}

if [ "$BOOTSTRAP_DIR" = "" ]; then
    echo bootstrap repo is missing.
    print_help $0
    exit 1
fi

if [ "$DEST_DIR" = "" ]; then
    echo destination directory is missing.
    print_help $0
    exit 1
fi

if [ "$BRANCH" = "" ]; then
    echo branch name is missing.
    print_help $0
    exit 1
fi

if [ -e "$DEST_DIR/$BRANCH" ]; then
    die "$DEST_DIR/$BRANCH already exists."
fi

# Clone bootstrap first.
$GIT_NEW_WORKDIR "$BOOTSTRAP_DIR" "$DEST_DIR/$BRANCH" "$BRANCH" || die "failed to clone bootstrap repo."

# First, check out the branches.
echo "creating directory $DEST_DIR/$BRANCH/$REPOS"
mkdir -p "$DEST_DIR/$BRANCH/$REPOS" || die "failed to create $DEST_DIR/$BRANCH/$REPOS"

for repo in `ls "$BOOTSTRAP_DIR/clone"`; do
    repo_path="$BOOTSTRAP_DIR/clone/$repo"
    if [ ! -d "$repo_path" ]; then
        # we only care about directories.
        continue
    fi
    echo ===== $repo =====
    $GIT_NEW_WORKDIR "$repo_path" "$DEST_DIR/$BRANCH/$REPOS/$repo" $BRANCH
done

# Set symbolic links to the root directory.
cd "$DEST_DIR/$BRANCH"
for repo in `ls $REPOS`; do
    repo_path=$REPOS/$repo
    if [ ! -d "$repo_path" ]; then
        # skip if not directory.
        continue
    fi
    ln -s -t . $repo_path/*
done
The only thing you need to do before running this script is to set the GIT_NEW_WORKDIR variable to point to the location of the git-new-workdir script on your file system.
With this script in place, you can simply
cd .. # move out of the build directory
create-branch-build.sh ./build/clone . libreoffice-3-3
and you now have a new directory named libreoffice-3-3 (same as the branch name), where all modules and top-level files are properly symlinked to their original locations, while the actual repo
branches are under the _repos directory. All you have left to do is to start building. :-)
Note that there is no need to manually create a local branch named libreoffice-3-3 that tracks the remote libreoffice-3-3 branch in the original repository before running this script; git-new-workdir
takes care of that for you provided that the remote branch of the same name exists.
Updating the branch work directory
In general, when you are in a branch work directory (I call it this because it sounds about right), updating the branch from the remote repository consists of two steps: first, fetch the latest history in the original repository with git fetch; then, back in the branch work directory, run git pull -r.
But doing this manually in all the 19 repositories can be very tedious. So I wrote another script (named g.sh) to ease this pain a little.
#!/usr/bin/env bash

REPOS=_repos

die() {
    echo $1
    exit 1
}

if [ ! -d $REPOS ]; then
    die "$REPOS directory not found in cwd."
fi

echo ===== main repository =====
git $@

for repo in `ls $REPOS`; do
    echo ===== $repo =====
    repo_path=$REPOS/$repo
    if [ ! -d "$repo_path" ]; then
        # Not a directory. Skip it.
        continue
    fi
    pushd . > /dev/null
    cd "$repo_path"
    git $@
    popd > /dev/null
done
With this, updating the branch build directory is done with:
g.sh pull -r
That’s all there is to it.
A few more words…
As with any method, this one has limitations. If you build libreoffice the old-fashioned way, applying patches on top of the raw source tree, this method doesn't help you; you would still need to clone the repo and manually switch to the branch in the cloned repo.
But if you build, hack and debug in rawbuild almost exclusively (like me), then this method will save you time and disk space. You can also adopt it for any feature branch, as long as all 19 repos (20 if you count the l10n repo) have the same branch name. So, it's worth a look! :-)
Thank you, ladies and gentlemen.
P.S. I’ve updated the scripts to adopt to the new bootstrap based build scheme.
STL container performance on data insertion
I just ran a quick analysis of the performance of various STL containers on simple data insertion. The result was not exactly what I expected, so I'd like to share it with you.
The test performed sequential insertions of 50,000,000 (50 million) unique pointer values into various STL containers, via either push_back or insert, depending on which method the container supports. I ran the test on openSUSE 11.2, with g++ 4.4.1, using the compiler options -std=c++0x -Os -g. The -std=c++0x flag is necessary in order to use std::unordered_set.
Anyway, here is the result I observed:
I was fully aware that the set containers are slower than list and vector on insertion, due to the internal structure of set being more elaborate than those of list or vector, and this test confirms that. However, I was not aware of such a wide gap between list and vector. Also, the difference between the unreserved and reserved vector was not as wide as I had expected. (For the sake of completeness, a reserved vector is an instance of vector whose internal array is pre-allocated in order to avoid re-allocation.) My belief has always been that reserving a vector in advance improves performance on data insertion, which it does, but I was expecting a wider gap between the two. So, the result I see here is a bit unexpected.
In case you want to re-run this test on your own environment, here is the code I used to measure the containers’ performance:
#include <vector>
#include <unordered_set>
#include <set>
#include <list>
#include <stdio.h>
#include <string>
#include <sys/time.h>

using namespace std;

namespace {

class StackPrinter
{
public:
    explicit StackPrinter(const char* msg) :
        msMsg(msg)
    {
        fprintf(stdout, "%s: --begin\n", msMsg.c_str());
        mfStartTime = getTime();
    }

    ~StackPrinter()
    {
        double fEndTime = getTime();
        fprintf(stdout, "%s: --end (duration: %g sec)\n", msMsg.c_str(), (fEndTime-mfStartTime));
    }

    void printTime(int line) const
    {
        double fEndTime = getTime();
        fprintf(stdout, "%s: --(%d) (duration: %g sec)\n", msMsg.c_str(), line, (fEndTime-mfStartTime));
    }

private:
    double getTime() const
    {
        timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1000000.0;
    }

    ::std::string msMsg;
    double mfStartTime;
};

}

int main()
{
    size_t store_size = 50000000;
    {
        StackPrinter __stack_printer__("vector non-reserved");
        string* ptr = 0x00000000;
        vector<void*> store;
        for (size_t i = 0; i < store_size; ++i)
            store.push_back(ptr++);
    }
    {
        StackPrinter __stack_printer__("vector reserved");
        string* ptr = 0x00000000;
        vector<void*> store;
        store.reserve(store_size);
        for (size_t i = 0; i < store_size; ++i)
            store.push_back(ptr++);
    }
    {
        StackPrinter __stack_printer__("list");
        string* ptr = 0x00000000;
        list<void*> store;
        for (size_t i = 0; i < store_size; ++i)
            store.push_back(ptr++);
    }
    {
        StackPrinter __stack_printer__("set");
        string* ptr = 0x00000000;
        set<void*> store;
        for (size_t i = 0; i < store_size; ++i)
            store.insert(ptr++);
    }
    {
        StackPrinter __stack_printer__("unordered set");
        string* ptr = 0x00000000;
        unordered_set<void*> store;
        for (size_t i = 0; i < store_size; ++i)
            store.insert(ptr++);
    }
    return 0;
}
This is valid C++ code?
My compiler reported a build error in the following code block today.
long ScDPOutput::GetHeaderDim( const ScAddress& rPos, USHORT& rOrient )
{
    SCCOL nCol = rPos.Col();
    SCROW nRow = rPos.Row();
    SCTAB nTab = rPos.Tab();
    if ( nTab != aStartPos.Tab() )
        return -1; // wrong sheet

    // calculate output positions and sizes

    // test for column header
    if ( nRow == nTabStartRow && nCol >= nDataStartCol && nCol < nDataStartCol + nColFieldCount )
    {
        rOrient = sheet::DataPilotFieldOrientation_COLUMN;
        long nField = nCol - nDataStartCol;
        return pColFields[nField].nDim;
    }

    // test for row header
    if ( nRow+1 == nDataStartRow && nCol >= nTabStartCol == nCol < nTabStartCol + nRowFieldCount )
    {
        rOrient = sheet::DataPilotFieldOrientation_ROW;
        long nField = nCol - nTabStartCol;
        return pRowFields[nField].nDim;
    }

    // test for page field
    SCROW nPageStartRow = aStartPos.Row() + ( bDoFilter ? 1 : 0 );
    if ( nCol == aStartPos.Col() && nRow >= nPageStartRow && nRow < nPageStartRow + nPageFieldCount )
    {
        rOrient = sheet::DataPilotFieldOrientation_PAGE;
        long nField = nRow - nPageStartRow;
        return pPageFields[nField].nDim;
    }

    //! single data field (?)
    rOrient = sheet::DataPilotFieldOrientation_HIDDEN;
    return -1; // invalid
}
In particular, my compiler didn't like the == (equality operator) in nTabStartCol == nCol in the third if statement block from the top. Looking at the if statements before and after it, you'll probably say "yeah, it looks like that '==' was supposed to be &&, so what's the surprise?" Well, the thing is, this piece of code had not changed for at least a few years, which means it was compiling just fine up until today (though it may have caused a bug somewhere…). And even today, it compiled fine before I made a few changes that were not related to this method, and I didn't modify this method itself at all.
I have to wonder why this code block compiled fine up till today, and what change of mine triggered the compiler to complain all of a sudden if the method itself is unchanged…. :-/
Excel sheet protection password hash
When you protect either your workbook or one of your worksheets with a password in Excel, Excel internally generates a 16-bit hash of your password and stores it instead of the original password text. The hashing algorithm used for this was previously unknown, but thanks to the infamous Office Open XML specification it is now documented for the world to see (take a look at Part 4, Section 3.3.1.81 – Sheet Protection Options for the details). Thankfully, the algorithm is identical across recent versions of Excel, including XP, 2003 and 2007, so you can simply reuse the documented algorithm for the older versions of Excel too.
But alas! The documented algorithm is incorrect; it does not produce correct hash values. Determined to find the correct algorithm, I started to analyze the hashes that the documented algorithm produces and compare them with the real hash values Excel generates, in order to decipher the correct algorithm.
In the end, the documented algorithm, although not accurate, was close enough that I was able to make a few changes and derive an algorithm that generates correct values. The following code:
#include <stdio.h>
#include <string.h> // for strlen()

typedef unsigned char sal_uInt8;
typedef unsigned short sal_uInt16;

sal_uInt16 getPasswordHash(const char* szPassword)
{
    sal_uInt16 cchPassword = strlen(szPassword);
    sal_uInt16 wPasswordHash = 0;
    if (!cchPassword)
        return wPasswordHash;

    const char* pch = &szPassword[cchPassword];
    while (pch-- != szPassword)
    {
        wPasswordHash = ((wPasswordHash >> 14) & 0x01) |
                        ((wPasswordHash << 1) & 0x7fff);
        wPasswordHash ^= *pch;
    }

    wPasswordHash = ((wPasswordHash >> 14) & 0x01) |
                    ((wPasswordHash << 1) & 0x7fff);

    wPasswordHash ^= (0x8000 | ('N' << 8) | 'K');
    wPasswordHash ^= cchPassword;

    return wPasswordHash;
}

int main (int argc, char** argv)
{
    if (argc < 2)
        return 1;

    printf("input password = %s\n", argv[1]);
    sal_uInt16 hash = getPasswordHash(argv[1]);
    printf("hash = %4.4X\n", hash);
    return 0;
}
produces the right hash value from an arbitrary password. One caveat: this algorithm takes an 8-bit char array, so if the input value consists of 16-bit unicode characters, it needs to first be converted into an 8-bit character array. The conversion algorithm is also documented in the OOXML specification. I have not tested it yet, but I hope that algorithm is correct. ;-)
Missing vcl resource
At one point in the past, I started getting this annoying error message dialog
on startup, and OO.o simply shuts itself down after that. It happened whenever I installed the trunk version of ooo-build with ooinstall (with the -l option for linking), or the upstream build with linkoo. These are two totally separate scripts, but they both create symbolic links from the shared libraries and resources in the installation directory to their respective locations in the build (actually, ooinstall makes use of linkoo for the -l functionality). This setup allows fast iteration of code change, compilation and testing without having to manually swap the shared libraries each time they get modified in the build. But because of this problem I was not able to use linkoo with the upstream build, or ooinstall -l with the trunk ooo-build, and was forced to manually symlink the modules I was working on. (Somehow, the 2.2 and 2.1 branches of ooo-build didn't have this problem.)
But today, after not having been able to use an automatically symlinked installation for a long, long time, I got tired of it and decided to jump into the code, to see what causes this problem.
After a little tracing of the code, I finally found the offending code block (tools/source/rc/resmgr.cxx#244):
void ResMgrContainer::init()
{
    // get resource path
    std::list< OUString > aDirs;
    sal_Int32 nIndex = 0;

    // 1. relative to current module (<installation>/program/resource)
    OUString libraryFileUrl;
    if( Module::getUrlFromAddress(
            reinterpret_cast< oslGenericFunction >(ResMgrContainer::release),
            libraryFileUrl) )
    {
        nIndex = libraryFileUrl.lastIndexOf( '/' );
        DBG_ASSERT( nIndex > 0, "module resolution failed" );
        if( nIndex > 0 )
        {
            OUStringBuffer aBuf( libraryFileUrl.getLength() + 16 );
            aBuf.append( libraryFileUrl.getStr(), nIndex+1 ); // copy inclusive '/'
            aBuf.appendAscii( "resource" );
            aDirs.push_back( aBuf.makeStringAndClear() );
        }
    }

    // 2. in STAR_RESOURCEPATH
    // (rest of the function omitted)
Here is what the code does. It tries to locate all of the resource (.res) files and put their locations into an internal data structure. To do so, it needs to know where to find the resource files. It cleverly determines the resource file directory by first getting the absolute path of the module where the code resides (libtl680li.so), and then descending to the resource file directory from that location. In a normal installation, the modules are located in [installation root]/program/, and the resource directory is only one level down.
However, when the shared library in question is a symbolic link (symlink) to another file, the code ends up getting the path of the actual file the symlink points to (via the dladdr call), instead of the path of the symlink itself, and this causes the above problem.
There is an easy workaround. Since only the shared library containing the ResMgrContainer class (libtl680li.so, as mentioned) needs to be an actual file, you can simply delete the symlink that points to libtl680li.so and put the original file in its place. Then OO.o launches just fine. You can leave all the other symlinked shared libraries alone. The only problem with this workaround is that, if you want to hack on the tools code, you need to manually swap the shared library on each module rebuild; but for me that's not a major problem (I don't hack on the tools code).
SSE2 Instructions
For the past several weeks I have been studying x86 assembly language, mainly because I wanted to update my knowledge of assembly to match the latest CPU technology. I had previously taken an x86 assembly language course at NCSU roughly a year ago, but the course only covered the 8086 instruction set, and used MASM version 6.0 as the assembler, which is only good for writing MS-DOS applications. I wanted to at least learn how to do floating-point calculations in assembly, and do it in GNU assembly so that my apps would run on Linux.
There are quite a few extensions to the core x86 instruction set, such as FPU, MMX, SSE, and SSE2. The FPU has taken care of normal floating-point calculations since the 80386, MMX performs multiple integer calculations in a single CPU cycle, SSE does multiple single-precision calculations, and SSE2 multiple double-precision calculations (again, in a single CPU cycle). Since software these days, and OO.o in particular, seems to do almost all of its floating-point calculations in double precision, I decided to give SSE2 a little benchmark test.
Here is how I did it. I wrote a simple mathematical routine in C and compiled it normally with gcc at -O1 optimization. Then I had gcc generate the assembly code for that routine, cleaned it up a bit, replaced several instructions with SSE2 instructions, and reassembled it into an executable to run the benchmark.
Here is the original C code for the routine:
void array_multiply(double* array1, double* array2, unsigned int size)
{
    // Make nloop a multiple of 2.
    unsigned int nloop = size/2;
    nloop += nloop;
    unsigned int i = 0;
    for (; i < nloop; i += 2)
    {
        array1[i] *= array2[i] * array2[i] * array2[i];
        array1[i+1] *= array2[i+1] * array2[i+1] * array2[i+1];
    }
}
and this is the assembly instructions that gcc generated (with -O1):
.align 2
.globl array_multiply
    .type array_multiply, @function
array_multiply:
    pushl %ebp
    movl %esp, %ebp
    pushl %ebx
    movl 8(%ebp), %edx
    movl 12(%ebp), %ebx
    movl 16(%ebp), %ecx
    andl $-2, %ecx
    je .L5
    movl $0, %eax
.L4:
    fldl (%ebx,%eax,8)
    fld %st(0)
    fmul %st(1), %st
    fmulp %st, %st(1)
    fmull (%edx,%eax,8)
    fstpl (%edx,%eax,8)
    fldl 8(%ebx,%eax,8)
    fld %st(0)
    fmul %st(1), %st
    fmulp %st, %st(1)
    fmull 8(%edx,%eax,8)
    fstpl 8(%edx,%eax,8)
    addl $2, %eax
    cmpl %eax, %ecx
    ja .L4
.L5:
    popl %ebx
    popl %ebp
    ret
It does all the calculations using FPU instructions. And here is the assembly code after I replaced the FPU instructions with SSE2 ones:
.section .text
.align 16
.globl array_multiply
    .type array_multiply, @function
array_multiply:
    pushl %ebp
    movl %esp, %ebp
    pushl %ebx
    movl 8(%ebp), %edx          # pointer to array1
    movl 12(%ebp), %ebx         # pointer to array2
    movl 16(%ebp), %ecx         # size
    andl $-2, %ecx              # make the size a multiple of 2
    je .L5
    movl $0, %eax               # i = 0
.L4:
    movapd (%edx,%eax,8), %xmm0 # SSE2
    movupd (%ebx,%eax,8), %xmm1 # SSE2
    mulpd %xmm1, %xmm0          # SSE2
    mulpd %xmm1, %xmm0          # SSE2
    mulpd %xmm1, %xmm0          # SSE2
    movapd %xmm0, (%edx,%eax,8) # SSE2
    addl $2, %eax               # i += 2
    cmpl %eax, %ecx
    ja .L4
.L5:
    popl %ebx
    popl %ebp
    ret
I then used the following main C++ code
#include <iostream>

using namespace std;

// array_multiply is compiled separately (the C or the assembly version).
extern "C" void array_multiply(double* array1, double* array2, unsigned int size);

void print_array(double* array, unsigned int size)
{
    cout << "{ ";
    for (unsigned int i = 0; i < size; ++i)
    {
        if (i)
            cout << ", ";
        cout << array[i];
    }
    cout << " }" << endl;
}

int main()
{
    double myarray1[] = {10.5, 50.0, 25.0, 10.0, 2345.4848, 594.23, 0.4, 87.2};
    double myarray2[] = {1.2, 50.0, 1.5, 10.0, 120.9, 44.09, 874.234, 233.333};
    unsigned int array_size = 8;
    cout << "myarray1 = ";
    print_array(myarray1, array_size);
    cout << "myarray2 = ";
    print_array(myarray2, array_size);

    double myarray[array_size];
    for (long counter = 0; counter < 99999999; ++counter)
    {
        for (unsigned int i = 0; i < array_size; ++i)
            // To prevent calculation results from being cached.
            myarray[i] = myarray1[i] + 0.000000001*counter;
        array_multiply(myarray, myarray2, array_size);
    }

    for (unsigned int i = 0; i < array_size; ++i)
        cout << i << "\t" << myarray[i] << endl;

    return 0;
}
to call both the C version and the assembly with SSE2 version to compare performance. The executables with the original C version and the SSE2 version are named test_c and test_sse, respectively.
Here is the result (on my machine):
$ time ./test_c
myarray1 = { 10.5, 50, 25, 10, 2345.48, 594.23, 0.4, 87.2 }
myarray2 = { 1.2, 50, 1.5, 10, 120.9, 44.09, 874.234, 233.333 }
0 18.3168
1 6.2625e+06
2 84.7125
4 4.14505e+09
5 5.09387e+07
6 3.34082e+08
7 1.10903e+09
real 0m3.308s
user 0m3.292s
sys 0m0.012s
$ time ./test_sse
myarray1 = { 10.5, 50, 25, 10, 2345.48, 594.23, 0.4, 87.2 }
myarray2 = { 1.2, 50, 1.5, 10, 120.9, 44.09, 874.234, 233.333 }
0 18.3168
1 6.2625e+06
2 84.7125
4 4.14505e+09
5 5.09387e+07
6 3.34082e+08
7 1.10903e+09
real 0m2.451s
user 0m2.436s
sys 0m0.000s
Indeed, the SSE2 version performs better! I also compared my SSE2 version against the -O3-optimized C code, but for this particular algorithm there was not much difference from the -O1-optimized code. But of course, YMMV.
Does this mean we should start writing assembly code for performance optimization? Probably not. I still think it's much better to write a better algorithm in the first place than to re-write code in assembly language, because re-writing in assembly is itself no guarantee of better performance. For serious performance work, however, knowing what assembly code the compiler generates from your C/C++ code under various optimization flags will undoubtedly pay off. You never know: in a few extreme cases it may even make sense to write parts of the application in assembly, even if that means writing that part for every platform the app needs to support. OO.o has parts of its UNO bridge written in assembly, and I've seen some assembly code in the Firefox codebase as well.
Oh, by the way, for anyone looking for a good study guide on GNU assembly for X86 family of chips, the “Professional Assembly Language” by Richard Blum (Wiley Publishing) would be a pretty good place
to start.
How to (pretend to) write an export filter
It turns out that pretending to write an export filter (at least adding a new entry to the Export dialog) is quite easy. In fact, you don't even have to write a single line of code. Here is what to do.
Suppose you do your own build, and you have installed the OO.o that you have built. Now, go back to your build tree, and change directory into the following location
and add the following two new files relative to this location:
You can name your files any way you want, of course. ;-) Anyway, put the following XML fragments into these files:
<!-- calc_Kohei_SDF_Filter.xcu -->
<node oor:name="calc_Kohei_SDF_Filter" oor:op="replace">
    <prop oor:name="Flags"><value>EXPORT ALIEN 3RDPARTYFILTER</value></prop>
    <prop oor:name="UIComponent"/>
    <prop oor:name="FilterService"><value>com.sun.star.comp.packages.KoheiSuperDuperFileExporter</value></prop>
    <prop oor:name="UserData"/>
    <prop oor:name="FileFormatVersion"/>
    <prop oor:name="Type"><value>Kohei_SDF</value></prop>
    <prop oor:name="TemplateName"/>
    <prop oor:name="DocumentService"><value>com.sun.star.sheet.SpreadsheetDocument</value></prop>
</node>
<!-- calc_Kohei_SDF_Filter_ui.xcu -->
<node oor:name="calc_Kohei_SDF_Filter">
    <prop oor:name="UIName">
        <value xml:lang="x-default">Kohei Super Duper File Format</value>
        <value xml:lang="en-US">Kohei Super Duper File Format</value>
        <value xml:lang="de">Kohei Super Duper File Format</value>
    </prop>
</node>
Likewise, create another file:
with the following content
<!-- Kohei_SDF.xcu -->
<node oor:name="Kohei_SDF" oor:op="replace">
    <prop oor:name="DetectService"/>
    <prop oor:name="URLPattern"/>
    <prop oor:name="Extensions"><value>koheisdf</value></prop>
    <prop oor:name="MediaType"/>
    <prop oor:name="Preferred"><value>false</value></prop>
    <prop oor:name="PreferredFilter"><value>calc_Kohei_SDF_Filter</value></prop>
    <prop oor:name="UIName"><value xml:lang="x-default">Kohei Super Duper File Format</value></prop>
    <prop oor:name="ClipboardFormat"><value>doctype:Workbook</value></prop>
</node>
Once these new files are in place, add them to fcfg_calc.mk so that the build process can find them: open fcfg_calc.mk and add Kohei_SDF to the end of T4_CALC, calc_Kohei_SDF_Filter to F4_CALC, and calc_Kohei_SDF_Filter_ui to F4_UI_CALC. Save the file and rebuild the module. This should rebuild the following configuration files (build done on Linux):
One note: the language pack zip package should contain the file named Filter.xcu with the new UI string you just put in. If you don’t see that, remove the whole unxlngi6.pro directory and build the
module again.
Now it’s time to update your installation. You need to update the following files:
with the new ones you just rebuilt. Next, unpack the langpack zip file and extract Filter.xcu. Place this file in
to replace the old one.
Ok so far? There is one more thing you need to do to complete the process. Since these configuration files are cached, the cached data must be removed for the updated configuration files to take effect. The cached data lives in the user configuration directory, so you need to locate and delete the following directory:
rm -rf <user_config_dir>/user/registry/cache
That's it! Now fire up Calc and launch the Export dialog. You'll see the new file format entry you just put in. :-)
Just try not to export your file using this new filter for real, because that will utterly fail. ;-)
Mathsland National Lottery
Copyright © University of Cambridge. All rights reserved.
Derin from Woodhouse College gave a clear presentation of his reasoning:
At the first draw, there are 6 balls in the bag, and any one of your 3 numbers needs to be picked; this gives you a $\frac{3}{6}$ chance (or $\frac{1}{2}$).
For your second ball to be chosen, there are now five balls in the bag and two possible balls which, if picked, would lead to a win; therefore your chance for the second ball is $\frac{2}{5}$. For the last ball, there are now four balls left in the bag, only one of which has your number painted on it, so the chance of your ball being picked is $\frac{1}{4}$.
In order for all three of your balls to be picked sequentially, you must multiply the probabilities of each being chosen on their own, i.e. $\frac{3}{6}\times\frac{2}{5}\times\frac{1}{4}=\frac{1}{20}$; therefore you have a one-in-twenty (or 0.05) chance of winning.
Ayden from Melbourn Village College calculated the solution to the rest of the problem. He also noticed that the chances of picking two balls out of six and four balls out of six is the same.
If you had a 2 out of 6 ball lottery it would increase your chances of winning, which is $\frac{1}{15}$ as opposed to $\frac{1}{20}$.
Drawing 4 balls instead of 2 will keep the chance of winning the same, $\frac{1}{15}$.
Having a 1 ball lottery will dramatically increase your chance of winning, which is $\frac{1}{6}$.
Having a 5 ball lottery will give you the same chance of winning, $\frac{1}{6}$.
To have the least possibility of winning in a 10-ball lottery, you would need to pick 5 balls.
The probability of winning the National Lottery is $\frac{1}{13983816}$; this is often rounded to $\frac{1}{14000000}$.
Phil from Wilson's School provided more explanations and some shrewd insights on how to solve the last parts of the problem
If the Mathsland lottery is using a ten-ball lottery and wants to make the least chance of winning as possible, then you must need to pick 5 balls correctly to win. This is because if you look at the
six-ball lottery the chance of winning was its lowest when three balls were needed to be picked correctly. This is half of three and either side of 3 balls, 2 and 4, the chance of winning went back
For example, when five out of ten must be picked, the chance of winning is $\frac{1}{2}\times\frac{4}{9}\times\frac{3}{8}\times\frac{2}{7}\times\frac{1}{6}=\frac{24}{6048}=\frac{1}{252}$, which is smaller than the chance for four balls ($\frac{2}{5}\times\frac{1}{3}\times\frac{1}{4}\times\frac{1}{7}=\frac{1}{210}$) or for six balls (also $\frac{1}{210}$); therefore, if the Mathsland lottery is organising a ten-ball lottery, 5 balls should be predicted, as this results in the lowest winning chances.
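Phil's observation can be checked directly by counting combinations. The sketch below is a quick verification in Python (the helper name `p_win` is mine, not from the solution):

```python
from math import comb

# In an n-ball lottery where k balls are drawn, there are comb(n, k)
# equally likely sets of winning balls, exactly one of which is yours.
def p_win(n, k):
    return 1 / comb(n, k)

# Sweep k for the ten-ball Mathsland lottery discussed above.
probs = {k: p_win(10, k) for k in range(1, 10)}
hardest = min(probs, key=probs.get)   # the k giving the smallest chance

print(hardest)        # 5
print(comb(10, 5))    # 252, matching the 1/252 worked out above
print(comb(6, 3))     # 20, the original six-ball lottery
```

The symmetry Ayden noticed also falls out for free: `comb(6, 2) == comb(6, 4)`.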
The chance of winning the UK National Lottery is $\frac{6}{49}\times\frac{5}{48}\times\frac{4}{47}\times\frac{3}{46}\times\frac{2}{45}\times\frac{1}{44}=\frac{1}{13,983,816}$, or approximately 1 in 14 million.
However you can also win money for guessing 3, 4 or 5 numbers and the chances are:
3 numbers: 1/56.7
4 numbers: 1/1032.4
5 numbers: approximately 1/55,491.3 (recurring).
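Those prize odds come from a hypergeometric count. Here is a short Python check (the variable names are mine, and the bonus-ball handling for the "5 numbers" prize is my reading of the quoted 1/55,491.3 figure):

```python
from math import comb

TOTAL, DRAWN = 49, 6
outcomes = comb(TOTAL, DRAWN)                 # 13,983,816 possible draws

# Tickets matching exactly m of the 6 drawn numbers (bonus ball ignored):
# choose m winners from the 6 drawn, and 6 - m losers from the other 43.
def ways(m):
    return comb(DRAWN, m) * comb(TOTAL - DRAWN, DRAWN - m)

odds_3 = outcomes / ways(3)                   # ~ 56.7
odds_4 = outcomes / ways(4)                   # ~ 1032.4
# The "5 numbers" prize excludes the bonus ball, so only 42 of the 43
# undrawn numbers qualify: 6 * 42 = 252 winning tickets.
odds_5 = outcomes / (comb(DRAWN, 5) * 42)     # = 55,491.333...

print(round(odds_3, 1), round(odds_4, 1), round(odds_5, 1))
```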
George, also from Wilson's School, compared the result obtained with the simulator to the theoretical result. Great work! It is worth pointing out that picking 2 balls out of 6 balls is essentially the same as picking 4 balls out of 6 balls, since by picking 2 balls and discarding 4 balls, you are also choosing 4 balls to discard and keeping 2 balls. The two processes are equivalent. The same goes for picking 1 out of 6 and picking 5 out of 6, etc. Well done to everyone!
n-Transport and Higher Schreier Theory
Posted by Urs Schreiber
We are interested in categorifying the notion of parallel transport in a fiber bundle with connection.
There are several ways to define an ordinary connection on an ordinary bundle. Depending on which of these we start with, we end up with categorifications that may differ.
One definition goes like this:
Given a principal $G$-bundle $B \to X$, let
• $\mathrm{Trans}(B) = B\times B/G$ be the transport groupoid of $B$, whose objects are the fibers of $B$ and whose morphisms are the torsor morphisms between these;
• $P(X)$ be the groupoid of thin homotopy classes of paths in $X$ (meaning that we divide out by orientation-preserving diffeomorphisms and let orientation-reversing diffeos send a path to its
inverse class).
Then a connection on $B$ is a smooth functor
(1)$\mathrm{tra} : P(X) \to \mathrm{Trans}(B) \,.$
This definition has an obvious categorification. Working it out ($\to$, $\to$), one finds a notion of 2-connection with a special property that has been termed “fake flatness”.
There are a couple of applications where precisely this fake flatness is required ($\to$). For others, however, fake flatness is too restrictive ($\to$, $\to$).
Now, there have been several indications that in order to get a slightly more general categorification we need a definition of connection with parallel transport which somehow involves not just the
gauge group, but its automorphism 2-group ($\to$).
In fact, Danny Stevenson has developed a rather beautiful theory of connections - without an explicit description of parallel transport - and their categorification, by using not transport along
finite paths, but infinitesimal/differential transport. He sees essentially this automorphism-extension appearing there and does get around fake flatness.
Danny Stevenson
Lie 2-Algebras and the Geometry of Gerbes
Chicago Lectures on Higher Gauge Theory, April 7-11, 2006
This is directly inspired by
Lawrence Breen
Théorie de Schreier supérieure
Annales Scientifiques de l’École Normale Supérieure Sér. 4, 25 no. 5 (1992), p. 465-514
In this entry here I want to understand the integrated, finite version of Danny’s theory. Where he uses morphisms of Lie algebroids, I would like to see morphisms of Lie groupoids (smooth functors
between smooth groupoids) along the lines of the first definition of connection with parallel transport stated above.
I had begun making comments on that over in the comment section of the 10D supergravity thread ($\to$). But it does deserve an entry of its own.
Danny’s concept of connection is based on a fundamental idea called Schreier theory, which is about the classification of fibrations.
You can get a good idea of what this is about by looking at
John Baez
TWF 223
and following the references given there.
Danny starts his discussion with the following standard observation.
Given any principal $G$-bundle $B \to X$, we obtain an exact sequence of vector bundles from it
(1)$0 \to \mathrm{ad}(B) \to T B/G \to T X \to 0 \,,$
called the Atiyah sequence.
Here $\mathrm{ad}(B)$ is the vector bundle associated to $B$ by using the adjoint action of $G$ on its Lie algebra.
I believe this sequence actually extends to a sequence of Lie algebroids ($\to$), all with anchor maps to $T X$.
This is important for what I would like to discuss here, since I would like to integrate these Lie algebroids to Lie groupoids.
It is a well-known standard fact, that a splitting
(2)$\nabla : T X \to T B/G$
of the Atiyah sequence is the same as a connection on $B$. In general, this splitting is just a splitting at the level of morphisms of vector bundles, not at the level of Lie algebroids. The failure of $\nabla$ to actually be a morphism of Lie algebroids is measured by its curvature 2-form.
That should make us wonder. If everything here lives in the world of Lie algebroids, we do expect connections to be expressible in terms of Lie algebroid morphisms.
Danny explains what is going on by comparing with the general idea of higher Schreier theory.
There, too, we are dealing with splittings of short exact sequences
(3)$1 \to K \to G \to B \to 1$
which fail to respect the available structure. But there it turns out that the structure is in fact respected one level higher. The splitting
(4)$B \to G$
actually extends to a homomorphism
(5)$B \to \mathrm{AUT}(K) \,,$
where $\mathrm{AUT}(K)$ is an $(n+1)$-categorical structure if $K$ is an $n$-categorical structure.
In terms of the concrete example we are dealing with here, this means the following.
The algebroids we are talking about involve the Lie algebras of sections of the bundles that appear in the Atiyah sequence
(6)$0 \to \Gamma(\mathrm{ad}(B)) \to \Gamma(T B/G) \to \Gamma(T X) \to 0 \,.$
The “automorphism Lie 2-algebra” of the Lie algebra $\Gamma(\mathrm{ad}(B))$ is usually called the Lie 2-algebra of autoderivations. Danny writes
(7)$\mathrm{DER}(\Gamma(\mathrm{ad}(B))) \,.$
He notes that combining the splitting $\nabla : TM \to TB/G$ with its curvature, regarded as a linear map $\bigwedge^2 T X \to \mathrm{ad}(B)$, does yield a morphism of Lie 2-algebras
(8)$(\nabla, F_\nabla) : \Gamma(T X ) \to \mathrm{DER}(\Gamma(\mathrm{ad}(B))) \,.$
This now is indeed a homomorphism (though of 2-algebras instead of 1-algebras). In analogy to the former situation, this property of being a homomorphism again is equivalent to a condition which says
that this linear map is “flat” in some sense.
But now this flatness is something desirable. It encodes precisely the Bianchi identity satisfied by the curvature.
From a different point of view I had described this idea of forming a flat “curvature $n+1$-gerbe” from a given $n$-gerbe with connection and parallel transport here.
But I would like to now understand this more systematically - by “integrating” Danny’s theory to a theory of sequences of Lie groupoids and their splittings.
My intention here is not to present a fully worked-out idea, but to start by discussing some first observations.
I believe it is known what the Lie groupoids corresponding to the three Lie algebroids appearing in the Atiyah sequence are. They should be the following.
• The Lie algebroid $T M \stackrel{\mathrm{Id}}{\to} T M$ should be the differential version of the fundamental groupoid $P(X)$ of $X$, whose objects are points of $X$ and whose morphisms are
homotopy classes of paths in $X$. (This is at least true when $X$ is simply connected.)
• The Lie algebroid $T B/G \to T X$ should be the differential version of the transport groupoid $\mathrm{Trans}(B) = B \times B / G \to X$, whose objects are the fibers of $B$ and whose morphisms
are the torsor morphisms between these.
• The Lie algebroid $\mathrm{ad}(B) \to T X$ should be the differential version of the skeletal groupoid $\mathrm{Ad}(B) \to X$, which I guess should be called the endomorphism groupoid of $B$. It
is just a bundle of groups over $X$ obtained by associating $G$ by the adjoint action of $G$ on itself to $B$.
Assuming this is true, the integrated version of the Atiyah sequence of $B$ would be
(9)$\mathrm{Ad}(B) \to \mathrm{Trans}(B) \to P(X) \,.$
Here the morphisms are supposed to be the obvious smooth functors.
The first one takes a group element in a fiber $\mathrm{Ad}(B)_x$ of $\mathrm{Ad}(B)$ and interprets it as a torsor morphism $B_x \to B_x$.
The second functor takes a torsor morphism $B_x \to B_y$ and sends it to the corresponding class of paths $x \to y$. (Here in my notation I am assuming that $X$ is simply connected. This should
generalize to the general case in the obvious way.)
Clearly, the kernel of the second functor is precisely the image of the first one. Moreover, the first one is monic, the second one is epi, so we do have an exact sequence.
I conjecture that differentiating this sequence of morphisms of groupoids yields precisely the Atiyah sequence of algebroids. But I haven't tried to write down a rigorous proof for this.
Now, with a fibration of groupoids in hand, we need to know Schreier theory for groupoids in order to have a chance to translate Danny’s concepts to the world of groupoids.
Luckily this is discussed in this nice paper:
V. Blanco, M. Bullejos, E. Faro
Categorical non abelian cohomology, and the Schreier theory of groupoids
Even more luckily, these authors find that to discuss a sequence of groupoids
(10)$K \to E \to G$
we want to assume that $K$ is skeletal, i.e., that it is just a bundle of groups! That's precisely the situation we found above, so we can apply Schreier theory of groupoids to our integrated Atiyah sequence
(11)$\mathrm{Ad}(B) \to \mathrm{Trans}(B) \to P(X) \,.$
According to the results of this paper, now, the analog of Danny’s algebroid morphism
(12)$(\nabla,F_\nabla) : T X \to \mathrm{DER}(\mathrm{ad}(B))$
is now a pseudofunctor
(13)$(\mathrm{tra},\mathrm{curv}_\mathrm{tra}) : P(X) \to \mathrm{AUT}(\mathrm{Ad}(G)) \,,$
where (this definition is hidden on p. 4 of the above paper, penultimate paragraph)
(14)$\mathrm{AUT}(\mathrm{Ad}(G))$
is the 2-groupoid whose
• objects are the fibers of $\mathrm{Ad}(G)$, which we may identify with the points $x\in X$
• 1-morphisms are group isomorphisms $\mathrm{Ad}(G)_x \to \mathrm{Ad}(G)_y$
• 2-morphisms are natural isomorphisms between these.
I expect that
(15)$(\mathrm{tra},\mathrm{curv}_\mathrm{tra}) : P(X) \to \mathrm{AUT}(\mathrm{Ad}(G))$
is the right notion of parallel transport whose differential version yields Danny’s conception of connection in terms of algebroid morphisms.
We can make some quick consistency checks of this claim.
Assume that $B$ is trivial. Then the above says a connection on $B$ is a pseudofunctor
(16)$P(X) \to \mathrm{AUT}(G) \,.$
And that’s true. Using a connection 1-form, we pick any representatives of paths between given pairs of points and associate to these paths the group element $P \exp(\int_\path A)$. This won’t
respect the composition of the pair groupoid $P(X)$, unless $A$ is flat. But the failure of composition to be respected strictly is given by a nontrivial compositor, which precisely encodes the
curvature of $A$. Together, this does give the required pseudofunctor.
(I should draw a simple diagram to illustrate this. Maybe I’ll type one into an extra pdf.)
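In the meantime, a schematic version of that diagram can be written down. Everything below is my own gloss on the consistency check just given; in particular, the interpolating surface $\Sigma$ and the surface-ordered exponential notation are assumptions, not notation from the post:

```latex
% Sketch (my gloss): for composable paths \gamma_1 : x \to y and
% \gamma_2 : y \to z with chosen representatives, set
%   tra(\gamma) = P\exp\big(\int_\gamma A\big).
% Functoriality fails by a compositor 2-cell
%   c_{\gamma_1,\gamma_2} :
%     \mathrm{tra}(\gamma_2)\,\mathrm{tra}(\gamma_1)
%       \Rightarrow \mathrm{tra}(\gamma_2 \circ \gamma_1),
% controlled, for a surface \Sigma interpolating between the two sides,
% schematically by the curvature:
c_{\gamma_1,\gamma_2}
  \;\sim\;
  P\exp\!\Big(\int_{\Sigma} F_A\Big) \;\in\; G ,
\qquad
F_A = dA + A \wedge A .
% Flat A (F_A = 0) makes c trivial and tra a strict functor, matching the
% remark above that composition is respected strictly only when A is flat.
```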
Similarly in the categorified case. Locally, or equivalently for trivial 2-bundles, the above says that a $G_2$-2-connection with parallel transport is a pseudo-2-functor to the 3-group $\mathrm{AUT}
(G_2)$. That this does in fact yield the expected result is part of what I checked in my $n$-curvature entry ($\to$).
So it seems that the above is on the right track.
Addendum: When comparing these consistency checks with the above discussion, one should note that there is a natural way to pass between $n$-functors on cubical $n$-paths which satisfy a certain
flatness constraint and pseudofunctors on the pair groupoid.
The latter associate something to 1-simplices, such that composition is respected up to something involving 2-simplicies, which satisfy something involving 3-simplices, and so on. At the highest
level there is just an equation and no more data is associated to higher-dimensional simplices. That is what yields the flatness constraint.
We pass between these two pictures by slicing $n$-cubes into $n$-simplices or gluing $n$-simplices to $n$-cubes.
Posted at September 5, 2006 1:53 PM UTC
Re: n-Transport and Higher Schreier Theory
Beware the term 'fibration'. To a homotopy theorist it's likely to mean the homotopy lifting property or locally fibre homotopy trivial. See the posting on the arXiv Sept 11 by Wirth and Stasheff. 'Just like bundles' EXCEPT everything up to strong homotopy, e.g., a homotopy coherent functor $tra: P_{x_0}(X) \to H(F)$ and 1-cocycles up to strong homotopy coherence.
Posted by: jim stasheff on September 10, 2006 8:13 PM | Permalink | Reply to this
Re: n-Transport and Higher Schreier Theory
Beware the term ‘fibration’.
Okay. What would be a better term to use where I used “fibration”?
See posting on the arXiv Sept 11
by Wirth and Stasheff.
Thanks! I’ll have a look at that.
Posted by: urs on September 11, 2006 5:37 PM | Permalink | Reply to this
The identity of an actual EMG signal that originates in the muscle is lost due to the mixing of various noise signals or artifacts. The attributes of the EMG signal depend on the internal structure
of the subject, including the individual skin formation, blood flow velocity, measured skin temperatures, the tissue structure (muscle, fat, etc.), the measuring site, and more. These attributes
produce different types of noise signals that can be found within the EMG signals. This may have an effect on the result of feature extraction and hence affect the diagnosis of the EMG signals.
Various methods of noise elimination have been proposed during the EMG signal acquisition, and the subject continues to be a popular one among practitioners. The main challenges in analyzing the EMG
signals are explained below.
All types of electronic equipment generate electrical noise, otherwise known as “inherent noise”. This noise has frequency components that range from 0 Hz to several thousand Hz. Two kinds of EMG
signals in widespread use include surface EMG, and intramuscular (needle and fine-wire) EMG. To perform intramuscular EMG, a needle electrode or a needle containing two fine-wire electrodes is placed
within the muscle of interest (invasive electrode). However, the use of surface electrodes has become more accepted in clinical and physiological applications [6]. The advantage of surface electrodes
is that they are non-invasive, and the patient need not be anesthetized before placing the electrode. The operation is simple and painless.
For recording the EMG, the non-invasive electrodes are applied to the skin of the subject. For recording purposes, electrodes made of silver/silver chloride (10 × 1 mm) have been found to give
adequate signal-to-noise ratio and are electrically very steady. For this reason, they are widely used as surface electrodes [7]. When the electrode size enlarges, the impedance decreases. However,
electrode size should not be very large. On the other hand, high electrode impedance effectively reduces the signal quality and gives a low signal-to-noise ratio. Therefore, both parameters should be
taken into consideration. Researchers are allowed to use high electrode impedances for experiments in which statistical power is high or in which large numbers of electrodes are necessary, but tend
to switch to low electrode impedances for experiments in which statistical power would otherwise be too low [8]. This noise can be eliminated by using intelligent circuit design and high-quality electronic components.
Movement of the cable connecting the electrode to the amplifier and the interface between the detection surface of the electrode and the skin creates motion artifacts. Muscle fibers generate electric
activity whenever muscles are active [9]. EMG signals are recorded by placing electrodes close to the muscle groups. When the muscle is activated, the length of the muscle decreases and the muscle,
skin and electrodes move with respect to one another. At that time, the electrodes will show some movement artifacts. The frequency range of the motion noise is usually 1–10 Hz and has a voltage
comparable to the amplitude of the EMG. Recessed electrodes can remove the movement artifact significantly, in which a conductive gel layer is used between the skin surface and the
electrode-electrolyte interface. Another type of movement artifact occurs due to the potential difference between skin layers. Recessed electrodes cannot remove this artifact. However, this type of
artifact is attenuated by reducing the skin impedance [10]. Tam and Webster [11] found that scratching the skin reduces these artifacts. Burbank and Webster [12] showed that low skin impedance could
be achieved by using the puncture technique, thus reducing the artifacts. Conforto et al. [13] tested four filtering procedures to reject the motion artifact from an EMG signal during dynamic
contractions. These procedures include the eighth order Chebyshev high pass filters with corner frequency at 20 Hz; the moving average filter; the moving median filter; and the adaptive filter, which
is based on orthogonal Meyer wavelets. They found that the wavelet procedure maintains all the information and detects the time more precisely than the other methods. The relative movement between skin surface electrodes and the innervation zone(s) of the underlying motor units can cause another type of motion artifact. Mesin et al. discovered that the effect of the innervation zone (IZ) on amplitude, frequency and conduction velocity can be estimated from the EMG, as can the effect of electrodes placed close to the IZ. At the same time, they showed that the inter-electrode distance must be small with respect to the distance between the IZ and the tendon, and that no electrode should go beyond this zone [14].
The human body behaves like an antenna—the surface of the body is continuously inundated with electric and magnetic radiation, which is the source of electromagnetic noise. Electromagnetic sources
from the environment superimpose the unwanted signal, or cancel the signal being recorded from a muscle. The amplitude of the ambient noise (electromagnetic radiation) is sometimes one to three times
greater than the EMG signal of interest.
The human body's surface continuously emits electromagnetic radiation, and avoiding exposure to ambient noise on the surface of the Earth is impracticable [15]. The dominant concern for the ambient
noise arises from the 60 Hz (or 50 Hz) radiation from power sources, which is also called Power-Line Interference (PLI). This is caused by differences in the electrode impedances and in stray
currents through the patient and the cables. However, in order to remove the recorded artifact, off-line processing is necessary [10]. A high pass filter can remove the interference if the frequency
of this interference is high. However, if the frequency content of the PLI lies within the EMG band, then it is essential to recognize the nature of the EMG signal. A 50 Hz PLI and its four harmonics (e.g., 100, 200, 300 and 400 Hz) are constructed mathematically by the equation [16]: $\mathrm{PLI}_{\mathrm{ref}} = \cos(2\pi 50 t) + \cos(2\pi 100 t) + \cos(2\pi 200 t) + \cos(2\pi 300 t) + \cos(2\pi 400 t)$
Figure 1 illustrates the general model for the PLI cancellation system. A number of adaptive filter techniques have been proposed for the attenuation of the PLI noise, such as adaptive FIR notch
filter, adaptive IIR notch filter, adaptive notch filter using Fourier transform and so forth. An efficient Laguerre filter can eliminate power-line interference from EMG signals successfully; in
fact, it has been shown to be more effective than other adaptive algorithms. This filter can increase the SNR of an EMG signal significantly without using any information from the power-line
interference [16].
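As an illustration of the reference-signal idea in Figure 1, the sketch below implements a minimal least-squares canceller in plain Python: it correlates the recording against in-phase and quadrature 50 Hz references and subtracts the fitted component. This is only a stand-in for the adaptive and Laguerre filters discussed above (all signal parameters here are invented for the demo), and note that it would also remove any genuine 50 Hz EMG content:

```python
import math

FS, N = 1000, 1000                    # 1 s of samples at 1 kHz
t = [n / FS for n in range(N)]

# Stand-in "EMG" (two off-harmonic tones) plus 50 Hz power-line interference.
emg = [0.5 * math.sin(2 * math.pi * 35 * ti) +
       0.3 * math.sin(2 * math.pi * 120 * ti) for ti in t]
pli = [1.2 * math.cos(2 * math.pi * 50 * ti + 0.7) for ti in t]
x = [e + p for e, p in zip(emg, pli)]

# Correlate against 50 Hz quadrature references (the PLI_ref idea above)
# to estimate the interference's in-phase/quadrature amplitudes.
c = [math.cos(2 * math.pi * 50 * ti) for ti in t]
s = [math.sin(2 * math.pi * 50 * ti) for ti in t]
a = 2 / N * sum(xi * ci for xi, ci in zip(x, c))
b = 2 / N * sum(xi * si for xi, si in zip(x, s))

# Subtract the fitted 50 Hz component; what remains estimates the EMG.
cleaned = [xi - a * ci - b * si for xi, ci, si in zip(x, c, s)]
err = max(abs(cl - e) for cl, e in zip(cleaned, emg))
print(err)   # residual mismatch with the clean "EMG" is negligible
```

Because all tones here sit on exact integer frequencies over the one-second window, the quadrature fit recovers the interference almost exactly; real recordings need the adaptive tracking described in the text.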
An undesired EMG signal from a muscle group that is not commonly monitored is called “crosstalk”. Crosstalk contaminates the signal and can cause an incorrect interpretation of the signal information
[17]. Crosstalk depends on the many physiological parameters [18,19], and can be minimized by choosing electrode size and inter-electrode distances (typically 1–2 cm or the radius of the electrode)
carefully. Electrodes with a smaller surface area reduce bipolar spacing and mathematical differentiation, and the combination of these three methods reduces the potential crosstalk effectively [20].
Crosstalk increases with increasing subcutaneous fat thickness. Lowery et al. showed that the distance from the active fibers increases the decay rate of the cross-correlation function, and acts
faster than crosstalk. The cross-correlation between two EMG signals is neither a qualitative nor a quantitative measure of crosstalk [19]. The main causal factor of crosstalk is the generation of
the non-propagating signal components due to loss of the intracellular action potentials at the tendons. Thus, crosstalk has a different shape with respect to the signals detected directly over an
active muscle and has a broader bandwidth than these signals. The cross-correlation coefficient analysis and high pass filtering method have no effect on crosstalk and are not reliable for reducing
it [21]. Selectivity of EMG electrodes depends on their interspacing, their conductive area, and axis direction with respect to the direction of the underlying muscular fibers. Minimal crosstalk area
(MCA) is defined as a surface where crosstalk versus co-contraction of muscles is minimal. The precise location and measurements of the distance between two bony landmarks are the keys to finding the
“minimal crosstalk area” of the targeted muscle. MCA helps to limit or avoid crosstalk from neighboring muscles [22].
Mezzarane et al. presented the mathematical relationship (see Equation (2) below) between the target muscle EMG and crosstalk [23]: $T_b^2 = R_b^2 + T_i^2 + O_b^2$, where the background EMG activity recorded at the target muscle $= T_b$, the intrinsic activity of the target muscle itself $= T_i$, the crosstalk from the remote muscle $= R_b$ and the crosstalk from other muscles $= O_b$. These random signals, $T_i$, $R_b$ and $O_b$, are assumed to be uncorrelated; hence, the variance of $T_b$ is the sum of the variances of $T_i$, $R_b$ and $O_b$.
Anatomical, biochemical and physiological factors take place due to the number of muscle fibers per unit, depth and location of active fibers, and amount of tissue. These factors are called internal
noise and directly affect EMG signal quality. Conventionally, physical capacitive effects are assumed negligible when analyzing the EMG signals. However, these assumptions might not be valid for
muscle tissue. Both muscle conductivity and permittivity are frequency-dependent (dispersive). Furthermore, skin has a relatively low conductivity and high permittivity such that capacitive effects
would be expected to be significant and the dispersive effects of permittivity will be more pronounced [24]. Therefore, the capacitive effects also act as an internal noise for an EMG signal. The
amount of the tissue between contracting muscles and electrodes, along with their thickness, affect the amplitude of the EMG signal. Hemingway et al. showed that if the thickness of the subcutaneous
tissue between the surface electrode and active muscles increases, then the electromyographic activity decreases [25]. They observed the effect by examining 20 normal subjects who contracted their
muscle force for 45 s. It should be mentioned that all the subjects had varying amounts of subcutaneous tissue. The amount of excess body fat is considered as an internal noise for EMG because it
increases the separation between the active muscle fibers and the detection sites. Under the recording sites, surgical fat layer reduction increases surface EMG signal amplitude [26]. These effects
can be partly reduced by using high pass spatial filters [27].
The amplitude of the EMG signal is quasi-random in nature. The frequency components between 0 and 20 Hz are mostly unstable because they are affected by the firing rate of the motor units. The firing
rate of the motor units is quasi-random in nature. Because of the unstable nature of these components of the signal, it is considered as unwanted noise. The numbers of active motor units, motor
firing rate and mechanical interaction between muscle fibers can change the behavior of the information in the EMG signal [15].
The electrical activity of the heart is the foremost interfering component for surface electromyography (sEMG) in the shoulder girdle, which is called an “electrocardiogram (ECG) artifact” [28].
Cardiac activity (ECG artifact) often contaminates EMG signals, especially in trunk muscle electromyography [29]. The placement of EMG electrodes, which is conducted by a selection of the
pathological muscle group, often decides the level of ECG contamination in EMGs. ECG contamination in EMGs may be kept at a minimal level by common-mode rejection at the recording site, by the
careful placement of bipolar recording electrodes along the heart's axis if possible [30]. Due to an overlap of frequency spectra by ECG and EMG signals and their relative characteristics, such as
non-stationarity and varied temporal shape, it is very difficult to remove the ECG artifacts from the EMG signal [31]. ECG contamination is only visually identifiable below 25% maximum voluntary
contraction (MVC) of EMG activation. However, Hu et al. suggested that the level of corruption by ECG artifacts on sEMG parameters is more serious and prominent under static sEMG measurements [32].
High-pass filtering at 100 Hz essentially removed the effect of ECG interference. Whenever subjects are maintaining constant force contraction, repetitive fluctuation occurs in the intensity of
surface EMG signals due to the ECG artifact. High-pass filter is a very effective method to eliminate this oscillation caused by the ECG artifact [33].
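The effect of such high-pass filtering can be sketched with a simple one-pole discrete high-pass at a 100 Hz corner. This is a much cruder filter than whatever [33] actually used, and the test signals and parameter names below are invented for illustration:

```python
import math

FS = 1000.0                       # sampling rate, Hz (assumed)
FC = 100.0                        # high-pass corner, per [33]
RC = 1.0 / (2.0 * math.pi * FC)
ALPHA = RC / (RC + 1.0 / FS)      # one-pole RC high-pass coefficient

def highpass(x):
    # y[n] = ALPHA * (y[n-1] + x[n] - x[n-1]): discrete one-pole high-pass.
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = ALPHA * (y[n - 1] + x[n] - x[n - 1])
    return y

t = [n / FS for n in range(3000)]                          # 3 s of samples
ecg_like = [math.sin(2 * math.pi * 1 * ti) for ti in t]    # slow cardiac swell
emg_like = [math.sin(2 * math.pi * 150 * ti) for ti in t]  # in-band EMG tone

# Steady-state peaks after the filter (skip the first second of transient).
atten_ecg = max(abs(v) for v in highpass(ecg_like)[1000:])
kept_emg = max(abs(v) for v in highpass(emg_like)[1000:])
print(atten_ecg, kept_emg)   # low component crushed, high component mostly kept
```

The low-frequency ECG-like component is attenuated by roughly two orders of magnitude while the 150 Hz component passes largely intact, which is the trade-off the high-pass approach exploits.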
In the field of clinical diagnosis and biomedics, the analysis of EMG signals with powerful and advanced methodologies is becoming more and more a required tool for healthcare providers. This
overview covers recent advances in the field of EMG signal processing.
The time-frequency plane is one of the most fundamental concepts in signal analysis. The Wigner-Ville distribution (WVD) is one time-frequency representation method that is used for analyzing the EMG signal. In 1992, Ricamato et al. showed that it is possible to present the frequency ranges of the motor unit by WVD [34]. The WVD is highly concentrated at the instantaneous frequency and time of the signal, which is an excellent localization property of this method. However, it suffers from cross-term effects and is very noisy. Therefore, it is not well suited for analyzing a multi-component signal like EMG.
Wavelets have been growing in popularity as an alternative to the usual Fourier transform methods. The wavelet transform can essentially be divided into discrete and continuous forms. It efficiently transforms signals with a flexible resolution in both the time- and frequency-domains. The time taken to process a signal using the Discrete Wavelet Transform (DWT) method is low. The Continuous Wavelet Transform (CWT), however, is more consistent and less time-consuming due to the absence of down-sampling. The DWT method has been successful in analyzing non-stationary signals, such as surface EMG (sEMG) signals, but it yields a high-dimensional feature vector [35].
The basic analytical expression for the CWT is presented in Equation (3) below. In a wavelet transform, the wavelet corresponding to scale $a$ and time location $b$ is given by: $\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{t-b}{a}\right)$, where $\psi(t)$ is the 'mother wavelet', which can be taken as a band-pass function. The factor $1/\sqrt{|a|}$ is used to ensure energy preservation, so that the energy is the same for all values of $a$ and $b$. There are various ways of discretizing the time-scale parameters $(a, b)$, and each one yields a different type of wavelet transform.
Successive low-pass and high-pass filtering in the discrete-time domain computes the DWT. The general equation of the DWT (Equation (4)) is given below: $x(t) = \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} d(k,l)\, 2^{-k/2}\, \psi(2^{-k}t - l)$, where $k$ is related to $a$ as $a = 2^k$; $b$ is related to $l$ as $b = 2^k l$; and $d(k,l)$ is a sampling of $W(a,b)$ at discrete points $k$ and $l$.
Daubechies analyzed the time series that contained non-stationary power at many different frequencies, by using wavelet transform [36]. The different types of wavelets have different time-frequency
structures. There are several factors that should be considered when choosing the wavelet function [37]. Guglielminotti and Merletti theorized that wavelet transform exhibits very good energy
localization in the time-scale plane when the shape of the MUAP is matched with the wavelet shape [38]; accordingly, wavelets are generally chosen whose shapes are similar to that of the MUAP.
In 1997, Laterza and Olmo explained that wavelet transform was developed as an alternative approach to other time frequency representations to overcome the resolution problem. Moreover, WT is not
affected by cross terms, which is particularly relevant when dealing with multi-component signals [39]. The WT is principally useful for MUAP detection in the presence of additive white noise.
Mexican hat wavelet and the Morlet wavelet are the most popular continuous wavelets. One of the disadvantages in this approach is that the Mexican hat wavelet does not accurately match the MUAP
shape. The major problem of the fast and short-term Fourier transforms (FFT and STFT) is that the signals are considered to be stationary [40]. Therefore, to overcome this problem Pattichis and
Pattichis processed the signal at different resolution levels by using the continuous wavelet transform [41].
The pre-processing stage based on a wavelet de-noising algorithm for sEMG upper- and lower-limb movement recognition has been very successful over the past few years [42,43]. Removing random noises such as white Gaussian noise (WGN) from EMG signals with ordinary filtering procedures is difficult; wavelet de-noising algorithms can remove these noises effectively [44]. Phinyomark et al. described the basic wavelet-based de-noising procedure. Applying it requires the selection of five processing parameters: (1) the wavelet basis function; (2) the scale (decomposition level); (3) the threshold selection rule; (4) the threshold rescaling method; and (5) the thresholding function [44]. Selecting the right wavelet function is the most crucial part of wavelet de-noising, and the choice depends on factors such as the type of application and the characteristics of the signal. Phinyomark et al. studied five wavelet functions (db2, db5, sym5, sym8 and coif5) for de-noising the sEMG signal for multifunction myoelectric control. Measuring the mean square error (MSE) of the processed sEMG, they showed that decomposition level 4 performs better than the other levels, and that the fifth-order Coiflet (coif5) provides the best reconstruction of the sEMG signal [45]. Where signals contain discontinuities and sharp spikes, wavelet de-noising preserves the signal's character well [46]. From a de-noising viewpoint, selecting a suitable wavelet function and decomposition level from the many available candidates is therefore very important. Jiang and Kuo assessed four classical threshold estimation functions and concluded that EMG signals are insensitive to the choice among them [47]. In 2003, Kumar et al. determined muscle fatigue by using the Symlet functions sym4 and sym5 at decomposition levels 8 and 9 (out of 10) [48]. Hossain and Mamun showed that the wavelet function db45 gives the best contrast in the 50-70 Hz range when analyzing the sEMG signal with both the power spectrum and the bispectrum, compared to four other wavelet functions (Haar, db2, sym4 and sym5) [49]. In 2012, Wei et al. proposed a new wavelet-based algorithm that analyzes surface EMG signals in three steps [50]. For de-noising, they applied a Maximal Overlap Discrete Wavelet Transform (MODWT) algorithm and decomposed the EMG data into different frequency-band oscillations, using the wavelet function db4 at decomposition level 5. The process is simple and computationally inexpensive.
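The five-parameter de-noising procedure above can be sketched with a hand-rolled Haar transform, chosen purely so the example is self-contained; the studies cited use Daubechies, Symlet and Coiflet bases, which a library such as PyWavelets provides. The soft-thresholding function and the Donoho-Johnstone universal threshold follow the parameter roles described in the text, while the test signal and noise level are assumptions.

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)     # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_denoise(x, level=4):
    # (1) basis: Haar; (2) scale: 'level'; (3) rule: universal threshold;
    # (4) rescaling: sigma from finest details; (5) soft thresholding
    details, a = [], x
    for _ in range(level):
        a, d = haar_dwt(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # MAD noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))        # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(0)
n = 1024
clean = np.sin(2 * np.pi * 5 * np.arange(n) / n)     # smooth surrogate signal
noisy = clean + 0.3 * rng.normal(size=n)             # additive WGN
denoised = wavelet_denoise(noisy, level=4)
mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
```

On this surrogate, the de-noised reconstruction has a markedly lower MSE against the clean signal than the noisy input, mirroring the MSE criterion used by Phinyomark et al.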
The benefit of using a wavelet basis function is that it has continuous derivatives, which allows it to decompose a continuous function more efficiently while avoiding unwanted signal components. Daubechies wavelets with long-length filters provide better energy concentration than those with short-length filters [51]. Table 1 displays the different types of wavelet functions with their families.
By investigating and analyzing various research studies on the wavelet transform, the author has concluded that analyzing sEMG signals using Daubechies functions renders successful results. For obtaining better results from sEMG analysis in different applications, the author recommends using the db functions (db2, db4, db6, db44 and db45) at decomposition level 4. In cases of both high and low noise in sEMG, the db function at decomposition level 4 can be used as a compromise. The author simulated the raw sEMG signal using the above wavelet functions. Figure 2 shows the raw sEMG signal from the right rectus femoris muscle during maximum walking speed and its de-noised versions using the different wavelet functions db2, db4, db6, db44 and db45 at decomposition level 4.
Higher order spectra are defined as spectral representations of higher order cumulants of a random process. Let x(k) be a real, discrete-time, nth-order stationary random process, and let ω = [ω1, ω2, …, ωn]^T and x = [x(k), x(k + τ1), …, x(k + τn−1)]^T. Then the nth-order moment of x(k), Mn^x(τ1, τ2, …, τn−1), is defined as a coefficient in the Taylor expansion of the moment-generating function in Equation (5): Φ(ω) = E[exp(iω^T x)]
In practice, the nth-order moment can be equivalently calculated by taking an expectation of the process multiplied by (n−1) lagged versions of itself. Higher order spectra are often estimated directly in the spectral domain as expected values of higher order periodograms. The spectral representations of Higher Order Statistics (HOS), such as moments and cumulants of the third order and above, are known as polyspectra or higher order spectra. HOS is applicable to efficient processing of the EMG signal due to its unique properties: it can identify deviations from linearity, stationarity or Gaussianity in the signal [52]. HOS is important for quality neuromuscular diagnosis, providing information on innervation pulse trains and Motor-Unit Action Potential (MUAP) characteristics. Kanosue et al. developed a statistical signal processing method, based on a parametric model of the second- and fourth-order spectral moments, that can determine the amplitude and number of recruited MUAPs [53]. Second-order statistics (SOS) provide low-order models that represent real data parsimoniously. Within the past few decades, there has been considerable interest in using the HOS technique [15]. HOS was introduced in the 1960s, and Giannakis and Tsatsanis applied it to EMG signal analysis in 1991. The advantage of HOS is that accurate phase reconstruction is possible in the HOS domain, whereas SOS is phase-blind [54]. Moreover, HOS is useful for modelling non-Gaussian and nonlinear processes. Kaplanis et al. used HOS to characterize the Gaussianity of the sEMG signal via the bicoherence index: the sEMG distribution is highly non-Gaussian at low and high force levels, and closest to Gaussian at the mid-level of maximum voluntary contraction (MVC). They used the HOS technique in their sEMG analysis to extract a new parameter (the power spectrum median frequency) that could enhance the diagnostic value of sEMG [55]. In probability theory and statistics, skewness (a measure based on third-order cumulants) measures the asymmetry, and kurtosis (based on fourth-order cumulants) the peakedness, of a probability distribution; cumulants and moments are convenient to estimate, which is why they are widely used in the HOS technique. At an earlier stage, Yana et al. used HOS-based approaches to recover MUAPs from the sEMG signal [56]. However,
this approach was only applied to simulated sEMG signals. Shahid et al. applied HOS to the EMG signal and proposed the ‘Bispectrum of Linear Systems’ to characterize the motor unit action potential
due to the advantages of HOS over SOS [57]. EMG processing methods based on first- and second-order moments and cumulants (SOS) cannot suppress white Gaussian noise, whereas HOS (the bispectrum, or third-order spectrum) can eliminate it. The mathematical model of the EMG signal is the output of a Linear Time Invariant (LTI) system whose input is non-Gaussian white noise. Using the convolution theorem for the LTI system, the output x(n) can be expressed as: x(n) = ∑_{k=0}^{∞} h(k) e(n − k) + w(n), where w(n) is independent, identically distributed Gaussian white noise; e(k) is a white non-Gaussian process; and h(k) is a stable, possibly non-minimum-phase kernel representing the EMG segment x(n). Based on this model, they applied a cepstrum-of-bispectrum-based system reconstruction algorithm to real EMG to estimate the appearance of MUAPs at different muscle contraction levels. The bispectrum is a member of the family of higher-order spectra. As a fast and economical software-based route to visualizing MUAPs, this algorithm can recover high-quality estimates of MUAPs from sEMG signals. However, the technique cannot detect the effect of increased loading and exertion of the muscle.
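As a toy illustration of how the third- and fourth-order cumulant measures discussed above flag departures from Gaussianity, the numpy-only sketch below compares a Gaussian surrogate with a heavy-tailed Laplacian surrogate (standing in, loosely, for the sEMG distribution at low and high force levels). The sample sizes and distributions are assumptions for demonstration only.

```python
import numpy as np

def skewness(x):
    # third standardized moment: asymmetry of the distribution
    x = x - x.mean()
    return np.mean(x ** 3) / np.std(x) ** 3

def excess_kurtosis(x):
    # fourth standardized moment minus 3: zero for a Gaussian
    x = x - x.mean()
    return np.mean(x ** 4) / np.std(x) ** 4 - 3.0

rng = np.random.default_rng(42)
gaussian = rng.normal(size=100_000)      # Gaussian surrogate
laplacian = rng.laplace(size=100_000)    # heavy-tailed surrogate

g_kurt = excess_kurtosis(gaussian)       # ~0: no deviation flagged
l_kurt = excess_kurtosis(laplacian)      # ~3: strongly non-Gaussian
```

A near-zero excess kurtosis is consistent with Gaussianity, while a clearly positive value marks the kind of non-Gaussian distribution reported for sEMG at low and high force.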
Whenever a signal-processing technique is applied to the diagnosis of neuromuscular disorders, parameters such as amplitude, number of phases, spike duration and number of turns should be taken into consideration. The HOS method characterizes and detects the non-linearity of the sEMG signal, and it can estimate both amplitude and phase information successfully. From the above analysis of various research works on the HOS technique, the author concluded that this method is very useful for analyzing the sEMG signal when diagnosing neuromuscular disorders.
EMD is a relatively new, data-driven, adaptive technique for the analysis of non-stationary and nonlinear signals. It builds on the notion of instantaneous frequency and provides insight into time-frequency signal features. The EMD method was first introduced by Huang et al. [58] and uses a sifting process to estimate intrinsic mode functions (IMFs). EMD aims to decompose a multi-component signal x(t) into a number of virtually mono-component IMFs hi(t) plus a residual component r(t) with non-zero mean: x(t) = ∑_{i=1}^{n} hi(t) + r(t)
Each IMF, e.g., h_{k+1}(t), is obtained by applying the sifting process to the residual multi-component signal, as in Equation (8): x^{(k)}(t) = x(t) − ∑_{i=1}^{k} hi(t)
The sifting process is an iterative procedure that produces improved estimates of hk(t) in each iteration: during the (n + 1)th sifting iteration, a refined temporal estimate of the IMF is computed from the estimate of hk(t) obtained in the previous iteration. This process is repeated until the designated IMF fulfills the following criteria:
The number of extrema and the number of zero crossings must either equal one another, or differ at most by one.
The mean value of the upper and lower envelopes is zero at every point of the time series.
When the residue becomes a monotonic function, the process terminates and the original signal is reconstructed by adding all the IMF components to the final residue, m_final, which is the difference between S(t) and the sum of all IMFs. The reconstructed signal S(t) can be represented as in Equation (9): S(t) = ∑_{k=1}^{n} IMF_k(t) + m_final, where n is the number of IMFs. Andrade et al. first used the EMD technique for filtering electromyographic (EMG) signals, decomposing an EMG signal into a set of IMFs [59]. The sequence of steps for estimating the intrinsic mode functions in the EMD process is given in Figure 3.
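The sifting loop just described can be sketched in a few lines of numpy. For brevity this sketch uses linear interpolation between extrema, where Huang et al.'s algorithm uses cubic-spline envelopes, and it runs a fixed number of sifting iterations instead of checking the stopping criteria above; the two-tone test signal is an assumption.

```python
import numpy as np

def local_maxima(x):
    # indices of strict interior local maxima
    return np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1

def sift(x, t, n_iter=8):
    # one IMF via simplified sifting: linear envelopes stand in for the
    # cubic-spline envelopes of the original algorithm
    h = x.copy()
    for _ in range(n_iter):
        mx, mn = local_maxima(h), local_maxima(-h)
        if len(mx) < 2 or len(mn) < 2:
            break                            # residue has too few extrema
        upper = np.interp(t, t[mx], h[mx])   # upper envelope
        lower = np.interp(t, t[mn], h[mn])   # lower envelope
        h = h - (upper + lower) / 2.0        # subtract the envelope mean
    return h

# two-tone test signal: the first IMF should capture the fast oscillation
t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 4 * t)
imf1 = sift(x, t)
residue = x - imf1                           # remaining slower component(s)
fast = np.sin(2 * np.pi * 40 * t)
corr = np.corrcoef(imf1[200:1800], fast[200:1800])[0, 1]
```

Away from the edges, the first extracted IMF correlates strongly with the 40 Hz component, leaving the 4 Hz component in the residue, which is exactly the scale separation EMD exploits for background-activity attenuation.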
In EMG signal processing, EMD is used to attenuate background activity. EMD is very effective for noise reduction because it is a non-linear method that can deal with non-stationary data, and the procedure makes no assumptions about the input time series, whereas the wavelet procedure depends on the chosen mother wavelet function. Andrade et al. showed that the EMD method gives better attenuation of EMG background activity (noise) than different wavelet prototypes (db2, db3 and db4), although computing the IMFs takes considerable time, which is a disadvantage compared to wavelets [59]. Figure 4 illustrates the raw EMG data from the right vastus medialis muscle during maximum walking speed; EMD decomposes it into a finite number of intrinsic mode functions, as shown in Figure 5.
The major drawbacks of the EMD method are its sensitivity to noise, its mode-mixing problem and its long computation time. Therefore, a more robust, noise-assisted version of the algorithm, called Ensemble EMD (EEMD), was introduced to remove the mode-mixing effect [60]. EEMD bypasses the mode-mixing problem of the original EMD by repeatedly adding white noise to the target signal, and it provides physically unique decompositions when applied to data with mixed and intermittent scales. Zhang et al. showed that different types of noise (power-line interference (PLI), white Gaussian noise (WGN) and baseline wandering (BW)) can be adaptively removed by IMF filtering, with an EMD/EEMD-based IMF filtering framework achieving better performance than conventional digital filters (IIR causal and IIR non-causal). At low SNR of the processed signal, the EEMD method provided the best surface EMG de-noising performance of all the methods compared [61].
By studying sEMG signal analysis using the empirical mode decomposition technique, the author has concluded that the EEMD method offers the most successful results for attenuating specific noises in sEMG signals. The method is robust, and its filtering procedure can directly extract signal components that overlap significantly in time and frequency. EEMD achieved the best surface EMG de-noising performance for attenuating noises, especially power-line interference (PLI), white Gaussian noise (WGN), baseline wandering (BW) and ECG artifacts.
The Neural Network (NN) approach is suitable for modeling nonlinear data and can capture distinctions among different conditions. Designing an ANN for a given application requires: (i) determining the network architecture; (ii) defining the total number of layers, the number of hidden units in the middle layers and the number of units in the input and output layers; and (iii) choosing the training algorithm used during the learning phase [62]. The back-propagation neural network (BPNN) is a popular learning algorithm for training feed-forward neural networks.
However, this method has drawbacks: it requires a large quantity of training data, the network architecture is quite rigid, and it takes many learning iterations [63]. Another neural network, the Cascade Correlation Network (CCN), can overcome these limitations of BPNN: it reduces the mean square error of the required signal and the convergence time while increasing the SNR. CCN is an architecture that uses a supervised learning algorithm for artificial neural networks [64]. It offers several advantages: there is no need to guess the size, depth and connectivity pattern of the network in advance; it learns approximately 50 times faster than standard back-propagation; and it is well suited to large training sets. However, its maximum-correlation criterion systematically pushes hidden units to saturated extreme values rather than keeping them in an active region, so the error surface becomes rough; this is the main disadvantage of CCN [65]. Mankar and Ghatol reported that a radial basis function (RBF) neural network removes artifacts from the EMG signal more efficiently than other types of neural networks. Table 2 compares the performance parameters, Mean Square Error (MSE) and correlation coefficient, of several neural network methods for EMG noise reduction [66,67]. The RBF network has a single hidden layer of processing elements that uses Gaussian transfer functions rather than the standard sigmoidal functions employed by the Multilayer Perceptron (MLP). All the neural networks in Table 2 were trained to reduce the noise in the EMG signal using the training data, and cross-validation data were used to compare the efficiency of the learned ANN models in solving the problem at hand.
The models compared were the Multi-Layer Perceptron NN (MLP), Generalized Feed-Forward NN (Gen FF), Modular NN (Mod NN), Jordan/Elman NN and Recurrent Neural Network. The RBF network possesses several distinctive features that set it apart from the other networks.
The general equation of this network, Equation (10), is given below [68]: Y_j = ∑_{i=1}^{N} W_ij φ(‖x − c_i‖) + β_j
Here, φ(‖x − c_i‖) is the radial basis function of the hidden layer; W_ij is the weight between the ith hidden node and the jth output; c_i is the center vector; Y_j and β_j are the output of the network and the bias value of the jth output neuron; and N is the number of nodes in the hidden layer.
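Equation (10) amounts to a single matrix-vector step. The sketch below uses a hypothetical three-node toy network (the centers, weights, Gaussian width and bias are made-up values), with Gaussian transfer functions as mentioned for the RBF network above:

```python
import numpy as np

def rbf_forward(x, centers, sigma, W, beta):
    """Forward pass of Equation (10) for a single input vector x."""
    # phi_i = exp(-||x - c_i||^2 / (2 sigma^2)): one activation per node
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))
    return W.T @ phi + beta          # Y_j = sum_i W_ij * phi_i + beta_j

# hypothetical toy network: 3 hidden nodes in a 2-D input space, 1 output
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
W = np.array([[1.0], [0.5], [-0.5]])
y = rbf_forward(np.array([0.0, 0.0]), centers, sigma=1.0,
                W=W, beta=np.array([0.1]))
# at x = (0, 0): phi = [1, e^-0.5, e^-0.5], the last two terms cancel,
# so y = 1.0 + 0.1 = 1.1
```

The locality of the Gaussian basis (each node responds mainly near its own center) is what distinguishes the RBF network from the globally-responding sigmoidal MLP.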
Determining the number of neurons in the hidden layer is crucial, because the learning capability of an RBF neural network depends on its sufficiency [69]. Kale and Dudul demonstrated that a Focused Time-Lagged Recurrent Neural Network (FTLRNN)-based filter with a single hidden layer elegantly removes noise from the EMG signal with reasonable accuracy [70]. According to their experimental study (Table 3), the FTLRNN model needs more training time than the RBF NN and the MLP, but its Mean Square Error (MSE), correlation coefficient (r) and the visual inspection of its modeling characteristics prove it superior to the other two NNs. The number of epochs was held constant (1,000) for all three NNs. The table shows that the FTLRNN model provides a very low MSE and a high correlation coefficient, making FTLRNN the best of these neural networks for removing noise from an EMG signal.
The Artificial Neural Network is not a very common method for sEMG noise reduction, but in recent years several researchers have applied different ANN approaches to sEMG noise removal. After analyzing these approaches, the author recommends the Jordan/Elman NN for sEMG noise reduction: it is simple, fast and capable of generalization.
The ICA algorithm has rapidly become one of the most prominent signal processing techniques. ICA is a statistical method that can recover the original signals from a mixture. P. Comon first proposed the method [71]; it transforms an observed multivariate random vector into components that are statistically independent from each other. In ICA there is no order of magnitude associated with each component, and the extracted components are invariant to the sign of the sources. In vector-matrix notation, the mixing model is written as: x = A s
Equation (11) represents the ICA model, where x = [x1, x2, …, xm]^T is an m-dimensional vector of linear mixtures, s = [s1, s2, …, sn]^T is an n-dimensional random vector of independent source signals, and A is a full-rank m × n mixing matrix. Without knowing the source signals or the mixing matrix, ICA estimates a copy of the statistically independent sources s from the observed mixtures x. Figure 6 shows the block diagram of the blind source separation technique.
In this figure, s(t) are the sources, x(t) are the recordings, ŝ(t) are the estimated sources, A is the mixing matrix, and W is the un-mixing matrix. Without non-Gaussianity, estimation of the ICA model is not possible; ICA improves on Principal Component Analysis (PCA) when the signals do not follow a Gaussian distribution [72]. ICA is suitable for separating EMG signals from different sources when the following assumptions are fulfilled:
Sources are independent at each time instant
The mixing matrix is linear and propagation delays are negligible
The sources are stationary and do not change with time
The signals are non-Gaussian
The electromyographic (EMG) artifacts are statistically and mutually independent.
Consequently, ICA is a feasible method for source separation and decomposition of an EMG signal. Nowadays it is widely used to separate and remove noise sources from EMG and to decompose EMG signals
into a maximum number of independent components. There are different types of ICA algorithms; some of them are used for processing the EMG signal, such as the Fast ICA algorithm, the Joint
Approximate Diagonalization of Eigen-matrices (JADE), and the Infomax Estimation or maximum likelihood algorithm. The Fast ICA algorithm is a very popular method due to its simplicity, fast
convergence and satisfactory results.
Hyvärinen first introduced new contrast (objective) functions for ICA based on minimizing mutual information [73]. There are two types of fast ICA algorithms: the fixed-point algorithm for one unit and the fixed-point algorithm for several units. The Fast ICA algorithm can be run at the beginning of each iteration to resolve overlaps and cancellations between MUAPs, and it addresses the low signal-to-noise ratio that is the main complication in surface EMG signal decomposition [74]. Nakamura et al. reported that ICA is a very useful technique for decomposing sEMG signals into Motor-Unit Action Potentials (MUAPs) originating from different muscle sources. Fast ICA discriminates the properties of Motor-Unit Action Potential Trains (MUAPTs) in sEMG decomposition (waveforms, discharge intervals, etc.) much better than PCA [75], and it successfully isolates power-line components from EMG signals. However, the performance of Fast ICA fluctuates quickly, and a few components obtained by ICA decomposition come out inverted, which is a major problem when automatically decomposing EMG signals. Cardoso first proposed the JADE algorithm [76], which is more effective than Fast ICA for decomposing sEMG signals [73]. The JADE algorithm is based on computing several cumulant tensors, which are a generalization of matrices [77]. Zhou et al. first examined the feasibility of ICA based on an information-maximization (Infomax) algorithm for obtaining more information about the active motor units. Infomax ICA was unable to isolate all the MUAP trains because of time delays and the variation in shape of the surface action potentials detected at different electrode locations; blind source separation techniques that address a more complex convolutive mixing model are required to obtain accurate firing-rate information [78]. Bell and Sejnowski first introduced the Maximum Likelihood (ML) algorithm using the stochastic gradient method [79]; its estimation assumes that no prior information is available. Furthermore, Garcia et al. demonstrated that JADE ICA can be used successfully to resolve overlaps of MUAPs: in each iteration of the algorithm, the action potentials of one motor unit (MU) can generally be separated from the others. They showed that the JADE algorithm is more efficient than Fast ICA and that its performance is not strongly affected by added noise; however, inter-channel delay is the main drawback of this method [80].
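The one-unit fixed-point iteration mentioned above can be sketched in numpy using the kurtosis contrast. The sources, mixing matrix and iteration count below are illustrative assumptions; a production implementation would use a library such as scikit-learn's FastICA.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sign(np.sin(2 * np.pi * 5 * t))   # square wave: sub-Gaussian source
s2 = rng.laplace(size=t.size)             # Laplacian noise: super-Gaussian
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # hypothetical mixing matrix
X = A @ S                                 # observed mixtures

# whitening: zero mean, identity covariance
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# one-unit fixed-point iteration with the kurtosis contrast:
# w <- E[z (w^T z)^3] - 3 w, then renormalize
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(200):
    u = w @ Z
    w = (Z * u ** 3).mean(axis=1) - 3 * w
    w /= np.linalg.norm(w)

est = w @ Z                               # one recovered source (up to sign)
c1 = abs(np.corrcoef(est, s1)[0, 1])
c2 = abs(np.corrcoef(est, s2)[0, 1])
```

Because ICA is blind to the ordering and sign of the components, which source is recovered (and with which polarity) is arbitrary, which is precisely the inversion and order ambiguity noted in the text.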
In this section, the author has reviewed some of the more prevalent approaches to ICA and their potential benefits when applied to EMG signals. The author concludes that ICA-based filtering gives successful results in removing ECG artifacts and power-line interference (PLI), because its performance is largely independent of the signal-to-noise ratio and it has only subtle effects on the frequency content.
An efficient means of classifying electromyography (EMG) signal patterns has interested many researchers. Different types of classifiers are used effectively in different EMG applications, such as the Artificial Neural Network (ANN), fuzzy classifiers, Linear Discriminant Analysis (LDA), the Self-Organizing Map (SOM) and Support Vector Machines (SVM) [89]. In the feature extraction process, the raw EMG signal is represented as a feature vector, which is used as the input to the classifier; feeding raw EMG signals directly to the classifier is impractical because of their randomness. To avoid overloading the classifier, the features are reduced in dimension using dimension reduction methods, which decrease the burden on the classifier and the computational time. PCA is the best-known dimension reduction method [90]. A wavelet-based feature set reduced in dimension by principal components analysis greatly improves the classification accuracy in myoelectric-controlled prosthesis applications [91]. Chu et al. proposed a linear-nonlinear feature projection method combining PCA with a Self-Organizing Feature Map (SOFM); this method simplifies the structure of the classifier and gives better classification performance than PCA alone [92]. Among dimension reduction methods, PCA and Linear Discriminant Analysis (LDA) are well known, but they take considerable computational time to solve the eigenvalue problem. A simplified estimation method for LDA, Simple Fisher Linear Discriminant Analysis (Simple-FLDA), can also be used for dimension reduction; it takes less time to calculate the eigenvectors because it does not use matrix operations [93]. Combining the time-domain (TD) feature set with the FLDA technique provides a good balance between robustness and computational efficiency. Accurate classification of the EMG signal greatly benefits prosthetic control, which improves the quality of life of persons with limb deficiency. SVM and LDA classifiers are currently very popular among researchers for prosthetic control because of their simple implementation and ease of training [94,95]. Figure 7 shows the main components of the EMG pattern recognition (classification) method.
The success of an electromyogram classification system depends heavily on the quality of the selected and extracted features [96]. The feature extraction step increases the information density of the signal [94]. Moreover, assessing and developing efficient dimensionality reduction and classifier methods is recommended for accurate EMG pattern recognition.
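As a concrete illustration, the classic Hudgins-style time-domain (TD) features mentioned above (mean absolute value, waveform length, zero crossings and slope sign changes) can be computed per analysis window with a few lines of numpy. The threshold value and the sinusoidal test window are assumptions for demonstration.

```python
import numpy as np

def td_features(x, thresh=0.01):
    """Hudgins time-domain features for one sEMG analysis window."""
    mav = np.mean(np.abs(x))                       # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))                # waveform length
    # zero crossings above a small noise threshold
    zc = np.sum((x[:-1] * x[1:] < 0) &
                (np.abs(x[:-1] - x[1:]) > thresh))
    # slope sign changes above the same threshold
    d = np.diff(x)
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 (np.maximum(np.abs(d[:-1]), np.abs(d[1:])) > thresh))
    return np.array([mav, wl, zc, ssc])

# sanity check on a 5 Hz unit sine sampled at 1 kHz for 1 s:
# mean absolute value of a unit sine is 2/pi
t = np.arange(1000) / 1000.0
feats = td_features(np.sin(2 * np.pi * 5 * t))
```

A vector of such features per window, rather than the raw samples, is what the dimension reduction and classification stages described above operate on.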
Many researchers have highlighted the neural network classifier in EMG pattern recognition because it can represent both the linear and the nonlinear relationships in the modeled data. ANNs are non-linear statistical data modeling tools, inspired by the structure of biological neural networks, that are well suited to processing EMG signals. Del Boca and Park suggested that the ANN is a suitable technique for real-time EMG applications [97]: an ANN can precisely recognize the myoelectric signal (MES). Data obtained by an unsupervised learning technique are automatically assigned target labels and presented to a Multilayer Perceptron-type Neural Network (MLP NN) [98]. The output of the neural network approach represents the desired amount of stimulation of the innervated muscle over a synergy [13]. In 1993, Tsuji et al. proposed an error back-propagation-type neural network for classifying six forearm motions using entropy [99]. Motion classification from EMG signals is useful in fields such as the control of multifunctional powered prostheses, human-assisting robots, rehabilitation devices and virtual reality.
A new EMG pattern discrimination method, the Recurrent Log-Linearized Gaussian Mixture Network (R-LLGMN), based on the Hidden Markov Model (HMM), was proposed by Bu et al. [100]. Applied to prosthetic control, it showed successful forearm-motion discrimination (Figure 8), with better accuracy than the LLGMN and the back-propagation ANN (BPNN). As Table 5 shows, the R-LLGMN achieved the best discrimination rate of the three methods, with a standard deviation of approximately zero.
Moreover, Wei et al. classified three EMG steady patterns—the normal (NR) pattern, the eye closing (EC) pattern, and the rhythmic jaw movement (RJM) pattern—by using BPANN with the
Levenberg-Marquardt algorithm [101]. They used this classifier to generate five control commands for a simulated Intelligent Wheelchair. Figure 9 shows the block diagram of this algorithm.
On the other hand, ICA is a feasible method for source separation and decomposition of the surface electromyogram (sEMG). Naik et al. examined four algorithms, Fast ICA, JADE ICA, Infomax ICA and Temporal Decorrelation Source Separation (TDSEP) ICA, for identifying subtle wrist actions; Table 6 compares the various ICA algorithms [102]. TDSEP is an ICA algorithm based on the simultaneous diagonalization of several time-delayed correlation matrices. The table shows that TDSEP provided the best performance, with an overall efficiency of 97%. ICA alone is not suitable for sEMG because of the nature of the sEMG distribution and the order ambiguity, so Naik et al. proposed a novel method (Multi-run ICA) that combines the mixing matrix and network weights to classify sEMG recordings and overcomes the ambiguity problems [103].
Fuzzy logic systems are well suited to bio-signal classification, because biological signals are non-repeatable and stochastic, and fuzzy methods are simple and insensitive to over-training compared with neural-network-based approaches. An insufficient number of training patterns hampers current sEMG classification, a difficulty that is repeatedly deepened by inaccuracies in the instrumentation and analysis system. To resolve these difficulties, Khezri et al. suggested an Adaptive Neuro-Fuzzy Inference System (ANFIS) to detect hand gestures [104]. The fuzzy system first fuzzifies the inputs to values in the interval [0, 1] using a set of membership functions (MFs) [105-107]; the output is then derived using fuzzy IF-THEN rules. A fuzzy rule can be represented as in Equation (12): Ri: If x1 is MFi1 and/or x2 is MFi2 and/or … xj is MFij, then zi is: Zi = si0 + si1 x1 + … + sij xj, where Ri (i = 1, 2, …, l) denotes the ith fuzzy rule, xj (j = 1, 2, …, n) is the jth input, Zi is the output of the ith fuzzy rule, the coefficients sij are constants determined by training the fuzzy system, and MFij is the jth fuzzy MF of the antecedent of the ith rule. Figure 10 shows the structure of the fuzzy system with four inputs and one output.
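A first-order Sugeno-style inference of the kind in Equation (12) can be sketched as follows: Gaussian membership functions give each of two hypothetical rules a firing strength, and the output is the weighted average of their linear consequents. All membership parameters and consequent coefficients below are made-up illustrative values, not those of the cited ANFIS.

```python
import numpy as np

def gauss_mf(x, c, sigma):
    # Gaussian membership function: degree of membership in [0, 1]
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def sugeno(x1, x2):
    # rule firing strengths: 'and' implemented as a product of memberships
    w1 = gauss_mf(x1, 0.0, 1.0) * gauss_mf(x2, 0.0, 1.0)
    w2 = gauss_mf(x1, 2.0, 1.0) * gauss_mf(x2, 2.0, 1.0)
    # linear consequents z_i = s_i0 + s_i1*x1 + s_i2*x2, as in Equation (12)
    z1 = 0.5 + 1.0 * x1 + 0.2 * x2
    z2 = -0.3 + 0.4 * x1 + 1.1 * x2
    # defuzzification: firing-strength-weighted average of the consequents
    return (w1 * z1 + w2 * z2) / (w1 + w2)

out = sugeno(0.0, 0.0)   # dominated by rule 1, whose MFs are centered at 0
```

In ANFIS, the MF parameters and the sij coefficients shown here as fixed constants are exactly what gets tuned during training.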
In terms of handling complexity and changes in hand movement, and in rate of precision, ANFIS proves better than the ANN. Table 7 shows the specificity and sensitivity percentages measured for both the ANFIS- and ANN-based methods [104].
The classification of electromyography (EMG) signals is also very important for detecting diseases. In clinical diagnosis, the simplicity, speed and reliability of classification are essential. The EMG signals of patients with neurological diseases such as Parkinson's, Huntington's and Amyotrophic Lateral Sclerosis are very different from healthy EMG signals; such patients have neurologic movement disorders and may have altered muscle structure. Therefore, many researchers have been working on recognizing the EMG signals of such patients in order to monitor the progression of the various diseases. Several studies also address EMG in neuromuscular fatigue (muscles no longer able to generate force or power), and an accurate, computationally efficient means of classifying fatigue EMG patterns has been the subject of considerable research in recent years, most notably in sports science.
Subasi et al. developed two classifiers, a Feed-forward Error Back-propagation Artificial Neural Network (FEBANN) and a Wavelet Neural Network (WNN), for diagnosing EMG patterns [108]. A WNN is a neural network in which a discrete wavelet function is used as the node activation function in a hidden layer. They used an autoregressive spectrum of the EMG as input to the FEBANN, with three discrete outputs representing normal, myopathic and neurogenic disorders. These three EMG patterns are shown in Figure 11.
In 2012, Christodoulou et al. used three classifiers, the statistical K-Nearest Neighbor (KNN), the Self-Organizing Map (SOM) and the Support Vector Machine (SVM), to classify neuromuscular disorders (20 normal, 11 myopathy and nine neuropathy subjects) [109]. They first extracted multi-scale amplitude-modulation/frequency-modulation (AM-FM) features as input to these classifiers, and used a Gaussian Radial Basis Function kernel in the SVM. SVM gave the best diagnostic performance of the three classifiers, although its learning speed was slow [110].
Subasi and Kiymik used time-frequency methods, namely the STFT, the Wigner-Ville Distribution (WVD) and the Continuous Wavelet Transform (CWT), as pre-processing techniques, with ICA to reduce the dimension of the feature vectors. The extracted EMG features were then fed to a Multilayer Perceptron Neural Network (MLPNN) to detect muscle fatigue. They showed that the ANN with ICA separates EMG signals of healthy and fatigued muscles. By avoiding conventional spectral estimation, this approach overcomes the problems of deriving Fourier spectral variables, and because time-frequency methods do not assume quasi-stationarity or linearity, it is appropriate for non-stationary signals. Muscle fatigue is detected automatically by this method [111].
Moreover, for classifying EMG signals, Sezgin used higher order spectra [112]. Bispectrum analysis (which belongs to the higher order spectra class) extracts phase information from an EMG signal in the form of Quadratic Phase Coupling (QPC). These QPC features were fed into the Extreme Learning Machine (ELM) algorithm in order to separate abnormal activities from normal ones. The main advantage of ELM over traditional learning methods is that it trains and tests quickly and with high accuracy. Therefore, this method may also be useful and applicable for a disease monitoring system. Table 8 presents a performance comparison among machine learning classification methods, such as ELM, Support Vector Machine (SVM), Logistic Regression (LR), Linear Discriminant Analysis (LDA) and Artificial Neural Network (ANN). A summary of electromyography pattern recognition techniques in different applications is presented in Table 9.
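The ELM training scheme described above (a random, untrained hidden layer followed by a linear read-out solved in closed form) can be sketched in a few lines. This is a generic illustration, not Sezgin's implementation: the feature matrix, labels, dimensions and seed below are all synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted EMG features: 200 samples, 8 features,
# with a label that depends on a simple linear rule (illustration only).
n, d, hidden = 200, 8, 40
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# ELM step 1: a random hidden layer that is never trained.
W = rng.normal(size=(d, hidden))
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)

# ELM step 2: fit only the output weights, in closed form (least squares).
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = (H @ beta > 0.5).astype(float)
train_accuracy = (pred == y).mean()
```

Because only the output layer is fitted, and in closed form, training is fast; this is the speed advantage the text attributes to ELM.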
This study showed that several undesired signal sources (extrinsic factors, inherent noise in electronic equipment, motion artifacts, ambient noise) can be attenuated to a great extent by using an active electrode. However, this basic technique alone is not sufficient to eliminate the abovementioned noise, and researchers have used different types of processing techniques to cancel it. Proper use of these techniques can raise EMG signal quality to the point where the signal becomes much more accurate, simple, reliable and steady. Based on the studies reviewed, the wavelet transform and higher order spectra are optimal choices for the processing stage (noise reduction and extraction of significant information).
The study also described the use of electromyography pattern recognition methods, which are very important in different applications, such as rehabilitation devices, prosthetic arm/leg control, assistive technology, symptom detection for neuromuscular disorders, and so on.
In the case of a disease monitoring system, two major criteria apply: one is robustness and reliability, and the other is accuracy of diagnosis. Based on these criteria, the SVM classifier (with multi-scale Amplitude Modulation and Frequency Modulation (AM-FM) histogram features as input) is suggested for classifying electromyography signals. AM-FM features can capture instantaneous variations in the amplitude, frequency and phase of the electromyography signal. For real-time control of a robotic arm or leg, surface electromyographic (EMG) signal classification is also an important issue. As the number of EMG channels and features increases, the number of control commands the classifier can issue also increases. A large number of features (especially time-domain and time-scale feature vectors), which extract significant but different types of information from the electromyography signal, also provide improved classification results. Dimensionality reduction methods such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are recommended if a huge number of features are used as input to the classifier. The main advantage of these methods is that the computational complexity of the classifiers is reduced greatly. Dimensionality reduction methods should transform the data to a low-dimensional space while keeping the maximum information of the signal.
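As a concrete illustration of the dimensionality-reduction step recommended above, here is a minimal PCA sketch (generic, not taken from any of the cited studies; the "feature matrix" is a synthetic stand-in for a large EMG feature vector):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic feature matrix: 100 observations of 20 correlated features
# (built to have only 5 underlying directions of variation).
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 20))

# PCA: eigen-decompose the covariance of the centred data.
Xc = X - X.mean(axis=0)
cov = (Xc.T @ Xc) / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]           # re-sort by descending variance

k = 5
components = eigvecs[:, order[:k]]
Z = Xc @ components                         # 20-D features reduced to 5-D

explained = eigvals[order[:k]].sum() / eigvals.sum()
```

Here the top k = 5 components retain essentially all of the variance, which is the sense in which the transformed data "keep maximum information of the signal" at much lower dimension.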
Furthermore, to increase the classification accuracy, a combination of processing methods and pattern recognition techniques is strongly recommended. Such a combination may help to
increase the classification accuracy without having to use too many muscle positions. The findings of this study are tabularized in Table 10, below. | {"url":"http://www.mdpi.com/1424-8220/13/9/12431/xml","timestamp":"2014-04-16T08:38:31Z","content_type":null,"content_length":"214473","record_id":"<urn:uuid:24ea3a20-803c-4a66-a549-a41c20e033dc>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Haskell-cafe] Type constructor variables no longer injective in GHC 7.2.1?
Daniel Schüssler anotheraddress at gmx.de
Fri Oct 21 12:04:02 CEST 2011
Hello Cafe,
say we take these standard definitions:
> {-# LANGUAGE GADTs, TypeOperators, TypeFamilies, ScopedTypeVariables #-}
> data a :=: b where
> Refl :: a :=: a
> subst :: a :=: b -> f a -> f b
> subst Refl = id
Then this doesn't work (error message at the bottom):
> inj1 :: forall f a b. f a :=: f b -> a :=: b
> inj1 Refl = Refl
But one can still construct it with a trick that Oleg used in the context of
Leibniz equality:
> type family Arg fa
> type instance Arg (f a) = a
> newtype Helper fa fa' = Helper { runHelper :: Arg fa :=: Arg fa' }
> inj2 :: forall f a b. f a :=: f b -> a :=: b
> inj2 p = runHelper (subst p (Helper Refl :: Helper (f a) (f a)))
So, it seems to me that either rejecting inj1 is a bug (or at least an
inconvenience), or GHC is for some reason justified in not assuming type
constructor variables to be injective, and accepting inj2 is a bug. I guess
it's the former, since type constructor variables can't range over type
functions AFAIK.
The error message for inj1 is:
Could not deduce (a ~ b)
from the context (f a ~ f b)
bound by a pattern with constructor
Refl :: forall a. a :=: a,
in an equation for `inj1'
at /tmp/inj.lhs:12:8-11
`a' is a rigid type variable bound by
the type signature for inj1 :: (f a :=: f b) -> a :=: b
at /tmp/inj.lhs:12:3
`b' is a rigid type variable bound by
the type signature for inj1 :: (f a :=: f b) -> a :=: b
at /tmp/inj.lhs:12:3
Expected type: a :=: b
Actual type: a :=: a
In the expression: Refl
In an equation for `inj1': inj1 Refl = Refl
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2011-October/096221.html","timestamp":"2014-04-19T18:37:55Z","content_type":null,"content_length":"4697","record_id":"<urn:uuid:9beb9fd8-bb1b-4e06-96be-c4954ce4b74b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
PIRSA - Perimeter Institute Recorded Seminar Archive
The power of epistemic restrictions in reconstructing quantum theory
Abstract: A significant part of quantum theory can be obtained from a single innovation relative to classical theories, namely, that there is a fundamental restriction on the sorts of statistical
distributions over classical states that can be prepared. (Such a restriction is termed “epistemic” because it implies a fundamental limit on the amount of knowledge that any observer can have about
the classical state.) I will support this claim in the particular case of a theory of many classical 3-state systems (trits) where if a particular kind of epistemic restriction is assumed -- one that
appeals to the symplectic structure of the classical state space -- it is possible to reproduce the operational predictions of the stabilizer formalism for qutrits. The latter is an interesting
subset of the full quantum theory of qutrits, a discrete analogue of Gaussian quantum mechanics. This is joint work with Olaf Schreiber.
Date: 10/08/2009 - 4:30 pm | {"url":"http://pirsa.org/09080009","timestamp":"2014-04-20T11:29:09Z","content_type":null,"content_length":"9054","record_id":"<urn:uuid:33ac5e50-5a87-4ba2-b595-5276cb1eae05>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00317-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lower Bounds on Near Neighbor Search via Metric Expansion
Rina Panigrahy, Kunal Talwar, and Udi Wieder
October 2010
In this paper we show how the complexity of performing nearest neighbor (NNS) search on a metric space is related to the expansion of the metric space. Given a metric space, we look at the graph obtained by connecting every pair of points within a certain distance r. We then look at various notions of expansion in this graph, relating them to the cell probe complexity of NNS for randomized and deterministic, exact and approximate algorithms. For example, if the graph has node expansion Phi, then we show that any deterministic t-probe data structure for n points must use space S where (St/n)^t > Phi. We show similar results for randomized algorithms as well. These relationships can be used to derive most of the known lower bounds in well-known metric spaces such as l1, l2 and l∞ by simply computing their expansion. In the process, we strengthen and generalize our previous results [19]. Additionally, we unify the approach in [19] and the communication complexity based approach. Our work reduces the problem of proving cell probe lower bounds for near neighbor search to computing the appropriate expansion parameter.
In our results, as in all previous results, the dependence on t is weak; that is, the bound drops exponentially in t. We show a much stronger (tight) time-space tradeoff for the class of dynamic low-contention data structures. These are data structures that support updates in the data set and that do not look up any single cell too often.
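To get a feel for the stated trade-off, the inequality (St/n)^t > Phi can be rearranged into an explicit space bound, S > (n/t) * Phi^(1/t), and evaluated numerically. The values of n and Phi below are made up for illustration; the point is only the sharp drop of the bound as the probe count t grows, which matches the "weak dependence on t" remark.

```python
# Rearranging (S*t/n)**t > phi gives S > (n/t) * phi**(1/t).
def space_lower_bound(n, t, phi):
    return (n / t) * phi ** (1.0 / t)

n = 1_000_000        # hypothetical number of points
phi = 1_000_000      # hypothetical node expansion
bounds = {t: space_lower_bound(n, t, phi) for t in (1, 2, 4, 8)}
```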
In FOCS
Publisher IEEE Computer Society | {"url":"http://research.microsoft.com/apps/pubs/default.aspx?id=139208","timestamp":"2014-04-21T12:21:35Z","content_type":null,"content_length":"12604","record_id":"<urn:uuid:798e269e-2254-47f9-95ed-0649593e3a5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
• Name the fraction equivalent to 4/9.
• Simplify the fraction 12/21 and choose an equivalent fraction of the simplified fraction.
• In a class of 90 students, 40 students went on a picnic. What fraction of the students went on the picnic?
• A manufacturing company produces 111 cars per month. If 40 of them are red in color and 20 of them are blue, then what fraction of cars are blue when compared to red ones?
• Find a number that divides the numerator and the denominator of the fraction 40/100 to get 2/5.
• The profit George got last year from his shop was $175 million. He spent $14 million out of the profit for the interior decoration of his shop. Find the fraction of the profit spent on decorating the shop.
• Express 10/12 in its simplest form.
• What is the number by which the numerator and the denominator of the fraction 3/4 should be multiplied to get 36/48?
• Nathan has 12 books and he gave 5 of them to Wilma. Choose an equivalent fraction to the fraction of the books left with Nathan.
• In a game of 42 matches, Jerald won 21 matches. What fraction of the matches did Jerald win? Identify equivalent fractions of this fraction.
• In a family of 8, 3 are females and the remaining are males. Choose two equivalent fractions of the fraction representing the females in the family.
• A manufacturing company produces 176 motor cycles per month. If 60 of them are black-colored motor cycles and 40 of them are white-colored, then what fraction of motor cycles are white-colored when compared to the black-colored motor cycles?
• Express the fraction 100/220 in its simplest form.
• With what number would you multiply the numerator and the denominator of the fraction 7/13 to get 105/195?
• Find two equivalent fractions of 24/48.
• Jim has 16 chocolates and he gave 7 of them to Paula. Choose a fraction that is equivalent to the fraction of the chocolates left with Jim.
• In a game of 20 matches, Andrew won 10 matches. What fraction of the matches did Andrew win? Choose two equivalent fractions of this fraction.
• In an examination, Jake scored 70 out of 100. What fraction of the points did Jake get? Choose the fraction in its simplest form.
• In a family of 12, 5 are females and the remaining are males. Choose two equivalent fractions for the fraction of females in the family.
• Express 6/9 in its simplest form.
• Express the fraction 30/70 in its simplest form.
• In a game of 9 matches, Tony lost 3 matches. What fraction of the matches did Tony lose? Identify equivalent fractions of this fraction.
• Name the fraction equivalent to 8/17.
• Find two equivalent fractions of 18/72.
• Express 25/30 in its simplest form.
• Which figure models an equivalent fraction of 3/5?
• What is the number by which the numerator and the denominator of the fraction 8/9 should be multiplied to get 104/117?
• Select the figure that is an equivalent fraction of the model given.
• Name the fraction equivalent to 9/19.
• Simplify the fraction 6/12 and choose an equivalent fraction of the simplified fraction.
• In a class of 100 students, 50 students went on a picnic. What fraction of the students went on the picnic?
• A manufacturing company produces 134 cars per month. If 50 of them are red in color and 30 of them are blue, then what fraction of cars are blue when compared to red ones?
• Find a number that divides the numerator and the denominator of the fraction 10/30 to get 1/3.
• Identify an equivalent fraction for the model.
• The profit Edward got last year from his shop was $302 million. He spent $18 million out of the profit for the interior decoration of his shop. Find the fraction of the profit spent on decorating the shop.
• Express the fraction 50/110 in its simplest form.
• With what number would you multiply the numerator and the denominator of the fraction 3/5 to get 51/85?
• A manufacturing company produces 162 motor cycles per month. If 60 of them are black-colored motor cycles and 40 of them are pink-colored, then what fraction of motor cycles are pink-colored when compared to the black-colored motor cycles?
• Find two equivalent fractions of 36/48.
• Bill has 16 chocolates and he gave 7 of them to Quincy. Choose a fraction that is equivalent to the fraction of the chocolates left with Bill.
• Are the two models equal?
• In a college election the number of votes polled was 700. Tommy got 300 votes. Express the votes that Tommy got as a fraction in the simplest form.
• In a game of 6 matches, Jeff won 3 matches. What fraction of the matches did Jeff win? Choose two equivalent fractions of this fraction.
• In an examination, Jerald scored 60 out of 100. What fraction of the points did Jerald get? Choose the fraction in its simplest form.
• In a family of 8, 3 are females and the remaining are males. Choose two equivalent fractions for the fraction of females in the family.
• Are the two models equal?
• Which pair of fractions is equivalent to the fraction modeled by the figure shown?
• Write two equivalent fractions for the fraction modeled by the figure.
• Choose two equivalent fractions for the fraction represented by the figure.
• Write the fraction represented by the model in its simplest form.
• Jake has 8 books and he gave 3 of them to Katie. Choose an equivalent fraction to the fraction of the books left with Jake.
• In a game of 20 matches, Joe lost 10 matches. What fraction of the matches did Joe lose? Identify equivalent fractions of this fraction.
• In a family of 8, 3 are males and the remaining are females. Choose two equivalent fractions of the fraction representing the males in the family. | {"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxdxbgdfmxkhjkg&.html","timestamp":"2014-04-17T18:25:13Z","content_type":null,"content_length":"98124","record_id":"<urn:uuid:55d006ff-978a-4c1b-8bac-c146d3458aaf>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
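The simplification and equivalence checks exercised by the worksheet questions above can be done mechanically with exact rational arithmetic; here is a small sketch using Python's standard library (the helper names are my own):

```python
from fractions import Fraction

def simplest_form(num, den):
    # Fraction automatically reduces to lowest terms.
    f = Fraction(num, den)
    return f.numerator, f.denominator

def equivalent(a, b, c, d):
    # Two fractions are equivalent when they reduce to the same value.
    return Fraction(a, b) == Fraction(c, d)

reduced = simplest_form(40, 100)       # the 40/100 question: gives 2/5
same = equivalent(3, 4, 36, 48)        # the 3/4 vs. 36/48 question
```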
Selling Multiple Items - Problem 2
In this problem, two different people are selling two different products. What I'm going to do is try and break it down so that I can have two equations with two unknowns and solve it like a system of equations. Hopefully I can do the Math better than I can speak it good; here we go.
Sharon and Susie are selling holiday wreaths. Sharon sold 10 ivy wreaths and 9 holly wreaths for a total of $282; hopefully that's going to give me one equation just from that sentence. Moving on, Susie sold 5 of each type for $150 total. How much would it cost to buy one of each wreath?
What I'm going to do is make sure that at the end of the problem I go through and answer the question they are asking: how much would it cost to buy 1 of each wreath? Also, before we do this problem, there is a secret shortcut way that some of you guys might be thinking of already for how to solve this particular problem.
But I want to show you how you could solve any problem of this type. So here we go. For Sharon, I'm going to get her equation from this whole first sentence: she sold 10 ivy wreaths and she sold 9 holly wreaths for a total of $282. That represents Sharon.
Now I'm going to have an equation for Susie coming out of this whole sentence about Susie. Susie sold 5 wreaths of each type, 5 ivy wreaths + 5 holly wreaths, and she had a $150 total. Now I'm ready to boogie: I have two equations with two unknowns and I can go through and solve using either elimination, substitution, graphing, or, as some of you guys even know, matrices.
So while I'm trying to decide what method to use, I'm going to look at the coefficients in front of my variables. Since I don't have anything with a coefficient of 1, I'm thinking I'm not going to be doing either graphing or substitution. What I do notice is that I almost have additive inverses for my ivy terms; I would if this number right here, instead of being 5, were -10. Then I would have 10 and -10. In order to make that into -10, I'm going to multiply this entire second red equation by -2; then I'll have additive inverses and I can use elimination.
The first guy stays the same; for the second guy I'm going to be distributing that -2: -10I - 10H = -300. This is the exact same system, just rewritten so that now I have additive inverses. For elimination you add vertically so that the Is cancel out, and I'm left with -1H equals -18. That means +1H is equal to 18, or I can say a holly wreath costs $18. I'm halfway done; I already know how much holly wreaths cost.
Then I can go back and use either original equation to solve for how much an ivy wreath costs. I'm going to choose to use this bottom equation; no real reason why, either one should give me the same answer. 5 times whatever my ivy cost is, plus 5 times 18 (using 18 in place of holly), gives me 150; go through and do the calculation.
5 times 18 is 90; subtract 90 from both sides so that 5I is equal to 60, or I is equal to 12. What that means is that an ivy wreath costs $12. The problem asked me in the very beginning how much it would cost to buy one of each; if I want to buy one of each I will have to add those together. 1 of each costs 18 plus 12, which is $30.
That's the final answer to this problem. A lot of times the multiple-choice test teachers will be cruel and they'll have, like, answer a) 15, answer b) 12, answer c) 30, answer d) some bonehead trick answer. And somebody would have to have read this sentence really carefully in order to get that problem correct. They might have done all this Mathy stuff right but then put the wrong bubble answer because they forgot to answer the last sentence, "how much would it cost to buy one of each"; that's why it's 30.
Okay, so that's how you do the problem the long way; had any of you guys figured out the short way? The short way is just looking at Susie's sentence: Susie sold 5 of each type of wreath for 150. If we need to figure out how much it will cost to buy one of each, that's one fifth of what Susie sold, right? So my answer here should be one fifth of Susie's total, and it is: one fifth of 150 equals $30.
I could have done that one in my head just like that, and probably some of you guys did when you solved this problem, but I wanted to show you the long way so that you'll be able to solve any system of equations of this type.
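If you want to check the elimination above with a computer, here's a tiny sketch (variable names are mine; it mirrors the multiply-Susie's-equation-by--2 step from the lesson, plus the one-fifth shortcut):

```python
# Sharon: 10*ivy + 9*holly = 282
# Susie:   5*ivy + 5*holly = 150, multiplied through by -2 below.
a1, b1, c1 = 10, 9, 282
a2, b2, c2 = -2 * 5, -2 * 5, -2 * 150

holly = (c1 + c2) / (b1 + b2)     # ivy terms cancel when the rows are added
ivy = (c2 - b2 * holly) / a2      # back-substitute into Susie's equation
one_of_each = ivy + holly

shortcut = 150 / 5                # one fifth of Susie's total, as in the text
```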
system of equations coefficient elimination | {"url":"https://www.brightstorm.com/math/algebra/word-problems-using-systems-of-equations/selling-multiple-items-problem-2/","timestamp":"2014-04-16T07:33:34Z","content_type":null,"content_length":"61271","record_id":"<urn:uuid:3dea3616-b5bc-4d98-9c62-f705eca561aa>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
Riegelsville Prealgebra Tutor
Find a Riegelsville Prealgebra Tutor
...Feel free to send me an e-mail if you're interested in my tutoring services. My hours are listed, but I am willing to work at a time that will be mutually agreeable for both teacher and student. I look forward to helping you meet your learning needs. I have been working with personal computers, both Macintosh and Windows, since the computer revolution.
29 Subjects: including prealgebra, Spanish, English, reading
...I do a lot of tutoring after the school day. Although my primary love is science, I also tutor algebra 1. I have worked with students in all levels both in the classroom and during private
tutoring sessions.
6 Subjects: including prealgebra, chemistry, algebra 1, algebra 2
...I have a reference from a past client, if necessary. I am about to graduate high school and have experience in tutoring elementary and middle-school reading. I have completed Advanced
Placement Literature and Composition as well as Advanced Placement Language and Composition in high school and feel that I am qualified to teach several levels of reading below my level of
4 Subjects: including prealgebra, reading, elementary math, spelling
Hi! Looking for help is the first step to success. Most people think they can keep doing the same thing and get different results.
23 Subjects: including prealgebra, chemistry, physics, geometry
...I enjoy working with students one on one, and find tutoring as well as teaching extremely rewarding.I received my Bachelor of Arts in 2006 and am certified by the state of Pennsylvania and in
New Jersey as a K-6 Elementary school teacher. I also am certified in Middle School Mathematics grades 7-9. I received my Masters in Special Education N-12 in 2010.
14 Subjects: including prealgebra, reading, algebra 1, ESL/ESOL
Nearby Cities With prealgebra Tutor
Baptistown prealgebra Tutors
Broadway, NJ prealgebra Tutors
Coopersburg prealgebra Tutors
Durham, PA prealgebra Tutors
Freemansburg, PA prealgebra Tutors
Hellertown prealgebra Tutors
Kintnersville prealgebra Tutors
Little York, NJ prealgebra Tutors
Nazareth, PA prealgebra Tutors
Phillipsburg, NJ prealgebra Tutors
Revere, PA prealgebra Tutors
Richlandtown prealgebra Tutors
Springtown, PA prealgebra Tutors
Stewartsville, NJ prealgebra Tutors
West Easton, PA prealgebra Tutors | {"url":"http://www.purplemath.com/Riegelsville_Prealgebra_tutors.php","timestamp":"2014-04-17T10:59:21Z","content_type":null,"content_length":"24233","record_id":"<urn:uuid:0cc71401-f487-468a-8d5d-fd407671af2f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
Extended anti-Hebbian adaptation for unsupervised source extraction
- Neural Computing Surveys , 2001
"... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is finding a suitable representation of multivariate data. For
computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..."
Cited by 1492 (93 self)
Add to MetaCart
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is finding a suitable representation of multivariate data. For
computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example,
principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation
is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this
paper, we survey the existing theory and methods for ICA. 1
, 1996
"... A general tool for multichannel and multipath problems is given in FIR matrix algebra. With Finite Impulse Response (FIR) filters (or polynomials) assuming the role played by complex scalars in
traditional matrix algebra, we adapt standard eigenvalue routines, factorizations, decompositions, and mat ..."
Cited by 74 (0 self)
Add to MetaCart
A general tool for multichannel and multipath problems is given in FIR matrix algebra. With Finite Impulse Response (FIR) filters (or polynomials) assuming the role played by complex scalars in
traditional matrix algebra, we adapt standard eigenvalue routines, factorizations, decompositions, and matrix algorithms for use in multichannel /multipath problems. Using abstract algebra/group
theoretic concepts, information theoretic principles, and the Bussgang property, methods of single channel filtering and source separation of multipath mixtures are merged into a general FIR matrix
framework. Techniques developed for equalization may be applied to source separation and vice versa. Potential applications of these results lie in neural networks with feed-forward memory
connections, wideband array processing, and in problems with a multi-input, multi-output network having channels between each source and sensor, such as source separation. Particular applications of
FIR polynomial matrix alg...
- Int. Journal of Neural Systems , 1997
"... Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like
learning rules are introduced for estimating one of the independent components from sphered data. Some of th ..."
Cited by 22 (3 self)
Add to MetaCart
Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like
learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative
kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to
estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the
learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.
, 1998
"... this paper, we adopt the neural network approach. The main objective of this paper is threefold. 1. To present (in Section 2) a neural network and propose unconstrained extraction and deflation
criteria that do not require either a priori knowledge of source signals or whitening of mixed signals, an ..."
Cited by 6 (1 self)
Add to MetaCart
this paper, we adopt the neural network approach. The main objective of this paper is threefold. 1. To present (in Section 2) a neural network and propose unconstrained extraction and deflation
criteria that do not require either a priori knowledge of source signals or whitening of mixed signals, and can cope with a mixture of signals with positive kurtosis and signals with negative
kurtosis. These criteria should lead to simple, very efficient, purely local, and biologically plausible learning rules (Hebbian/anti-Hebbian type learning algorithms) . 2. To prove (in Section 3)
analytically that the proposed criteria have no spurious equilibria. In other words, the resulting learning rules always reach desired solutions, regardless of initial conditions. 3. To demonstrate
(in Section 4) with computer simulations the validity and high performance for practical use of the presented neural network and associated learning algorithms.
, 1997
"... Two alternative neural-network methods are presented which both extract independent source signals one-by-one from a linear mixture of sources when the number of mixed signals is equal to or
larger than the number of sources. Both methods exploit the previously extracted source signals as a priori kn ..."
Cited by 2 (1 self)
Add to MetaCart
Two alternative neural-network methods are presented which both extract independent source signals one-by-one from a linear mixture of sources when the number of mixed signals is equal to or
than the number of sources. Both methods exploit the previously extracted source signals as a priori knowledge so as to prevent the same signals from being extracted several times. One method employs
a deflation technique which eliminates from the mixture the already extracted signals and another uses a hierarchical neural network which avoids duplicate extraction of source signals by inhibitory
synapses between units. Extensive computer simulations confirm the validity and high performance of our methods. 1. INTRODUCTION Blind source separation can be formulated as the task to recover the
unknown sources from the sensor signals described by x(t) = As(t), where x(t) is an n × 1 sensor vector, s(t) is an m × 1 unknown source vector having independent and zero-mean signals, and A is an n × m ...
"... Two alternative neural-network methods are presented which both extract independent source signals one-by-one from a linear mixture of sources when the number of mixed signals is equal to or
larger than the number of sources. Both methods exploit the previously extracted source signals as a priori kn ..."
Add to MetaCart
Two alternative neural-network methods are presented which both extract independent source signals one-by-one from a linear mixture of sources when the number of mixed signals is equal to or larger
than the number of sources. Both methods exploit the previously extracted source signals as a priori knowledge so as to prevent the same signals from being extracted several times. One method employs
a deflation technique which eliminates from the mixture the already extracted signals and another uses a hierarchical neural network which avoids duplicate extraction of source signals by inhibitory synapses between units. Extensive computer simulations confirm the validity and high performance of our methods. 1.
, 1998
SUMMARY We present a cascade neural network for blind source extraction. We propose a family of unconstrained optimization criteria, from which we derive a learning rule that can extract a single source signal from a linear mixture of source signals. To prevent the newly extracted source signal from being extracted again in the next processing unit, we propose another unconstrained optimization criterion that uses knowledge of this signal. From this criterion, we then derive a learning rule that deflates from the mixture the newly extracted signal. By virtue of blind extraction and deflation processing, the presented cascade neural network can cope with a practical case where the number of mixed signals is equal to or larger than the number of sources, with the number of sources not known in advance. We prove analytically that the proposed criteria both for blind extraction and deflation processing have no spurious equilibria. In addition, the proposed criteria do not require whitening of mixed signals. We also demonstrate the validity and performance of the presented neural network by computer simulation experiments. key words: blind source separation and extraction,
neural networks, on-line adaptive algorithms 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1028171","timestamp":"2014-04-17T05:24:49Z","content_type":null,"content_length":"30624","record_id":"<urn:uuid:97546811-01a5-4607-9280-eb89485a18b9>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 11 - 20 of 858
- Machine Learning, 1997
Cited by 178 (10 self)
We discuss Bayesian methods for learning Bayesian networks when data sets are incomplete. In particular, we examine asymptotic approximations for the marginal likelihood of incomplete data given a
Bayesian network. We consider the Laplace approximation and the less accurate but more efficient BIC/MDL approximation. We also consider approximations proposed by Draper (1993) and Cheeseman and
Stutz (1995). These approximations are as efficient as BIC/MDL, but their accuracy has not been studied in any depth. We compare the accuracy of these approximations under the assumption that the
Laplace approximation is the most accurate. In experiments using synthetic data generated from discrete naive-Bayes models having a hidden root node, we find that (1) the BIC/MDL measure is the least
accurate, having a bias in favor of simple models, and (2) the Draper and CS measures are the most accurate. 1
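The BIC score this entry discusses can be made concrete with a generic example (ordinary regression rather than the paper's Bayesian-network setting; the data are synthetic). BIC approximates the log marginal likelihood as the maximized log-likelihood minus (k/2) log n:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0.0, 1.0, n)
y = 2.0 + 0.8 * x + rng.normal(0.0, 0.5, n)   # synthetic: weak linear trend

def bic(residuals, k, n):
    # Gaussian log-likelihood at the MLE variance, minus (k/2) log n.
    sigma2 = (residuals ** 2).mean()
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return loglik - 0.5 * k * np.log(n)

r0 = y - y.mean()                              # model 0: constant mean (k = 2)
r1 = y - np.polyval(np.polyfit(x, y, 1), x)    # model 1: straight line (k = 3)

bic0, bic1 = bic(r0, 2, n), bic(r1, 3, n)      # higher is better here
```

With a real trend in the data, the extra log n/2 penalty for the slope is outweighed by the likelihood gain, so the linear model scores higher.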
, 1996
Cited by 172 (0 self)
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks.
Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical
statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the
structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified
examples. Keywords: Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery. I. Introduction Probabilistic networks or
probabilistic gra...
- Journal of the American Statistical Association, 1998
Cited by 145 (5 self)
This paper reviews the principle of Minimum Description Length (MDL) for problems of model selection. By viewing statistical modeling as a means of generating descriptions of observed data, the MDL
framework discriminates between competing models based on the complexity of each description. This approach began with Kolmogorov's theory of algorithmic complexity, matured in the literature on
information theory, and has recently received renewed interest within the statistics community. In the pages that follow, we review both the practical as well as the theoretical aspects of MDL as a
tool for model selection, emphasizing the rich connections between information theory and statistics. At the boundary between these two disciplines, we find many interesting interpretations of
popular frequentist and Bayesian procedures. As we will see, MDL provides an objective umbrella under which rather disparate approaches to statistical modeling can co-exist and be compared. We
illustrate th...
, 2007
Cited by 143 (17 self)
Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper
if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the
maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide
attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and
discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we
prove a rigorous version of the Savage representation. Examples of scoring rules for probabilistic forecasts in the form of predictive densities include the logarithmic, spherical, pseudospherical,
and quadratic scores. The continuous ranked probability score applies to probabilistic forecasts that take the form of predictive cumulative distribution functions. It generalizes the absolute error
and forms a special case of a new and very general type of score, the energy score. Like many other scoring rules, the energy score admits a kernel representation in terms of negative definite
functions, with links to inequalities of Hoeffding type, in both univariate and multivariate settings. Proper scoring rules for quantile and interval forecasts are also discussed. We relate proper
scoring rules to Bayes factors and to cross-validation, and propose a novel form of cross-validation known as random-fold cross-validation. A case study on probabilistic weather forecasts in the
North American Pacific Northwest illustrates the importance of propriety. We note optimum score approaches to point and quantile
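The propriety property can be checked numerically for two classical rules, written here as losses to be minimized rather than in the paper's positively oriented convention; the binary event and true probability are invented for the sketch:

```python
import numpy as np

q = 0.7                                # true probability of a binary event
p = np.linspace(0.01, 0.99, 99)        # candidate probability forecasts

# Expected scores under the truth, as losses (negatively oriented
# versions of the Brier/quadratic and logarithmic rules).
brier = q * (p - 1.0) ** 2 + (1.0 - q) * p ** 2
logsc = -(q * np.log(p) + (1.0 - q) * np.log(1.0 - p))

best_brier = p[np.argmin(brier)]
best_log = p[np.argmin(logsc)]         # both minimized at p = q
```

Both expected losses bottom out exactly at the true probability, which is what "proper" means: honesty is the optimal forecast.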
- Journal of the Royal Statistical Society, Series B, 2002
"... [Read before The Royal Statistical Society at a meeting organized by the Research ..."
, 2007
Cited by 132 (15 self)
The data of interest are assumed to be represented as N-dimensional real vectors, and these vectors are compressible in some linear basis B, implying that the signal can be reconstructed accurately
using only a small number M ≪ N of basis-function coefficients associated with B. Compressive sensing is a framework whereby one does not measure one of the aforementioned N-dimensional signals
directly, but rather a set of related measurements, with the new measurements a linear combination of the original underlying N-dimensional signal. The number of required compressive-sensing
measurements is typically much smaller than N, offering the potential to simplify the sensing system. Let f denote the unknown underlying N-dimensional signal, and g a vector of compressive-sensing
measurements, then one may approximate f accurately by utilizing knowledge of the (under-determined) linear relationship between f and g, in addition to knowledge of the fact that f is compressible
in B. In this paper we employ a Bayesian formalism for estimating the underlying signal f based on compressive-sensing measurements g. The proposed framework has the following properties: (i) in
addition to estimating the underlying signal f, “error bars” are also estimated, these giving a measure of confidence in the inverted signal; (ii) using knowledge of the error bars, a principled
means is provided for determining when a sufficient
- Journal of Web Semantics, 2004
Cited by 129 (15 self)
Workflow management systems (WfMSs) have been used to support various types of business processes for more than a decade now. In workflows for e-commerce and Web-services applications, suppliers and
customers define a binding agreement or contract between the two parties, specifying Quality of Service (QoS) items such as products or services to be delivered, deadlines, quality of products, and
cost of services. The management of QoS metrics directly impacts the success of organizations participating in e-commerce. Therefore, when services or products are created or managed using workflows,
the underlying workflow system must accept the specifications and be able to estimate, monitor, and control the QoS rendered to customers. In this paper, we present a predictive QoS model that makes
it possible to compute the quality of service for workflows automatically based on atomic task QoS attributes. To this end, we present a model that specifies QoS and describe an algorithm and a
simulation system in order to compute, analyze and monitor workflow QoS metrics. 1
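The reduction idea, composing workflow QoS from atomic task attributes, can be sketched with two aggregation rules. This is illustrative only: the paper's model and metrics are richer, and the task names and numbers below are made up.

```python
# Sequential tasks add both time and cost; parallel branches add cost
# but only take the slowest branch's time.

def sequence(*tasks):
    return {"time": sum(t["time"] for t in tasks),
            "cost": sum(t["cost"] for t in tasks)}

def parallel(*tasks):
    return {"time": max(t["time"] for t in tasks),
            "cost": sum(t["cost"] for t in tasks)}

approve = {"time": 2.0, "cost": 5.0}
pack    = {"time": 4.0, "cost": 3.0}
invoice = {"time": 1.0, "cost": 2.0}
ship    = {"time": 6.0, "cost": 8.0}

# approve, then (pack in parallel with invoice), then ship
wf = sequence(approve, parallel(pack, invoice), ship)
```

Here wf aggregates to a total time of 2 + max(4, 1) + 6 and a total cost of 5 + 3 + 2 + 8.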
, 1996
Cited by 129 (22 self)
This paper presents the mathematical foundations of Dirichlet mixtures, which have been used to improve database search results for homologous sequences, when a variable number of sequences from a
protein family or domain are known. We present a method for condensing the information in a protein database into a mixture of Dirichlet densities. These mixtures are designed to be combined with
observed amino acid frequencies, to form estimates of expected amino acid probabilities at each position in a profile, hidden Markov model, or other statistical model. These estimates give a
statistical model greater generalization capacity, such that remotely related family members can be more reliably recognized by the model. Dirichlet mixtures have been shown to outperform
substitution matrices and other methods for computing these expected amino acid distributions in database search, resulting in fewer false positives and false negatives for the families tested. This
paper corrects a previously p...
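The core computation, combining observed counts with a Dirichlet mixture to get expected symbol probabilities, can be sketched as follows. A toy 3-letter alphabet and invented mixture components stand in for the 20 amino acids and a trained mixture:

```python
from math import exp, lgamma, log

def log_marginal(counts, alpha):
    # log p(counts | Dirichlet(alpha)); the multinomial coefficient is
    # omitted because it cancels in the posterior over components.
    s_n, s_a = sum(counts), sum(alpha)
    out = lgamma(s_a) - lgamma(s_n + s_a)
    for n_i, a_i in zip(counts, alpha):
        out += lgamma(n_i + a_i) - lgamma(a_i)
    return out

def expected_probs(counts, mixture):
    # mixture: [(weight, alpha_vector), ...]; returns the posterior-mean
    # symbol probabilities given the observed counts.
    logs = [log(w) + log_marginal(counts, a) for w, a in mixture]
    m = max(logs)
    post = [exp(l - m) for l in logs]
    z = sum(post)
    post = [p / z for p in post]          # posterior over components
    s_n = sum(counts)
    probs = [0.0] * len(counts)
    for w_k, (_, a) in zip(post, mixture):
        s_a = sum(a)
        for i, (n_i, a_i) in enumerate(zip(counts, a)):
            probs[i] += w_k * (n_i + a_i) / (s_n + s_a)
    return probs

# Hypothetical 3-letter alphabet, two invented Dirichlet components:
mixture = [(0.6, [5.0, 1.0, 1.0]),    # component favoring symbol 0
           (0.4, [1.0, 1.0, 5.0])]    # component favoring symbol 2
p = expected_probs([5, 0, 0], mixture)
```

Counts concentrated on symbol 0 push the posterior toward the first component, so the expected probability of symbol 0 dominates while the others keep small pseudocount mass.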
, 1996
Cited by 126 (3 self)
Learning multiple descriptions for each class in the data has been shown to reduce generalization error but the amount of error reduction varies greatly from domain to domain. This paper presents a
novel empirical analysis that helps to understand this variation. Our hypothesis is that the amount of error reduction is linked to the "degree to which the descriptions for a class make errors in a
correlated manner." We present a precise and novel definition for this notion and use twenty-nine data sets to show that the amount of observed error reduction is negatively correlated with the
degree to which the descriptions make errors in a correlated manner. We empirically show that it is possible to learn descriptions that make less correlated errors in domains in which many ties in
the search evaluation measure (e.g. information gain) are experienced during learning. The paper also presents results that help to understand when and why multiple descriptions are a help
(irrelevant attribute...
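The intuition that error reduction depends on how correlated the member errors are shows up even in a closed-form extreme-case comparison (a generic majority-vote calculation, not the paper's empirical correlation measure):

```python
from math import comb

def majority_error(p, m):
    # Error rate of a majority vote over m members whose individual
    # errors are independent, each with rate p.
    return sum(comb(m, j) * p ** j * (1.0 - p) ** (m - j)
               for j in range(m // 2 + 1, m + 1))

p = 0.3
indep = majority_error(p, 5)   # independent errors: about 0.163
correlated = p                 # perfectly correlated errors: no reduction
```

The gap between the two extremes (0.163 vs 0.3) is the room the paper's analysis attributes to decorrelated errors.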
, 1996
Cited by 123 (5 self)
This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which
there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that
for any two algorithms A and B, there are "as many" targets (or priors over targets) for which A has lower expected OTS error than B as vice-versa, for loss functions like zero-one loss. In
particular, this is true if A is cross-validation and B is "anti-cross-validation" (choose the learning algorithm with largest cross-validation error). This paper ends with a discussion of the
implications of these results for computational learning theory. It is shown that one can not say: if empirical misclassification rate is low; the Vapnik-Chervonenkis dimension of your generalizer is
small; and the trainin... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=13772&sort=cite&start=10","timestamp":"2014-04-19T02:03:41Z","content_type":null,"content_length":"40329","record_id":"<urn:uuid:58066dca-76d3-42da-b21e-452e21904694>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
San José State University
applet-magic.com
Thayer Watkins
Silicon Valley & Tornado Alley
USA

The Enigma of the Mass Deficits of Nuclei
When protons and neutrons form a nucleus the mass of the nucleus is less than the sum of the masses of the particles which make it up. Gamma rays are given off in the process of formation and the
nucleus is broken up only if the nucleus absorbs a gamma ray of the same or greater energy than the one given off in its formation. The phenomenon of mass deficits has been known and accepted for
many decades, without much comment on what an enigma it is.
The case of the deuteron, the combination of a proton and a neutron, is easy to visualize. It was first noticed in the 1930s that if gamma rays of at least 2.22457 MeV irradiated deuterium, the deuterons would dissociate into free neutrons and protons. Later it was noticed that if slow neutrons were brought into contact with protons, deuterons would form, accompanied by the emission of gamma rays of 2.22457 MeV energy.
When particles come together they lose potential energy which is transformed into an increase in kinetic energy. In atoms when electrons move into a lower energy state they lose more potential energy
than they gain in kinetic energy and the difference goes into the emission of photons. In nuclear processes it is a puzzle as to why the system must lose mass as well as potential energy.
One process that involves the loss of mass is the disintegration of a neutron. Free neutrons are unstable and their half-life is about 15 minutes. A neutron decays into a proton, an electron and radiation. The mass of the neutron is estimated to be very close to 2.5 electron masses greater than that of the proton, and thus 1.5 electron masses greater than the mass of a proton and electron together. The rest-mass energy of an electron is 0.511 million electron volts (MeV). Thus the disintegration of a neutron involves a loss of mass of 0.7823 MeV.
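The 0.7823 MeV figure and the roughly 2.5-electron-mass gap follow directly from the particle masses. A quick check, using standard values in MeV/c²:

```python
# Particle masses in MeV/c^2 (standard values, quoted to 5 decimals).
m_n, m_p, m_e = 939.56542, 938.27209, 0.51100

q = m_n - m_p - m_e          # energy released in free-neutron decay
ratio = (m_n - m_p) / m_e    # neutron-proton gap in electron masses
```

The decay energy q comes out near 0.782 MeV and the ratio near 2.53 electron masses, matching the figures in the text.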
It is dogma that neutrons exist as particles in nuclei rather than breaking up into protons and electrons, but consider what such a breakup would entail for a deuteron. The electron would be
positioned midway between the two protons. The protons would revolve around the electron at their center of mass as indicated in the following diagram.
There would be potential energies due to the nuclear force and the electrostatic force. Consider just the electrostatic force, which has the form αq₁q₂/z, where q₁ and q₂ are the charges of the two bodies and z is their separation distance. As indicated in the diagram, the effect of the intermediation of the electron is to create a negative potential energy equal to three times the positive potential energy created by the repulsion of the two protons. This possible configuration could be described as an electron bonded proton pair. In effect, the electron creates a bond between the two protons in the same way that orbital electrons create a bond between the positive nuclei in diatomic molecules such as H₂, O₂ and N₂. The diatomic molecules are known to be stable and that indicates that the deuteron model would also be stable.
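The factor of three can be checked with the geometry described above: an electron midway between two protons a distance z apart, so each proton-electron separation is z/2. Working in units chosen so that ke²/z = 1:

```python
# Electron midway between two protons a distance z apart; each
# proton-electron separation is z/2. Units chosen so k*e*e/z = 1.
k, e, z = 1.0, 1.0, 1.0

pe_pp = k * e * e / z                  # proton-proton repulsion: +1
pe_pe = 2 * (-k * e * e / (z / 2))     # two attractions at z/2: -4
total = pe_pp + pe_pe                  # net: -3 times the repulsion term
```

The two attractions contribute four times the repulsion term with the opposite sign, leaving a net potential energy of minus three times the repulsion, as the text states.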
There is a bit of evidence that nuclei do not contain proton/neutron pairs as subparticles. Alpha particles are sometimes ejected from nuclei, as are electrons (beta rays), but there is no evidence of deuterons being ejected.
Some evidence of the magnitude of the potential energy involved in the repulsion of protons is available from the properties of the tritium (³H₁) nucleus and the helium-3 (³He₂) nucleus. The binding energy of a tritium nucleus is 8.482 million electron volts (MeV) but that of the Helium 3 nucleus is only 7.718 MeV. The 0.764 MeV difference is due to the mutual repulsion of the two protons in the He 3 nucleus.
If the 0.764 MeV figure is taken as the magnitude of the potential energy due to the mutual repulsion of the protons in the deuteron model, then the loss of potential energy due to the formation of a
deuteron would be 3(0.764)=2.292 MeV. This is not far off from the 2.22457 MeV energy of the gamma ray involved in the formation or dissolution of a deuteron. The difference is only 3 percent.
However the loss of potential energy from the creation of an electron-bonded proton pair could not occur without also the loss of 0.7823 MeV of mass due to the disintegration of the neutron. But changes in potential energy might go in part to changes in kinetic energy as well as into the energy of emitted gamma rays. In the case of orbital electrons in an atom, the change in potential energy for a transition of an electron to a lower energy level goes half into the change in kinetic energy and half into the energy of an emitted photon. It is not known what the division of potential energy change would be in a nucleus between the change in kinetic energy and the energy of an emitted photon. If all of the energy from the disintegration of the neutron and five eighths of the loss in potential energy from the formation of the electron bond between the two protons went into the energy of the photon, the energy of the photon would be 0.7823+(5/8)(2.292)=2.215 MeV, a value less than 0.5 percent below the actual 2.22457 MeV value.
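The arithmetic of the preceding paragraph can be verified directly:

```python
repulsion = 0.764          # MeV, from the H-3 / He-3 comparison above
neutron_loss = 0.7823      # MeV, free-neutron decay energy

pe_loss = 3 * repulsion                    # 2.292 MeV
gamma = neutron_loss + (5.0 / 8.0) * pe_loss
actual = 2.22457                           # MeV, measured deuteron gamma
rel_err = abs(gamma - actual) / actual     # under half a percent
```

The computed photon energy lands at about 2.215 MeV, within half a percent of the measured 2.22457 MeV.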
At the simplest level the binding energy might be conjectured to be proportional to the numbers of protons and neutrons. The results of the regression of binding energy on the number of protons, P,
and the number of neutrons, N, with no constant term based on the data set for 2932 isotopes is:
B(P,N) = 10.53476P + 6.01794N
(0.135) (0.095)
[145.5] [63.3]
R² = 0.99
The regression coefficient of 10.53476 for P indicates that for every additional proton added to the nucleus, on average the binding energy increases by 10.53476 MeV. For an additional neutron there is the lesser amount of 6.01794 MeV. The numbers in parentheses below the regression coefficients are the standard deviations of the coefficient estimates. The numbers in the square brackets below the standard deviations are the ratios of the coefficients to their standard deviations, the so-called t-ratios. For a coefficient to be statistically significantly different from zero at the 95 percent level of confidence the t-ratio needs to be of a magnitude on the order of 2 or higher. The t-ratios for the regression coefficients indicate that they are statistically highly significantly different from zero. The coefficient of determination, R², indicates that 99 percent of the variation in binding energies of the nuclei is explained by the variations in their proton and neutron numbers.
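A no-intercept least-squares fit of this kind can be sketched with NumPy. The six rows below are illustrative binding-energy values, not the 2932-isotope data set used in the text, so the fitted coefficients will differ from the text's values; the point is the mechanics of the fit.

```python
import numpy as np

# Illustrative rows of (P, N, binding energy in MeV).
data = np.array([
    [ 2,   2,   28.3],   # He-4
    [ 6,   6,   92.2],   # C-12
    [ 8,   8,  127.6],   # O-16
    [26,  30,  492.3],   # Fe-56
    [50,  70, 1020.5],   # Sn-120
    [82, 126, 1636.4],   # Pb-208
])
P, N, B = data[:, 0], data[:, 1], data[:, 2]

X = np.column_stack([P, N])            # no constant term, as in the text
coef, *_ = np.linalg.lstsq(X, B, rcond=None)

pred = X @ coef
r2 = 1.0 - ((B - pred) ** 2).sum() / (B ** 2).sum()  # uncentered R^2
```

Note that with no constant term the conventional R² is the uncentered version, computed against zero rather than the mean.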
A more refined hypothesis is that the mass deficit (binding energy) of a nucleus is proportional to the number of proton-neutron pairs formed with somewhat less of a contribution from a singleton
proton or neutron. Let D=min(P,N), the number of proton-neutron (deuteron) pairs. Then the existence and nature of a singleton nucleon is given by values of #P=P-D and #N=N-D. The indicated
regression results are:
B(P,N) = 16.654D + 2.398#P + 5.797#N
(0.0439) (1.0729) (0.0984)
[379.3] [2.235] [58.952]
R² = 0.991
While the coefficient for a singleton proton, #P, is significant at the 95 percent level of confidence, it is just barely so. On the other hand, the t-ratio for the coefficient for the number of
proton-neutron pairs is astronomical, strongly suggesting that the mass deficit has its origin in a process involving these proton-neutron pairs. Even a singleton neutron has a significant effect.
The division of the coefficient for D by 2 gives a mass deficit per nucleon of 8.327 MeV, a value notably higher than the value for a singleton neutron.
A still more refined hypothesis is that the mass deficits are connected with the formation of alpha particles within the nucleus. The alpha particle, the Helium 4 nucleus, has an unusually high mass
deficit, indicating some energy efficient structure. Let α be the number of possible alpha particles. A singleton deuteron is indicated by the value of #D=D-2α. The existence of a singleton proton or
neutron is then given by #P=P-2α-#D and #N=N-2α-#D. The indicated regression results are:
B(P,N) = 32.94963α + 36.57635#D + 0.64663#P + 5.88038#N
(0.0908) (1.6611) (1.0578) (0.0963)
[362.8] [22.02] [0.61] [61.8]
R² = 0.991
In this case the effect of a singleton proton on the mass deficit is not statistically significant at the 95 percent level of confidence. It is possible that this is the result of a positive effect
being offset by the negative effect of the repulsion involved in the addition of a proton to a nucleus.
The t-ratio for the number of alpha particles is an astronomical 362.8, an order of magnitude higher than the t-ratio for the effect of a singleton proton-neutron pair. A singleton neutron does have
a significant effect. The effect per nucleon for the effect of an alpha particle is 8.119 MeV, about the same level as for the previous regression.
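The bookkeeping behind α, #D, #P and #N can be written out explicitly. This is a direct transcription of the definitions in the text (note that P − 2α − #D simplifies to P − D, and likewise for N):

```python
def features(P, N):
    # D = min(P, N) proton-neutron pairs; alpha = number of whole alpha
    # particles; the remaining values are the leftover singletons.
    D = min(P, N)
    alpha = D // 2
    extra_D = D - 2 * alpha
    return alpha, extra_D, P - D, N - D
```

For example, He-4 decomposes into one alpha particle, the deuteron into a single leftover pair, and tritium into a pair plus a singleton neutron.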
The above regression equation gives a good fit for the heavier nuclei but a notably poor fit for the lighter nuclei. The statistical fit can be improved by including a quadratic term. The only
variable that is different from 0 or 1 is α. The results for a regression equation involving α² are
B(P,N) = 38.45542α - 0.17207α² + 8.15586#D - 3.68489#P + 7.60537#N
(0.0436) (0.00104) (0.5436) (0.3295) (0.03166)
[882.3] [-165.6] [15.00] [-11.2] [240.3]
R² = 0.9991
A notable aspect of this regression equation is that it explains 99.9 percent of the variation in binding energy. In this case a singleton proton not only does not add to the binding energy it
detracts from the binding energy by 3.68 MeV. This negative value is highly statistically significant. A singleton deuteron adds only 8.15 MeV and a singleton neutron 7.605 MeV.
The effect of an additional singleton proton or neutron could vary with the size of the nucleus. Such an effect can be captured by including variables which are the products of the alpha value and #P
and #N. The results for such a regression are:
B(P,N) =
38.28163α - 0.16349α² + 8.10163#D - 1.71832#P - 0.1895α#P + 7.89910#N - 0.00957α#N
(0.0978) (0.0030) (0.5430) (0.3016) (0.08515) (0.1054) (0.0311)
[390.6] [-53.9] [14.9] [-2.2] [-2.2] [77.5] [-3.07]
R² = 0.9991
The inclusion of nuclear size adjustments for the effects of an additional singleton proton or neutron does not improve the overall R² value significantly, but it improves the fit for the light nuclei. Again the singleton proton was the least significant variable.
The conclusion to be drawn from the results is that the mass deficit (binding energy) for a nucleus comes from the neutrons it contains. The effect is enhanced by the neutrons' combinations with
protons but protons by themselves do not contribute significantly to the binding energy.
The dissociation of a neutron would account for only 0.7823 MeV. The potential energy loss in the formation of a proton-neutron pair accounts for about 2.3 MeV. The big factor in the mass deficit is the potential energy loss in the formation of an alpha particle, about 28 MeV, or about 7 MeV per nucleon. The internal structure of the alpha particle is not known, and so this relatively high binding energy for such a light nucleus is another enigma of the physics of nuclei.
In addition to the mass deficit accounted for by the alpha particles themselves there is apparently some configuration of these alpha particles within the nucleus that accounts for an additional increment in the mass deficit. For more on this topic see Nuclear Structure.
The Measurement of the Mass of the Neutron
The mass deficits are based upon the estimated mass of the neutron. The mass of the neutron is inferred from the measured masses of the deuteron, proton and electron and the mass equivalent of the
energy of the gamma ray involved in the dissociation of deuterons. That is to say, the mass of the neutron is calculated as the mass of the deuteron less the mass of the proton and electron and less
the mass equivalent of the energy of the gamma ray. This presumes that the energy of the gamma comes exclusively from the change in mass in the formation or dissolution of a neutron. A different
allocation of the energy changes would produce a different estimate of the mass of the neutron and a different set of mass deficits for nuclei.
The published mass deficit for the Beryllium 5 nucleus which contains 4 protons and 1 neutron is -0.750 MeV. This suggests that the mass of the neutron has been underestimated by at least 0.750 MeV.
The figure of 0.750 MeV is about 1½ electron masses. This would mean that the neutron has a mass 4 electron masses greater than the proton instead of 2½ electron masses. Thus in the formation or
dissolution of a deuteron the energy change would be (2.22457+1.5(0.511))=2.991 MeV, of which 3/4 goes into the energy of the gamma ray and the other 1/4 goes into a net change in potential and
kinetic energy. Thus, an error in the neutron mass would be accounted for by the unverified assumption that the energy of the gamma ray represents 100% of the change in energy for the formation or
dissolution of a deuteron. It is notable that the fraction going into the gamma ray energy is a simple fraction, 3/4. In atomic transitions the energy of the photon for an electron transition is 1/2
of the total energy change.
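The numbers in this paragraph can be verified directly:

```python
gamma = 2.22457            # MeV, deuteron gamma energy
extra = 1.5 * 0.511        # MeV, proposed extra neutron mass

total = gamma + extra      # about 2.991 MeV total energy change
fraction = gamma / total   # share carried by the gamma ray, near 3/4
```

The total comes out at 2.991 MeV and the gamma ray's share at about 0.744, close to the simple fraction 3/4 noted in the text.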
A statistical analysis of the binding energies recalculated on the basis of a neutron mass 1½ electron mass greater would result in the coefficients for those variables involving neutrons being
larger by an appropriate multiple of the error in the neutron mass. This would make the difference between the effect of neutrons and the effect of protons larger by the amount of error in the
neutron mass.
This is a matter that bears further investigation. Meanwhile however it may safely be concluded that the enigma of the mass deficits of nuclei has part of its explanation in the following sources:
• The mass surplus of the neutron, another enigma
• The potential energy loss in the formation of proton-neutron pairs, which may be due to the dissociation of the neutron and the formation of electron bonded proton pairs
• The high potential energy loss in the formation of alpha particles, a yet another enigma
• The potential energy loss from some arrangement of alpha particles in the nucleus.
When the binding energies of nuclides which could contain an integral number of alpha particles are compared with the binding energy that many alpha particles would contain, there is an excess. This excess increases systematically with the number of alpha particles as shown below.
Over the range from 0 to 2 alpha particles the increase in excess binding energy is essentially zero. From two alpha particles to fourteen the increase averages 7.3 MeV per alpha particle and from fourteen to twenty-five the increase averages 2.7 MeV per alpha particle.
(To be continued.)
For an alternate explanation of the mass deficits of nuclei see the Nature of Mass. For an implementation of the theory see Spectrum of Two-Particle Systems. | {"url":"http://www.sjsu.edu/faculty/watkins/massdeficits.htm","timestamp":"2014-04-19T15:34:32Z","content_type":null,"content_length":"19494","record_id":"<urn:uuid:6764c209-1d11-4da2-8d73-61fe001a2150>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
November 18th 2008, 03:01 PM
We have two pairs of parallel lines in R^2 defined by the linear equations below:
a1x + b1y = r1
a1x + b1y = s1
a2x + b2y = r2
a2x + b2y = s2
We assume that these lines enclose a parallelogram P. Find the very simplest formula for the area of P in terms of a1, b1, a2, b2, r1, r2, s1, s2.
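One candidate closed form (my suggestion, not something stated in the post) is Area = |r1 − s1| · |r2 − s2| / |a1·b2 − a2·b1|. It can be sanity-checked against the area computed directly from the four vertices, using made-up coefficients:

```python
def intersect(a1, b1, c1, a2, b2, c2):
    # Cramer's rule for a1*x + b1*y = c1, a2*x + b2*y = c2.
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def area_from_vertices(a1, b1, r1, s1, a2, b2, r2, s2):
    # Walk the four corners in order (each consecutive pair shares a
    # line), then apply the shoelace formula.
    pts = [intersect(a1, b1, r1, a2, b2, r2),
           intersect(a1, b1, r1, a2, b2, s2),
           intersect(a1, b1, s1, a2, b2, s2),
           intersect(a1, b1, s1, a2, b2, r2)]
    twice = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        twice += x1 * y2 - x2 * y1
    return abs(twice) / 2

def area_formula(a1, b1, r1, s1, a2, b2, r2, s2):
    return abs((r1 - s1) * (r2 - s2)) / abs(a1 * b2 - a2 * b1)

# Example: x + y in {0, 2} and x - y in {0, 2} (a tilted square)
args = (1, 1, 0, 2, 1, -1, 0, 2)
```

As for why there are four equations in R^2: a single line needs one equation, and a parallelogram is bounded by four lines (two parallel pairs), hence four equations in the two variables x and y.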
If it is R^2, why are there four formulas? Any help with this would be appreciated.
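For what it's worth, one standard route to such a formula (not from the thread) goes through the distance between each pair of parallel lines:

```latex
% distance between the two lines of each parallel pair:
d_1 = \frac{|r_1 - s_1|}{\sqrt{a_1^2 + b_1^2}}, \qquad
d_2 = \frac{|r_2 - s_2|}{\sqrt{a_2^2 + b_2^2}}
% if \theta is the angle between the two families of lines, then
% \sin\theta = |a_1 b_2 - a_2 b_1| / \left(\sqrt{a_1^2+b_1^2}\,\sqrt{a_2^2+b_2^2}\right),
% and the area of the parallelogram is
\operatorname{Area}(P) = \frac{d_1\, d_2}{\sin\theta}
  = \frac{|r_1 - s_1|\,|r_2 - s_2|}{|a_1 b_2 - a_2 b_1|}
```

Note how the square roots cancel, leaving a formula in the eight given constants only.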
MWISD – making students hungry for Pi
Mineral Wells Index, Mineral Wells, TX
March 10, 2013
— By LIBBY CLUETT
March is a month full of celebration, and many math aficionados have their sights set on next week's Pi Day, a worldwide celebration of the mathematical constant 3.14159, or Pi.
This number represents the ratio of a circle's circumference to its diameter.
Since Pi Day – appropriately, March 14 – falls in the middle of spring break, Mineral Wells ISD celebrated Friday morning with learning activities at each campus.
Houston Elementary had all sorts of contests, including how many digits kids could remember after the decimal point – an infinite number. The winner reached 43 digits, but mathematicians have
calculated Pi to over one trillion digits beyond its decimal point.
Each grade level at Houston also tried to bring in 314 cans of food to be donated to a local food bank.
Other campuses joined the early celebration. Debbie Yates' and Autumn Lanes' eighth grade math students explored the outdoors around Mineral Wells Junior High, finding circular objects that would
measure up to Pi – meter covers, the base of a flagpole and a photographer's camera lens.
Students at Travis Elementary and the high school also partook in the math celebration.
Second grader Ymani Wilson perhaps had the most fun by being the Houston Good Citizen prize winner, granting her a chance to serve a pie in the face of music teacher Adam Hull.
gas constant
Definitions for gas constant
This page provides all possible meanings and translations of the word gas constant
Princeton's WordNet
1. gas constant, universal gas constant, R(noun)
(physics) the universal constant in the gas equation: pressure times volume = R times temperature; equal to 8.3143 joules per kelvin per mole
1. gas constant(Noun)
a universal constant, R, that appears in the ideal gas law (PV = nRT), derived from two fundamental constants, the Boltzmann constant and Avogadro's number (R = Nk)
1. Gas constant
The gas constant is a physical constant which is featured in many fundamental equations in the physical sciences, such as the ideal gas law and the Nernst equation. It is equivalent to the
Boltzmann constant, but expressed in units of energy per temperature increment per mole. The constant is also a combination of the constants from Boyle's law, Charles's law, Avogadro's law, and
Gay-Lussac's law. Physically, the gas constant is the constant of proportionality that happens to relate the energy scale in physics to the temperature scale, when a mole of particles at the
stated temperature is being considered. Thus, the value of the gas constant ultimately derives from historical decisions and accidents in the setting of the energy and temperature scales, plus
similar historical setting of the value of the molar scale used for the counting of particles. The last factor is not a consideration in the value of the Boltzmann constant, which does a similar
job of equating linear energy and temperature scales.
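As a quick illustration of the ideal gas law PV = nRT mentioned above, here is a minimal sketch using the approximate value R ≈ 8.314 J/(mol·K); the function name `molar_volume` is illustrative, not a standard API.

```python
# Sketch: rearranging the ideal gas law PV = nRT to V = nRT/P.
R = 8.314  # molar gas constant, J/(mol*K), approximate value

def molar_volume(pressure_pa, temperature_k, n_mol=1.0):
    """Volume in m^3 occupied by n_mol of an ideal gas: V = nRT/P."""
    return n_mol * R * temperature_k / pressure_pa

# One mole at 0 deg C (273.15 K) and 1 atm (101325 Pa) occupies roughly
# 0.0224 m^3, i.e. the familiar 22.4 litres of introductory chemistry.
```

The same function doubles as a sanity check that volume scales linearly with the amount of gas.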
My Leader Bike Build
Nov 23
Rule #12 / The correct number of bikes to own is n+1.
While the minimum number of bikes one should own is three, the correct number is n+1, where n is the number of bikes currently owned. This equation may also be re-written as s-1, where s is the
number of bikes owned that would result in separation from your partner.
How were the speed of sound and the speed of light determined and measured?
Chris Oates, a physicist in the Time and Frequency Division of the National Institute of Standards and Technology (NIST), explains.
Despite the differences between light and sound, the same two basic methods have been used in most measurements of their respective speeds. The first method is based on simply measuring the time it
takes a pulse of light or sound to traverse a known distance; dividing the distance by the transit time then gives the speed. The second method makes use of the wave nature common to these phenomena:
by measuring both the frequency (f) and the wavelength (λ) of a wave, one can determine its speed from the wave relation v = fλ.
In its simplest form, sound can be thought of as a longitudinal wave consisting of compressions and extensions of a medium along the direction of propagation. Because sound requires a medium through
which to propagate, the speed of a sound wave is determined by the properties of the medium itself (such as density, stiffness, and temperature). These parameters thus need to be included in any
reported measurements. In fact, one can turn such measurements around and actually use them to determine thermodynamic properties of the medium (the ratio of specific heats, for example).
The first known theoretical treatise on sound was provided by Sir Isaac Newton in his Principia, which predicted a value for the speed of sound in air that differs by about 16 percent from the
currently accepted value. Early experimental values were based on measurements of the time it took the sound of cannon blasts to cover a given distance and were good to better than 1 percent of the
currently accepted value of 331.5 m/s at 0 degrees Celsius. Daniel Colladon and Charles-Francois Sturm first performed similar measurements in water in Lake Geneva in 1826. They found a value only
0.2 percent below the currently accepted value of ~1,440 m/s at 8 degrees C. These measurements all suffered from variations in the media themselves over long distances, so most subsequent
determinations have been performed in the laboratory, where environmental parameters could be better controlled, and a larger variety of gases and liquids could be investigated. These experiments
often use tubes of gas or liquid (or bars of solid material) with precisely calibrated lengths. One can then derive the speed of sound from a measurement of the time that an impulse of sound takes to
traverse the tube. Alternatively (and usually more accurately), one can excite resonant frequencies of the tube (much like those of a flute) by inducing a vibration at one end with a loudspeaker,
tuning fork, or other type of transducer. Because the corresponding resonant wavelengths have a simple relationship to the tube length, one can then determine the speed of sound from the wave
relation and make corrections for tube geometry for comparisons with speeds in free space.
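The resonance method described above ultimately rests on the wave relation v = f·λ. Below is a trivial illustrative sketch (the function names are my own, and the open-open tube formula ignores end corrections):

```python
# Sketch of the wave relation used in both speed measurements: v = f * lambda.
def wave_speed(frequency_hz, wavelength_m):
    """Speed of a wave from its frequency and wavelength."""
    return frequency_hz * wavelength_m

def speed_from_tube(fundamental_hz, tube_length_m):
    """Open-open resonance tube: the fundamental wavelength is twice the
    tube length (end corrections neglected)."""
    return wave_speed(fundamental_hz, 2.0 * tube_length_m)
```

For example, a 0.5 m open tube resonating at a 343 Hz fundamental implies a sound speed of 343 m/s.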
The wave nature of light is quite different from that of sound. In its simplest form, an electromagnetic wave (such as light, radio, or microwave) is transverse, consisting of oscillating electric
and magnetic fields that are perpendicular to the direction of propagation. Moreover, although the medium through which light travels does affect its speed (reducing it by the index of refraction of
the material), light can also travel through a vacuum, thus providing a unique context for defining its speed. In fact, the speed of light in a vacuum, c, is a fundamental building block of
Einstein's theory of relativity, because it sets the upper limit for speeds in the universe. As a result, it appears in a wide range of physical formulae, perhaps the most famous of which is E=mc^2.
The speed of light can thus be measured in a variety of ways, but due to its extremely high value (~300,000 km/s or 186,000 mi/s), it was initially considerably harder to measure than the speed of
sound. Early efforts such as Galileo's pair of observers sitting on opposing hills flashing lanterns back and forth lacked the technology needed to measure accurately the transit times of only a few
microseconds. Remarkably, astronomical observations in the 18th century led to a determination of the speed of light with an uncertainty of only 1 percent. Better measurements, however, required a
laboratory environment. Louis Fizeau and Leon Foucault were able to perform updated versions of Galileo's experiment through the use of ingenious combinations of rotating mirrors (along with improved
measurement technology) and they made a series of beautiful measurements of the speed of light. With still further improvements, Albert A. Michelson performed measurements good to nearly one part in
ten thousand.
Metrology of the speed of light changed dramatically with a determination made here at NIST in 1972. This measurement was based on a helium-neon laser whose frequency was fixed by a feedback loop to
match the frequency corresponding to the splitting between two quantized energy levels of the methane molecule. Both the frequency and wavelength of this highly stable laser were accurately measured,
thereby leading to a 100-times reduction in the uncertainty for the value of the speed of light. This measurement and subsequent measurements based on other atomic/molecular standards were limited
not by the measurement technique, but by uncertainties in the definition of the meter itself. Because it was clear that future measurements would be similarly limited, the 17th Conférence Générale
des Poids et Mesures (General Conference on Weights and Measures) decided in 1983 to redefine the meter in terms of the speed of light. The speed of light thus became a constant (defined to be
299,792,458 m/s), never to be measured again. As a result, the definition of the meter is directly linked to the speed of light via the relation c = f×λ.
differentiable function
January 8th 2011, 07:58 AM
differentiable function
I must prove that the function f(x) is continuously differentiable function (just first derivate) on interval (-1,1).
$f(x)=\exp\left(\frac{1}{x^2-1}\right)$ for $|x|<1$, and $f(x)=0$ for all other values.
I do that:
$f(x)=e^{\frac{1}{x^2-1}}=e^u \ [u(x)=\frac{1}{x^2-1}]$
$\frac{d}{dx}f(x)=\frac{df}{du}\frac{du}{dx}=-e^u \frac{2x}{(x^2-1)^2}=-\frac{2x}{(x^2-1)^2}e^{\frac{1}{x^2-1}}$
Is it ok?
Thanks in advance
January 8th 2011, 08:43 AM
If the question it is only related to $(-1,1)$ then, it is right (evidently $f'(x)$ is continuous in $(-1,1)$ ).
Fernando Revilla
January 8th 2011, 08:54 AM
The basic question is whether f(x) belongs to the $C_0^1(\mathbb{R})$ space?
January 8th 2011, 09:09 AM
That's actually a different question from what you first asked. In your first post you said "continuously differentiable function (just first derivate) on interval (-1,1)". Now you are saying on
all of $\mathbb{R}$. You had already answered the first question- f(x) is continuously differentiable on (-1, 1). Since f is defined to be a constant (0) for $x\le -1$ or $x\ge 1$, it is
certainly continuously differentiable there. But now the question is about x= -1 and x= 1. What are the derivatives at those points? What are the derivatives on either side? Is the derivative
continuous at x= -1 and x= 1?
January 8th 2011, 09:11 AM
In that case, you should prove $f'_{-}(1)=f'_{+}(-1)=0$ and $f'$ continuous at $x=\pm 1$ (for the rest is almost trivial)
Fernando Revilla
P.S. I suppose $C_0^1(\mathbb{R})$ means $C^1(\mathbb{R})$ (the standard notation for the set of continuously differentiable functions on $\mathbb{R}$).
Edited: Sorry, I didn't see HallsofIvy's post
January 8th 2011, 11:14 AM
How can I prove $f'_{-}(1)=f'_{+}(-1)=0$?
I tried with limits:
$\displaystyle \lim_{h \to 1}\frac{e^{\frac{1}{(x+h)^2-1}}-e^{\frac{1}{x^2-1}}}{h}$ and $\displaystyle \lim_{h \to -1}\frac{e^{\frac{1}{(-x-h)^2-1}}-e^{\frac{1}{x^2-1}}}{h}$
but this certainly is not the correct way.
January 8th 2011, 04:02 PM
While derivatives are not necessarily continuous, they do obey the "intermediate value property"- f'(x) takes on all values between f'(a) and f'(b) for x between a and b. In particular, that
means that f'(c) exists if and only if $\displaystyle \lim_{x\to c^-} f'(x)= \lim_{x\to c^+}f'(x)$. That is, determine the derivatives for x< -1, and x>-1 and determine if the limits at x= -1 are
the same. Do the same for x= 1.
By the way, you don't want to prove that " $f'_-(1)= f'_+(1)= 0$". You want to prove that $f'_-(1)= f'_+(1)$ and $f'_-(-1)= f'_+(-1)$. You don't need to use the limit definition of the
derivative- but, of course, h goes to 0, not 1 and -1.
January 9th 2011, 03:40 AM
So $f'_-(-1)=f'_+(1)=0$ and I must calculate:
$f'_-(1)=\displaystyle \lim_{x \to 1} -\frac{2x}{(x^2-1)^2}e^{\frac{1}{x^2-1}}$
$f'_+(-1)=\displaystyle \lim_{x \to -1} -\frac{2x}{(x^2-1)^2}e^{\frac{1}{x^2-1}}$
but how can I do that? The exp function is not defined at those two points!
January 10th 2011, 07:35 AM
The function with exp is not defined at those two points, so the limit does not exist. But Wikipedia says the function is smooth, so it is continuously differentiable on $\mathbb{R}$. But I don't know how to
prove that — if one limit does not exist, does $f'_-(1) \neq f'_+(1)$?
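For reference, here is a sketch (not from the thread) of why these one-sided limits are in fact 0 — the substitution u = 1/(x² − 1) turns the expression into a polynomial-times-exponential limit:

```latex
% substitute u = 1/(x^2 - 1); as x \to 1^- (or x \to -1^+), u \to -\infty, and
-\frac{2x}{(x^2-1)^2}\, e^{\frac{1}{x^2-1}} = -2x\, u^{2} e^{u},
\qquad \lim_{u \to -\infty} u^{2} e^{u} = 0
% since the exponential dominates any polynomial. Hence both one-sided limits
% of f'(x) at x = \pm 1 equal 0, matching the derivative of the constant piece,
% so f' is continuous there.
```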
Math Exercise, Problem, and Investigation
As teachers, it is important that we vary the mathematical activities we give our students. The learning that takes place in the classroom is, in one way or another, affected by the kind of tasks
that we give our students. These tasks may be differentiated into three: exercises, problems, and investigations.
Math Exercise
A math exercise is a task where students know what is asked AND know a direct way of doing it. Task 1 is an example of a math exercise. In this task, students are asked for the number of squares
that make up the fourth figure. They can easily do this by looking at the pattern or by counting.
Task 1
Math exercises are usually given after examples were demonstrated. They are commonly used to enhance the basic computational skills of students.
Math Problem
A math problem is a task where students know what is asked, BUT do not know a direct way of doing it. Task 2 is an example of a math problem. The task is for the students to find the number of
squares in Figure 100. In this case, there is no direct way of solving it. Although they can draw the 100th figure by continuing the pattern and then counting the number of squares in that figure, it
would take them a long time to do it.
Task 2
Math problems are given at the end of the lesson, or sometimes at the beginning as context for a particular concept. These tasks intend to elicit thinking skills such as recognizing and generalizing
patterns, making conjectures, proving claims, giving counterexamples, to name a few.
Math Investigation
A math investigation is a task where students do not know what is asked AND do not know a direct way of solving it. Task 3 is an example of a math investigation. In the example below, students are just
asked to investigate a problem they would like to pursue. They can investigate the number of squares in the nth figure, the perimeter of the figures in the nth figure, etc. (Can you think of other problems to investigate?)
Task 3
In math investigations, students create their own problems and find ways to solve them. Math investigations are usually done for a longer period of time. At the end of an investigation, students may
be asked to report their findings written or orally.
The different tasks discussed above are all important in their own right. Exercises are used to reinforce mathematical computation skills, problems are used to improve higher order thinking skills,
and math investigations are used to train students in problem posing and self-directed exploration. However, if we want our students to be critical thinkers, we should do our best to scaffold them so
that they would be able to do problem solving and investigation. In addition, the tasks above also show that the same problem can be used to elicit different levels of thinking. It only depends on
the questions we ask.
8 thoughts on “Math Exercise, Problem, and Investigation”
1. Today, I had the opportunity to present your exercise/problem/investigation example in one of my university courses. I also had a good discussion about these types of activities with my girlfriend,
who is also a mathematics teacher.
Sylvain Bérubé, Sherbrooke
□ Wow, that’s really nice Sylvain. Thank you. It’s inspiring!
3. I gave my boy the exercise from the first example and the problem from the second example to solve. He solved these tasks quickly. My boy is 9 years old and has passed into the third class
□ Hmmm… Your son must be smart. Maybe the next thing that you should ask is, can he find a way of calculating the number of squares in any figure number.
5. Thanks,
It’s really helpful to understand exactly the difference between a exercise and a problem.
□ You’re welcome Jenny.
Electrodynamics/Lorentz Transformation
Fields and Forces
The forces caused by electric and magnetic fields are mostly what we can actually measure in electromagnetism. These vector quantities are related to the scalar and vector potentials as follows:
$\mathbf{E} = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t}$
$\mathbf{B} = \nabla \times \mathbf{A}$
The E is the electric field vector, and the B is the magnetic field vector. Because of these equations, electric fields are frequently called "E Fields", and magnetic fields are frequently called "B
Fields". This book may use either of these notations.
Lorentz Equation
By comparison of these equations with the general expression for force in gauge theory, we find that the electromagnetic force on a particle with charge q is
$\mathbf{F}=q\mathbf{E} + q \mathbf{v} \times \mathbf{B}$
where v is the velocity of the particle. For historical reasons this is called the Lorentz force.
Relativity is, in brief, the study of reference frames. A reference frame is a fixed coordinate system against which local measurements are taken. Consider the common example of two observers:
Observer A is located on a moving train, and Observer B is standing in a field watching the train. Here is what the two observers see:
Observer A
Observer A sees the train as being stationary, and the field as moving.
Observer B
Observer B sees the field as stationary, and the train as moving.
Both observers are essentially the origin of their own coordinate system. Clearly, the two coordinate systems can be related together through some sort of transformation, that is that things that
Observer A can see can be translated into coordinates according to Observer B, and vice-versa.
In linear algebra, a coordinate system has a basis, a small set of unit vectors that can be used to describe all the points in that system. If we can relate the basis vectors of Observer A and
Observer B, then we can relate any point in either system to the other system.
Because basis vectors are vectors (rank-1 tensors), the transformations between them are typically matrices (rank-2 tensors).
Special Relativity is based on the idea that the laws of physics are the same in all inertial reference frames, and that the speed of light, c, is a constant regardless of the frame. An equation is
said to be "invariant under Lorentz transformation" if it satisfies these requirements when a Lorentz transformation is applied to it. We will see that the requirement of Lorentz invariance is an
important one in electrodynamics.
Another subject, General Relativity, expands the mathematical ideas of relativity to non-inertial frames. We will not consider general relativity topics in this book.
Lorentz Transformations
It is tempting, but naive, for us to consider only 3 basis vectors. These vectors, the spacial vectors, can be called without any lack of generality "X", "Y", and "Z". However, one of the important
results from Einstein's work on relativity is that time is also dependent on the reference frame, and that therefore we need to consider all points in both space (X, Y, and Z vectors) but also in time
(a T vector). All our vectors then have a length of 4, and our transformation matrices must be 4×4 matrices. A 4×4 transformation matrix that uses three spatial coordinates and 1 time coordinate is
known as a lorentz transformation matrix, or simply a "lorentz transformation".
If we have two coordinate systems, (X, Y, Z, T) and (X', Y', Z', T'), and they are inertial systems, we can relate the two systems using the L transformation functions:
$X' = L_X(X)$
$Y' = L_Y(Y)$
$Z' = L_Z(Z)$
$T' = L_T(T)$
Now, if each element X', Y', Z', and T' are linearly related to X, Y, Z, and T, we can convert L into a matrix:
$\begin{bmatrix}X' \\ Y' \\ Z' \\ T'\end{bmatrix} = \mathbf{L}\begin{bmatrix}X \\ Y \\ Z \\ T\end{bmatrix}$
As we can see from this equation, if we are going to apply a Lorentz transformation to a coordinate system, the coordinates must be specified in vectors of length 4. We will call all vectors with a
length of 4 a "four vector". We will discuss four vectors in a later chapter.
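The linear relation above can be made concrete. Below is a minimal sketch (not from the wikibook; the function names `boost_x` and `transform` are my own) of a Lorentz boost along the X axis applied to a four-vector, using plain Python lists. In units where c = 1 it preserves the spacetime interval x² + y² + z² − c²t², as special relativity requires.

```python
import math

def boost_x(v, c=299792458.0):
    """4x4 Lorentz boost along x; coordinates ordered (X, Y, Z, T)."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz factor gamma
    return [
        [g,              0.0, 0.0, -g * v],
        [0.0,            1.0, 0.0,  0.0],
        [0.0,            0.0, 1.0,  0.0],
        [-g * v / c**2,  0.0, 0.0,  g],
    ]

def transform(L, X):
    """Apply a 4x4 matrix to a four-vector (plain lists, no dependencies)."""
    return [sum(L[i][j] * X[j] for j in range(4)) for i in range(4)]
```

For v = 0.6 (with c = 1), gamma is 1.25, and the event (x, t) = (1, 2) maps to (−0.25, 1.75) with the same interval.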
Inertial vs. Non-Inertial Frames
An inertial frame is a frame with no net acceleration. Consider the case above with the two observers, Observer A, and Observer B. Observer A is on a train, and Observer B is looking at the train
from a distance. If the train is moving at a constant speed and in a straight line, then the two frames are inertial. However, if the train is accelerating or decelerating, or if the train is not
moving in a straight line, then the two frames are non-inertial.
The study of inertial frames is a field known as Special Relativity. The study of non-inertial frames is known as General Relativity. In this book, we will consider special relativity only.
Ohm's Law
The continuum form of Ohm's Law is only valid in the reference frame of the conducting material. If the material is moving at velocity v relative to a magnetic field B, a term must be added as
$\mathbf{J} = \sigma \cdot \left( \mathbf{E} + \mathbf{v}\times\mathbf{B} \right)$
The analogy to the Lorentz force is obvious, and in fact Ohm's law can be derived from the Lorentz force and the assumption that there is a drag on the charge carriers proportional to their velocity.
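As a rough illustration of this moving-conductor form of Ohm's law, here is a sketch with made-up field values; `cross` and `current_density` are illustrative helpers, not a standard library API.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def current_density(sigma, E, v, B):
    """J = sigma * (E + v x B): Ohm's law for a conductor moving at v."""
    vxB = cross(v, B)
    return [sigma * (E[i] + vxB[i]) for i in range(3)]
```

With E along x, v along y and B along z, the motional term v×B adds to E, doubling the driving field and hence the current density.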
Last modified on 10 July 2007, at 02:52
Stack and Queues questions
03-29-2011 #1
I am preparing for a test and have answered some questions dealing with stacks and queues the teacher has provided as samples. Could someone check over my work?
// stack.h – complete header file for Stack ADT
#include <iostream>
using namespace std;
class StackFull { /* Empty Exception Class */ };
class StackEmpty { /* Empty Exception Class */ };
class Stack // An array implementation of Stack ADT for storing integers
int* ptr; // Holds address of dynamically-allocated integer array
int top; // Array index of top-most integer on the stack
int size; // Maximum number of integers that may be stored in the stack
Stack(int num); // Dynamically allocate 1-D array of integers of size num
// and configure empty stack
~Stack(); // Deallocates stack array
void Push(int n); // Add integer n to the top of the stack if not full
// Otherwise, throw StackFull exception
int Pop(); // Removes top integer from stack AND returns value to
// client; throws StackEmpty exception if stack is empty
void MakeEmpty(); // Return stack to empty but usable state
bool IsFull() const; // Return true if stack is full; otherwise return false
bool IsEmpty() const; // Return true if stack is empty; otherwise return false
int Capacity() const; // Returns the maximum number of integers that may be
// stored in Stack object
}; // End Class Stack
1) Implement Stack.
Stack::Stack(int num)
ptr=new int[num];
2) Implement Push method
void Stack::Push(int n)
if(IsEmpty() )
throw StackFull();
3) Implement IsEmpty
bool Stack::IsEmpty() const
return (top == 0);
4) Implement Pop
int Stack::Pop()
if (IsEmpty())
throw StackEmpty();
int temp=ptr[top];
return temp;
5) Implement Destructor
6) Implement Capacity
int Stack::Capacity() const
return size;
7) Implement IsFull
bool Stack::IsFull() const
return (top == size);
Have you tested the code? It seems to have a pretty big bug which you should find easily.
Other than that, such classes also require a user-defined copy constructor and assignment operator to perform a deep copy.
Also you haven't shown the implementation of MakeEmpty. If it does what the comment suggests, then I don't see how the destructor can avoid leaking memory.
I might be wrong.
Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
Quoted more than 1000 times (I hope).
You need to provide your own copy constructor/operator, or make this class non-copyable (inherit from boost::noncopyable).
I never put signature, but I decided to make an exception.
The header code was provided and the questions 1-7 were multiple choice questions I had to answer. I am not writing specific code, just using the header to write a few functions. Just wanted to
check and see if I got each one right.
Oh, so these points were actually questions.
You still have a bug in one of your method - the Pop() method.
Correct. Just had to choose between four sample codes provided or None of the above, because it is a multiple choice question. Are the others correct except Pop?
Implement Pop
int Stack::Pop()
if (IsEmpty())
throw StackEmpty();
return ptr[top];
int Stack::Pop()
You have no idea how much seeing that upsets me.
Anyway, anon wasn't teasing you. You have a big bug and it isn't localized to any one function.
Put the code into a testbed and see what happens.
Read the other posts. This is not code I am testing. Those 1-7 were questions are dealing with the header file up top.
why does seeing that upset you?
anyways for the int pop..if the first if statement part executes you need to return something..that is one bug
also looks like you don't clean up after yourself..so I assume that option is incorrect
You ended that sentence with a preposition...Bastard!
why does seeing that upset you?
Let me just say that in real life C++ stack implementations such a signature causes a problem.
Read the other posts. This is not code I am testing.
I'm well aware. If you threw the code at a testbed you would know that those options are the wrong ones and have an idea about what the right ones would look like.
@phantomtap: It is unclear whether such a signature causes a problem here.
@OP: Which part is your answer? There appear to be bugs in both Push and Pop. These are your code?
Not my code; for each question there are 4 code options to choose from and also 'none of the above'. So for those, pop and push, I have chosen the incorrect one. When I get back I will try to
choose the correct one and post it. Most choices are very similar with just 1-2 lines different.
Here is my correction for Push and Pop.
Implement Push
void Stack::Push(int n)
if (IsFull())
throw StackFull();
Implement Pop
int Stack::Pop()
if (IsEmpty())
throw StackEmpty();
return ptr[top];
A. The monomer
B. The dimer
1. The stationary points
2. A thermal sample of dimer configurations
C. The trimer
D. Thermal samples of the tetramer, pentamer and hexamer
E. Stable isomers of the hexamer
Formula to work out tax payable
February 21st 2013, 05:08 AM
Formula to work out tax payable
Hi, I'm trying to work on a formula that can calculate the tax payable given that we have different tax rates for different thresholds. So for example, let's say we have the following tax table.
Taxable Income Marginal Tax Rate
$0 - $10,000 10%
$10,001 - $20,000 15%
So, if our income is $x, is there some formula I could use to calculate the tax payable. I know that this can be very easily calculated once we have $x. Although is there a formula as a function
of $x that could calculate the tax payable? I was initially thinking of using matrices, although I'm not too sure how that would work. Any suggestions would be appreciated.
February 21st 2013, 05:31 AM
Re: Formula to work out tax payable
If I am interpreting "marginal rate" correctly, it is very simple:
if I (income) is less than $10000, the tax is .1I or I/10.
If I is larger than or equal to $10000, the tax is $1000+ .15(I- 10000).
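The piecewise rule above can be sketched in a few lines (my addition; it models just the two brackets from the example table, since a real schedule has more rows, each rate applying only to the income inside its bracket):

```python
def tax_payable(income):
    """Marginal tax for the two example brackets: 10%, then 15% above 10,000."""
    if income < 10_000:
        return 0.10 * income
    # 10% on the first 10,000, then 15% on the remainder
    return 0.10 * 10_000 + 0.15 * (income - 10_000)

print(tax_payable(35_000))  # 1000 + 3750 = 4750.0
```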
That "$1000" is, of course, the tax on the first $10000 of income, and then you apply the 15% rate to the remainder: if you earn $35000, you pay 10% of the first $10000, or $1000, plus 15% of 35000-10000= 25000, so you pay .15(25000)= $3750 on the rest, for a total tax of $4750. | {"url":"http://mathhelpforum.com/business-math/213547-formula-work-out-tax-payable-print.html","timestamp":"2014-04-19T22:41:21Z","content_type":null,"content_length":"4913","record_id":"<urn:uuid:2b67c619-1012-4db0-b0c7-33c5ac9591e8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Ring-theoretic description of the norm-residue homomorphism
Recently, the norm-residue homomorphism has been the subject of intense discussions in the K-theoretic community following the proof of the Bloch-Kato conjecture by Voevodsky, Suslin and Rost (see
Rost’s lecture at this year’s Arbeitstagung.) The goal of this post is to explain the norm-residue homomorphism in a down to earth ring-theoretic language.
Recall that the description of $K_2$ of fields is given by the following theorem.
Matsumoto’s Theorem.
For any field $F$
$K_{2}(F)= \frac{F^{*}\otimes_{\mathbb{Z}} F^{*}}{ <a \otimes (1-a)~|~ a \neq 1>}$
or equivalently in the context of presentations of groups, $K_{2}F$ is the Abelian group with
Generators: $\{x,y\}\quad x,y \in F^{*}$
1. $\{x, 1-x\} =0\; \forall x \in F^{*},\ x \neq 1$ (The Steinberg relation)
2. $\{xy,z\} = \{x,z\} +\{y,z\}\; \forall x,y, z \in F^{*}$
3. $\{x, yz\} =\{x,y\} +\{x,z\}\; \forall x,y, z \in F^{*}.$
Definition. Let $F$ be a field and $A$ be an Abelian group. A Steinberg symbol on $F$ (with coefficients in $A$) is a $\mathbb{Z}$-bilinear map $s:F^{*}\times F^{*}\longrightarrow A$ such that
$s(x,1-x)=0\quad \forall x\in F^{*}, \quad x \neq 1.$
By Matsumoto’s theorem any Steinberg symbol $s:F^{*} \times F^{*} \longrightarrow A$ gives a unique group homomorphism $\tilde{s}:K_2(F) \longrightarrow A$ such that $s(x,y)=\tilde{s}\{x,y\}.$
Norm Residue Algebras.
Let $F$ be a field which contains a primitive $n$-th root of unity $\omega$ and let $\alpha,\beta$ be two given elements in $F^{*}$. The $n^{2}$ dimensional $F$ vector space
$A_{\omega}(\alpha,\beta):= \bigoplus\limits_{0\leq i,j <n} F x^{i}y^{j}$
with the following rules of multiplication:
$x^{n}=\alpha \quad y^{n}=\beta \quad yx=\omega xy$
is a central simple $F$-algebra and it is called the norm residue algebra.
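A concrete sanity check (my addition, not from the post): for real $\alpha,\beta>0$ the relations $x^{n}=\alpha$, $y^{n}=\beta$, $yx=\omega xy$ can be realized over $\mathbb{C}$ by "clock and shift" matrices, which also shows the algebra is nontrivial:

```python
import numpy as np

n, alpha, beta = 3, 2.0, 5.0
omega = np.exp(2j * np.pi / n)          # a primitive n-th root of unity

# x = alpha^{1/n} * clock matrix, y = beta^{1/n} * cyclic shift matrix
x = alpha ** (1 / n) * np.diag(omega ** np.arange(n))
y = beta ** (1 / n) * np.roll(np.eye(n), -1, axis=0)

assert np.allclose(np.linalg.matrix_power(x, n), alpha * np.eye(n))  # x^n = alpha
assert np.allclose(np.linalg.matrix_power(y, n), beta * np.eye(n))   # y^n = beta
assert np.allclose(y @ x, omega * (x @ y))                           # yx = omega * xy
print("relations hold for n =", n)
```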
Theorem 1. Let $A$ be a central simple algebra of degree $n$ and let
$f(x)=x^{n}+a_{n-1}x^{n-1}+\dots +a_{0}$
be the minimal polynomial of $x\in A$ over $F$. If $f$ splits into distinct linear factors over $F$, then $A\simeq M_n(F).$
Corollary. Let $\alpha,\beta \in F^{*}$. If either $\alpha$ or $\beta$ has an $n$-th root in $F$, then
$A_{\omega}(\alpha,\beta)\simeq M_n(F).$
As a special case of the above statement we have
$A_{\omega}(\alpha,1)\simeq M_n(F)$.
Theorem 2. $A_{\omega}(\alpha,1-\alpha)\simeq M_n(F)$
We define the non-commutative binomial coefficients
$b_{i}^{n}(c):=\frac{f_n(c)}{f_i(c)f_{n-i}(c)},$
where
$f_n(c)=\prod\limits_{j=1}^{n}(c^{j}-1).$
It can be easily checked that $b_{i}^{n}(c)\in \mathbb{Z}[c]$. Now suppose that $x,y$ are elements of an arbitrary ring $R$ such that $yx=cxy$ for some $c$ in the center of $R$. Induction on $n$
shows that
$(x+y)^{n}= \sum_{i=0}^{n}b_{i}^{n}(c)x^{i}y^{n-i}$.
In particular, for the generators $x$ and $y$ of $A_{\omega}(\alpha,1-\alpha)$, since $b_{0}^{n}(\omega)=b_{n}^{n}(\omega)=1$ and $b_{i}^{n}(\omega)=0$ for all $0<i<n$, we obtain that
$(x+y)^{n}=x^{n}+y^{n}=\alpha+(1-\alpha)=1.$
Now by the same reasoning as in the previous corollary we have
$A_{\omega}(\alpha,1-\alpha)\simeq M_n(F)$.
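Both facts used here, that $b_{i}^{n}(c)$ lies in $\mathbb{Z}[c]$ and that $b_{i}^{n}(\omega)=0$ for $0<i<n$, are easy to check symbolically; here is a small SymPy sketch (my addition; vanishing at a primitive $n$-th root of unity is tested via divisibility by the $n$-th cyclotomic polynomial):

```python
import sympy as sp

c = sp.symbols('c')

def f(n):
    # f_n(c) = prod_{j=1}^{n} (c^j - 1); the empty product is 1
    out = sp.Integer(1)
    for j in range(1, n + 1):
        out *= c**j - 1
    return out

def b(n, i):
    # the non-commutative binomial coefficient b_i^n(c)
    return sp.cancel(f(n) / (f(i) * f(n - i)))

n = 5
phi = sp.cyclotomic_poly(n, c)  # minimal polynomial of a primitive n-th root of unity
for i in range(n + 1):
    q = b(n, i)
    assert q.is_polynomial(c)            # b_i^n(c) is a polynomial in c
    divisible = sp.rem(q, phi, c) == 0   # iff b_i^n vanishes at primitive n-th roots
    print(i, sp.expand(q), divisible)    # divisible is True exactly for 0 < i < n
```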
Theorem 3. Let $\alpha, \beta, \gamma$ be in $F^{*}$. Then
$A_{\omega}(\alpha,\beta)\otimes_F A_{\omega}(\alpha,\gamma)\simeq A_{\omega}(\alpha,\beta \gamma)\otimes_F A_{\omega}(1,\gamma).$
Let $x_1, y_1$ be the generators for $A_{\omega}(\alpha,\beta)$ and $x_2, y_2$ be the generators for $A_{\omega}(\alpha,\gamma)$. Define
$x_3=x_1\otimes 1 \quad y_3=y_1\otimes y_2 \quad x_4=x_{1}^{-1}\otimes x_2 \quad y_4=1\otimes y_2.$
Let $A'$ be the algebra generated by $x_3, y_3$ and $A''$ be the algebra generated by $x_4, y_4$. Now $x_{3}^{n}=\alpha\otimes 1$, $y_{3}^{n}=\beta\gamma\otimes 1$ and
$y_3x_3=y_1x_1\otimes y_2=\omega (x_1y_1\otimes y_2)=\omega x_3y_3.$
So $x_3$ and $y_3$ satisfy the relations for $A_{\omega}(\alpha,\beta \gamma)$, thus $A'\simeq A_{\omega}(\alpha,\beta \gamma)$. Similarly $A''\simeq A_{\omega}(1,\gamma)$. Notice that $x_3$ and
$y_3$ commute with $x_4$ and $y_4$, hence we have a natural $F$-algebra homomorphism
$\varphi: A'\otimes_F A''\longrightarrow A_{\omega}(\alpha,\beta)\otimes_F A_{\omega}(\alpha,\gamma)$.
Since $A'\otimes_F A''$ is simple, $\varphi$ is injective. Since the dimensions of the two sides are both equal to $n^{4}$, it must be an isomorphism.
Remark. We have already seen that $A_{\omega}(\alpha,1)\simeq M_n(F)$. So by the above theorem we have
$[A_{\omega}(\alpha,\beta \gamma)]=[ A_{\omega}(\alpha,\beta)] [A_{\omega}(\alpha,\gamma)],$ similarly
$[A_{\omega}(\alpha\beta, \gamma)]=[ A_{\omega}(\alpha,\gamma)][A_{\omega}(\beta,\gamma)].$
Here $[A]$ denotes the equivalence class of $A$ in the Brauer Group.
Define
$s : F^{*}\times F^{*} \longrightarrow Br(F), \quad s(\alpha, \beta):= [A_{\omega}(\alpha,\beta)].$
The above remark says that $s$ is $\mathbb{Z}$-bilinear. By Theorem 2 we observe that $s$ is a Steinberg symbol, hence we get a homomorphism
$\tilde{s}: K_2(F)\longrightarrow Br (F) \quad \tilde{s}\{\alpha,\beta\}= [A_{\omega}(\alpha,\beta)].$
From the Corollary it follows that
$[A_{\omega}(\alpha,\beta)]^{n}=[A_{\omega}(\alpha^{n},\beta)]=1,$
which shows that the image of $\tilde{s}$ is contained in
${_n}Br(F):=\{[A]\in Br(F)~|~[A]^{n}=1\}.$
Since $n\{\alpha,\beta\}=\{\alpha^{n},\beta\}$, the homomorphism $\tilde{s}$ annihilates $nK_2(F)$, therefore it induces a homomorphism
$R_{n,F}: K_2(F)/n K_2(F) \longrightarrow {_n}Br(F)$
which is called the norm residue homomorphism.
The following surprising theorem was proved by A. Merkurjev and
A. Suslin in 1982.
The Merkurjev-Suslin Theorem. Let $F$ be a field which contains an $n$-th primitive root of unity. Then
$R_{n,F}: K_2(F)/nK_2(F)\longrightarrow {_n}Br(F)$
is an isomorphism.
Norm Residue Homomorphism via Galois Cohomology.
The norm residue homomorphism can be described in terms of Galois cohomology. As a preliminary we need to recall the notion of the cup product in the cohomology of groups.
Let $F$ be a field and let $n$ be an integer coprime to char $(F).$ Set
$\mu_{n}=\{ x\in F_{sp} \mid x^{n}=1 \},$ where $F_{sp}$ denotes a separable closure of $F$.
The condition $(n,char F)=1$ implies that $\mu_{n}$ has exactly $n$ elements. Assume that $F$ has an $n$-th primitive root of unity, i.e. $\mu_{n} \subset F$. Set $G:= Gal(F_{sp}/F)$ and consider the
following exact sequence of $G$-modules:
$1 \rightarrow \mu_{n} \rightarrow F_{sp}^{*} \stackrel{\text{n}}{\rightarrow} F_{sp}^{*} \rightarrow 1$
The associated exact cohomology sequence is
$1 \rightarrow H^{0} (G ,\mu_{n}) \rightarrow H^{0} ( G ,F_{sp}^{*}) \stackrel{\text{n}}{\rightarrow} H^{0}(G, F_{sp}^{*}) \rightarrow$
$H^{1} (G , \mu_{n}) \rightarrow H^{1} (G ,F_{sp}^{*}) \stackrel{\text{ n}}{\rightarrow} H^{1}(G, F_{sp}^{*}) \rightarrow$
$H^{2} ( G ,\mu_{n}) \rightarrow H^{2} ( G ,F_{sp}^{*}) \stackrel{\text{n}}{\rightarrow} H^{2}(G , F_{sp}^{*}).$
As $\mu_{n} \subset F$, the action of $G$ on $\mu_{n}$ is trivial, so $H^{0} (G ,\mu_{n})=\mu_{n}$. By Hilbert's Satz $90$ we have $H^{1} ( G ,F_{sp}^{*})=1$, so the above sequence breaks up into the
following exact sequences:
$1 \rightarrow \mu_{n} \rightarrow F^{*} \stackrel{\text{n}} {\rightarrow} F^{*} \stackrel{\delta}{\rightarrow} H^{1}(G,\mu_{n}) \rightarrow 1$
$1 \rightarrow H^{2}(G , \mu_{n}) \stackrel{\lambda} {\rightarrow} H^{2}(G , F_{sp}^{*}) \stackrel{\text{n}}{\rightarrow} H^{2}(G , F_{sp}^{*}).$
Hence the map $\delta$ induces an isomorphism $H^{1}(G , \mu_{n})\simeq F^{*}/F^{*^{n}}$, and the map $\lambda$ induces an isomorphism between
$H^{2}(G , \mu_{n})$ and $\ker( H^{2}(G , F_{sp}^{*}) \stackrel{\text{ n}}{\rightarrow} H^{2}(G ,F_{sp}^{*})).$
By using that $Br(F) \simeq H^{2}(G , F_{sp}^{*})$ we obtain that $H^{2}(G , \mu_{n})\simeq {_n}Br(F).$ Since $G$ acts trivially on $\mu_{n}$ it follows that $\mu_{n}^{\otimes 2}$ is isomorphic to $\mu_{n}$ as a $G$-module, hence
$H^{2}(G , \mu_{n}^{\otimes 2} ) \simeq H^{2}(G , \mu_{n}) \simeq {_n}Br(F).$
The composition of the following maps
$F^{*} \times F^{*} \rightarrow F^{*}/F^{*^{n}}\times F^{*}/F^{*^{n}}\simeq H^{1}(G,\mu_{n})\times H^{1}(G,\mu_{n}) \stackrel{\cup}{\rightarrow} H^{2}(G, \mu_{n}^{\otimes 2})\simeq {_n}Br(F)$
gives a $\mathbb{Z}$-bilinear map which can be proved to be a Steinberg symbol, and the induced homomorphism $K_2(F)\longrightarrow {_n}Br(F)$ is the norm residue homomorphism.
1. Kersten, Ina Brauergruppen von Körpern. (German) [Brauer groups of fields] Aspects of Mathematics, D6.1.
2. Milnor, John Introduction to algebraic $K$-theory. Annals of Mathematics Studies, No. 72.
3. Rost, Markus Arbeitstagung 2007 – Norm residue homomorphism.
4. Tate, John Relations between $K\sb{2}$ and Galois cohomology. Invent. Math. 36 (1976), 257–274.
9 comments
This reminds me of another theorem of Merkurjev-Suslin: if F is a perfect field of cohomological dimension 2, then for every central simple algebra A over F the reduced norm map Nm:A^* –> F^* is
surjective. With no hypotheses on F, it is not hard to see the cokernel depends only on [A] in Br(F) and factors through F^*/(F^*)^n, where n = order([A]). Thus, given [A] in Br(F)[n]=H^2(F,\mu_n),
the cokernel is a quotient of H^1(F,\mu_n). It is reasonable to guess the cokernel is the image of the homomorphism H^1(F,\mu_n) –> H^3(F,\mu_n^2) determined by cup product with [A]. If so, that
would explain this MS theorem, since H^3(F,\mu_n^2) vanishes if cd(F)=2. Do you know if this is correct? If so, is the proof related to the MS theorem in your post?
I will take a look. I ran across a comment in Serre’s “Galois cohomology” that MS construct a map from the cokernel of the reduced norm map into H^3(F,\mu_n^2) <>. That hypothesis makes me believe
there is more to this than meets the eye.
The hypothesis that was supposed to appear in “” somehow didn’t compile: MS construct the map if n is square-free.
PARI is already integrated with Python:
Sorry, my comment above was not meant to be on this post.
Dear Madams/Miss/Mrs
Can you help me to proof (Z/abZ) and (Z/aZ)(Z/bZ) is homomorphisme (gcd(a,b)=1). | {"url":"http://vivatsgasse7.wordpress.com/2007/07/12/ring-theoretic-description-of-the-norm-residue-homomorphism/","timestamp":"2014-04-16T19:40:06Z","content_type":null,"content_length":"105470","record_id":"<urn:uuid:248b5570-efba-4f5f-924c-1857fc791a6d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
Untitled Document
SYLLABUS MA 121-651 SUMMER 2013
MA 121 Elements of Calculus
Instructor: Dr. Thomas Lada
Text: Calculus (9-th Ed.) by Bittinger, M. L.
You will view the lectures here:
The lectures may also be found at math videos
Check your NCSU Unity e-mail regularly for course announcements.
We will cover the material in the text through Chapter 6. There will be 3 one-hour tests that you will take at the DE Proctor Center on the NCSU campus. In addition, there will be a final exam. If
you are far from campus, you may arrange for a remote proctor; you may find information on the NCSU DE web site. If you need a remote proctor, you should make arrangements early in the semester.
We will use webassign for on line homework. You may access webassign by using the link webassign
For questions regarding webassign problems contact my TA Amanda
Grading Policy: Tests 65% of grade, webassign 10%
Final Exam 25% of grade
The test dates are
Test 1 June 13, 14 Chapters R and 1, lectures 1 - 13
Test 2 July 8, 9 Chapters 2 and 3, lectures 14 - 25
Test 3 July 24, 25 Chapters 4 and 5, lectures 26 - 36
Final Exam August 5, 6
You may take your tests and exam on either of the scheduled days.
Here is the link for the homework problems in the 9th edition HOMEWORK-9TH
Here is a list of the TOPICS in each lecture.
Correspondence regarding the course material should be directed to
Dr. Thomas Lada Phone: (919) 515-8773
Mathematics Department
Box 8205
N. C. State University
Raleigh, N.C. 27695
email: lada@math.ncsu.edu | {"url":"http://www4.ncsu.edu/~lada/MA121-651summer%202013.html","timestamp":"2014-04-17T12:30:27Z","content_type":null,"content_length":"3128","record_id":"<urn:uuid:086e2692-3a25-4ed4-8454-085a96cab88b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
Carbohydrate are compounds of carbon, hydrogen and oxygen in which the ratio of hydrogen atoms to oxygen atoms is... - Homework Help - eNotes.com
Carbohydrate are compounds of carbon, hydrogen and oxygen in which the ratio of hydrogen atoms to oxygen atoms is 2:1. A certain carbohydrate is known to be 40% carbon. Determine the emperical
formula for this carbohydrate.
1. Does the ratio mean that the mass of hydrogen is twice as big as the oxygen?
2. I searched online for this question and the answer considers the hydrogen and oxygen as one thing, which is H2O for the mass, and uses it to calculate the "n". But can we separate hydrogen and
oxygen when we calculate the mass?
1. The ratio means that the number of atoms of hydrogen to oxygen is 2:1; the mass ratio of these two atoms may be obtained by multiplying the atom ratio by their corresponding atomic masses.
Here it will be 2×1.008 : 1×15.9994 = 1:8 (approximately).
2. Carbohydrates are known as ‘hydrates’ of carbon. This implies that, barring a few exceptions, they have the general formula Cx(H2O)y. So, ratio of the number of atoms of hydrogen to oxygen is
always 2:1. That is the reason; hydrogen and oxygen are bracketed as one during calculations. However, this ratio can be worked out separately and independently if sufficient data are available.
Sufficient means we need three separate equations to work out three separate (and independent) variables then.
Returning to the original numerical problem, here the data are sufficient for framing two equations only. So, one has to treat hydrogen and oxygen as dependent and hence as one unit (in the form of H2O). The
carbohydrate contains 40% carbon. So rest of it is the ‘hydrate’ part, i.e. H2O. Assuming these to be the mass ratio, atom/unit ratio is 40/12:60/18 = 3.3333:3.3333 = 1:1. So, the empirical formula
of the carbohydrate is C1(H2O)1, or CH2O.
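The mole-ratio arithmetic above can be written out in a few lines (my addition; the molar masses are approximate values, and the 60% "hydrate" part is counted in whole H2O units as in the answer):

```python
C_MASS = 12.011    # g/mol, carbon (approximate)
H2O_MASS = 18.015  # g/mol, one H2O unit (approximate)

carbon_pct = 40.0               # given: 40% carbon by mass
water_pct = 100.0 - carbon_pct  # the rest is the "hydrate" part

moles_c = carbon_pct / C_MASS       # per 100 g of carbohydrate
moles_h2o = water_pct / H2O_MASS

ratio = moles_c / moles_h2o
print(f"C : (H2O) = {moles_c:.3f} : {moles_h2o:.3f}, ratio = {ratio:.2f}")
# the ratio is ~1, so the empirical formula is C(H2O), i.e. CH2O
```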
Join eNotes | {"url":"http://www.enotes.com/homework-help/carbohydrate-compounds-carbon-hydrogen-oxygen-429448","timestamp":"2014-04-19T09:27:31Z","content_type":null,"content_length":"27139","record_id":"<urn:uuid:f2ce82a0-1410-45bf-b7a1-eab05ea13d4a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
Write homogeneous spherical space forms as coset spaces
Let $S=G/K$ be a sphere written as coset space. I know there are just few possibilities for $G$, and $K$ due to the classification of compact connected groups that can be transitive on a sphere.
If $F \subset G$ is a finite fixed point free subgroup, can we write $S/F = G / (F \times K)$??
This should be true for real projective spaces, but for example how about lens spaces $S^{2n-1}/\mathbb Z_q$?
1 No, generally you can't do this. It's easy enough to work out in the case of lens spaces using linear algebra -- elements of $K$ do not commute with most of the elements of $F$. – Ryan Budney Nov
26 '11 at 21:54
And in the general case what if $F$ and $K$ commute? For example for finite subgroups of $SU(2)$ – David Petrecca Nov 26 '11 at 23:23
Sorry for adding a comment as answer. Anyway it is enough to have $F$ (finite) normal (hence central) in $G$, or just central, to well-define a $G$-action on $M/F$, but what about other sufficient
cases? For instance, if we see $S^{2n-1} = U(n)/U(n-1)$ and $F$ is the cyclic group acting by multiplying by $e^{2 \pi i/q}$ on each complex coordinates, of course $F$ is central in $G$ and we can
say that $S^{2n-1}/F$ is homogeneous under $U(n)$. How about a $SO(2n)$-action? – David Petrecca Nov 28 '11 at 21:37
1 Answer
so what are the spherical space forms homogeneous under? Starting from the various ways to write the spheres themselves, $S^{n-1} = SO(n)/SO(n-1)$, $S^{2n-1} = SU(n)/SU(n-1)$ and $S^{4n-1} = Sp(n)/Sp(n-1)$?
Is it enough to have $F$ commute with $K$? (notation as in the first post)
Please don't leave further questions as "answers" - instead, you can add things to your earlier question, as edits or updates – Yemon Choi Nov 27 '11 at 22:17
Also, you should register your account, or find a way to use the same cookie, so you don't create new accounts with the same name. – S. Carnahan♦ Nov 28 '11 at 2:39
Not the answer you're looking for? Browse other questions tagged homogeneous-spaces or ask your own question. | {"url":"http://mathoverflow.net/questions/81959/write-homogeneous-spherical-space-forms-as-coset-spaces","timestamp":"2014-04-21T07:50:52Z","content_type":null,"content_length":"55143","record_id":"<urn:uuid:f999d782-675a-423c-b96e-9731c4be67cc>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00636-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tangent vector space question
I see in my notes (I don't carry The Encyclopedia Britannica around with me) that George Mostow, in his artical on analytic topology, says "The set of all tangent vectors at m of a k-dimensional
manifold constitutes a linear or vector space of which k is the dimension (k real)." Well ok, maybe it is more a paraphrase than a quote.
Shouldn't the dimension of the tangent vector space be k-1? I am imagining the tangent vector space at a point on a three-sphere as a 2-D disk originating at the point, rather as if I had tacked a CD
onto my globe of the Earth.
Then on the real Earth, I am at a point, and my tangent space would be the space between me and the horizon? Say I am at sea far from any coast. Should I rather think of the tangent space as the 2d
surface of the ocean, or as the 3d space in which the ocean waves occur? | {"url":"http://www.physicsforums.com/showthread.php?p=1085309","timestamp":"2014-04-21T04:49:12Z","content_type":null,"content_length":"40306","record_id":"<urn:uuid:bf605ac9-2ac8-41a8-8830-7d7d7be4ed7b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Course calendar.
LEC TOPICS KEY DATES
1 Review of differential forms, Lie derivative, and de Rham cohomology.
2 Cup-product and Poincaré duality in de Rham cohomology; symplectic vector spaces and linear algebra; symplectic manifolds, first examples; symplectomorphisms
3 Symplectic form on the cotangent bundle; symplectic and Lagrangian submanifolds; conormal bundles; graphs of symplectomorphisms as Lagrangian submanifolds in products; isotopies and
vector fields; Hamiltonian vector fields; classical mechanics
4 Symplectic vector fields, flux; isotopy and deformation equivalence; Moser's theorem; Darboux's theorem
5 Tubular neighborhoods; local version of Moser's theorem; Weinstein's neighborhood theorem
6 Tangent space to the group of symplectomorphisms; fixed points of symplectomorphisms; Arnold's conjecture; Morse theory: Gradient trajectories, Morse complex, homology; action
functional on the loop space, and the basic idea of Floer homology
7 More Floer homology; almost-complex structures; compatibility with a symplectic structure; polar decomposition; compatible triples Homework 1
8 Almost-complex structures: Existence and contractibility; almost-complex submanifolds vs. symplectic submanifolds; Sp(2n), O(2n), GL(n,C), and U(n); connections: definition,
connection 1-form
9 Horizontal distributions; metric connections; curvature of a connection: Intrinsic definition; expression in terms of connection 1-form
10 Twisted de Rham operator; Levi-Civita connection on (TM,g); Chern classes of complex vector bundles (via curvature and Chern-Weil); Euler class and top Chern class
11 Naturality properties of Chern classes and topological definition; equivalence between the two definitions; classification of complex line bundles
12 Chern classes of the tangent bundle; cohomological criterion for existence of almost-complex structures on a 4-manifold, examples; splitting of tangent and cotangent bundles of (M,J), Homework 2
types; complex manifolds, Dolbeault cohomology due
13 Nijenhuis tensor; integrability; square of the dbar operator; Newlander-Nirenberg theorem; Kähler manifolds; complex projective space
14 Kähler forms; strictly plurisubharmonic functions; Kähler potentials; examples; Fubini-Study Kähler form; complex projective manifolds; Hodge decomposition theorem
15 Hodge * operator on a Riemannian manifold; d* operator; Laplacian, harmonic forms; Hodge decomposition theorem; differential operators; symbol, ellipticity; existence of parametrix
16 Elliptic regularity, Green's operator; Hodge * operator and complex Hodge theory on a Kähler manifold; relation between real and complex Laplacians
17 Hodge diamond; hard Lefschetz theorem; holomorphic vector bundles; canonical connection and curvature
18 Holomorphic sections and projective embeddings; ampleness; Donaldson's proof of the Kodaira embedding theorem: local model; concentrated approximately holomorphic sections Homework 3
19 Donaldson's proof of the Kodaira embedding theorem: Estimates; concentrated sections; approximation lemma
20 Proof of the approximation lemma; examples of compact 4-manifolds without almost-complex structures, without symplectic structures, without complex structures; Kodaira-Thurston
21 Symplectic fibrations; Thurston's construction of symplectic forms; symplectic Lefschetz fibrations, Gompf and Donaldson theorems
22 Symplectic sum along codimension 2 symplectic submanifolds; Gompf's construction of symplectic 4-manifolds with arbitrary pi_1
23 Symplectic branched covers of symplectic 4-manifolds.
24 Homeomorphism classification of simply connected 4-manifolds; intersection pairings; spin^c structures; spin^c connections; Dirac operator
25 Seiberg-Witten equations; gauge group; moduli space; linearized equations; compactness of moduli space
26 Seiberg-Witten invariant; properties; vanishing for manifolds of positive scalar curvature; vanishing for connected sums; Taubes non-vanishing for symplectic manifolds; examples of
non-symplectic 4-manifolds, of non-diffeomorphic homeomorphic manifolds | {"url":"http://ocw.mit.edu/courses/mathematics/18-966-geometry-of-manifolds-spring-2007/syllabus/","timestamp":"2014-04-21T00:14:43Z","content_type":null,"content_length":"35394","record_id":"<urn:uuid:08c7c8bf-bf0a-4075-806f-5b2a184e98c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
CM for primary ideal
Let $R$ be a regular local ring, $I$ a prime ideal and $J$ an $I$-primary ideal in $R$. Is it true that if $R/I$ is CM then also $R/J$ is CM? This question is in some way
the inverse of this one.
A useful way to think about this issue is to consider $J=I^{(n)}$, the $n$-symbolic power of $I$, which by definition is the $I$-primary component of $I^n$.
When $R$ is a polynomial rings over $\mathbb C$, this is the ideal consisting of functions vanishing to order at least $n$ on $X = \text{Spec}(R/I)$.
It is then well-known that the depth of $R/I^{(n)}$ can go down. For example, take $I$ generated by the $2\times2$ minors of the generic $2\times 3$ matrix inside the polynomial rings
of the $6$ variables, localized at the maximal ideal of those variables. Then $R/I$ is Cohen-Macaulay of dimension $4$, but $R/I^{(n)}$ would have depth $3$ eventually. For more
general statements about ideals of maximal minors, see for example Section 3 of:
Powers of Ideals Generated by Weak d-Sequences, C. Huneke, J, Algebra, 68 (1981), 471-509.
EDIT: the example above looks specific, but such examples should abound. I expect most Cohen-Macaulay ideals which are not complete intersections to give an example (it is known that $R/I^n$, the ordinary powers, are CM for all $n>0$ iff $I$ is a complete intersection). The $2\times 2$ minors example gives a generic situation of a non-complete intersection but CM ideal.
A philosophical comment: it is unlikely that Cohen-Macaulayness will be preserved by basic operations on ideals. So if $R/I, R/J$ are CM, we do not expect $R/\sqrt{I}, R/I^n, R/I^
{(n)}, R/P$ ($P$ an associated primes), or $R/(I+J), R/IJ$ etc. to be CM.
The reason is that to preserve depth one needs to control the associated primes, and these operations only allow you to control the support. However, finding an explicit example is
usually not so obvious.
Counterexample: $k$ a field, $R=k[[X,Y]]$, $I=(Y)$, $J=(XY, Y^2)$.
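To see why this works (my addition, not part of the original answer), one can compute the depth directly:

```latex
% R = k[[X,Y]], J = (XY, Y^2): the class of Y in R/J is nonzero,
% yet X\cdot Y and Y\cdot Y both lie in J, so the maximal ideal (X,Y) kills it.
% Hence the maximal ideal is an associated prime of R/J, and
\operatorname{depth} R/J = 0, \qquad
\dim R/J = \dim R/\sqrt{J} = \dim R/(Y) = 1,
% so R/J is not Cohen-Macaulay, while R/I = k[[X]] is regular, hence CM.
```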
I think it's true. R/I being CM means depth(R/I)=dim(R/I). But as R/I is a factor of R/J by nilradical it follows that dim(R/J)=dim(R/I). As R/I is a factorring of R/J we also have depth(R/I)$\
leq$depth(R/J). But as dim(R/I)=depth(R/I)$\leq$depth(R/J)$\leq$dim(R/J)=dim(R/I) we actually have equality dim(R/J)=depth(R/J), an so R/J is CM. | {"url":"http://mathoverflow.net/questions/59398/cm-for-primary-ideal?sort=votes","timestamp":"2014-04-21T15:25:41Z","content_type":null,"content_length":"60262","record_id":"<urn:uuid:fee08791-f392-4f58-a2e0-6714a8d939f7>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mean - Concept
A lot of times your Math class might start out with a mini unit on statistics. And even though you might not use these skills throughout your Math class this year, these statistics problems often show up on standardized tests: if you have a high school exit exam it'll probably be there, and if you're looking ahead to your future and thinking about the SAT and ACT, these guys show up all over those too.
So it's important to start building your statistics tool bag now.
The first one is something you probably already know, and that's the mean, or average. It's the same idea, but we have two different names for it; in statistics you're going to see the word mean. The way you calculate a mean is to add up all your items (remember: mean, sum, add them up), and after you've done that you divide by the number of items.
This is how your school is going to calculate your GPA. GPA on your report card stands for grade point average; it comes from taking all of your letter grades, assigning numbers to them, and dividing by how many classes you're taking. So this really is applicable in the real world, and it's not too hard once you remember this formula. Again, the formula for the mean or average, which you probably already know, is to sum or add together all of your items and then divide by how many items there are in your set.
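In code, the add-then-divide formula looks like this (my addition; the grade points are made up):

```python
grade_points = [4.0, 3.0, 3.0, 2.0]  # hypothetical letter grades as numbers

# mean = (sum of the items) / (number of items)
gpa = sum(grade_points) / len(grade_points)
print(gpa)  # 12.0 / 4 = 3.0
```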
mean measure of center average | {"url":"https://www.brightstorm.com/math/algebra/introduction-to-statistics/mean/","timestamp":"2014-04-16T07:35:07Z","content_type":null,"content_length":"56873","record_id":"<urn:uuid:eafdec44-8c53-4e28-9916-a667c63ce5e2>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear and nonlinear field transformations and their application in the variational approach to nonperturbative quantum field theory
Abstract: The present work concerns nonperturbative variational studies of the effective potential beyond the Gaussian effective potential (GEP) approximation. In the Hamiltonian formalism, we study
the method of non-linear canonical transformations (NLCT) which allows one to perform variational calculations with non-Gaussian trial states, constructed by nonlinear unitary transformations acting
on Gaussian states. We consider in detail a particular transformation that leads to qualitative as well as quantitative improvement over the Gaussian approximation. In particular we obtain a
non-trivial correction to the Gaussian mass renormalization. For a general NLCT state, we present formulas for the expectation value of the $O(N)$-symmetric $\lambda(\phi\sp2)\sp2$ Hamiltonian, and
also for the one-particle NLCT state energy. We also report on the development of a manifestly covariant formulation, based on the Euclidian path integral, to construct lower-bound approximations to
$\Gamma\sb{1PI}$, the generating functional of one-particle-irreducible Green's functions. In the Gaussian approximation the formalism leads to the Gaussian effective action (GEA), as a natural
variational bound to $\Gamma\sb{1PI}$. We obtain, non-trivially, the proper vertex functions at non-zero momenta, and non-zero values of the classical field. In general, the formalism allows
improvement beyond the Gaussian approach, by applying nonlinear measure-preserving field transformations to the path integral. We apply this method to the $O(N)$-symmetric $\lambda(\phi\sp2)\sp2$
theory. In 4 dimensions, we consider two applications of the GEA. First, we consider the N = 1 $\lambda\phi\sp4$ theory, whose renormalized GEA seems to suggest that the theory undergoes SSB, but has
noninteracting particles in its SSB phase. Second, we study the Higgs mechanism in scalar quantum electrodynamics (i.e., $O(2)$ $\lambda\phi\sp4$ coupled to a U(1) gauge field) in a general covariant
gauge. In our variational scheme we can optimize the gauge parameter, leading to the Landau gauge as the optimal gauge. We derive optimization equations for the GEA and obtain the renormalized
effective potential explicitly.
4D Toto Magnum winning probability
Who here can enlighten me on the winning probability of 4D toto, magnum or 1+3D?
I understand if I buy 1 number only and Magnum only draws a single number. My probability is 1 out of 10,000.
But things get complicated when I buy 7 numbers and Magnum draws 23 numbers. So, how the calculation then? What is my probability of winning?
Any maths experts here?
not exactly know the calculation but normally i will only think of striking the first prize only, so still 1 over 10000 loh
But I just want to know the probability of winning a prize regardless it is a 1st prize or conso prize.
QUOTE(MPIK @ Mar 1 2010, 11:10 AM)
Who here can enlighten me on the winning probability of 4D toto, magnum or 1+3D?
I understand if I buy 1 number only and Magnum only draws a single number. My probability is 1 out of 10,000.
But things get complicated when I buy 7 numbers and Magnum draws 23 numbers. So, how the calculation then? What is my probability of winning?
Any maths experts here?
(23/10000) * 7 = ???
I think the toto 6/49, 6/52, 6/55 is more complicated. . .
you have 23 winning numbers, so your chances of winning something is 23/10000 or 0.23%.
1st prize=2.5k, 2nd prize=1k, 3rd=500, special=200, consolation=60.
if you buy all the digits, you would have "won" rm6600 from a total output of 10k (or a return of 66%)
if you buy all digit big and small, you would have "won" 14100 from a total output of 20k (or a return of 70.5%)
either way, the company wins
it is also interesting that if you buy mbox, you get slightly more winnings/rm due to rounding
This post has been edited by lin00b: Mar 1 2010, 11:34 AM
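The 66% figure quoted above is easy to verify from the per-prize payouts; a minimal sketch in Python (using the prize values stated in the post, which may differ from current operator payouts):

```python
# Sanity check of the payout figures quoted above for an RM1 "big" bet.
PRIZES = {
    "first": (1, 2500),        # (number of winning numbers, payout per RM1)
    "second": (1, 1000),
    "third": (1, 500),
    "special": (10, 200),
    "consolation": (10, 60),
}

total_payout = sum(count * amount for count, amount in PRIZES.values())
cost_all_numbers = 10_000  # RM1 on every number from 0000 to 9999

print(total_payout)                      # 6600
print(total_payout / cost_all_numbers)   # 0.66, i.e. a 66% return
```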
QUOTE(lin00b @ Mar 1 2010, 11:32 AM)
you have 23 winning numbers, so your chances of winning something is 23/10000 or 0.23%.
Your calculation is correct only if I only buy a single number and Magnum draws 23 numbers
or if I buy 23 numbers and Magnum draws only 1 number. Either way, the probability is 23/10000.
how about if I buy 13 numbers and magnum draws 23 numbers? What's the probability then?
How about you pull out a calculator and use it?
QUOTE(MPIK @ Mar 1 2010, 11:38 AM)
Your calculation is correct only if I only buy a single number and Magnum draws 23 numbers
or if I buy 23 numbers and Magnum draws only 1 number. Either way, the probability is 23/10000.
how about if I buy 13 numbers and magnum draws 23 numbers? What's the probability then?
Aiya, you have 13 No, means each draw you have 13/10000 winning chance.
Draw 23 times, then 13/10000 x 23 lor.
This doesn't seem to be PHD material. maths Probability are taught in secondary school for ages.
Why not Mods move it to Education section?
Dont make them richer more
if wanna get complicated....then sometimes a same number open twice at the same draw.....then become 22/10000.
23/10000 only true if 2 same number cannot open at same draw...
wat if open 3 times.....then 21/10000... but harder than jackpot haha...
QUOTE(cherroy @ Mar 1 2010, 05:26 PM)
Aiya, you have 13 No, means each draw you have 13/10000 winning chance.
Draw 23 times, then 13/10000 x 23 lor.
U mean my chance is 299 out of 10000? (13x23)
So much?
How about if I buy 500 numbers? = 500x23/10000 = 11500/10000? Confirm win??
QUOTE(MPIK @ Mar 1 2010, 05:51 PM)
U mean my chance is 299 out of 10000? (13x23)
So much?
How about if I buy 500 numbers? = 500x23/10000 = 11500/10000? Confirm win??
If you have the budget to buy all the numbers then you'll sure win..for at least once.
QUOTE(alanyuppie @ Mar 1 2010, 05:28 PM)
This doesn't seem to be PHD material. maths Probability are taught in secondary school for ages.
Why not Mods move it to Education section?
This thread should be in PHD because it requires a high level of statistics to compute magnum probability...
QUOTE(MPIK @ Mar 1 2010, 05:51 PM)
U mean my chance is 299 out of 10000? (13x23)
So much?
How about if I buy 500 numbers? = 500x23/10000 = 11500/10000? Confirm win??
provided the above calculation is true (which i doubt, have to ponder a lil more, a little rusty in probability)
given that for rm1 bet, the total payout for 23 numbers is rm6600. that's an average of 286/winning number. if buying 500 numbers guarantee you a winning number(which i dont think it will), on
average you would have "won" 286 after having to pay 500
i think that if you buy x number your chance of winning would be a binomial distribution with n=x, p=0.0023, r=1-0.0023
explanation: for each number you have an independent chance of hitting the winning number with a probability of 0.0023 and a chance of 1-0.0023 of not hitting it. so you have to account for hitting
with 1 number, 2 numbers, 3 numbers and so forth
This post has been edited by lin00b: Mar 1 2010, 09:17 PM
QUOTE(lin00b @ Mar 1 2010, 09:14 PM)
provided the above calculation is true (which i doubt, have to ponder a lil more, a little rusty in probability)
given that for rm1 bet, the total payout for 23 numbers is rm6600. that's an average of 286/winning number. if buying 500 numbers guarantee you a winning number(which i dont think it will), on
average you would have "won" 286 after having to pay 500
i think that if you buy x number your chance of winning would be a binomial distribution with n=x, p=0.0023, r=1-0.0023
explanation: for each number you have an independent chance of hitting the winning number with a probability of 0.0023 and a chance of 1-0.0023 of not hitting it. so you have to account for hitting
with 1 number, 2 numbers, 3 numbers and so forth
But u haven answered MPIK's question:-
What is the probability or odds of winning if he buys 500 numbers and Magnum draws 23 numbers?
QUOTE(Jurlique @ Mar 2 2010, 12:07 AM)
But u haven answered MPIK's question:-
What is the probability or odds of winning if he buys 500 numbers and Magnum draws 23 numbers?
i did. i just did not spoon feed you the number.
binomial distribution with n=x (in this case 500); p=0.0023; r=1-0.0023. input that into your calculator if you have one strong enough.
if you dont know how to solve a binomial distribution, perhaps google might be of help.
you might be able to solve using poisson approximation/normal approximation, but its been years since i touched it, and i cant be bothered to look it up.
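For anyone without a strong enough calculator, the "one minus the all-miss probability" complement of the binomial model above takes only a few lines (a sketch, treating each bought number as an independent 23/10,000 chance; the exact without-replacement figure is slightly higher):

```python
# P(at least one prize) when buying n distinct numbers, with 23 winning
# numbers out of 10,000 -- the complement of "all n numbers miss".
def p_win(n, winning=23, total=10_000):
    p = winning / total            # chance a single number is a winner
    return 1 - (1 - p) ** n

print(p_win(1))    # 0.0023
print(p_win(500))  # ~0.684 -- even 500 numbers is far from a guaranteed win
```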
QUOTE(lin00b @ Mar 2 2010, 01:37 AM)
i did. i just did not spoon feed you the number.
binomial distribution with n=x (in this case 500); p=0.0023; r=1-0.0023. input that into your calculator if you have one strong enough.
if you dont know how to solve a binomial distribution, perhaps google might be of help.
you might be able to solve using poisson approximation/normal approximation, but its been years since i touched it, and i cant be bothered to look it up.
Yeah, and I think its bout time to stop spoonfeeding him any information. This section is DEFINITELY not the section for basic mathematical application. A simple google search would bring him the
answers, but clearly he wants his homework done for him.
One can pick up any probability book and that would explain the workings of 4D Magnum, don't be so lazy.
dont think too much, the winning rate is 50:50
win or lose niah
Based on the binomial distribution, I found the answer for buying 20 numbers winning probability is equal to 3 out of 8433.
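As a cross-check, running the complement model from earlier in the thread on the 20-number case gives a noticeably different figure (sketch, same 23-winners-in-10,000 assumption):

```python
# 20 numbers bought, 23 winning numbers out of 10,000:
p_single = 23 / 10_000
p_at_least_one = 1 - (1 - p_single) ** 20

print(p_at_least_one)   # ~0.045, i.e. roughly 1 in 22
```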
the number is logically consistent (too lazy to check).
to those who understand probability, the odds that 4d offer is really much too biased to the banker.
compared to casino games such as roulette, you have a 1 in 37 chance of increasing your bet 36 times. common roulette has 37 numbers where if you get your number, you win 36x your bet. some casino
have 38 numbers and even then there are comments about it being too biased towards the banker.
QUOTE(lin00b @ Mar 2 2010, 11:27 PM)
the number is logically consistent (too lazy to check).
to those who understand probability, the odds that 4d offer is really much too biased to the banker.
compared to casino games such as roulette, you have a 1 in 37 chance of increasing your bet 36 times. common roulette has 37 numbers where if you get your number, you win 36x your bet. some casino
have 38 numbers and even then there are comments about it being too biased towards the banker.
All gamble games are favour to the dealer, banker or 4D operator. Otherwise, no one wanna be a casino owner.
For roulette with a single zero, there are 37 numbers including zero altogether. If you use probability. The dealer should pays you 37 to 1 if you bet the winning number. However, they would only
pays u 35 to 1.
Similarily to 4D, there are 10,000 chances. But when u strike the 1st prize, they will only pay you 2,500 instead of 10,000 provided Toto only draws a single number just like roulette.
However, the calculation will become too complicated when it involved 2nd, 3rd, Special and Conso prize.
This is why I believe TS wanna get the answer to study the probability of 4D as this kind of university level of probability calculation is not being taught in Secondary School.
roulette pays 36 to 1. the banker's advantage is a mere 1/37. compared to payment of magnum of 6600 for 10000. banker's advantage is 34/100.
and all the concept for this type of calculations is taught in secondary schools. introduction to binomial distribution and normal approximation is f5 add maths syllabus. more complex binomial
distribution and approximation to normal and poisson distribution is f6 science stream maths syllabus.
you think uni would have course to calculate gambling odds?
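The two house edges mentioned can be computed directly (a sketch; the roulette figure assumes a single-zero wheel paying 35:1 plus the returned stake, i.e. 36x back on a 1-in-37 chance):

```python
# Expected return per RM1 staked, and the banker's edge, for both games.
roulette_return = 36 / 37        # 36x total back, 1/37 of the time
fourd_return = 6600 / 10_000     # total payout quoted for covering all numbers

roulette_edge = 1 - roulette_return
fourd_edge = 1 - fourd_return

print(roulette_edge)  # ~0.027 -> about a 2.7% edge
print(fourd_edge)     # 0.34   -> a 34% edge
```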
if you paid more attention in school during the probability topic, you would know that the 23 numbers are not related to one another. There is no proven pattern linking them.
So with each number, you have 23 chances of winning against odds of 1/10000.
But the thing that I believe is that for magnum, they do not publish their drawing of numbers. So they could have easily manipulate their system to select winning numbers based on buyers. That means
the less buyers of a number will have higher chance or first prize. This way magnum will maximize their profit. But that does not take into consideration of those buying from third party 4d sellers.
So in theory, if you want to stand higher odds of winning, buy from third party 4d sellers. But these are illegals. Buy at your own risk.
Added on March 3, 2010, 12:03 pm
QUOTE(lin00b @ Mar 3 2010, 02:32 AM)
roulette pays 36 to 1. the banker's advantage is a mere 1/37. compared to payment of magnum of 6600 for 10000. banker's advantage is 34/100.
and all the concept for this type of calculations is taught in secondary schools. introduction to binomial distribution and normal approximation is f5 add maths syllabus. more complex binomial
distribution and approximation to normal and poisson distribution is f6 science stream maths syllabus.
you think uni would have course to calculate gambling odds?
roulettes now have number 0 and 00. In europe they even have 000. So the odds for let's say genting is 1/38. Eventhough for 1 person odds are pretty low, but when it comes to involving a big group of
people, it will always become close to the odds. That means banker is always the winner.
that means if the total betting for the day is 1,000,000 then banker would have won 1000000 x 1/38 = 26,315. The scale will tip a little to + or - but in the end banker is always the big winner.
Therefore, gambling is not the way to make money.
This post has been edited by abubin: Mar 3 2010, 12:03 PM
QUOTE(lin00b @ Mar 3 2010, 02:32 AM)
roulette pays 36 to 1. the banker's advantage is a mere 1/37. compared to payment of magnum of 6600 for 10000. banker's advantage is 34/100.
and all the concept for this type of calculations is taught in secondary schools. introduction to binomial distribution and normal approximation is f5 add maths syllabus. more complex binomial
distribution and approximation to normal and poisson distribution is f6 science stream maths syllabus.
you think uni would have course to calculate gambling odds?
Have you been to the casino? Roulette pays only 35 to 1. For double bet (2 numbers), it pays only 17 to 1, corner bet (4 numbers), it pays only 8 to 1.
Unless you have been gambling online, certain online gambling may pays you 36 to 1.
QUOTE(MPIK @ Mar 3 2010, 01:49 PM)
Have you been to the casino? Roulette pays only 35 to 1. For double bet (2 numbers), it pays only 17 to 1, corner bet (4 numbers), it pays only 8 to 1.
Unless you have been gambling online, certain online gambling may pays you 36 to 1.
house rules may vary from place to place, but the comparison between simple casino games such as roulette and big/small is much less biased towards the banker compared to 4d.
at the end of the day, banker will always get a profit. but you as a player may get profit some of the time if you beat the odds. the key to being a successful gambler is to know when to quit when
you are ahead. for the longer you play, the higher chance for you to lose.
in your case, its best to bet corner then, cause you get 8*4/37(or 38/39 depending on wheel)=36/37; compared to 17*2/37 or 35*1/37
This post has been edited by lin00b: Mar 3 2010, 04:05 PM
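One caveat on the corner-vs-straight comparison: once the returned stake is counted (a 35:1 win pays back 36 units in total), the standard single-zero inside bets all have the same expected return, so no one of them is better than another. A quick check under that assumption:

```python
# Expected return per unit staked on a single-zero (37-pocket) wheel,
# counting the stake that is returned along with the winnings.
def expected_return(payout_to_1, numbers_covered, pockets=37):
    return (payout_to_1 + 1) * numbers_covered / pockets

straight = expected_return(35, 1)  # one number, pays 35:1
split = expected_return(17, 2)     # two numbers, pays 17:1
corner = expected_return(8, 4)     # four numbers, pays 8:1

print(straight, split, corner)     # all 36/37, about 0.973
```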
QUOTE(cloudwan0 @ Mar 2 2010, 12:25 PM)
dont think too much, the winning rate is 50:50
win or lose niah
4d toto kuda
i just blieve in luck
if u have the luck wateva number u buy it will strike
scientific discussion kopitiam'd liao
The probability of winning Jackpot 1 is a gigantic factorial of 10000!
For those who don't understand the concept of a factorial, here is how it works.
For example, a factorial of five (5!) is expanded like this: 5*4*3*2*1=120. So there are 120 possible arrangements should the grand prize require an ordering of five items.
Now, let's try to figure out 10000!
The multiplication of it gives you a total sum that is nearly as good as infinity which is why it is nearly impossible for one to strike Jackpot 1. Don't believe? Go google up factorial calculator of
permutation and combination to compute it.
This post has been edited by samteng: Mar 5 2010, 12:44 AM
Try this on Mbox
mostly each set of numbers will come twice a week...
just due to my research...
last 2 weeks came out 5687 n 6798..
so it's about probability... learn more about it... lolx
just my 2 cents
Added on March 5, 2010, 4:53 am
QUOTE(samteng @ Mar 5 2010, 12:41 AM)
The probability of winning Jackpot 1 is a gigantic factorial of 10000!
For those who don't understand the concept of a factorial, here is how it works.
For example, a factorial of five (5!) is expanded like this: 5*4*3*2*1=120. So there are 120 possible arrangements should the grand prize require an ordering of five items.
Now, let's try to figure out 10000!
The multiplication of it gives you a total sum that is nearly as good as infinity which is why it is nearly impossible for one to strike Jackpot 1. Don't believe? Go google up factorial calculator of
permutation and combination to compute it.
Agree dude...
there is a very slim chance of it...
Wat to do,If easy give ppl win business,who 1 to do...
This post has been edited by anson lee: Mar 5 2010, 04:53 AM
QUOTE(samteng @ Mar 5 2010, 12:41 AM)
The probability of winning Jackpot 1 is a gigantic factorial of 10000!
For those who don't understand the concept of a factorial, here is how it works.
For example, a factorial of five (5!) is expanded like this: 5*4*3*2*1=120. So there are 120 possible arrangements should the grand prize require an ordering of five items.
Now, let's try to figure out 10000!
The multiplication of it gives you a total sum that is nearly as good as infinity which is why it is nearly impossible for one to strike Jackpot 1. Don't believe? Go google up factorial calculator of
permutation and combination to compute it.
odds of jackpot 1 is more likely to be 1/10000 x 1/10000 rather than what you say. you need to get the first number right (1/10000) and then independently the second number right (1/10000)
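Put as a number, that independence argument works out to:

```python
# Two independent 4-digit numbers must both be hit for Jackpot 1.
p_jackpot = (1 / 10_000) ** 2
print(round(1 / p_jackpot))   # 1 in 100 million
```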
..and that's how they work. You cannot win by brute-forcing so you either have to be really really lucky or to consistently purchase numbers which ironically may not net you any winnings in the end.
All the same its a loser's game and I think only fools play it. Throwing money away man.
statistics in this case is rubbish. say its 3/1000. what if all the 3 winning chances are accumulated at the 1st three times you buy toto? its 100% win aint it.
Mr. Sam went to a stadium to watch live football. Inside the stadium there were 10000 audiences (excludes Sam).
Halfway though the game, Sam realized that he has lost his mobile phone in the stadium. He sat at the far end corner and he remembered that his trips to purchase pop corn and also went to toilet made
him going though one round of the entire stadium.
In the quest of getting his mobile phone back, he informed the organizer.
The following conditions apply by the organizer:
1) 1 out of the 10000 audiences found the phone
2) The organizer allows him to point/choose 23 audiences out of the 10000 audiences
3) If he manages to point out the audience that found his phone, he will get his phone back
4) Else, says bye-bye to the phone forever
Let's us wish him best of luck on getting his mobile phone back!
QUOTE(C-Note @ Mar 11 2010, 12:27 AM)
statistics in this case is rubbish. say its 3/1000. what if all the 3 winning chances are accumulated at the 1st three times you buy toto? its 100% win aint it.
the chance of that happening is approximately 27/1000000000, i.e. (3/1000)^3 for three independent draws. while it can happen to you, i wouldnt bet on it
This post has been edited by lin00b: Mar 11 2010, 01:54 AM
QUOTE(lin00b @ Mar 1 2010, 09:14 PM)
provided the above calculation is true (which i doubt, have to ponder a lil more, a little rusty in probability)
given that for rm1 bet, the total payout for 23 numbers is rm6600. that's an average of 286/winning number. if buying 500 numbers guarantee you a winning number(which i dont think it will), on
average you would have "won" 286 after having to pay 500
i think that if you buy x number your chance of winning would be a binomial distribution with n=x, p=0.0023, r=1-0.0023
explanation: for each number you have an independent chance of hitting the winning number with a probability of 0.0023 and a chance of 1-0.0023 of not hitting it. so you have to account for hitting
with 1 number, 2 numbers, 3 numbers and so forth
i think this is by far the most logic explanation...
dont buy lar ...how also u lose
unless u are lucky
Winning a lottery or contest is like one in a million.. Unless you're very lucky.
For example a contest. Lets say I submit 3k of form, it might be a lot for us, but its actually only a mere few percent of winning chance maybe less than 0.3%.. Coz we are not the only one who
participate, there are thousand other of people outside who join the same contest..
same goes to lottery, there are so many sets of number, from 0001 to 9999...
the Rules of Probability of winning in betting/gambling is always 50% by the punter or the person betting. It doesn't matter how the bet is being made or what is the condition of gambling rules
adhere to (4d, 6d, i-box, m-box, etc.), it is always either you win or lose (50% chance). The simplest logical explanation is - If you bet 3000 4d nos with Magnum 4d, are you guaranteed a winning of
30% on that draw?? The answer is a very simple -NO!! Not even if you bet 5000 nos. or 7000 nos. that DOES NOT GUARANTEED A WINNING IN ANY PRIZE AT ALL!! Therefore the Rules of Probability of gambling
is always 50% - 50% chance of winning (meaning, it is either you win or lose) simple.
The probability in raking in gambling bets is therefore the totally opposite of the above rules, as it is strictly for the betting operator/casino owner/bookies as they will determine on the
statistic of punters or no. of people betting on their games. The more punters betting, the more income they will generate based on the odds calculation of the games they are providing. This rules is
not applicable to the PUNTERS!! as you are not betting with everyone that placed a bets, but you're betting with the operator/bookies.
Added on March 11, 2010, 1:55 pmIf you bet, you get 50% chance of winning. But if you didn't bet you are 100% did not lost (therefore - didn't lose = wins!!).
This post has been edited by escay.h: Mar 11 2010, 01:55 PM
QUOTE(skon9 @ Mar 11 2010, 10:34 AM)
Winning a lottery or contest is like one in a million.. Unless you're very lucky.
For example a contest. Lets say I submit 3k of form, it might be a lot for us, but its actually only a mere few percent of winning chance maybe less than 0.3%.. Coz we are not the only one who
participate, there are thousand other of people outside who join the same contest..
same goes to lottery, there are so many sets of number, from 0001 to 9999...
actually its from 0000 to 9999
Oh Btw, if you are thinking of winning Magnum4d Jackpot - then u should ask them if they allow you to buy Jackpot Permutation of 0000-9999 and pay RM10,000 for it for a 100% SURE WIN GUARANTEED!!
HAHAHAHA.... If you do, pls sedekah me poor man 1% of your winning ok ah?
QUOTE(escay.h @ Mar 11 2010, 02:08 PM)
Oh Btw, if you are thinking of winning Magnum4d Jackpot - then u should ask them if they allow you to buy Jackpot Permutation of 0000-9999 and pay RM10,000 for it for a 100% SURE WIN GUARANTEED!!
HAHAHAHA.... If you do, pls sedekah me poor man 1% of your winning ok ah?
If u buy 0000 - 9999 for jackpot, i dun think count like tat one..
Coz system bet counts like tat:
If you buy 0000 - 9999, means 10,000 numbers..
Den you'll have to pay:
10,000 * (10,000 - 1)
= 10,000 * 9999
But max system bet number is 20?
Wow RM99,990,000 to buy a guaranteed win, must make sure Jackpot Prize is more than RM99,990,001 to win RM1 GUARANTEED from Magnum4d then!!
But are u sure there is a limit on the system bet in the system?
Yes.. System bet max is 20 bet. But i haven't try 20 number with 24 permutations which makes 480 numbers.. But still far away form 10,000..
Magnum was my dad's business btw~ Haha~
I go to shop and check 1st~~
I try to screenshot of the jackpot betting screen..
Jackpot now only 30m..
This post has been edited by amyhs99: Mar 13 2010, 03:12 AM
QUOTE(amyhs99 @ Mar 11 2010, 02:30 PM)
If u buy 0000 - 9999 for jackpot, i dun think count like tat one..
Coz system bet counts like tat:
If you buy 0000 - 9999, means 10,000 numbers..
Den you'll have to pay:
10,000 * (10,000 - 1)
= 10,000 * 9999
But max system bet number is 20?
cant you have the same number for jackpot?
gosh, instead of spending time on these stuff
why not work hard ? -.-
QUOTE(Winniekhoo89 @ Mar 15 2010, 04:42 PM)
gosh, instead of spending time on these stuff
why not work hard ? -.-
work hard cant get 30mil oso...
mind you thins thread is not about encouraging people to gamble, but trying to work out the maths behind various gambling activities.
and showing proof that most of the risk is on the player and not the banker (not that i doubt any of you dont know that) and the only way to win, is not to play
use permutations and combinations of whatever number you want, and divide by the number of results that they give daily. then you should get the answer.
like :10000/23 - eg. 4 digit toto. since 23 results daily.
=1 in 435 will get something.
therefore each day, they get:
to get a jackpot, its : (3/10000)^2
= about 1 in 11 million to get any jackpot.
to get special prize its: (10/10000)x(3/10000)
= about 1 in 3.3 million.
Spend Rm15 every week for probability to win thousand ringgit consider ok even the probability is too high
That is investment also. Smoker spent more than Rm15 every week and they gain nothing
QUOTE(Sifha238 @ Mar 24 2010, 01:21 PM)
Spend Rm15 every week for probability to win thousand ringgit consider ok even the probability is too high
That is investment also. Smoker spent more than Rm15 every week and they gain nothing
They get lung cancer
QUOTE(Sifha238 @ Mar 24 2010, 01:21 PM)
Spend Rm15 every week for probability to win thousand ringgit consider ok even the probability is too high
That is investment also. Smoker spent more than Rm15 every week and they gain nothing
you need to know the difference between investment and gambling. many investor dont, according to kiyosaki (among others)
QUOTE(lin00b @ Mar 24 2010, 06:05 PM)
you need to know the difference between investment and gambling. many investor dont, according to kiyosaki (among others)
Gambling is haram
so we can discuss about gambling now? if that is so i would like to talk about card counting, implied odds, pot odds and so much more. may i go on?
please continue
Ahhh finally a more acceptable math answer. Can you show the calculation, Jurlique?
i dont think all these number gambling is truly help people....let say you own this toto..
would you like to give people RM15k for spending buying RM5 ticket of number?? of course not....
if me the owner.... i use statistical math...to calculate or to find the number that nobody buy using computer program.... its sooo easyyy.....
and give the small2 prize to some people for getting trusted the buyers....
i will get richer n richer....
there is no fast money ...fast money, fast spending oso....
QUOTE(escay.h @ Mar 11 2010, 01:49 PM)
the Rules of Probability of winning in betting/gambling is always 50% by the punter or the person betting. It doesn't matter how the bet is being made or what is the condition of gambling rules
adhere to (4d, 6d, i-box, m-box, etc.), it is always either you win or lose (50% chance). The simplest logical explanation is - If you bet 3000 4d nos with Magnum 4d, are you guaranteed a winning of
30% on that draw?? The answer is a very simple -NO!! Not even if you bet 5000 nos. or 7000 nos. that DOES NOT GUARANTEED A WINNING IN ANY PRIZE AT ALL!! Therefore the Rules of Probability of gambling
is always 50% - 50% chance of winning (meaning, it is either you win or lose) simple.
The probability in raking in gambling bets is therefore the totally opposite of the above rules, as it is strictly for the betting operator/casino owner/bookies as they will determine on the
statistic of punters or no. of people betting on their games. The more punters betting, the more income they will generate based on the odds calculation of the games they are providing. This rules is
not applicable to the PUNTERS!! as you are not betting with everyone that placed a bets, but you're betting with the operator/bookies.
Added on March 11, 2010, 1:55 pmIf you bet, you get 50% chance of winning. But if you didn't bet you are 100% did not lost (therefore - didn't lose = wins!!).
this made more sense .. lol ...
QUOTE(thus @ Apr 5 2010, 10:02 PM)
i dont think all these number gambling is truly help people....let say you own this toto..
would you like to give people RM15k for spending buying RM5 ticket of number?? of course not....
if me the owner.... i use statistical math...to calculate or to find the number that nobody buy using computer program.... its sooo easyyy.....
and give the small2 prize to some people for getting trusted the buyers....
i will get richer n richer....
there is no fast money ...fast money, fast spending oso....
on one hand you talked about statistical maths, and ramble onto computer rigging numbers.
didnt toto have some representative rolling a barrel of physical number balls and see what drops out? or is that another game?
QUOTE(99chan @ Mar 26 2010, 07:42 PM)
so we can discuss about gambling now? if that is so i would like to talk about card counting, implied odds, pot odds and so much more. may i go on?
i can help with a bit of basic poker maths if u want.
mahjong maths is more interesting ^^
If you have all the opened numbers & winning positions for the past 10 years, maybe we can come up with a tabulation of a "ranking system" that shows, for 0000-9999, the "probabilities" of each
number opening. For example, numbers that came out as a higher prize will have their probability diminished greatly, while those like "consolation prize" may have less probability deducted.
I don't see how difficult it is from calculation & processing point of view as theres only 10000 numbers.
Difficult task is getting all the historical data, as you know sites like Magnum require manual writing down the numbers as they use jpeg instead of text.
why would historical trends have anything to do with luck? that is the gambler's fallacy.
if i have a coin, and for the past 10 flips, i get 4 heads, 6 tails; what do you think my next flip would most likely be?
You never studied probability do you?
read up on dependent and independent events.
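The independence point is easy to demonstrate by simulation; a minimal sketch using a fair coin (hypothetical 50/50 flips, not actual draw data):

```python
import random

# Condition on a history of exactly 4 heads / 6 tails in 10 flips, then
# look at flip 11. Independence says the history changes nothing.
random.seed(0)

heads_after = matches = 0
for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(11)]
    if sum(flips[:10]) == 4:        # the "4 heads, 6 tails" history
        matches += 1
        heads_after += flips[10]

print(heads_after / matches)        # ~0.5, history notwithstanding
```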
QUOTE(abubin @ Mar 3 2010, 11:57 AM)
But the thing that I believe is that for magnum, they do not publish their drawing of numbers. So they could have easily manipulate their system to select winning numbers based on buyers. That means
the less buyers of a number will have higher chance or first prize. This way magnum will maximize their profit. But that does not take into consideration of those buying from third party 4d sellers.
Magnum's numbers are not drawn by computers but by physical drums and balls. I suggest you go watch a live draw before making comments like that. The most they can do to minimize risk is to place a
lower sales limit on some popular numbers.
On the most pairs of numbers which you can possibly buy on a single Jackpot ticket, it is:
PM24 x PM24 = 576 pairs, where each pair is RM2 = RM1,152
System bet 20 only allows 190 pairs = RM380. No permutation possible for system bet yet.
This post has been edited by Marcian: Apr 8 2010, 03:05 PM
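Both counts check out with basic combinatorics (a quick sketch; reading "PM24" as the 24 orderings of a 4-digit number with distinct digits):

```python
from math import comb

pairs_full_perm = 24 * 24       # each of two numbers in any of its 24 orderings
system_20_pairs = comb(20, 2)   # unordered pairs from 20 chosen numbers

print(pairs_full_perm, pairs_full_perm * 2)   # 576 pairs -> RM1,152 at RM2 each
print(system_20_pairs, system_20_pairs * 2)   # 190 pairs -> RM380
```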
Here's an interesting probability problem i read somewhere:
A contestant on a game show is presented 3 doors, one of which has a car behind it and two do not.
The contestant is asked to pick one.
After choosing a door, the host then MUST open one of the two doors not picked by the contestant. The door opened does NOT have a car behind it.
Now if you're the contestant, and you get to change your choice to the other unopened door, would u do it? does it make a difference? Why?
QUOTE(lin00b @ Apr 8 2010, 12:54 AM)
why would historical trends have anything to do with luck? that is the gambler's fallacy.
if i have a coin, and for the past 10 flips, i get 4 heads, 6 tails; what do you think my next flip would most likely be?
ive read a long argument on that in another forum. and im not so sure its that clearcut.
it all boils down to simple math and probability. You cannot determine the order nor the next number drawn as the actual draw by single numbers is determined by having the 1st number drawn which is 1
/10 which is a 10% chance that the 1st number you choose will be drawn. same goes for the next till the 4th number. this is of course if you do an even distribution. hence to calculate the
probability of your numbers getting selected would be 1/10 * 1/10 * 1/10 * 1/10 = 0.0001, a 0.01% chance of your 4 digits getting chosen, which still equals 1/10k. no need to think so much.
it's all a gamble nia.
QUOTE(teongpeng @ Apr 8 2010, 10:44 PM)
Here's an interesting probability problem i read somewhere:
A contestant on a game show is presented 3 doors, one of which has a car behind it and two do not.
The contestant is asked to pick one.
After choosing a door, the host then MUST open one of the two doors not picked by the contestant. The door opened does NOT have a car behind it.
Now if you're the contestant, and you get to change your choice to the other unopened door, would u do it? does it make a difference? Why?
yes, you should change.
W=win L=lose
1st pick : result
W -> change -> L
L -> change -> W
L -> change -> W
there is a 2/3 chance of winning if you change
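If you don't trust the enumeration, a quick Monte Carlo check gives the same 2/3 vs 1/3 split (a rough sketch, not anyone's posted code; the function name is made up):

```python
import random

def monty_hall(switch, trials=100_000, seed=0):
    """Simulate the 3-door game; return the empirical win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)          # door hiding the car
        pick = rng.randrange(3)         # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # close to 2/3
print(monty_hall(switch=False))  # close to 1/3
```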
and the coin answer is simple, you will have 50:50 of getting head:tail regardless of past results.
Added on April 15, 2010, 8:12 pm
QUOTE(Noyze @ Apr 15 2010, 09:26 AM)
it all boils down to simple math and probability. You cannot determine the order nor the next number drawn. Each digit is drawn on its own: the chance that the 1st digit drawn matches your 1st digit is 1/10, a 10% chance, and the same goes for the 2nd through 4th digits, assuming an even distribution. Hence the probability of your exact 4-digit number being drawn is 1/10 × 1/10 × 1/10 × 1/10 = 0.0001, i.e. a 0.01% chance, which still equals 1 in 10,000. no need to think so much.
it's all just a gamble.
you also have to realize the psychological factors involved. people often treat 4d as aiming for the right number, which is why you get a lot of "aiyah, nearly kena!" ("almost won!") when results are announced.
these "nearly kena" include: missed by 1 digit, or digits jumbled up.
which would mean for a single bet of unique number (no repeat), you will have around 40+24=64 "nearly kena"
in case of (x)box when they automatically jumble up the number for you, for a single unique number, you will have around 24*40=960 "nearly kena"
so people think that they are nearly there and continue to bet.
This post has been edited by lin00b: Apr 15 2010, 08:12 PM
QUOTE(lin00b @ Apr 15 2010, 08:06 PM)
and the coin answer is simple, you will have 50:50 of getting head:tail regardless of past results.
To simpletons that might be the obvious thing to say. But in a sequence, when a coin comes up heads for the last 10 times (or whatever times)...it is getting more and more likely that the next flip
will be tail.
50/50 means wutever goes up must come down. When u draw a graph that shows heads N number of times....you are getting closer and closer to the peak (unknown) when the switch would occur.
Perhaps someone with good in-depth mathematical knowledge can explain this further for you (with formulas even).
QUOTE(teongpeng @ Apr 16 2010, 12:42 AM)
To simpletons that might be the obvious thing to say. But in a sequence, when a coin comes up heads for the last 10 times (or whatever times)...it is getting more and more likely that the next flip
will be tail.
50/50 means wutever goes up must come down. When u draw a graph that shows heads N number of times....you are getting closer and closer to the peak (unknown) when the switch would occur.
Perhaps someone with good in-depth mathematical knowledge can explain this further for you (with formulas even).
no, thats gambler's fallacy. the odds of getting 10 heads in a row is 0.5^10; however, given that you have already achieved the improbable result of 9 heads in a row, the odds of getting the 10th head is still 0.5. results that have already occurred have no effect on future results IF the events are independent (and a simple coin toss is independent)
QUOTE(lin00b @ Apr 16 2010, 01:36 AM)
no, thats gambler's fallacy. the odds of getting 10 heads in a row is 0.5^10; however, given that you have already achieved the improbable result of 9 heads in a row, the odds of getting the 10th head is still 0.5. results that have already occurred have no effect on future results IF the events are independent (and a simple coin toss is independent)
im not going to argue with you further on this because i cant elaborate. but i know i'm right (as always).
to put money where my mouth is....if u can flip heads 10 times on a normal coin...i'll give u 10:9 odds that the 11th throw will be tails. up for it? the 10 heads throw must be in one sequence.
if you are adamant that its still 50/50...then 10:9 odds would be a steal for you am i right?
This post has been edited by teongpeng: Apr 16 2010, 01:50 AM
from wikipedia:
The gambler's fallacy, also known as the Monte Carlo fallacy (after a famous run at a Monte Carlo casino in 1913)[1] or the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, then these deviations are likely to be evened out by opposite deviations in the future. For example, if a fair coin is tossed repeatedly and tails comes up a larger number of times than is expected, a gambler may incorrectly believe that this means that heads is more likely in future tosses.[2] Such an expectation could be mistakenly referred to as being "due". This is an informal fallacy. It is also known colloquially as the law of averages.
or more specific to this discussion:
We can see from the above that, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152. However, the probability of flipping a head after having already flipped 20
heads in a row is simply 1/2 . This is an application of Bayes' theorem.
This can also be seen without knowing that 20 heads have occurred for certain (without applying of Bayes' theorem). Consider the following two probabilities, assuming a fair coin:
probability of 20 heads, then 1 tail = 0.5^20 × 0.5 = 0.5^21
probability of 20 heads, then 1 head = 0.5^20 × 0.5 = 0.5^21
The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head are both 1 in 2,097,152. Therefore, it is equally likely to flip 21 heads as it is to flip
20 heads and then 1 tail when flipping a fair coin 21 times. Furthermore, these two probabilities are as equally likely as any other 21-flip combinations that can be obtained (there are 2,097,152
total); all 21-flip combinations will have probabilities equal to 0.5^21, or 1 in 2,097,152. From these observations, there is no reason to assume at any point that a change of luck is warranted based
on prior trials (flips), because every outcome observed will always have been equally as likely as the other outcomes that were not observed for that particular trial, given a fair coin. Therefore,
just as Bayes' theorem shows, the result of each trial comes down to the base probability of the fair coin: 1/2.
This post has been edited by lin00b: Apr 16 2010, 02:02 AM
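The "given 20 heads, the next flip is still 1/2" point can also be checked empirically. A rough sketch (not from the article; the function name is invented) that scans a long random flip sequence and looks only at the flips that immediately follow a 10-head run:

```python
import random

def next_flip_after_streak(streak_len=10, samples=1_000_000, seed=1):
    """Scan a long random flip sequence; whenever the last `streak_len`
    flips were all heads, record what the NEXT flip turned out to be."""
    rng = random.Random(seed)
    run = heads_after = total = 0
    for _ in range(samples):
        flip = rng.random() < 0.5      # True = heads
        if run >= streak_len:          # we just came off a 10-head run
            total += 1
            heads_after += flip
        run = run + 1 if flip else 0   # extend or reset the current run
    return heads_after, total

h, n = next_flip_after_streak()
print(f"{n} ten-head runs; next flip was heads {h / n:.3f} of the time")
```

The fraction hovers around 0.5, not "more likely tails", which is exactly the independence claim above.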
Yes i know that. its obvious. and i've been holding on to that same idea since i was small kid. But in the last coupla years or so i came across an argument on another forum and made me rethink the
whole concept.
Consider this:
is it more likely to get
a) 21 heads, or
b) 20 heads and a tail (non-consecutively)?
Is it more likely to get 21 heads consecutively, or is it more likely to get (lets say) 17 heads consecutively?
Do you see where im going?
This post has been edited by teongpeng: Apr 16 2010, 02:35 AM
QUOTE(teongpeng @ Apr 16 2010, 02:34 AM)
Yes i know that. its obvious. and i've been holding on to that same idea since i was small kid. But in the last coupla years or so i came across an argument on another forum and made me rethink the
whole concept.
Consider this:
is it more likely to get
a) 21 heads, or
b) 20 heads and a tail (non-consecutively)?
Is it more likely to get 21 heads consecutively, or is it more likely to get (lets say) 17 heads consecutively?
Do you see where im going?
ai ai ai......
but in the end each and every toss the chance is 50:50......
mathematically of course the chance of getting a consecutive run diminishes as more tosses are made. but on the last toss the chance to get heads or tails is still 50:50
QUOTE(teongpeng @ Apr 16 2010, 01:47 AM)
im not going to argue with you further on this because i cant elaborate. but i know i'm right (as always).
to put money where my mouth is....if u can flip heads 10 times on a normal coin...i'll give u 10:9 odds that the 11th throw will be tails. up for it? the 10 heads throw must be in one sequence.
if you are adamant that its still 50/50...then 10:9 odds would be a steal for you am i right?
"to put money where my mouth is"
why don't u go to Genting and bet ur life savings on BIG/SMALL after u see 20 times SMALL/BIG in one sequence. ur chance is still less than 50% bro, on the 21st game
QUOTE(slimey @ Apr 16 2010, 03:54 AM)
ai ai ai......
but in the end each and every toss the chance is 50:50......
mathematically of course the chance of getting a consecutive run diminishes as more tosses are made. but on the last toss the chance to get heads or tails is still 50:50
its 50/50 if the the last toss were to be made singularly.
but if u had known the past sequence,
bet on the more likely sequential percentage and gain the upper hand?
Added on April 16, 2010, 8:05 am
QUOTE(acougan @ Apr 16 2010, 07:43 AM)
"to put money where my mouth is"
why don't u go to Genting and bet ur life savings on BIG/SMALL after u see 20 times SMALL/BIG in one sequence. ur chance is still less than 50% bro, on the 21st game
ok, but why not i bet against you since to be fair, we both need to put money where our mouth is
This post has been edited by teongpeng: Apr 16 2010, 08:06 AM
QUOTE(teongpeng @ Apr 16 2010, 02:34 AM)
Yes i know that. its obvious. and i've been holding on to that same idea since i was small kid. But in the last coupla years or so i came across an argument on another forum and made me rethink the
whole concept.
Consider this:
is it more likely to get
a) 21 heads, or
b) 20 heads and a tail (non-consecutively)?
Is it more likely to get 21 heads consecutively, or is it more likely to get (lets say) 17 heads consecutively?
Do you see where im going?
you just failed to understand the whole "past result does not affect future result". before you make a series of tosses, the probability of getting a predetermined series of result is 0.5^n, where n
is the number of tosses. however, regardless of series length and where you are in that series, the probability of getting head in the next toss is always 0.5
QUOTE(lin00b @ Apr 16 2010, 08:39 AM)
you just failed to understand the whole "past result does not affect future result". before you make a series of tosses, the probability of getting a predetermined series of result is 0.5^n, where n
is the number of tosses. however, regardless of series length and where you are in that series, the probability of getting head in the next toss is always 0.5
If you are talking about tossing a coin DISREGARDING the previous results then yes, the probability is 50:50
but if you are talking about tossing a coin many times and determine the probability of getting a predefined result such as 3 heads and 3 tails then it would not be 50:50 (to be exact, it is 5/16)
1) what are the odds of getting a coin to show heads in 1 flip? 50:50
2) what are the odds of getting a coin to flip heads after knowing that it has given head, tails, tails, on its previous 3 throws? it is STILL 50:50 (note that the events are independent)
3) what are the odds of getting a coin to flip Head, Tail, Tail, Head in correct sequence? 1/16 : 15/16
See where I am going?
Edit: wrong probability
This post has been edited by cheecken0: Apr 16 2010, 09:27 AM
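For anyone who wants to check cheecken0's 5/16 and 1/16 figures, a short sketch using the binomial count (variable names are made up):

```python
from fractions import Fraction
from math import comb

# "3 heads and 3 tails" in 6 fair flips, in ANY order:
# count the orderings with comb(6, 3), divide by the 2^6 equally likely sequences.
p_any_order = Fraction(comb(6, 3), 2**6)
print(p_any_order)        # 5/16

# One EXACT 4-flip sequence, e.g. Head, Tail, Tail, Head:
p_exact = Fraction(1, 2**4)
print(p_exact)            # 1/16, i.e. odds of 1:15 against
```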
QUOTE(teongpeng @ Apr 16 2010, 07:57 AM)
ok, but why not i bet against you since to be fair, we both need to put money where our mouth is
act i alrdy lost my shirt once on the 19th game couple yrs back(19 games consecutively against me till i ran out)
i was naive like u once, js becos u think ur theory is "right", doesn't mean real-life odds will support it
QUOTE(acougan @ Apr 16 2010, 09:20 AM)
act i alrdy lost my shirt once on the 19th game couple yrs back(19 games consecutively against me till i ran out)
i was naive like u once, js becos u think ur theory is "right", doesn't mean real-life odds will support it
lol. poor you. but saying something is more likely to happen does not mean it definitely will happen. to get 20 heads in a row is just bad luck on your part.
Added on April 16, 2010, 6:00 pm
QUOTE(cheecken0 @ Apr 16 2010, 09:08 AM)
If you are talking about tossing a coin DISREGARDING the previous results then yes, the probability is 50:50
but if you are talking about tossing a coin many times and determine the probability of getting a predefined result such as 3 heads and 3 tails then it would not be 50:50 (to be exact, it is 5/16)
1) what are the odds of getting a coin to show heads in 1 flip? 50:50
2) what are the odds of getting a coin to flip heads after knowing that it has given head, tails, tails, on its previous 3 throws? it is STILL 50:50 (note that the events are independent)
3) what are the odds of getting a coin to flip Head, Tail, Tail, Head in correct sequence? 1/16 : 15/16
See where I am going?
Edit: wrong probability
thank you.
Added on April 16, 2010, 6:02 pm
QUOTE(lin00b @ Apr 16 2010, 08:39 AM)
you just failed to understand the whole "past result does not affect future result". before you make a series of tosses, the probability of getting a predetermined series of result is 0.5^n, where n
is the number of tosses. however, regardless of series length and where you are in that series, the probability of getting head in the next toss is always 0.5
its not that i fail to understand. i already told u i understood. all im asking is for ppl to explore an alternate view to the same problem. haih.....
Learn to think out of the box ppl!
This post has been edited by teongpeng: Apr 16 2010, 06:02 PM
QUOTE(Marcian @ Apr 8 2010, 02:57 PM)
Magnum's numbers are not drawn by computers but by physical drums and balls. I suggest you go watch a live draw before making comments like that. The most they can do to minimize risk is to place a
lower sales limit on some popular numbers.
As for the most pairs of numbers you can possibly buy on a single Jackpot ticket:
PM24 x PM24 = 576 pairs, where each pair is RM2 = RM1,152
System bet 20 only allows 20C2 = 190 pairs = RM380. No permutation is possible for system bets yet.
hmm.. meaning if I go to Magnum, I can only buy 190 pairs of numbers max?
QUOTE(teongpeng @ Apr 16 2010, 05:58 PM)
lol. poor you. but saying something is more likely to happen does not mean it definitely will happen. to get 20 heads in a row is just bad luck on your part.
Added on April 16, 2010, 6:00 pm
thank you.
Added on April 16, 2010, 6:02 pm
its not that i fail to understand. i already told u i understood. all im asking is for ppl to explore an alternate view to the same problem. haih.....
Learn to think out of the box ppl!
there is no "out of the box" to think of; this is not a creativity question. this is about how well you understand statistics and probability (as well as how, sometimes, what's "logical" is not always true)
Added on April 16, 2010, 8:30 pm
QUOTE(cheecken0 @ Apr 16 2010, 09:08 AM)
If you are talking about tossing a coin DISREGARDING the previous results then yes, the probability is 50:50
but if you are talking about tossing a coin many times and determine the probability of getting a predefined result such as 3 heads and 3 tails then it would not be 50:50 (to be exact, it is 5/16)
1) what are the odds of getting a coin to show heads in 1 flip? 50:50
2) what are the odds of getting a coin to flip heads after knowing that it has given head, tails, tails, on its previous 3 throws? it is STILL 50:50 (note that the events are independent)
3) what are the odds of getting a coin to flip Head, Tail, Tail, Head in correct sequence? 1/16 : 15/16
See where I am going?
Edit: wrong probability
i'm not so sure you got the 3 head/3 tail (in any sequence) right, but too lazy to check at the moment.
true, now answer my original question,
if i have a coin, and for the past 10 flips, i get 4 heads, 6 tails; what do you think my next flip would most likely be?
which some genius answered,
ive read a long argument on that in another forum. and im not so sure its that clearcut.
what do you think?
This post has been edited by lin00b: Apr 16 2010, 08:30 PM
QUOTE(lin00b @ Apr 16 2010, 08:22 PM)
i'm not so sure you got the 3 head/3 tail (in any sequence) right, but too lazy to check at the moment.
I have worked it out. I used the binomial distribution to find the possible combinations of 3 heads and 3 tails. If there is any error, it should fall under incorrect substitution, not misuse of the formula.
QUOTE(lin00b @ Apr 16 2010, 08:22 PM)
what do you think?
Your next coin flip would still give equal chances of both heads and tails. because it is the same as not knowing the results of the previous flips and flipping it as normal. so yeah.
Answered from an SPM student's point of view.
tis kind of thread oso got in phd section?
QUOTE(MPIK @ Mar 1 2010, 11:10 AM)
Who here can enlighten me on the winning probability of 4D toto, magnum or 1+3D?
I understand if I buy 1 number only and Magnum only draws a single number. My probability is 1 out of 10,000.
But things get complicated when I buy 7 numbers and Magnum draws 23 numbers. So, how the calculation then? What is my probability of winning?
Any maths experts here?
go ask the pakcik-pakcik (uncles) at the kedai kopi (coffee shop).. they are all masters lo
I think it depends how you see the results & individual numbers: as related or independent events. Say the older results are:
1st issue) 1234
2nd issue) 1234
If you regard each digit as an independent event, then the probability per digit is 1/10.
From the 1st issue, the probability of result "1234" coming out would be 1/10 × 1/10 × 1/10 × 1/10 = 0.0001.
If you view the 2nd result "1234" as another independent event, the probability would still be 0.0001.
But if you take into account the probability of the 2nd result "1234" coming out after the 1st result "1234", then the probability becomes dependent on the previous result, so the probability of "1234" coming out again right after the 1st "1234" would be 0.0001 × 0.0001 = 0.00000001 (10^-8).
Obviously the chance of the 2nd result being "1234" right after the 1st result "1234" is pretty much impossible, and other numbers stand higher chances than it.
So the crucial point is to determine whether the winning numbers (or even individual digits) are independent events or dependent on the previous results, which is what we have been discussing so far.
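For reference, the gap between the two probabilities being mixed up here, in two lines (assuming fair, independent draws):

```python
from fractions import Fraction

p = Fraction(1, 10_000)   # chance a specific number, say 1234, is the result

print(p * p)   # 1/100000000: "1234 twice in a row", judged BEFORE either draw
print(p)       # 1/10000: the next draw is 1234, GIVEN the last one already was
```

The 10^-8 figure is the joint probability of two specific future results; once the first 1234 has already happened, the next draw is back to a plain 1/10,000.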
this also can become phd topic?
better a discussion in practical statistics and probability than another "do you believe in XXX"
QUOTE(skeleton202 @ Apr 18 2010, 10:32 PM)
tis kind of thread oso got in phd section?
QUOTE(f4tE @ Apr 21 2010, 03:34 PM)
this also can become phd topic?
If the topic title is named Practical applications of theoretical Statistics: gambling with 4D as an example, will you comment any differently?
Then again, 4D is more to... classical statistics anyway.
Just a question:
How do you know if the lottery is skewed to pick a number which was not bought?
Didn't read through the whole thread so I don't know if the question is answered already. But anyway, to answer this question:
But things get complicated when I buy 7 numbers and Magnum draws 23 numbers. So, how the calculation then? What is my probability of winning?
This is an elementary probability question actually; since Magnum draws 23 numbers, the chance of an individual number winning is 23/10000 or 0.0023. A simple way to compute the probability is as a run of Bernoulli trials, where you find the probability of winning at least once, i.e. 1 - (probability of losing all 7 times):
1 - (0.9977^7) = 0.0159893349, or approximately 1.6%
However, to calculate the expected value, you can take the average return; this is mathematically sound, because all of these probability calculations describe the long run anyway, so there is no point considering each prize individually (if we did, the calculations would be 10 times longer, but combining them gives the same final answer). Therefore, you'd still end up losing in the long run (as you probably know), if the cost of playing a single number is RM1 and the average return from a win is RM6600/23 (from post #5 by lin00b) = RM287.
Your average return on a RM7 bet (i.e. risk RM7 to win RM287 with probability 0.0159893349):
E.V. = RM(6600/23) × 0.0159893349 = RM4.58824393, and RM4.58824393/RM7 = 0.655463419, meaning you get back about 65.5 sen for every RM1 you gamble in the long run.
I should point out that the difference in probability of winning at least once in a single game compared to winning exactly once is only marginal. Consider the probability of winning exactly once:
(7C1)*(0.0023)*(0.9977^6)=0.0158790936, or approximately 1.6%(again!)
therefore 287*0.0158790936=RM4.5566093
Your E.V. is 0.650944186, meaning you get back about 65 sen for every RM1 you gamble. Just a 0.5% difference from the at-least-once case.
But anyway, to answer the question, the mathematical probability of winning a 23/10000 game with 7 trials at least once is 0.0159893349, or approximately 1.6%
I should also add that the game IS independent of previous results as long as the drawn numbers are genuinely random; the reason we seldom see any of one draw's winning numbers reappear among the next draw's winners is that the probability of such a coincidence is 1 - (0.9977^23) ≈ 5%, which does happen, actually... about once in 20 draws. As for the same number being the first prize winner twice in a row, that's a simple 0.01% chance.
And this type of game is not rigged as far as I know, since for the drawn numbers to be biased(i.e. deliberately picked based on the frequency of the numbers people bought), Magnum (or any similar
company) must have access to the numbers people have bought. But, if I'm not mistaken, that is not the case; to claim your winnings all you have to do is bring your number slip as proof of purchase,
and there is no record of exactly which numbers were bought. Either that, or there are witnesses to watch the numbers being randomly drawn. Now everything I just mentioned in this paragraph is 100%
my theory without reference, and there's a good chance that I'm downright wrong about the procedure that they take.
But again, mathematically, the numbers drawn can't be effectively rigged because statistically, every number combination was probably bought with an evenly distributed frequency! Maybe with
exceptions of 1111,2222 etc but that only marginally affects the distribution, and besides, if that was the case, those would be the published winners. Given the scenario, the increase of their edge
would be only a small amount, and it has to be weighed against the risk of getting caught, which is highly likely, and will result in bankruptcy (who would gamble with a cheating company if they were
exposed?). Long story short, it's not profitable for these companies to cheat when probability says they get to keep RM1 for every RM3 that is 'invested' onto them.
This post has been edited by kenwei: Apr 24 2010, 02:05 PM
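kenwei's figures can be reproduced in a few lines (a sketch; it assumes the RM6,600/23 blended payout quoted from post #5, and treats each RM1 bet as an independent 23-in-10,000 trial):

```python
from math import comb

p = 23 / 10_000                       # a single number's chance per draw
q = 1 - p                             # 0.9977, chance one bet loses

p_at_least_once = 1 - q**7            # 7 different numbers bought
p_exactly_once = comb(7, 1) * p * q**6

avg_prize = 6600 / 23                 # blended payout assumed from post #5
ev_per_rm1 = avg_prize * p_at_least_once / 7

print(round(p_at_least_once, 10))     # ~0.0159893349
print(round(p_exactly_once, 10))      # ~0.0158790936
print(round(ev_per_rm1, 4))           # ~0.6555 -> ~65.5 sen back per RM1
print(round(1 - q**23, 3))            # ~0.052 -> the "once in 20 draws" repeat
```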
QUOTE(advocado @ Apr 21 2010, 02:50 PM)
I think it depends how you see the results & individual numbers: as related or independent events. Say the older results are:
1st issue) 1234
2nd issue) 1234
If you regard each digit as an independent event, then the probability per digit is 1/10.
From the 1st issue, the probability of result "1234" coming out would be 1/10 × 1/10 × 1/10 × 1/10 = 0.0001.
If you view the 2nd result "1234" as another independent event, the probability would still be 0.0001.
But if you take into account the probability of the 2nd result "1234" coming out after the 1st result "1234", then the probability becomes dependent on the previous result, so the probability of "1234" coming out again right after the 1st "1234" would be 0.0001 × 0.0001 = 0.00000001 (10^-8).
Obviously the chance of the 2nd result being "1234" right after the 1st result "1234" is pretty much impossible, and other numbers stand higher chances than it.
So the crucial point is to determine whether the winning numbers (or even individual digits) are independent events or dependent on the previous results, which is what we have been discussing so far.
Ahhh, talking probability. I like. Have you any idea how Magnum conducts the 4D draw? I don't think so. If you did, you would not have so many questions about independent events. Let me iron it out for you. There are 6 drums; let's name them P, SC, d1, d2, d3, d4. SC contains 13 blue balls (marked A to M) and 10 white balls. The digit drums (d1-d4) each contain 10 balls marked 0-9. The SC drum and the digit drums are each drawn 23 times. If a white ball is drawn from SC, the corresponding number drawn will be a consolation prize. The balls drawn from the digit drums are put back into their drums before the next number is drawn; balls drawn from the SC drum are not put back. Once the SC drum has been emptied, we have 13 special-prize numbers and 10 consolation prizes. The consolation prizes are labelled A to J in order. The P drum holds 13 blue balls, labelled A to M, and is drawn 3 times; the 3 of the 13 special-prize numbers with the matching letters become the 1st, 2nd and 3rd prizes. The remaining 10 are the special prizes.
I am not good at math and would appreciate it if you worked out the probability.
QUOTE(faceless @ Apr 26 2010, 03:30 PM)
Ahhh, talking probability. I like. Have you any idea how Magnum conducts the 4D draw? I don't think so. If you did, you would not have so many questions about independent events. Let me iron it out for you. There are 6 drums; let's name them P, SC, d1, d2, d3, d4. SC contains 13 blue balls (marked A to M) and 10 white balls. The digit drums (d1-d4) each contain 10 balls marked 0-9. The SC drum and the digit drums are each drawn 23 times. If a white ball is drawn from SC, the corresponding number drawn will be a consolation prize. The balls drawn from the digit drums are put back into their drums before the next number is drawn; balls drawn from the SC drum are not put back. Once the SC drum has been emptied, we have 13 special-prize numbers and 10 consolation prizes. The consolation prizes are labelled A to J in order. The P drum holds 13 blue balls, labelled A to M, and is drawn 3 times; the 3 of the 13 special-prize numbers with the matching letters become the 1st, 2nd and 3rd prizes. The remaining 10 are the special prizes.
I am not good at math and would appreciate it if you worked out the probability.
seems to be a rather roundabout way of doing things; i'm not sure what the P drum adds to the results.
however, to the buyer, the odds of winning something should remain the same. i see nothing that affects the independence of the winning numbers.
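Assuming the drum procedure described above (digit balls replaced after every pull, prize-letter drums only deciding which prize a drawn number gets), a rough sketch of why each pull is a fresh uniform draw over 0000-9999:

```python
import random

def draw_one_number(rng):
    """Four digit drums, one ball each (0-9); balls go back after every pull."""
    return sum(rng.randrange(10) * 10**k for k in range(4))

def one_session(rng, prizes=23):
    """A full draw: 23 pulls; repeats ARE possible since balls are replaced."""
    return [draw_one_number(rng) for _ in range(prizes)]

rng = random.Random(42)
draws = [draw_one_number(rng) for _ in range(200_000)]
mean = sum(draws) / len(draws)
print(round(mean, 1))   # near 4999.5 -> every number 0000-9999 equally likely
```

Because the digit balls go back in before the next pull, no pull depends on the previous one, which also explains why the same number can win two prizes in one session.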
QUOTE(lin00b @ Apr 26 2010, 08:03 PM)
seems the be a rather roundabout ways of doing things, i'm not sure what the P drum adds to the results.
Agree. The method I described was used in the 1990s. They previously (1980s) used only 5 drums: four digit drums and a prize drum. The prize drum had balls marked 1st, 2nd, 3rd, ten balls marked "S" for special, and 10 unmarked balls for consolation. Their reason for the change was that it adds to the suspense: in the 80s you would have no idea when the first prize was drawn, while the 90s system leaves the top 3 prizes to the last moments. Their draw is open to the public. I am not sure if they have changed the system today, as I no longer work there. This applies only to Magnum; I have not seen how Toto or Kuda conduct their draws.
So, is avocado's working correct? I have given back all the math I learnt to the teacher.
avocado is wrong, as previous results do not affect the next result. thats why for magnum you sometimes see 1 number hitting 2 prizes; toto discards the repeated number.
kenwei is more logical. or you can refer to my solution in an earlier post
Hahahah, I can remember the first time that happened. The draw committee were looking at each other dumbstruck. They were silent for a minute. Finally the decision was that a number that had been drawn should be deemed as drawn, and therefore it should qualify to win two prizes.
instead of calculating probability.. it is better that u consider 4d is gambling and it is juz pure luck
Anything that has odds can be turned into gambling, like football. Calculating probability will keep the mind thinking.
Funny that this discussion came up. After all it was gambling that gave birth to probability.
Regarding the phenomenon where whatever keeps rising is expected to eventually fall: it's called regression towards the mean. It means that after an extreme observation, later observations tend to fall back closer to the mean, so values average out in the long run. That's why exceptionally tall parents tend to have kids closer to average height, etc.
Having said that, given that nothing in this world is perfect, there will be a slight skew in the numbers. That is a given. Even roulette wheels can have slight tilt or balance problems leading to certain numbers being favoured. There was a maths professor who tried that out and actually won some money from a casino.
But having said that... it has been worked out how to make sure that the eventual winner is the seller of the numbers. And once enough repetitions have been performed, the method or the balls will be changed, before any bias towards particular numbers caused by physical imperfections can show up.
Find a Corona, NY Science Tutor
...My concentration is Regents Earth science, and I have experience preparing students for the Regents exam. Earth Science may seem like another language to you, but I have unique tools to help
you understand difficult and new concepts and to help you prep for the Regents!I have completed: - the ...
3 Subjects: including astronomy, geology, Regents
...We all excel faster in some areas and slower in others. I'm here to help you wherever you are in that spectrum. I prefer to use concepts the student is familiar with to explain the unfamiliar.
25 Subjects: including psychology, English, writing, reading
...Right now my schedule is pretty flexible consisting mostly of reviewing my own material, and other school related tasks. I have experience with tutoring biology and chemistry. I tutor by
figuring out the weaknesses in study technique and study material at the first session then decide at the end of each session what we will do the next session.
10 Subjects: including organic chemistry, biochemistry, genetics, microbiology
...For example, I'm very familiar with power series methods, with transforming a differential equation of order n into a system of n first-order differential equations with easily accessible
solutions, with numerical and computational methods, with families of solutions like Bessel functions, and wi...
37 Subjects: including organic chemistry, microbiology, philosophy, physical science
...I have scored in the 97th percentiles on the math section of the SAT and math subject tests (SAT IIs). I can teach any math up to calculus. I have been through many stages of learning. I
wasn't a very strong student in high school, particularly in the sciences.
13 Subjects: including chemistry, geometry, precalculus, algebra 1
Patent US4225850 - Non-fingerprint region indicator
The invention herein described was made in the course of or under a contract or subcontract with the U.S. Government.
1. Application Ser. No. 722,308 filed Sept. 10, 1976 for MINUTIAE PATTERN MATCHER by John P. Riganati et al, now U.S. Pat. No. 4,135,147.
2. Application Ser. No. 722,244 filed Sept. 10, 1976 for AUTOMATIC PATTERN PROCESSING SYSTEM by John P. Riganati et al, now U.S. Pat. No. 4,151,512.
3. Application Ser. No. 847,987 filed Nov. 2, 1977 for METHOD AND APPARATUS FOR AUTOMATIC EXTRACTION OF FINGERPRINT CORES AND TRI-RADII by John P. Riganati et al, now U.S. Pat. No. 4,156,230.
4. Application Ser. No. 934,657 filed Aug. 17, 1978 for SYSTEM FOR EXPANDING THE VIDEO CONTRAST OF AN IMAGE by Stanley A. White et al.
1. Field of the Invention
This invention relates to pattern identification systems and more particularly to a non-fingerprint-like region indicator for indicating whether or not a region or portion of a pattern of interest is
free of fingerprint-like pattern data, as to be of no interest in the processing of such pattern.
2. Description of the Prior Art
The advent of high speed digital data processors has enabled the mass handling of fingerprint pattern data by automatic data retrieval and pattern identification systems. Such systems involve the
treatment of an image as a large matrix of many discrete elements which provide a mosaic pattern resembling the image of interest. For a grey-coded image, for example, each discrete element or pixel
thus has an intensity score of weighting and also an assigned set of coordinates corresponding to the position of such pixel in the image plane or field.
The range of average pixel intensity (for the pixels in a given region) may be quite wide over the large number of regions comprising the total image pattern. Due to dynamic range limitations of the
data processing equipment, such range of average pixel intensity over such regions may need be compressed or "normalized" relative to equipment performance limitations. Such gain normalization may be
achieved in a manner corresponding to automatic gain control techniques. In this way, image definition is retained in each region or sub-pattern of the overall pattern, without either saturating the
equipment (by regions with strong average intensities) or losing image definition (in regions of weak average intensities).
Other equipment limitations occur with regard to logic limitations of the pattern matcher to correlate an imperfect, broken, smudged or dirty image with a clean reference image. Such image-data
processing limitations have led to the development of pre-processors for masking or editing such image data, whereby the image content thereof could be made more useful to the pattern identification
system, rather than rejected as a mismatch or pattern not of interest.
Still further devices have been devised for reducing the volume of the data retrieval and processing involved in the pattern matching process by the use of automatic image classification techniques.
By means of such techniques, recognized fingerprints are classified in accordance with image classifications occurring in the automatic pattern recognition process. An automatic recognition process
of interest is the encoding of minutiae data of a fingerprint into a relative information vector (RIV) format. Other techniques include the identification of cores and deltas. An exposition of
exemplary forms of such pattern processing techniques is included in the following copending patent applications, all of which are assigned to Rockwell International Corporation, assignee of the
subject patent application:
1. Application Ser. No. 722,308 for MINUTIAE PATTERN MATCHER, filed Sept. 10, 1976 by John P. Riganati et al
2. Application Ser. No. 722,244 for AUTOMATIC PATTERN PROCESSING SYSTEM, filed Sept. 10, 1976 by John P. Riganati et al
3. Application Ser. No. 847,987 for METHOD AND APPARATUS FOR AUTOMATIC EXTRACTION OF FINGERPRINT CORES AND TRI-RADII, filed Nov. 2, 1977 by John P. Riganati et al
4. Application Ser. No. 934,657 for SYSTEM FOR EXPANDING THE VIDEO CONTRAST OF AN IMAGE, filed Aug. 17, 1978 by Stanley A. White et al.
However, all of such fingerprint pattern recognition systems require pre-processing or pre-editing of regions of the image data, not only to format the data in a form and style compatible with the
pattern recognition system mechanization, but to also either fix-up or edit-out useless regions of an otherwise useful image of interest, so as to avoid "false alarms" and unnecessary rejects or
useless attempts to perform pattern recognition of unrecognizable data.
What is desired is further means for indicating or pre-editing pattern data signals relative to non-fingerprint-like regions within an image field or pattern of interest, with a view to reducing
unnecessary processing of less relevant image data and improving the consequent quality and speed of image matching.
By means of the concept of the invention there is provided signalling means for enhancing the performance of automatic fingerprint identification systems by reducing the system response to local
non-fingerprint regions of an image field pattern of interest. Such non-fingerprint region indicator or signalling means may be employed to "vote" with or to supplement the editing provided by other
pre-editing systems or used in lieu of other types of pre-editing of electrical image-pattern signals.
In a preferred embodiment of the invention there is provided a two-dimensional generalized sequency analyzer, such as a fast Fourier transform (2D-FFT) machine or the like, responsive to a
binary-coded image signal for identifying discrete frequency terms occurring within a selected bandwidth corresponding to a spatial frequency region associated with descriptions of a fingerprint
image. There is also provided logic means responsive to the bandwidth limited generalized transform analyzer for further identifying the relative energy levels of the bandwidth-limited spectral
content of the binary-coded image signal to provide the identity of a non-fingerprint region within the scanned image, represented by the binary-coded signal.
In normal operation of the above-described arrangement, the failure to detect discrete frequency terms indicative of a fingerprint in a particular region of a scanned image results in a
"non-fingerprint-like region" indication for such region. Such indication may be used to avoid, screen, block or edit-out such region from the image pattern being subjected to the pattern recognition
process. In this way, the processing time required to effect the overall pattern recognition process may be reduced, and the certainty of recognition is improved by the exclusion of image data not of interest.
Accordingly, it is an object of the invention to enhance the performance of automatic fingerprint pattern recognition systems.
It is another object of the invention to provide alternative signalling means for supplementary pre-editing of grey-coded electrical signals corresponding to a scanned fingerprint image for providing
an indication of a non-fingerprint-like region within such image.
It is a further object to distinguish a non-fingerprint-like region amid a fingerprint image of interest by the relative absence (from said region) of discrete fast Fourier transform elements within
a preselected bandwidth of spatial frequencies.
A still further object is to identify a non-fingerprint-like region of an image as being an image region not of interest in the pattern recognition of fingerprint patterns.
These and other objects of the invention will become readily apparent from the following description, taken together with the accompanying drawings in which:
FIGS. 1a and 1b are a representative spatial bar pattern and its associated Fourier transform.
FIGS. 2a and 2b are a stylized minutia pattern and the location of its Fourier components in a half plane.
FIGS. 3a and 3b are a stylized delta pattern and the location of its Fourier components in a half plane.
FIGS. 4a, 4b and 4c are a stylized core-like pattern and its Fourier components in a half plane.
FIGS. 5a and 5b are a representative noisy pattern and the location of its Fourier components in the half plane, while FIG. 5c illustrates the magnitudes of the associated Fourier coefficients.
FIG. 6 is a representation of the bandpass limited Fourier transform response region of interest.
FIG. 7 is a block diagram of a system in which the concept of the invention may be advantageously employed.
FIG. 8 is a block diagram of a system embodying the concept of the invention.
FIG. 9 is a schematic arrangement, partially in block form, of an exemplary mechanization for the data compaction block element included in FIG. 8.
FIG. 10 is an illustration of an exemplary compaction of a representative pixel pattern, as performed by the device of FIG. 9.
FIG. 11 is a timing diagram for the device of FIG. 9.
FIG. 12 is a schematic arrangement, partially in block form, of the binary-coding thresholded signalling block of FIG. 8.
FIG. 13 is a responsive diagram, illustrating the four-state, three-level thresholded response of the logic device 33 of FIG. 8.
FIGS. 14a and 14b are a schematic arrangement, partially in block form, of the "scoring" or confidence logic block of FIG. 8.
The purpose and function of the invention described herein is to supplement or augment the pre-edit or preliminary image signal conditioning and masking in an automatic fingerprint identification
system, by indicating the probable absence of fingerprint-like structure in an identified region of a fingerprint pattern of interest. The performance of such function involves a consideration of the
gross structure of a fingerprint pattern, and treats a local region of a fingerprint pattern as a system of thick parallel lines. The gross structure of a fingerprint pattern resembles parallel thick
curves. However, within a local region, these curves appear as almost straight lines.
Several exceptions to such appearance of a local fingerprint region as parallel thick line structure are:
1. the existence of minutia
2. the existence of cores and deltas
3. the existence of gaps, pores and scars and other noise effects, such as poor contrast and smudging.
The last item (image noise) may be internal to the specific fingerprint itself (gaps, pores, scars) or due to the quality of the detection and reproduction of the print (poor contrast and smudging).
It has been discovered that an examination of the spatial frequency spectral content or Fourier transform term provides a means of automatically detecting and indicating a non-fingerprint region
within a fingerprint pattern field of interest. Referring to FIG. 1a there is illustrated a representative spatial bar pattern corresponding to a fingerprint-like region within a fingerprint pattern
field of interest. As illustrated, a vertical array of three horizontally parallel lines or ridges is interleaved by three valleys for a region or frame 64 mils square (0.064 inches by 0.064 inches),
corresponding to a spatial periodicity of three cycles per frame. However, it has been determined that within the reference region or frame size (0.064"×0.064") the contemplated spatial periodicity
includes the range from 2 to 5 cycles per frame. The corresponding predominant frequency term is shown in FIG. 1b, the ordinate, ω[y], of which represents spatial frequency in a reference or vertical
direction, and abscissa, ω[x], of which represents spatial frequency occurring in a direction orthogonal to the first direction.
The existence of a minutia region, as illustrated in idealized form in FIG. 2a, produces two predominant Fourier terms, spaced somewhat closely together in the frequency domain, as shown in FIG. 2b.
Such idealized form of FIG. 2 may be viewed as if produced by rotation of the lower lines of FIG. 1, with the two frequency terms of FIG. 2b resulting from a decomposition of the predominant term of
FIG. 1b.
For a delta-like region (of a fingerprint), as illustrated in the idealized form in FIG. 3a, three predominant Fourier terms occur, as shown in FIG. 3b. For a core-like region (of a fingerprint), as
illustrated by the idealized form of FIG. 4a, two (somewhat alike) predominant discrete Fourier frequency terms occur, displaced 90° apart as shown in FIG. 4b, the amplitude of such high power or
predominant terms being shown (in the ω axis) in FIG. 4c.
A Fourier transform estimate of a noisy pattern (FIG. 5a) usually results in many low-energy, higher-frequency terms, a typical representation for which is shown in FIG. 5b. The low energy content of such Fourier terms of FIG. 5b is shown in the ω axis of the discrete spectral energy distribution depicted in FIG. 5c, and is to be compared with the energy levels depicted in FIG. 4c for the spectral distribution of FIG. 4b. For a light or dark patch (i.e., little or no pattern), the discrete Fourier terms would be clustered about the origin in FIG. 5b.
In view of the foregoing, it is to be appreciated that a means of testing a spatial spectrum of interest for fingerprint content or identification as a non-fingerprint indicating region has been
conceived as a combination of bandpass limiting of the spatial spectrum response to within a preselected spatial frequency region of interest. Such bandpass limited discrete Fourier transform
response region is depicted graphically in FIG. 6, as excluding frequencies above and below those corresponding to 2-5 cycles per 64 mil frame. Thus, both high frequency noise and low spatial
frequency modulation terms are discarded. Referring again to FIG. 5c, showing the spectral energy distribution associated with a noisy or noise-type pattern, it is to be seen that many of the
discrete frequency Fourier terms associated therewith would lie outside the preselected bandpass region, while those terms within the bandpass, being low-level relative to the terms of interest in
FIGS. 4a, 4b and 4c, are susceptible to thresholding.
Because the discrete Fourier transform terms for spatial frequencies for a given fingerprint region are relatively few, the effects of non-fingerprint or non-regular source contributions to such
terms can be attenuated by a "scoring" technique, contrived to discriminate in terms of that predominant discrete frequency (f[1]) having the highest energy level (e[1]). Such scoring system will attenuate each energy term (e[i]) other than the largest discrete Fourier term by a factor 1/d[1i]^2, corresponding to the reciprocal of the square (or other function) of the spectral interval d[1i] between the predominant frequency energy term e[1] and the energy terms (e[i]) for such other discrete Fourier terms. For at least two such other terms (i.e., three predominant terms):
E = e[1] + e[2]/d[12]^2 + e[3]/d[13]^2. (1)
In this way, the energy contributed by randomly present discrete frequencies within the spatial spectrum of interest is attenuated and the score, E, tends to approach the value of e[1]. The score E
may then be further tested by thresholding as an indication of the confidence level with which such term indicates the presence of a fingerprint-like region (high-level thresholding) or, conversely,
the presence of a non-fingerprint-like region (null or low-level thresholding).
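The scoring of Equation (1) can be sketched in a few lines. The following is an illustrative sketch only; it assumes the spectral interval d[1i] is the Euclidean distance between discrete frequency points, a choice the specification leaves open ("the square or other function of the spectral interval").

```python
def score(terms):
    """Confidence score E of Equation (1).

    terms: list of (freq_x, freq_y, energy) tuples for the discrete
    Fourier terms surviving the bandpass. Every term other than the
    predominant one is attenuated by the squared spectral distance
    from the predominant frequency, so randomly placed terms
    contribute little and E tends toward e[1].
    """
    terms = sorted(terms, key=lambda t: t[2], reverse=True)
    fx1, fy1, e1 = terms[0]                       # predominant term e[1]
    E = e1
    for fx, fy, e in terms[1:]:
        d2 = (fx - fx1) ** 2 + (fy - fy1) ** 2    # d[1i]^2
        E += e / d2
    return E
```

A high E (relative to a threshold) indicates a fingerprint-like region; a null or low E flags the region as non-fingerprint-like.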
A system in which the concept of the invention may be advantageously employed is shown in FIG. 7.
Referring now to FIG. 7, there is illustrated in block diagram form an automatic fingerprint reader system comprising an input section 20, decision logic section 21, and display indicator 22. Input
section 20 includes electro-optic means for converting an image or optical impression into a series of grey-coded electrical signals which are applied to the decision logic 21 for a determination as
to whether the image may include fingerprint data of interest, while utilization means 22 displays the machine decision in that regard. Alternatively, block element 22 may comprise fingerprint
classifier and comparator means.
There is further provided in the arrangement of FIG. 7 non-fingerprint-like region indicator means 23, responsive to the output 24 of signalling device 20, for "weighting", gating or otherwise
modulating the output of display indicator 22 in accordance with the concept of the invention. In other words, element 23 provides a control signal output indicative of the occurrence of a non-fingerprint-like region in the review of the regions of a fingerprint image of interest, for the purpose of preventing spurious image non-match decisions based on processing of irrelevant image data.
The device of block 23, embodying the concept of the invention, is shown in further detail in FIG. 8.
Referring now to FIG. 8, there is illustrated a block diagram of a system embodying the concept of the invention. There is provided data compaction means 30 responsive to a "grey-coded" pixel image
signal output of the sensor or input stage of an automatic fingerprint reader system. Such "grey-coded" signal represents a scanned image as a matrix of grey-coded pixels or discrete elements of a
mosaic or pattern corresponding to the pattern of interest. The data density of the signal of interest is greater than that required for the non-fingerprint indicator function of the invention.
Accordingly, coping unnecessarily with such data density would merely slow down the data processing function or require unnecessary data processing capacity, neither of which is desirable.
Therefore, 4:1 data compaction of the grey-coded data is provided by means of element 30, whereby an image of 32×32 pixels is reduced in definition or resolution to an image of 16×16 pixels. In other
words, a 4-pixel cluster of 2×2 pixels is supplanted by a single grey-coded pixel, the code or intensity of which is the average of those pixels being supplanted. Such compaction technique is also
useful in interfacing the data input source with the fast Fourier transform device 32 to be employed for spatial frequency analysis of the input data. Such interfacing or scaling may be selected to
overlap the sub-areas or regions utilized by other testing or data pre-editing schemes, so that the editing function provided thereby may be supplemental to that provided by such other testing
schemes for such regions.
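The 4:1 compaction performed by element 30 can be sketched directly in software; this is a minimal illustration of the averaging behavior described above (the hardware of FIG. 9 reaches the same result with clocked switches, delays and summers):

```python
def compact_4to1(image):
    # image: 2-D list of grey-coded pixel values with even dimensions.
    # Each 2x2 cluster is supplanted by a single pixel whose value is
    # the average of the four pixels it replaces, halving resolution
    # in both directions (e.g. 32x32 -> 16x16).
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(0, rows, 2):
        out.append([
            (image[r][c] + image[r][c + 1]
             + image[r + 1][c] + image[r + 1][c + 1]) / 4
            for c in range(0, cols, 2)
        ])
    return out
```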
If deemed desirable, an AGC function may be interposed at the input to data compaction means 30, in accordance, for example, with the teachings of U.S. Application Ser. No. 934,657 for SYSTEM FOR
EXPANDING THE VIDEO CONTRAST OF AN IMAGE, filed Aug. 17, 1978, by Stanley A. White, assignor to Rockwell International Corporation, assignee of the subject application. However, signal normalization
within the fingerprint reader input system (i.e., element 20 in FIG. 7) may obviate any necessity for such additional function.
Because the fast Fourier analyzer 32 in the arrangement of FIG. 8 employs binary-coded (i.e., black-white coded) data, there is interposed between the output of data compaction means 30 and fast
Fourier transform device 32 binary-coding means 31. Binary coder 31 functions as an adaptive threshold device to threshold each of the (compacted) grey-coded pixels within each discrete region (of
16×16 pixels) comprising the pattern field of interest, at a threshold level corresponding to the median value grey-code for the 256 pixels within such region. The result is a binary-coded image
of 16×16 pixels representing a 64×64 mil^2 region, the image definition of which is affected only by the changes in contrast within the region and which is unaffected by the general lightness or
darkness of such region and corresponding, say, to the image pattern regions depicted in FIGS. 1a, 2a, 3a, 4a or 5a.
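The median-thresholding behavior described for element 31 can be sketched as follows. Note this follows the adaptive-median description above; the mechanization of FIG. 12 instead uses a fixed mid-range reference, so this sketch illustrates the behavior, not that circuit:

```python
def binarize_at_median(region):
    # region: 2-D list of compacted grey-coded pixel values
    # (e.g. 16x16). Each pixel is thresholded at the median grey
    # value of the region, so the binary result depends only on
    # contrast within the region and is unaffected by its overall
    # lightness or darkness.
    flat = sorted(v for row in region for v in row)
    median = flat[len(flat) // 2]
    return [[1 if v > median else 0 for v in row] for row in region]
```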
The binary-coded 16×16 pixel output of thresholding means 31 is applied to the input of a 16×16 element, two-dimensional fast Fourier transform machine (2D-FFT) 32. 2D-FFT 32 serves to scan or
analyze the data frame (for each region of the image field of interest) in two mutually orthogonal directions (corresponding to, say, horizontal and vertical directions of the pattern region in FIG.
1a) to develop or extract discrete Fourier terms for each frequency of the preselected range of discrete frequencies (i.e., 2-5 cycles per 64 mil frame size). Such limited bandpass is achieved by
merely omitting use of the output taps for Fourier terms corresponding to discrete frequencies outside (i.e., above and below) such preselected bandwidth.
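The tap-omission bandpass of 2D-FFT 32 can be sketched with a naive two-dimensional DFT. This is an interpretive sketch: it treats the 2-5 cycles-per-frame band as a radial band in the discrete frequency plane, whereas FIG. 6 defines the exact masked region; a production device would of course use a fast transform rather than the direct sum shown here.

```python
import cmath

def bandpass_terms(frame, lo=2, hi=5):
    # Naive 2-D DFT of a small binary frame (list of lists of 0/1),
    # computing only the taps whose spatial frequency magnitude lies
    # in the lo..hi cycles-per-frame band -- taps outside the band
    # are simply never evaluated, mirroring the omitted output taps.
    n = len(frame)
    terms = {}
    for kx in range(n):
        for ky in range(n):
            fx = kx if kx <= n // 2 else kx - n   # signed frequency
            fy = ky if ky <= n // 2 else ky - n
            if not (lo <= (fx * fx + fy * fy) ** 0.5 <= hi):
                continue
            s = sum(frame[y][x]
                    * cmath.exp(-2j * cmath.pi * (kx * x + ky * y) / n)
                    for y in range(n) for x in range(n))
            terms[(fx, fy)] = abs(s) ** 2         # spectral energy
    return terms
```

For a bar pattern like FIG. 1a, the surviving energy concentrates in a single pair of conjugate taps on the ω[y] axis; a noisy patch leaves only weak in-band energy, which the subsequent scoring and thresholding reject.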
The organization of data extraction means 32, as illustrated in FIG. 8 may be comprised of a fast Fourier transform 32a, input-coupled to the output of binary coding means 31 by means of a buffer
32b, and employing a scratch pad memory 32c. The output of element 32a may also include an output buffer 32d.
The output of data extraction means 32, representing the discrete Fourier energy-versus-discrete frequency terms extracted from the binary-coded image pattern, is fed to means 33 for processing the
extracted data. Such extracted data processing means comprises confidence logic 33a for implementing the "scoring" or confidence expression of Equation (1) above and also multi- (three-) level
thresholding means for presenting such score as a four-level (two-bit) binary coded representation (i.e., 0, 1, 2 or 3, as shown in FIG. 13). The output signal of logic 33a may be buffered by buffer
means 33b, if desired, for appropriate interfacing with output signal utilization means (not shown).
The construction, arrangement and utilization of signal buffering means for purposes of interfacing between various functionally-cooperating digital equipments is well-understood in the art and does
not constitute a novel aspect of the disclosed invention. Accordingly, elements 32b, 32d and 33b are shown in block form only for convenience in exposition.
The construction, arrangement and cooperation of data compaction means 30 is explained in further detail by means of FIGS. 9, 10 and 11.
Referring now to FIG. 9, there is illustrated in block diagram form, data compaction means 30 of FIG. 8. There are provided a pair of clocked, double-throw switches SW[1] and SW[2], a pair of delay
elements 35 and 36, and a pair of digital summers 37 and 38.
An armature terminal 39 of first digital switch means SW[1] comprises an input terminal responsive to, say, a 4-bit input signal, the signal representing the scanning of a pattern of grey-coded
pixels corresponding to a matrix of 750×750 elements (shown in FIG. 10). First switch SW[1] is operated at the clock rate at which such matrix is scanned and in synchronism with the scanning of
successive elements in a given row (or a given element in successive columns) of the matrix, being in a mutually exclusive one of alternate states for odd and even columns, respectively. As
illustrated, in a first state switch SW[1] is connected to an input of first delay 35, and (in a second switched state) is connected to an input of digital summer 37. Digital summer 37 is further
responsively coupled to an output of delay means 35. An armature terminal 40 of second switch SW[2] is coupled to the output of first summer 37, while second delay 36 and second summer 38 are
arranged to cooperate with second switch SW[2] in like manner as the cooperation among elements 35 and 37 with element SW[1]. Second switch SW[2] is operated at the reduced clock rate at which successive elements in a given column are sequentially scanned, corresponding to the rate at which successive rows of the input matrix are sequentially scanned and in synchronism therewith. Such reduced clock rate corresponds to the reciprocal of the interval of time required to successively scan each of the elements or pixels of a given column. The means of clocking switches SW[1] and SW[2] is well understood in the prior art, does not constitute a novel aspect of the invention, and is therefore not shown in further detail.
Each of delay elements 35 and 36 may be comprised of prior art clocked shift registers, the delay provided by first delay means 35 corresponding to one-half the switching periodicity or reciprocal of
the switching frequency of associated switch SW[1]. In other words, the time delay provided by element 35 is equal to one-half the interval for switch SW[1] to switch from one state to the second
state, from the second state to the first state, and back to the second state. Similarly, second delay means 36 provides a time delay corresponding to one-half the periodicity of associated switch SW[2].
In normal operation of the arrangement of FIG. 9, switch SW[1] applies the 4-bit pixel data of odd numbered columns of pixels (of an exemplary 750×750 matrix in FIG. 10) to delay means 35 and the
pixel data of even numbered columns to digital summer 37, such cyclically alternate status of switch SW[1] being illustrated by curve 41 on FIG. 11. Thus, for input data representing a scanning of
sequential pixels of row 1, first pixel (1,1) and second pixel (1,2) of FIG. 10 are presented on successive alternate states of switch SW[1], grey-coded amplitude data for pixel (1,1) being fed to
delay element 35 and the amplitude data for the next pixel (1,2) being subsequently fed directly to digital summer 37.
First delay means 35 delays the data of first pixel (1,1) by an amount corresponding to one pixel data period (curve 42 in FIG. 11), as to make such delayed data coincident with the SW[1] sample time
for the second pixel (1,2) on curve 41. The combining of the amplitude data for pixels (1,1) and (1,2) is thus effected by means of digital summer 37 (curve 43 in FIG. 11).
The cooperation among elements SW[2], 36 and 38 is similar to that of elements SW[1], 35 and 37, whereby adjacent sets of pixel pairs are combined to complete the 4:1 data compaction process. Switch
SW[2] (in FIG. 9) applies the pixel amplitude data for odd rows (of the first or input matrix in FIG. 10) to second delay means 36 and that for even rows directly to digital summer 38, such cyclical
alternate state of switch SW[2] being illustrated by curve 44 in FIG. 11. Thus, for input data representing a scanning of sequential pixels of row 1 (i.e., combined pixel pairs ((1,1) (1,2)) and
successive pixel pairs ((1,3) (1,4)), etc. for row 1 of the input matrix in FIG. 10) are applied to delay means 36. The pixel pairs (2,1) (2,2) and successive pixel pairs (2,3) (2,4) of input matrix
row 2 are applied to the input of summer 38.
Second delay means 36 delays the data of each combined first column or vertical pixel pair (1,1) (1,2) and (1,3) (1,4) by an amount corresponding to one row scan period (curve 45 at (1,1) (1,2) in
FIG. 11), as to make such delayed odd-row data coincident with the SW[2] sample time for the successive even-numbered row of combined pixel data (curve 43 at (2,1) (2,2) in FIG. 11). The combining of
the amplitude data for the cluster of four pixels (1,1), (1,2), (2,1) and (2,2) of the input matrix of FIG. 10 is thus effected by means of digital summer 38, resulting in an output pixel (1,1) of
the output matrix of FIG. 10. In similar fashion, the data for the four input pixels (1,3), (1,4), (2,3) and (2,4) is combined to provide a single output pixel (1,2), the second pixel in column 1 of
the output matrix of FIG. 10. Upon subsequent completion of column 1 of the output matrix, the pixel amplitude data for columns 3 and 4 of the input matrix of FIG. 10 are processed to form column 2 of
the output matrix. Input pixels (3,1) and (4,1) of curve 41 (FIG. 11) are delayed (at curve 42) and combined with a respective one of input pixels (3,2) and (4,2) at curve 43 (FIG. 11). Duo-pixel
(3,1) (3,2) is delayed (at curve 45 in FIG. 11) and then combined with duo-pixel (4,1) (4,2) at curve 46 to provide the output pixel (2,1) in the output matrix of FIG. 10. In like fashion the
amplitudes of input pixels (3,3), (3,4), (4,3) and (4,4) are combined to form the amplitude of output pixel (2,2). Thus, it is to be appreciated that the output matrix of FIG. 10 represents a 2×2 or
4:1 data compaction of the input matrix of FIG. 10, and that the arrangement of FIG. 9 cooperates to provide such 4:1 data compaction.
Although the amplitude of each grey-coded output pixel of the output matrix of FIG. 10 has been described in terms of the amplitude sum of the grey-coded input pixels, it is clear that such sum may
be scaled (by an attenuation factor of 4) to restore the scaling of the input signal. Also, while combining (and scaling) of the input amplitudes has been described as an averaging technique, it is
clear that median values may be employed from among the grey-code utilized, as is well understood in the art.
The data-compacted, grey-coded output matrix of FIG. 10 is next binary-coded prior to being subjected to the fast Fourier transform extraction process, as shown in further detail in FIG. 12.
Referring now to FIG. 12, there is illustrated in further detail the binary-coding means 31 of FIG. 8. There is provided differential signalling means 40 responsive to the compacted data matrix
signal output of element 30 (in FIG. 8) and further responsive to a preselected reference signal amplitude for providing an output signal indicative of the difference between the applied inputs
thereto. Such differential signalling means may comprise a difference amplifier responsive to two signals of like polarity for providing an output indicative of the sense of the difference
therebetween. Alternatively, a signal summing amplifier may be employed and the applied sense of the threshold reference signal selected to be opposed to the polarity of the applied grey-coded data
signal, whereby the sense of the output is indicative of whether or not the signal amplitude exceeds the preselected threshold value. The value of such threshold is nominally selected to be one-half
of the grey-coded amplitude range. A coincidence, or AND, gate 42 responsive to the output of element 40 and to a sign bit timing pulse or clock, provides a "1" state output upon the coincidence of
the timing pulse and a signal amplitude state in excess of the threshold; otherwise a "0" state output occurs at gate 42. Accordingly, the arrangement of FIG. 12 cooperates to convert an applied
grey-coded input to a binary-coded output.
The binary-coded output of element 31 (in FIG. 8) is applied as an input to the fast Fourier transform machine 32a of FIG. 8, which machine extracts for preselected discrete (spatial) frequencies the
Fourier power or energy terms for each of two mutually orthogonal directions. For example, in each of the x and y axes of a data frame (or selected region of an image field of interest) a predominant
frequency term (∓ω[x1] and ∓ω[y1]) may exist. In the two-dimensional discrete frequency plane representation of FIG. 6 such data may plot as two points (+ω[x1], +ω[y1]) and (-ω[x1], -ω[y1]).
As illustrated in FIG. 6, the distance between each discrete point in the direction of either axis represents a frequency interval of 1 cycle per frame, the two regions (0, to ∓1 cycles per frame)
and (6 cycles per frame and above) being masked. In other words, the unmasked region corresponds to the discrete frequencies:
∓ω[x] =2-5 cycles per frame
∓ω[y] =2-5 cycles per frame.
Thus, it is to be appreciated that the discrete output terminals of data extraction means 32 correspond to the continuous frequency plane in an analog optical Fourier processor, while the use of
only a preselected plurality of adjacent taps for bandpass filtering corresponds to the optical masking employed with such optical techniques. Such optical techniques of displaying a bandpass limited
(continuous) Fourier transform of a time domain signal is discussed in the work of Cutrona (See, for example, the article "On the Application of Coherent Optical Processing Techniques to Synthetic
Aperture Radar", by L. J. Cutrona, et al, at pages 1026 et seq. Proceedings of the IEEE, Vol. 54, No. 8, August 1966). A further discussion of the frequency plane of an optical system for processing
time domain data corresponding to a spatial line image is presented in U.S. Pat. No. 3,545,841 issued to M. J. Dentino et al for Non-Scanning Optical Processor for Display of Video Signals. FIG. 4b
of such U.S. patent shows a plus and minus modulation frequency component (+ω[s].sbsb.1 and -ω[s].sbsb.1) displayed in the (optical) frequency plane. While such optical techniques as applied to
continuous Fourier transform tend to be limited to one-dimensional Fourier transforms, two-dimensional fast Fourier transform machines for discrete Fourier transform processing digital electronic
data signals are well understood in the digital systems art. See, for example, the discussion at pages 240 and 241 in the article "What is the Fast Fourier Transform?" by W. T. Cochran et al in the
volume, Digital Signal Processing, edited by Rabiner and Rader, (pp 240-250), published by the IEEE Press (1972), which article describes the solution speeds achieved in the operation of an IBM 7094
computer for such purposes. Such 1972 reference volume also includes (at pages 263, et seq. thereof) the article "A Method for Computing the Fast Fourier Transform with Auxiliary Memory and Limited
High Speed Storage" by Richard C. Singleton, which reference (at page 264 thereof) discusses not only the earlier use of the IBM 7094 but also the alternate use of the Burroughs B5500 for high speed
computation of the fast Fourier transform. The article "An Algorithm for computing the Mixed Radix Fast Fourier Transform", by Richard C. Singleton, included at pages 294, et. seq. of the above-noted
IEEE Press reference, describes in an Appendix I a FORTRAN program used on CDC 6400, CDC 6600, IBM 360/67, and Burroughs B5500 machines for such computations. Other references are included in such
volume and also listed in the bibliography accompanying the reference articles published therein. A further reference to the prior art of FFT machines is included in U.S. Pat. No. 3,748,451 to L. D.
Ingwersen for General Purpose Matrix Processor with Convolution Capabilities, particularly Columns 1, 2 and 3 thereof.
Accordingly, the availability and use of commercially available machines for two-dimensional fast Fourier transform computation of discrete Fourier transforms is sufficiently well-known and
understood in the art, as to obviate further exposition of the structure and means employed for such purpose. Therefore, element 32b is shown in block form in FIG. 8 for convenience only in exposition.
The output of the fast Fourier transform machine 32 will appear on a number of output lines, the frame time coincidence of the data on such lines representing a data set for a given data point in the
two-dimensional matrix of FIG. 6: A discrete frequency in each of the two frequency coordinates (∓ω[x], ∓ω[y]) and an associated power or energy level (e[i]). Such energy level for each discrete
two-dimensional frequency point or data set is then processed by a system of comparators to determine the predominant data set, or data set having the predominant energy level, designated at e[i] in
Equation (1), above. Such a system of comparators is shown in FIG. 14a.
Referring now to FIGS. 14a and 14b there is illustrated in further detail an exemplary mechanization of the extracted data processor 33 of FIG. 8. FIG. 14a schematically depicts in block diagram form
the logic for determining the data sets for the predominant energy term e[1], and the next two most predominant terms e[2] and e[3] of Equation (1), together with the associated two-dimensional
frequency coordinates (ω[x], ω[y]) associated therewith, while FIG. 14b depicts the mechanization of the term E of Equation (1), from a determination of the frequency spacings between the predominant
data set and the other two relevant data sets and from the energy levels associated with each such data set.
In the arrangement of FIG. 14a there are provided three systems of comparators, each cooperating in the manner of a peak detector. The first comparator system is comprised of an input register 40
responsive to the eight-bit output of the 2D-FFT 32 (of FIG. 8), a one-word data storage means 41, a comparator 42 and double-pole, double-throw switching means 43. Such first comparator system
further includes associative address storage means comprising a second register 44, second storage means 45 and second double-pole, double-throw switching means 46. Thus, elements 40 and 44 serve as
register means to temporarily store a current discrete Fourier term sample (e) and the associated two-dimensional frequency address therefor (ω[x], ω[y]); storage elements 41 and 45 serve to
temporarily store a previous data sample and the associated address therefor; and comparator 42 serves to compare the current Fourier sample in register 40 with the stored earlier sample in storage
means 41. If the current sample in register 40 is greater than the earlier sample in storage means 41, the two-state output of comparator 42 serves as a logic control signal to switch switching means
43 and 46 as shown in FIG. 14a, whereby a clocking-out of registers 40 and 44 and storage means 41 and 45 causes the previous data sample in storage 41 and its corresponding address in storage 45 to
be transmitted to a further set of registers 141 and 144. The greater valued current data sample and its associated address (in being clocked out of registers 40 and 44, respectively) are transmitted
to and substituted in storage means 41 and 45, respectively.
Where, however, a previous sample, stored in storage 41, is greater than the current data sample (in register 40), then comparator 42 serves to switch switching means to an alternative state, whereby
the previous sample and associated address remain in storage means 41 and 45, and the comparatively lesser current data sample in register 40 and the associated address in register 44 are discarded
to registers 141 and 145 (in FIG. 14a).
Hence, in discarding the lesser of a current and a previous data sample, while storing the larger of the two, the arrangement and cooperation of comparator 42 is seen to function as a peak detector
for the detection of the term e[1] of Equation (1).
There is also provided in FIG. 14a elements 141, 142, 143, 144, 145 and 146 forming a second comparator and comprised of like functional elements as the first comparator and similarly functioning as
a second peak detector responsive to the discards of the first peak detector for detecting peak values (e[2]) less than those detected by such first peak detector, together with the associated
two-dimensional frequency address of such second highest peak value. There is further provided the combination of elements 241, 242, 243, 244, 245 and 246, all cooperating as a third comparator and
comprised of like functional elements as the first and second comparator, and functioning as a third peak detector. Such third peak detector is responsive to the discards of the second peak detector
to detect peak values (e[3]) less than those detected by either of the first and second peak detectors (e[3] <e[2] <e[1]) and similarly discarding all other data samples.
Because the construction, arrangement, clocking and cooperation of digital registers, storage means, comparators and switching means are well understood in the art, these elements are shown
schematically or in block form only, for purposes of convenience in exposition in FIG. 14a. Although an exemplary mode of successive peak detection has been shown, any one of alternative schemes
occurring to mind may be employed.
Although the address registers 44, 144 and 244 and the address storage means 45, 145 and 245 have each been illustrated as single signal line input and outputs, it is to be understood that parallel
data processing may be preferably employed, whereby such single paths may represent a plurality of parallel signalling lines, a set for ω[x] signals and a separate set for ω[y] signals (as more
clearly indicated in FIG. 14b).
The three predominant Fourier terms (of Equation (1)), and the associated two-dimensional frequency addresses therefor, having been determined, the frequency spacing or increment terms d[12] ^1/2 and
d[13] ^1/2 may then be determined and applied, and several product terms combined, to effect the score, E, of Equation (1) by means of the exemplary arrangement of FIG. 14b.
Referring now to FIG. 14b, there is provided first read-only memory means (ROM's) 50, 51, 52, and 53, second read-only memory means (ROM's) 60, 61, 62 and 63, first and second comparators 70 and 71,
first and second multipliers 72 and 73, and signal combining means 74.
The function of ROM's 50, 51, 52, 53, 60, 61, 62 and 63 in FIG. 14b is to determine alternative values for d[12] ^1/2 and d[13] ^1/2 of Equation (1), from the frequency-spacings between the maximum
energy term e[1] and each of the next most prominent energy terms e[2] and e[3],
d[1i] ^2 =Δω[x].sbsb.i^2 +Δω[y].sbsb.i^2
d[1i] ^2 =|ω[x].sbsb.1 -ω[x].sbsb.i |^2 +|ω[y].sbsb.1 -ω[y].sbsb.i |^2
Alternative values of the frequency spacing intervals d[1i] are determined by alternatively employing the mirror frequency values of the dominant Fourier energy terms, to be explained more fully
hereinafter. The reciprocals of the alternative values are compared by comparator means and the larger of the reciprocal values is selected, the output d[12] ^1/2 appearing on the output line of
comparator 70 and the output d[13] ^1/2 appearing on the output line of comparator 71. Multiplier 72 gain scales or multiplies the e[2] signal (from register 142 in FIG. 14a) to provide the product e
[2] /d[12] ^2, corresponding to the second term in the right hand member of Equation (1). Multiplier 73 similarly gain-scales or multiplies the e[3] signal (from register 242 in FIG. 14a) to provide
the product e[3] /d[13] ^2, corresponding to the third term in the right hand member of Equation (1). Combining means 74 serves to combine the output e[1] (from register 42 in FIG. 14a) and the
outputs of multipliers 72 and 73 so as to complete the mechanization of Equation (1).
The determination of the terms d[12] ^1/2 and d[13] ^1/2 by means of the ROM's (read only memories) of FIG. 14b involves using or testing the proximate one of a mirror, or conjugate, set of
frequencies for the sets of coordinate frequency addresses recorded in registers 45, 145 and 245. In other words, a lesser spatial frequency difference between proximate Fourier energy terms may
occur between a so-called "positive" frequency set of one term and "negative" frequency set of another term. For example, the spatial frequency difference between a frequency address or set (ω
[x].sbsb.1,ω[y].sbsb.1) for the first one of the three predominant discrete Fourier energy terms (say, e[1]) and a mirror frequency address (i.e., -ω[x].sbsb.2,-ω[y].sbsb.2) for e[2], the second of
the predominant energy terms, may be the least difference, and therefore such lesser spacing between such mirror frequency term and the principal term would be employed in mechanizing Equation (1).
For simplification and convenience in the determination of the (closest) spacing between the frequency address of the principal term (e[1]) and the (mirror) frequency address of the lesser prominent
term, several additional structural and functional features are included in the disclosed embodiment. First, all frequency addresses (including mirror frequencies) are re-coded as positive numbers,
beginning with the upper left hand corner in the frequency address grid illustrated in FIG. 6. Thus:
ω[x].sbsb.i,ω[y].sbsb.i =(ω[x].sbsb.i +5),(5-ω[y].sbsb.i).
In this way, only differences between positive numbers are involved in determining the frequency spacings d[12] and d[13]. This re-coding may be done in a scratch pad memory 32c, ancillary to the FFT
output buffer in FIG. 8. In practice, the actual mirror frequency terms of the FFT analysis are not placed in memory in the embodiment of FIG. 8. Instead, such mirror image address terms are
synthesized by ROM's 50, 51, 52 and 53 (in FIG. 14b), employing the re-coded frequency address outputs from FFT buffer 32d (of FIG. 8). In this way, the amount of buffer storage required is reduced.
For example, an address for first term e[1] (ω[x].sbsb.1 =+1 cycle/frame, ω[y].sbsb.1 =+3 cycle/frame), and an address for a second term e[2] (ω[x].sbsb.2 =+1 cycle/frame, ω[y].sbsb.2 =-3 cycle/
frame) are recorded in FFT scratch pad memory 32c as:
ω[x].sbsb.1,ω[y].sbsb.1 =(1+5),(5-3)=(6,2)
ω[x].sbsb.2,ω[y].sbsb.2 =(1+5),(5-(-3))=(6,8)
Δω[x].sbsb.12 =|6-6|=0
Δω[y].sbsb.12 =|2-8|=6
d[12] =Δω[12] =6.
Such situation may be appreciated from observation of the situation illustrated in FIG. 6.
By treating the mirror image of the first term, e[1], a different (lesser) spacing frequency results:
(ω[x].sbsb.1,ω[y].sbsb.1)*=(-1+5), (5-(-3))=(4,8)
(ω[x].sbsb.2,ω[y].sbsb.2)=(1+5), (5-(-3))=(6,8)
Δω[x].sbsb.12 *=|4-6|=2
Δω[y].sbsb.12 *=|8-8|=0
d[12] *=Δω[12] *=2.
Such alternate situation may also be appreciated from inspection of FIG. 6. It can also be shown that the quantity d[12] * can be alternatively computed or determined as:
d[12] *=|10-ω[x].sbsb.1 -ω[x].sbsb.2 |=|10-6-6|=2.
More generally:
d[12] ^2 =|ω[x].sbsb.1 -ω[x].sbsb.2 |^2 +|ω[y].sbsb.1 -ω[y].sbsb.2 |^2 =Δω[x].sbsb.12^2 +Δω[y].sbsb.12^2
(d[12] *)^2 =|10-ω[x].sbsb.1 -ω[x].sbsb.2 |^2 +|10-ω[y].sbsb.1 -ω[y].sbsb.2 |^2.
In other words, the components of d[1i] are the coordinate differences between a pair of coordinate sets, and the components of d[1i] * are the 10's complements of the sums of corresponding
coordinates of a pair of coordinate sets.
In the arrangement of FIG. 14b, ROM's, 50, 51, 52 and 53 each provide dual outputs, corresponding to the coordinate differences and the 10's complement of the sums of corresponding coordinates of a
pair of coordinate sets. ROM's 50 and 52 provide the data for d[12] and d[12] *, while ROM's 51 and 53 provide the data for d[13] and d[13] *. ROM 50 is responsive to the ω[x].sbsb.1 and ω[x].sbsb.2
outputs of registers 45 and 145 respectively (of FIG. 14a) as addressing means to provide (i.e., look-up) the two outputs |ω[x].sbsb.1 -ω[x].sbsb.2 | and |10-ω[x].sbsb.1 -ω[x].sbsb.2 |. ROM 52 is
responsive to the ω[y].sbsb.1 and ω[y].sbsb.2 outputs of registers 45 and 145 to provide the two outputs |ω[y].sbsb.1 -ω[y].sbsb.2 | and |10-ω[y].sbsb.1 -ω[y].sbsb.2 |. Similarly, ROM's 51 and 53 are
responsive to the outputs of registers 45 and 245 as addressing means to provide appropriate outputs, ROM 51 providing the two outputs |ω[x].sbsb.1 -ω[x].sbsb.3 | and |10-ω[x].sbsb.1 -ω[x].sbsb.3 |,
and ROM 53 providing the outputs |ω[y].sbsb.1 -ω[y].sbsb.3 | and |10-ω[y].sbsb.1 -ω[y].sbsb.3 |. In other words ROM's 50 and 52 provide the components in a determination of the spacing frequency d
[12] or d[12] *, and ROM's 51 and 53 provide the components for a determination of d[13] or d[13] *.
ROM's 60, 61, 62 and 63 in FIG. 14b determine (or look-up) functions of the spacing terms d[12], d[12] *, d[13] and d[13] *, respectively in response to an appropriate respective address. For
example, ROM 60 is responsive to the differential outputs of ROM's 50 and 52 to provide an output d[12] ^-2 corresponding to the reciprocal of the sum of the squares of the applied inputs to ROM 60:
d[12] ^-2 =[|ω[x].sbsb.1 -ω[x].sbsb.2 |^2 +|ω[y].sbsb.1 -ω[y].sbsb.2 |^2 ]^-1.
ROM 61 is responsive to the 10's complement summed outputs of ROM's 50 and 52 to provide an output (d[12] *)^-2 corresponding to the reciprocal of the sum of the squares of the applied inputs:
(d[12] *)^-2 =[|10-ω[x].sbsb.1 -ω[x].sbsb.2 |^2 +|10-ω[y].sbsb.1 -ω[y].sbsb.2 |^2 ]^-1.
The respective reciprocal terms d[12] ^-2 and (d[12] *)^-2 from ROM's 60 and 61, respectively, are then applied to comparator 70 for a determination of the greater of them (corresponding to a least
spacing interval).
Similarly, ROM 62 is responsive to the differential outputs of ROM's 51 and 53 to provide an output d[13] ^-2 corresponding to the reciprocal of the sum of the squares of the applied inputs to ROM 62:
d[13] ^-2 =[|ω[x].sbsb.1 -ω[x].sbsb.3 |^2 +|ω[y].sbsb.1 -ω[y].sbsb.3 |^2 ]^-1.
ROM 63 is responsive to the 10's complement summed outputs of ROM's 51 and 53 to provide the term (d[13] *)^-2 corresponding to the reciprocal of the sum of the squares of the applied inputs. The
respective reciprocal terms d[13] ^-2 and (d[13] *)^-2 from ROM's 62 and 63, respectively, are then applied to comparator 71 for a determination of the greater of them, corresponding to a least spacing interval.
Thus, the embodiment illustrated in FIG. 14b mechanizes Equation (1) for the term E, corresponding to the degree of fitness of a fingerprint area under examination as an area of interest. Such signal
may be submitted to a multiple level thresholding, as indicated by FIG. 13, within output buffer 33b (in FIG. 8) as an element of the non-fingerprint region indicator 23 of FIG. 7. In this way, a
control signal of discrete levels may be provided for control of an automatic fingerprint reader system, as indicated in FIG. 7.
The design and functioning of a ROM, as a look-up table responsive to selected addressing, is well understood in the art, while the number of addresses and responses sought for the few discrete
address frequencies employed is so limited as to be well within the art of such commercially available devices. One such commercially available device is INTEL Model 2701, manufactured by INTEL
Corp. of Santa Clara, California. In other words, a ROM may be considered as a single-valued arbitrary function generator, an appropriate output from which is provided in response to a selected
address (or argument) among a list of discrete addresses, as is well understood in the art.
Accordingly, there has been described highly useful means for monitoring an image of interest to determine the existence of a fingerprint region of interest or, conversely, a non-fingerprint region
not of interest, whereby machine memory and data processing time may be economized in the automatic processing of fingerprint data.
Although the concept of the invention has been described in terms of an embodiment employing a two-dimensional fast Fourier transform machine, it is clear that the concept of the invention is not so
limited, and that any suitable form of sequency function analyzer may be employed. Also, although the testing or scoring signal has been indicated as employing two lesser predominant energy terms in
addition to the most predominant energy term, it is clear that the concept is not so limited and that any number may be employed, but that at least a second predominant term is to be employed.
Further, although such subsequent, lesser predominant terms have been "scored" or attenuated by the reciprocal of the square of the frequency spacing d[1i] ^1/2, the concept is not so limited and
such attenuation may be omitted or provided as any suitable function of the spacing frequency.
Although the invention has been disclosed and illustrated in detail, it is to be clearly understood that the same is by way of illustration and example only and is not to be taken by way of
limitation, the spirit and scope of this invention being limited only by the terms of the appended claims.
Bezier curves and equally distributed parametric points (easy! ?)
November 15th 2008, 05:01 PM #1
I am an amateur developing the math to describe the motion of a robot of sorts.
At this stage I'd like to use bezier curves as user input to describe the motion path/s that it will make over time... (imagine it sitting flat on the cartesian 'floor')
I'd first convert the free parameter 't' (time) into another parameter, say 's' (speed), via a function of t so the user can have control over the velocity/accel of the robot throughout its
traversing of the Bezier curve ('t' would tick away as usual and the 's' would actually be the free parameter).
This will work nicely with linear Bezier paths as if you use equal increments of 't' then the parametric points on the linear path are also equally distributed - this means my new 's' should
translate perfectly onto the path also.
Problem comes when I want to use Bezier curves of degree two or higher...
The points as you traverse along the path become distributed unequally, with more 'resolution' or bunching around the apex (in the case of a parabola/degree two quadratic Bezier), which will
translate into slower motion around this area.
I'd like to know how to compensate for this exactly: some sort of inverse function that needs to be applied to 't'? Similar to what I wanted to do to convert it into 's', but applied before
doing so, so that there would now actually be a function upon a function ...
I imagine that if I gave it a crack, something to do with the Arc Length procedure might help, but I am guessing/hoping that this problem has already been solved, or that another way of thinking
about it might help ...
any ideas ?
Hope I am making sense ! (?)
Estimation of channel parameters and background irradiance for free-space optical link
Free-space optical communication can experience severe fading due to optical scintillation in long-range links. Channel estimation is also corrupted by background and electrical noise. Accurate
estimation of channel parameters and scintillation index (SI) depends on perfect removal of background irradiance. In this paper, we propose three different methods, the minimum-value (MV),
mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution.
While the ML-based method assumes gamma–gamma scintillation, it can be easily modified to accommodate other distributions. The estimators’ performance is evaluated from low- to high-SI regimes
using both simulation data and experimental measurements. The MV and MP methods have much lower complexity than the ML-based method; however, the ML-based method shows better SI and
background-irradiance estimation performance.
© 2013 Optical Society of America
OCIS Codes
(350.4600) Other areas of optics : Optical engineering
(200.2605) Optics in computing : Free-space optical communication
ToC Category:
Optics in Computing
Original Manuscript: November 28, 2012
Revised Manuscript: February 27, 2013
Manuscript Accepted: March 24, 2013
Published: May 6, 2013
Afsana Khatoon, William G. Cowley, Nick Letzepis, and Dirk Giggenbach, "Estimation of channel parameters and background irradiance for free-space optical link," Appl. Opt. 52, 3260-3268 (2013)
Basket Case Fractal - submission tenders open.
July 1, 2011
I’ve decided today that my next project will be to create a basket case fractal. I have a picture in my mind of a gift basket type thingy, but I can’t decide what to put into it.
I’ll be working with an iteration of z = z^2 + c with a view then to producing a gift basket of the Mandelbrot kind.
The initial z word will be zero, for no other reason than it seems to be a fitting place to start. Thus giving
zero = zero squared + c
as my first iteration of the basket case fractal.
c is the point of my indecision. It’s complex, which suggests that it is partly real and partly imaginary.
Real numbers can be represented on a one dimensional line called the real number line.
Since complex numbers have two parts, a real one and an imaginary one, we need a second dimension to graph them. We simply add a vertical dimension to the real number line for the imaginary part.
Ok. That makes perfect sense.
However, I’m not looking to put a complex number into my gift basket. I’m looking for a highly distinctive selection.
If it was up to me I’d go for a selection of biscuits and cheese, but I’m not doing this for myself. Let me reiterate – I want this to be a gift basket fractal, so submissions for c are now open.
Bonnie McClellan
July 1st, 2011 at 6:55 am
Well, I’m a real fan of 9.
it’s odd but has the harmony of being 3 3′s
A trinity of trinities. Nine sides make a nonagon
(which sounds like the kind of form one could toss off as if it were nothing)
the number on the threshold of tripping into the binary.
It could be nicely divided amongst friends in telling ways:
One for you and two for me
Two for you and one for me
One for you, one for me,
one to leave
for the mystery guest.
• Brad
July 5th, 2011 at 5:08 pm
It’s amazing what that 9 revealed when i plugged it into the complex plane. Of course I had to convert it from an absolute nine into a complex representation first, then weed out the obvious
cliches (not that the resulting cliches were without value – I rediscovered Robert Burns at one point!) Looking forward to experimenting more with it. Thanks Bonnie!
July 1st, 2011 at 11:08 am
i like bonnie’s nine. back at fractal central i will have to try that 0 thing as i am not exactly clear on zero as an integer, since any other everyday math proposes 0+0 or 0-0
or 0×0 is 0. so there might not be an iteration to generate, it would make the “black hole” space instead. the space around the iterations so to speak spatially, where most iterations are viewed 2D, so
there’d have to be an abstract perspective for viewing the depth, lets say, of the space around the iterations. and 0 would work perfectly well for that, it would seem, but how does a 0 jumpstart an
iteration?
• Brad
July 5th, 2011 at 5:18 pm
This turns out to be very fascinating indeed Kathi. Fractal central I assume is at (0,0i), and to take that as a ‘c’ should produce something like a black hole singularity, since as you say, it
can’t generate an iteration.. Thing is, when I tried it myself, I found a result. It made me feel sad though, and I’m not sure I want to show anyone what I found. Thank you Kathi.
□ tipota
July 7th, 2011 at 4:31 am
i went back to the online generator and also got some results tho they were unusual in spatial composition as shown in the frame of the generator. however, i had to override presets which i
was only allowed to do using julia so the mandelbrot result was not exactly 0 based, but the julia result was ‘almost’ all black space but for some feathers in one corner
July 1st, 2011 at 10:48 pm
and the card attached to the basket reads “you’ve been gleicked!”
• Brad
July 5th, 2011 at 5:25 pm
Thanks Mark :) I can see an opportunity to print a series of ‘iGleick U’ cards, but I fear there may be more than one legal barrier standing in the way. I’ve read though that mathematical results
can’t be held to copyright law, so maybe there’s a way around it?
□ breathenoah
July 5th, 2011 at 11:02 pm
funny you say that about math and copyright law. when i read gleick’s book years ago, i was left with the distinct impression of “so what”. i never got what was so special about complex
numbers, it was just plain old math to me.
Follow Instructions on Top Card
Mac OS X Users/C programmers?
11-29-2003 #1
Registered User
Join Date
Nov 2003
Mac OS X Users/C programmers?
I'm buying a laptop and open to using any operating system. Macs being Unix based makes them very appealing to me and I'm trying to convince myself Mac is the way to go. I am used to reading
specs for Pentium based machines and don't know how to compare to the processor speeds for Macs. Is a G4 933 MHz going to perform like a Pentium IV 2.4 GHz machine? (They cost about the same.)
To solve this problem I have written a bit of C code (and stolen pieces from the internet too) that I'd like to use as a benchmark for testing various Mac and Wintel machines. I want to be able
to walk into a Mac store, pop in a CD and run the benchmark software. Like taking a car for a test drive. My problem is I cannot compile the software for Mac. If someone could compile it on Mac
using gcc and email me the Mac executable, gcc version number, and command used to compile (optimizers, etc.) I'd be very grateful. Of course I'll post results once I've done the tests.
I've attached the code below optimistically hoping someone will help me.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <time.h>
#define size 50000000
#define size2 5000000 /*should be greater than number of primes < 2*size+3 */
struct vector {
double *p;
long l; /*length (a.k.a. number of elements)*/
};
void integral(void){
/*based on code from http://www.math.purdue.edu/~dvb/timing.html*/
double sum = 0, delta = 0.025, x = -5;
double y, z;
int i, j, k;
clock_t start = clock();
for(i=0; i <400; i++)
    {x += delta;
     y = -5;
     for(j=0; j<400; j++)
         {y += delta;
          z = -5;
          for(k=0; k<400; k++)
              {z += delta;
               sum += exp(-x*x)*cos(y*y)*cos(z*z);}}}
/*printf("integral=%f10\n", sum);*/
printf("execution time for integral = %g seconds\n",((double)(clock()-start))/CLOCKS_PER_SEC);
}
void prime (void){
/*based on code from http://www.math.purdue.edu/~dvb/timing.html*/
int i, p, count=1;
int x, xisprime, sqrtx;
clock_t start = clock();
p = 3;
for(x=3; count < 300000; x+=2)
    { xisprime = 1;
      sqrtx = (int) sqrt(x);
      for(i=3; i<= sqrtx; i++)
          if(x%i == 0)
              {xisprime = 0;
               break;}
      if(xisprime)
          {count++;
           p = x;}}
/*printf("the %ith prime is %i\n", count, p);*/
printf("execution time for prime = %g seconds\n",((double)(clock()-start))/CLOCKS_PER_SEC);
}
void goldbach(void){
/*based on code from http://www.math.purdue.edu/~dvb/timing.html*/
int i,j, diff,inc,count=0;
int limit, top,maxp = 3, maxq=3;
char *isprime; /* treated like an integer array; isprime[i] stands for the odd number 2*i+3 */
int *primes;
double x; /* unused; kept from the original posting */
int test;
clock_t start = clock();
isprime = (char *)malloc((size_t)(size*sizeof(char)));
primes = (int *)malloc((size_t)(size2*sizeof(int)));
if((isprime==NULL)|| (primes == NULL))
    {printf("Out of memory\n");
     return;}
for(i=0;i<size;i++) /* sieve of Eratosthenes over the odd numbers */
    isprime[i] = 1;
limit = (int) sqrt(2*size + 3.5) ;
for(i=0;i<limit; i++)
    {inc = 2*i+3;
     for(j=3*(i+1); j<size; j+=inc)
         isprime[j] = 0;}
for(i=0;i<size;i++)
    if(isprime[i] && count < size2)
        primes[count++] = 2*i+3;
top= 2*size;
for(i=6; i <= top ; i+=2) /* look for a prime partition of each even number */
    {test = 0;
     for(j=0; j < count && primes[j]<i;j++)
         {diff = i - primes[j];
          if(diff >= 3 && isprime[(diff-3)/2])
              {/*printf("%i = %i + %i\n", i, primes[j], diff);*/
               test = 1;
               if(primes[j] > maxp)
                   {maxp = primes[j];
                    maxq = diff;}
               break;}}
     if(test == 0)
         {printf("Goldbach fails for %i\n", i);
          break;}}
/*printf("Goldbach true up to %i\n max partition: %i+%i\n", top, maxp, maxq);*/
printf("execution time for Goldbach = %g seconds\n",((double)(clock()-start))/CLOCKS_PER_SEC);
free(isprime);
free(primes);
}
void dynamicfnc(void){
struct vector vec;
long requestedlength=3000;
register long i,j;
clock_t start = clock();
vec.p=(double*)calloc(requestedlength, sizeof(double));
if (vec.p==NULL) {
    printf("vec.p==NULL in vec.c/allocvec");
    return;
}
vec.l = requestedlength;
for(i=0;i<100000;i++) /* workload: repeatedly fill the dynamically allocated array */
    for(j=0;j<vec.l;j++)
        vec.p[j] = (double)(i+j);
free(vec.p);
printf("execution time for dynamicfnc = %g seconds\n",((double)(clock()-start))/CLOCKS_PER_SEC );
}
int main(void) {
clock_t start = clock();
integral();
prime();
goldbach();
dynamicfnc();
printf("total execution time = %g seconds\n",((double)(clock()-start))/CLOCKS_PER_SEC );
system("PAUSE"); /* works only where the shell has a PAUSE command (Windows) */
return 0;
}
Last edited by petermichaux; 11-30-2003 at 03:32 PM.
My system's performance
Results on my desktop
System: Pentium III 450 MHz, 256 MB Ram, Windows XP
Compiler: Dev-C++ 5.0 beta 8 (4.9.8.0) with Mingw/GCC 3.2
Total time to run benchmark: 252 s = 4 min 12 s
Last edited by petermichaux; 11-29-2003 at 09:33 PM.
Hint on buying a new comp: buy the new Power Mac G5 with double processors, it's all you can dream of IMHO... although it's no laptop.
They say that if you play a Windows Install CD backwords, you hear satanic messages. That's nothing; play it forward and it installs Windows.
If you're going to buy a Mac, I would go with the G5 - if you can afford it. But a G4 would be fine too. I don't know If the 933mhz model will perform like a Pentium IV 2.4 GHz, though, but I
think it'll give you the power you'll need. Of course it will depend on what you're going to use it for, but for programming, word processing, web, multimedia and all that it will be plenty.
I compiled your program for you. gcc (GCC) 3.1 20020420 (prerelease), compiled on a G4 733/768/Mac OS X 10.2.8. Compiled with cc bench.c -o bench -Wall (Btw the pause command does not exist..)
Last edited by kristy; 11-30-2003 at 03:31 AM.
Thank you
Thanks Kristy. I will run that executable on some macs as soon as I can. How did your computer do with the test?
What do you mean the command pause doesn't exist? Did you remove the line of code
If I don't use this line on my PC the console window disappears before I can read the output of the program. (Now I'm thinking you are executing from the command line and the output shows up in
that terminal and never disappears. It's been a while since I've used command line.)
I want a laptop so no G5 for me.
Last edited by petermichaux; 11-30-2003 at 12:10 PM.
Originally posted by petermichaux
(Now I'm thinking you are executing from the command line and the output shows up in that terminal and never disappears. It's been a while since I've used command line.)
Exactly. The system function passes a command to the shell, but there's no command called PAUSE. I didn't remove that line, you'll just get a message at the end of the program from the shell,
saying something like 'PAUSE, command not found' (see my test results). I should also mention that I had to comment out two unused variables to make it compile with -Wall. If you want me to
recompile it or something, let me know, and I will.
My test results:
execution time for integral = 27.24 seconds
execution time for prime = 29.14 seconds
execution time for Goldbach = 63.7 seconds
execution time for dynamicfnc = 18.64 seconds
total execution time = 138.72 seconds
sh: PAUSE: command not found
not working
I downloaded your attachment to my Windows XP computer. Burned it onto a cd.
Now I'm at a computer store and I'm trying to run it. I can only save it on the Mac's desktop by dragging the icon from the cd to the desktop.
How do I run the file? I double clicked the icon and the Mac asks me which application to use.
If I open a terminal to try the ./bench command to execute I can't even find the right directory.
What's going on here?? Man, I feel useless on another OS.
Open the Terminal and cd to the current directory and run the file with ./bench. If you don't know where you are, drag the bench file to the Terminal, and press enter.
I would first copy the file to the desktop (just drag the icon to the desktop). It's not necessary, but it's faster than running it from a cd.
Kristy, thanks again for your help.
I want to buy a Mac but these results aren't swaying me. A $2000 PC is faster than a $4300 Mac. And what if that PC was running Linux? Probably faster. Arg. Apple is a frustrating company to
(Prices in Canadian dollars.)
                              int  prime  gold  dynam  total  price
Dell PIII 450MHz 256MB         79     44    61     68    252
iMac 800MHz 256MB              25     27    64     17    133
iBook G4 800MHz 128MB          18     21    76     12    127   1500
Powerbook G4 1GHz 256MB        26     28    75     18    147   2300
Powerbook G4 1.33GHz 512MB     15     16    43     10     84   4300
HP P4 2.4GHz 512MB             32      8    33      6     80   2000
VAIO P4 2.66GHz 512MB          29      8    33      3     73   2000
Last edited by petermichaux; 11-30-2003 at 08:14 PM.
Originally posted by kristy
Exactly. The system function passes a command to the shell, but there's no command called PAUSE. I didn't remove that line, you'll just get a message at the end of the program from the shell,
saying something like 'PAUSE, command not found' (see my test results).
Hence my wonder at the use of system("PAUSE") when a good ol' getchar() should suffice -- and is in all implementations of C on all platforms.
Definition: Politics -- Latin, from
poly meaning many and
tics meaning blood sucking parasites
-- Tom Smothers
Originally posted by petermichaux
Kristy, thanks again for your help.
I want to buy a Mac but these results aren't swaying me. A $2000 PC is faster than a $4300 Mac. And what if that PC was running Linux? Probably faster. Arg. Apple is a frustrating company to
It doesn't matter which one runs this particular program faster. First off, what compiler did you use for your mac? Second, what optimization flags did you use for your mac? Third, what compiler
did you use for your PC? What optimization flags... see the point?
Depending on what compiler and flags you use, you're going to get varied results. It also depends on what you plan on doing. Some CPUs are better at floating point. Some at integers.
Buy a computer that does what you need it to the best.
And why is this in the C forum? Shouldn't this be in the "Help me pick out a computer please!" forum?
Hope is the first step on the road to disappointment.
Originally posted by quzah
It doesn't matter which one runs this particular program faster.
This program has components of things I actually do. It also tests two different things. Also if I'm going to do a test I have to pick some program.
First off, what compiler did you use for your mac? Second, what optimization flags did you use for your mac? Third, what compiler did you use for your PC? What optimization flags... see the
gcc on Mac with no optimizers; the MinGW port of gcc on PC with no optimizers.
Depending on what compiler and flags you use, you're going to get varried results.
I tried to minimize this. These are the compilers I would use on each platform, so I think it is a fair test of the ENTIRE system that I'll be using to code.
It also depends on what you plan on doing. Some CPUs are better at floating point. Some at ingeters.
This program has components of things I actually do.
Buy a computer that does what you need it to the best.
Trying to figure that out. Any suggestions I can use to help me?
And why is this in the C forum?
The program is in C and I was looking for a C programmer to help me. I'm new here but it seems to me like a reasonable place to look.
Shouldn't this be in the "Help me pick out a computer please!" forum?
Just to highlight the difference a compiler makes:
I just ran this test on my AthlonXP1900+ with 256M DDR and WinXP. I first compiled it with MSVC++6. Here are my scores.
Integral= 10.78
Prime= 17.17
Goldbach= 33.03
Total= 198.52
Then I recompiled it using the DJGPP version of gcc.
Integral= 15.88
Prime= 17.64
Goldbach= 36.43
Total= 73.96
Same computer, same settings, same programs running. Just two different compilers, both with only default flags on. Interesting results, eh?
Very interesting results. Thanks for posting them.
I'm pretty surprised myself, I thought they would be a lot closer to each other than they are.
P.S. just to butt my head in...I prefer pc
Hempstead, NY Statistics Tutor
Find a Hempstead, NY Statistics Tutor
...From basic univariate statistics to complex multiple regressions, SAS can handle almost any mathematical analysis you ask it to. What's great is that it is much easier than other programming
languages, and is intuitive to learn. I have taken the Praxis 1 and received 99th percentile in all three categories.
26 Subjects: including statistics, calculus, writing, GRE
...Through the several Statistics courses I have taught, I have reviewed with my Statistics students the fundamental concepts of explanatory analysis, inferential statistics and probability.
Students learn such topics as: measurements of center (mean, median, midrange, and mode) and spread (standar...
21 Subjects: including statistics, physics, calculus, geometry
...My teacher's role as a guide will provide access to information rather than acting as the primary source of information, where the students' search for knowledge is met as they learn to
discover answers to their questions. I will teach to the needs of each child so that all learners can feel capable ...
16 Subjects: including statistics, reading, biology, English
...Throughout my academic career, I have always loved learning and teaching others what I have learned. Most importantly, I have a passion for teaching. From tutoring my own siblings to tutoring
children at the Boys and Girls Club to teaching college-level courses and tutoring college students, I ...
18 Subjects: including statistics, English, reading, French
...I have experience tutoring students of all ages, (elementary through graduate school) in many subject areas - although my real passion is math. I have a BA in statistics from Harvard and will
be starting nursing school shortly. As someone who is not a typical "math person", I can relate to those struggling to understand material - I get it.
18 Subjects: including statistics, chemistry, geometry, biology
Related Hempstead, NY Tutors
Hempstead, NY Accounting Tutors
Hempstead, NY ACT Tutors
Hempstead, NY Algebra Tutors
Hempstead, NY Algebra 2 Tutors
Hempstead, NY Calculus Tutors
Hempstead, NY Geometry Tutors
Hempstead, NY Math Tutors
Hempstead, NY Prealgebra Tutors
Hempstead, NY Precalculus Tutors
Hempstead, NY SAT Tutors
Hempstead, NY SAT Math Tutors
Hempstead, NY Science Tutors
Hempstead, NY Statistics Tutors
Hempstead, NY Trigonometry Tutors
Physics Problem!
The tension force has a centripetal character, [tex]-\omega^{2} l\vec{r}[/tex], where [tex]\omega[/tex] is the angular speed. It is not equal to [tex]\sqrt{g/l}[/tex]; indeed, it is not
constant: at the highest point [tex]\omega = 0[/tex], while [tex]\omega[/tex] is maximal at the lowest point. [tex]\sqrt{g/l}[/tex] is the average [tex]\omega[/tex] over a whole period, so
the tension on the string is greater than mg.
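The conclusion that the tension exceeds mg at the bottom also follows directly from energy conservation, with no appeal to any average of [tex]\omega[/tex]. For a pendulum of length l released from rest at angle [tex]\theta_0[/tex] (the release-from-rest setup is an assumption, since the original problem statement is not shown):

```latex
% speed at the lowest point, from energy conservation:
\tfrac{1}{2} m v^2 = m g l\,(1 - \cos\theta_0)
\qquad\Longrightarrow\qquad
v^2 = 2 g l\,(1 - \cos\theta_0)

% Newton's second law along the string (centripetal direction) at the bottom:
T - m g = \frac{m v^2}{l}
\qquad\Longrightarrow\qquad
T = m g\,(3 - 2\cos\theta_0) \;\ge\; m g
```

Equality holds only for [tex]\theta_0 = 0[/tex], i.e. a pendulum that never moves.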
mathematics in india past
Title: mathematics in India
pastpresent and future
Page Link: mathematics in
India pastpresent and future
- future of mathematics in india, mathmatics in india past present and future, seminar on indian mathematics in past present and future, mathematics past present and
Posted By: Guest future in india, presentation on mathematics in india past present and future related to science, mathematics in india past, maths in india past present and future,
Created at: Friday 03rd of project on mathematics in india past present and future, mathematics in india past present and future, project mathematics in india past present and future, mathematics
August 2012 04:44:44 PM in india past present and future pdf file, mathematics in india past present and future in pdf file, indian mathematics past present and future, mathematics,
Last Edited Or Replied at
:Saturday 11th of August
2012 05:07:48 AM
Title: speech on mathematics
in india past present and
Page Link: speech on
mathematics in india past speech on mathematics in india past present and future, past present future of mathematics in india, mathematics in india past present and future, mathematics in india
present and future - present and future, mathematics in india past present future, current mathematics in india at present, mathematics past present and future in india, future of
Posted By: Guest mathematics in india, mathematics in india future, speech on mathematics in india present, future of mathematics in india pdf, speach about mathematics of india past
Created at: Monday 23rd of present and future, mathematics in india past present and future ppt, speech in matematics present, mathematics of india past present and future, mathematics in india
July 2012 08:33:34 AM past,
Last Edited Or Replied at
:Saturday 04th of August
2012 05:23:39 AM
Title: mathmatics in india
past present and future
Page Link: mathmatics in
india past present and
future - mathematics in india future wikipedia, project on mathematics in india past present and future, seminar topic on future mathematics in india, mathematics in india past
Posted By: Guest present and future pdf, mathematics in india past, seminar on mathematics in india past present and future, seminar topics for mathematics in india of past, mathematics
Created at: Sunday 22nd of in india past present and future wikipedia, seminar on mathematics in india past present future, mathematics in india past present and future wiki, mathematics in india
July 2012 12:54:38 PM in present past and future, past present and future of mathematics in india seminar, uses of mathematics in india past present and future, live sex,
Last Edited Or Replied at
:Saturday 04th of August
2012 05:22:52 AM
Title: mathematics in india
past present and future
Page Link: mathematics in
india past present and
future seminar - mathematics in india past present and future ppt, seminar on mathematics in india past present and future, mathematics in past present and future seminar, one page
Posted By: Guest about mathematics in india past present future, mathematics in india past present and future, a seminar maths in india present past and future, mathematics past present
Created at: Saturday 21st of and future seminar topics, past present and future of mathematics in india, mathematics of india past present and future, free project for mathematics in india past
July 2012 07:51:21 AM present and future, mathematics in india past present and future seminar, future of mathematics ppt, future mathematics of india project, maths in india future,
Last Edited Or Replied at
:Saturday 06th of October
2012 05:48:00 AM
Title: mathematics in india
past present and future wiki
Page Link: mathematics in
india past present and
future wiki - mathematics in india past present and future, mathematic in india future, math in india past present future, maths past present and future seminar top, mathematics,
Posted By: Guest mathematics in india past present and future wiki, mathematics in past present and future in india, math past present future in india, mathematics past present and
Created at: Saturday 21st of future ppt, past present and future of mathematics in india, pdf mathematics in india past present and future, mathematics in india past present and future wikipedia,
July 2012 02:23:59 AM mathamatics in india past present future, maths in india past present and future, mathematics in india future, project on mathematics in india past present and future,
Last Edited Or Replied at
:Saturday 15th of September
2012 01:30:35 PM
Title: topic past present
and future of math in india
Page Link: topic past
present and future of math
in india - future of mathematics in india, mathematics future in india, mathematics in india past present and future, use of mathematics past present and future, science seminar
Posted By: Guest mathematics present past future topics, mathematics in india past present and future seminar, indian maths seminar past, maths past present and future in india, seminar
Created at: Friday 20th of presentation on topic mathematics in past present future, mathematics past present and future in india, mathematics in india past present and future pdf, seminar topics
July 2012 12:43:31 PM mathematic in india past future present, seminar past present future mathematics, past present and future of mathematics in india, mathematics future of india,
Last Edited Or Replied at
:Wednesday 25th of July 2012
01:00:31 PM
Title: Past Present and
Future of Mathematics in
Page Link: Past Present and
Future of Mathematics in
India - maths in india past present and future ppt, maths in india future, indian math past present future, mathematics in india past present and future, maths in past present
Posted By: Guest and future project, past present and future of mathematics in india, seminars presented in india maths present past future, mathematics in india past present and future
Created at: Friday 20th of ppt, maths in india past present future ppt, powerpoint presentations on mathematics in india past present and future, future of mathematics in india, seminar on
July 2012 04:01:10 AM mathematics in india past present and future, past present and future of maths in india, mathematics in india past present and future seminar, mathematics, cpbsh,
Last Edited Or Replied at
:Sunday 05th of August 2012
01:32:21 PM
Title: article on
mathematics in india its
past present and future
Page Link: article on
mathematics in india its
past present and future - mathematics in india in future, its past present and future of mathematics in india ppt, mathematics in india past present and future ppt, past present and future of
Posted By: Guest mathematics in india ppt, topic past present and future of math in india, power point presentation on mathematics in india past present future, mathematics in india
Created at: Wednesday 18th past present and future, article on mathematics in india past present and future, seminar topic on mathematics in india past present and future, http seminarprojects
of July 2012 06:13:01 PM com thread article on mathematics in india its past present and future pid 96969, seminar topic mathematics in india past present and future, mathematics in india,
Last Edited Or Replied at
:Tuesday 31st of July 2012
09:17:35 AM
Title: past present and
future of mathematics in
india ppt
Page Link: past present and
future of mathematics in present and future of indian mathematics, future of the indian mathematics pdf file, mathematics past present and future ppt, mathematics in india past present and
india ppt - future ppt, a topic about mathematics is india present past future, rural banking, past of mathematics in india, mathematics past present and future in india,
Posted By: Guest mathematics in india past present and future, mathematics in india present past future in pdf format, general presentation topic maths in past present and future, past
Created at: Friday 13th of present and future of mathematics in india ppt, indian mathematics in future, future of indian mathematics, mathematics in india in present time project, mathematics,
July 2012 04:27:14 PM yhs 002,
Last Edited Or Replied at
:Sunday 22nd of July 2012
02:34:47 PM
Title: mathematics in india
past present and future ppt
Page Link: mathematics in
india past present and
future ppt - mathematics in past present and future, present and future of mathematics ppt, mathematics in india past present and future ppt, mathematics in india past present and
Posted By: Guest future, past present and future mathematics in india, past present and future of mathematics in india, mathematics in india at present, future of mathematics in india,
Created at: Thursday 12th of seminar on math past present and future, indian mathematics past present and future ppt, forum mathematics in india past present and future, project report on past
July 2012 01:27:12 PM present and future of mathematics in india, mathematics in india past and present, past present and future of mathematics in india ppt, mathematics in india future,
Last Edited Or Replied at
:Monday 12th of November
2012 11:24:56 AM
Title: download free ppt
slides for laplace transform
Page Link: download free ppt
slides for laplace transform
- laplace transform ppt presentations, laplacetransform ppt slides, images related to laplace transforms ppt, free seminar project, laplace transform ppt slides, ppt
Posted By: Guest slides for seminar, electronics ppt slides download, download ppt on laplace transform, laplace transform ppt pdf, present past and future of mathematics in india,
Created at: Sunday 08th of internet based with online games project ppt, pdf low power and area efficient carry select adder, seminar on laplace transforms, laplace transform ppt presentations
July 2012 03:00:41 PM with slides download, download ppt on laplace, mathematics in india past present and future, use of maths in india at present, ppt on mathematics in india past, laplace
Last Edited Or Replied at ppt,
:Monday 01st of October 2012
07:46:17 AM
MATH 151 - FALL 2007
Time: 10:10 - 11:00 Monday and Wednesday, labs Thursdays at various times
Place: Buehler 300. Recitation sections will meet in the rooms assigned. Labs will be held in Ayres 15.
Instructor: Dr. Louis Gross, Professor of Ecology and Evolutionary Biology and Mathematics
Office: 401B Austin Peay. Office Hours: Monday and Wednesday 11-1 and by appointment. Phone: 974-4295 Email: gross@tiem.utk.edu
Teaching Assistants: Erin Bodine (Section #1) bodine@math.utk.edu, Tom Lewis (Section #2,4), Eleanor Abernethy (Section #3,5), Rachael Miller (Sections #7) rmiller@math.utk.edu.
Course web page
Course overview
This course provides an introduction to a variety of mathematical topics of use in analyzing problems arising in the biological sciences. It is designed for students in biology, agriculture,
forestry, wildlife, pre-medicine and other pre-health professions. Students who desire a strong mathematical grounding, enabling them to take most advanced math courses, should consider taking the
sequence Math 141-2 instead. Math 151 is the first of a two course sequence, and depending upon your curriculum, will partially satisfy graduation requirements for your major. The general aim of the
sequence is to show how mathematical and analytical tools may be used to explore and explain a wide variety of biological phenomena that are not easily understood with verbal reasoning alone.
Prerequisites are two years of high school algebra, a year of geometry, and half a year of trigonometry.
This course includes a laboratory component which makes use of computer facilities in the Math Department. No prior background in the use of the main software package for the course (Matlab) is
expected, though students are expected to have familiarity with standard word-processing and graphing (e.g. spreadsheet) tools. Although there is a textbook, we will not be following it very closely
at times, and will be covering topics not in the text on occasion. As we will not be following the text for part of the course, students should plan to attend all class sessions, although no formal
roll will be taken in lectures. The text for the course is:
Mathematics for the Biosciences
by Michael Cullen. A supplement which includes a variety of additional material not covered in the text, as well as projects and sample exams, is available from Graphic Creations (1809 Lake Ave.,
behind Wendy's on Cumberland Avenue) for approximately $4.
Course Goals:
Develop your ability to quantitatively analyze problems arising in the biological areas of interest to you.
Illustrate the great utility of mathematical models to provide answers to key biological problems.
Develop your appreciation of the diversity of mathematical approaches potentially useful in the life sciences.
Provide experience using computer software to analyze data, investigate mathematical models and provide some exposure to programming.
Course Grading:
The grade will be based on several components: (a) There will be a set of brief (5-10 minute) quizzes, generally given once a week during the Tuesday class period (in weeks for which there is no exam
scheduled); (b) There will be a set of assignments based on the use of the computer to analyze particular sets of data, or problems. These may be worked on within a study group, as long as it is
clearly noted who participated, and each individual writes their own results; (c) There will be a set of three exams during the term, in addition to a comprehensive final. The exams will not be
computer based. They will focus on the key concepts and techniques discussed in the course. Of the three regular exams given, the one with the lowest score will be dropped. The final exam will be
given Thursday December 13 from 10:15-12:15 in the lecture room (Buehler 300). The weighting of these components of the grade are: (a) 20%, (b) 20%, (c) 60% (the final exam counts 30% of the course
grade, and the two regular exams not dropped will together count 30% of the grade).
All students are expected, during the first 2 weeks of class (by Sept. 6), to take the COMPETE (Collegiate Online Math Preparation Exams, Tutorials, & Exercises) Pre-Calculus Exam, available
online through the MathClass.org site. A description of how to take this exam is here, and the site you register at is here. This is an assessment exam to ensure that you are prepared for this course. It is
based solely on high-school level material. All students are expected to complete the exam with a score of 20 or better (out of 25). Your grade on this exam will be reported to your teaching
assistant, and if you do not score 20 or above you will be expected to go through the variety of appropriate tutorials for the material you did not answer correctly, and retake a similar exam. You
are welcome to go through some of the tutorials before taking the exam if you wish to refresh your memory.
Participants are expected to regularly work problems from the text, or problems assigned by the instructor. These should be worked on individually, but will not be graded. Questions about these
problems may be addressed during lab periods or by attending the office hours of your teaching assistant. Note that the quiz problems will mostly be chosen directly from the assigned homework.
Opportunities for extra credit may be made available as the course proceeds, for those desiring this. These are usually additional computer-based laboratory projects.
Course Outline:
The pace of the material covered will be adjusted as necessary, but the approximate time to be spent on various topics over the semester and the dates of coverage are given below.
Descriptive statistics - analysis of tabular data, means, variances, histograms, linear regression - Aug. 22-30
Exponentials and logarithms, non-linear scalings, allometry, log-log and semi-log plots - Text sections 5 & 15 - Sept. 5 - Sept. 19
Exam 1 - Sept. 20
Matrix algebra - Text sections 55 & 56 with supplementary material - addition, subtraction, multiplication, inverses, matrix models in population biology, eigenvalues, eigenvectors, Markov chains,
application to ecological succession - Sept. 24 - Oct. 17
Exam 2 - Oct. 18
Discrete probability - Text sections 57 to 61 with supplementary material - applications to population genetics, behavioral sequence analysis - Oct. 22 - Nov. 14
Exam 3 - Nov. 15
Difference equations - Text sections 47 to 49 with supplementary material - linear and nonlinear examples, equilibrium, stability and homeostasis, logistic model, introduction to limits - Nov. 19 -
Dec. 3
Final Exam - Dec. 13 - 10:15AM-12:15PM in BU 300 | {"url":"http://www.tiem.utk.edu/~gross/math151fall07/syllabus.html","timestamp":"2014-04-19T20:10:08Z","content_type":null,"content_length":"7532","record_id":"<urn:uuid:7e7da3fe-ae5e-4986-9c22-7729434b7215>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00279-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bicycle over curb
1. The problem statement, all variables and given/known data
You are trying to raise a bicycle wheel of mass m and radius R up over a curb of height h. To do this, you apply a horizontal force F.
What is the least magnitude of the force F_vec that will succeed in raising the wheel onto the curb when the force is applied at the center of the wheel?
2. Relevant equations
F = 0
torque_total = 0
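One standard way to make the torque equation concrete (a hedged sketch, not part of the original post): take torques about the curb corner, where the ground normal force vanishes at the instant the wheel lifts. The wheel's center sits a vertical distance $R-h$ above the corner and a horizontal distance $\sqrt{R^2-(R-h)^2}$ from it, so at the threshold of lifting

$$ F\,(R-h) \;=\; m g \sqrt{R^{2}-(R-h)^{2}} \quad\Longrightarrow\quad F_{\min} \;=\; \frac{m g \sqrt{2 R h - h^{2}}}{R-h}. $$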
3. The attempt at a solution
I've tried for the last few hours to figure out how to do this but I'm stuck. I've drawn a free body diagram with the horizontal force and also the force the curb is exerting on the wheel. They
must be equal, right..? I don't really have any idea how to start this problem other than that though. So frustrated, sigh. :( Any help is appreciated, thanks. | {"url":"http://www.physicsforums.com/showthread.php?t=160214","timestamp":"2014-04-20T21:23:36Z","content_type":null,"content_length":"50815","record_id":"<urn:uuid:3d391215-5f04-46cd-ac34-d58f9b7a3923>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Are these groups isomorphic (Cancellation in torsionfree, virtually Abelian groups)
I wondered whether it is possible to find two finitely generated, virtually Abelian, torsionfree groups $G,H$ that are not isomorphic but that become isomorphic after crossing with $\mathbb{Z}$. I
have the following candidates:
Consider $K:=(\mathbb{Z}[t]/(t^5+1))\rtimes_{\cdot t} \mathbb{Z}$. Let $\varphi$ be the automorphism of $K$ given by $\cdot t$ on $\mathbb{Z}[t]/(t^5+1)$ and $(0,s)\mapsto (1,s)$ , where $s$ denotes
a generator of the other copy of $\mathbb{Z}$. My candidates are $G:=K\rtimes\mathbb{Z}$ and $H:=K\rtimes_{\varphi^3}\mathbb{Z}$.
Are these two groups isomorphic ?
Each one contains a finite index subgroup isomorphic to the other one. Crossing with $\mathbb{Z}$ gives $K\rtimes \mathbb{Z}^2$ where a basis of $\mathbb{Z}^2$ acts by $\varphi,\mbox{id}$
respectively $\varphi^3,\mbox{id}$. The isomorphism is given by a base change of $\mathbb{Z}^2$.
$G$ contains a finite index subgroup isomorphic to $H$ and vice versa. If it turns out that they are actually isomorphic, one might still hope to get an example by replacing $5$ with a bigger number
(that is coprime to 3 to make the base change work).
Related: Hirshon, some cancellation theorems with applications to nilpotent groups (The example given there is torsionfree, nilpotent, but maybe not virtually Abelian).
2 I am not sure about your candidate but there certainly are non-homotopy equivalent closed flat manifolds that become affinely diffeomorphic after taking product with a circle, see Charlap's book
on flat manifolds, section IV.8. If memory serves, this is also published in a journal (maybe annals?). – Igor Belegradek Jan 30 '13 at 14:49
1 I found the paper: Compact Flat Riemannian Manifolds, I by Charlap Annals of Mathematics, Vol. 81, No. 1 (Jan., 1965), pp. 15-30. – Igor Belegradek Jan 30 '13 at 15:53
@Igor: Thank you. The precise place where this is mentioned is the remark after Theorem 3.10 on p. 29. In that chapter $\mathbb{Z}/p$-flat manifolds are studied. Flat manifolds are isometrically
covered by $\mathbb{R}^n$. A $\mathbb{Z}/p$ manifold is a manifold where the quotient of its fundamental group by the subgroup of translations is $\mathbb{Z}/p$. Maybe it's possible to cook up
easier groups by unraveling the theory. I will have a look at it. – HenrikRüping Jan 31 '13 at 14:49
Let me just mention that the groups above seem to be $\mathbb{Z}/10\oplus \mathbb{Z}/10$-manifolds, so the theory from chapter 3 there does not apply directly. Still, it would be nice to find an
elementary argument that shows that these two groups are not isomorphic. – HenrikRüping Jan 31 '13 at 14:50
| {"url":"http://mathoverflow.net/questions/120315/are-these-groups-isomorphic-cancellation-in-torsionfree-virtually-abelian-gro","timestamp":"2014-04-17T19:17:24Z","content_type":null,"content_length":"52535","record_id":"<urn:uuid:ce1dc8db-16a3-4100-adb5-abffe5acebef>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Bisection Method Baseball Question
April 15th, 2010, 07:49 PM
Bisection Method Baseball Question
I've been asked to write a code which will calculate the angle at which a batter must strike so that the ball lands 200ft from home plate.
Apparently to solve it I have to use a bisection method in order to find the roots of
R(theta) - 200ft = 0
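For the bisection part, here is a minimal sketch in Python (the launch speed V0 and the ideal-projectile range formula are assumed for illustration; neither is given in the thread):

```python
import math

G = 32.2    # gravitational acceleration, ft/s^2
V0 = 85.0   # launch speed in ft/s -- an assumed value, not from the thread

def R(theta):
    """Range (ft) of an ideal drag-free projectile launched at angle theta (rad)."""
    return V0 ** 2 * math.sin(2 * theta) / G

def bisect(f, lo, hi, tol=1e-9):
    """Bisection: repeatedly halve [lo, hi] while keeping a sign change of f."""
    flo = f(lo)
    assert flo * f(hi) <= 0, "f must change sign on [lo, hi]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:      # root lies in [lo, mid]
            hi = mid
        else:                      # root lies in [mid, hi]
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Solve R(theta) - 200 = 0 on [0, 45 degrees]:
theta = bisect(lambda t: R(t) - 200.0, 0.0, math.pi / 4)
```

With `V0 = 85` ft/s this finds the low-angle root near 31.5 degrees; the second (high-angle) root would be bracketed on [45°, 90°] instead.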
I really have no idea what this means, if anyone can maybe help get me started, it would be a great help! | {"url":"http://www.javaprogrammingforums.com/%20algorithms-recursion/4093-bissection-method-baseball-question-printingthethread.html","timestamp":"2014-04-21T15:16:58Z","content_type":null,"content_length":"3732","record_id":"<urn:uuid:56f09b95-e57c-4306-8b82-1669f847bc1e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brick Math Tutor
Find a Brick Math Tutor
I have been teaching for 20 years. I am currently teaching both regular ed and special ed mathematics in a k-8 district in Monmouth county. I have a very hands on approach to learning as well as
relating my topics to real life situations.
13 Subjects: including algebra 1, elementary (k-6th), study skills, soccer
...The products I've worked on included guidance systems for the military and my most significant accomplishment was working on the guidance and navigation system for the International Space
Station. The company I worked for assembled the Control Moment Gyroscopes for Space Station attitude and con...
3 Subjects: including algebra 1, electrical engineering, Microsoft PowerPoint
I am an experienced elementary/middle school teacher. I have just retired after 26 years of service to my students. In the last years of my career I was an Honors Pre-Algebra & Algebra teacher.
4 Subjects: including algebra 1, prealgebra, elementary math, elementary science
...For students whose goal is to learn particular subjects, I make sure that the student understands the basics prior to delving into the details. In a nutshell, I provide tutoring based on the
student's need. Thank you for your time reading this profile!
15 Subjects: including algebra 1, algebra 2, calculus, chemistry
...As part of my current job, I train graduate students in economic research methods. I am an enthusiastic tutor in college level statistics. I am also available to teach AP statistics, business,
and economics to high school students.
9 Subjects: including calculus, precalculus, SAT math, economics
Nearby Cities With Math Tutor
Berkeley Township, NJ Math Tutors
Bricktown, NJ Math Tutors
Brielle Math Tutors
Howell, NJ Math Tutors
Jackson Township, NJ Math Tutors
Jackson, NJ Math Tutors
Lakewood, NJ Math Tutors
Manasquan Math Tutors
Manchester Township Math Tutors
Manchester, NJ Math Tutors
Point Pleasant Beach Math Tutors
Point Pleasant, NJ Math Tutors
Tinton Falls, NJ Math Tutors
Toms River Math Tutors
Wall Township, NJ Math Tutors | {"url":"http://www.purplemath.com/brick_math_tutors.php","timestamp":"2014-04-18T13:41:22Z","content_type":null,"content_length":"23474","record_id":"<urn:uuid:f1940964-80c8-42f3-8436-f0860479c174>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
Are you afraid of equations?
Jellymatter is, we claim, not afraid of equations, but apparently scientists are. A study in PNAS claims to have found that theoretical biology papers are cited less when they are densely packed with
mathematical language. The authors argue that this impedes progress, since empirical work needs to be backed up and commensurate with some theory to have deeper scientific meaning.
I think it’s a very interesting point. Mathematics is often said to be the language of science, but actually, contrary to perceptions that many people might have, most scientists aren’t big fans of
maths. But using maths is one of the ways that scientists try to make the meaning of their work precise – without it, scientific theory is too vague and subject to interpretation. In that way, maths
should be an aid rather than a hindrance to communication.
Maths also gives you a way to be sure of what logically follows from the statements you make. As a simple example, think of the simultaneous equations you learn in school: if you are given that
x+y=2 and x-y=0, you can show that as a logical consequence of these assumptions x and y must both equal 1. Much of the use of maths in scientific theory comes down to derivations like
this, and it allows you to be sure that your results follow logically from your propositions.
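That toy system can also be solved mechanically; a minimal sketch (the helper name is made up for illustration):

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("system is singular")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# x + y = 2 and x - y = 0:
x, y = solve_2x2(1, 1, 2, 1, -1, 0)   # -> (1.0, 1.0)
```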
Yet in spite of its advantages, it seems maths turns off a lot of scientists. Actually probably the most influential work in theoretical biology, Darwin’s On the origin of species, contains
essentially no maths. By contrast, I’ve recently been reading Norbert Wiener’s Cybernetics, which is again an influential work, but it relies on a lot of maths, and I wonder if perhaps this puts
people off really appreciating Wiener’s arguments.
Finally, at the end of the story on this paper here, one of the authors is quoted arguing that maybe a good middle way is not to exclude maths from theoretical work, but to make sure to add more
explanatory text. This is clearly a sensible suggestion. We also perhaps shouldn’t generalise too much – maybe maths does put off some scientists, but as I have pointed out, it also has some strong
advantages. While highly mathematical papers might not be the most popular or influential, they do still have a place and a utility.
Personally, although I’m certainly not afraid of equations, I do find an excess of mathematical language to be distasteful. I find that excessively formal language is sometimes used to disguise a
lack of clear thought about the work. There’s a kind of trade-off, I think: mathematical language allows you to express your thoughts in an extremely clear way (at least, it does if the thoughts were
quantitative in the first place), but it also demands a certain amount of effort on the part of the reader, and if you jump straight in with the “we define $\mathfrak{R}$ as a hypermorphic polybongo
set of degree $\mathcal{R}$” then it’s hard for the reader to assess whether that effort is worth investing. For me the ideal paper expresses its ideas clearly in natural language, and uses an
appropriate amount of mathematical formalism only where such precision is required. Edwin Jaynes’ papers are often excellent examples of this style.
Anyway I have to go now because there seems to be some kind of insect in the room HELP IT’S A BEE
Yeah, I think that’s the point being made partly. Also, you have to distinguish between actual *maths* papers (where it may be interesting to the reader to just define some abstract entity and derive
something from it), and scientific theory papers, where the reader is expecting there to be some kind of meaningful point, in terms of something they could empirically test.
• Even in purely mathematical papers there’s a need to explain the ideas clearly in natural language before launching into the formalism. Unless they’re really really pure maths papers with no
possible practical application I guess.
I find it to be a particular problem with probability theory and information theory. A lot of papers in that field like to define everything in the most general possible way using measure theory.
But this makes it difficult for mere mortals to understand, and that’s a shame because if information theory was as widely understood as classical hypothesis testing statistics, we’d all be a lot
better off.
Oh yes, overgeneralisation is a pain. Exactly at what point is it too general or not general enough? It’s hard to know, but as you say I think to most “mortal” statisticians the basic concepts are
pdfs, cdfs, means, variances etc, rather than the measure-theoretic basis of all that, which of course is an interesting mathematical entity, but not always needed (though of course it could be needed).
Lebesgue integrals are, I'm told, more general than Riemann integrals. But I couldn't tell you off hand why, and for most of the work I do, I just use "integrals", as in, y'know, the thing you learn
in school or university that’s the opposite of differentiation…
4 Comments to “Are you afraid of equations?” | {"url":"http://jellymatter.com/2012/06/27/are-you-afraid-of-equations/","timestamp":"2014-04-18T13:35:45Z","content_type":null,"content_length":"93256","record_id":"<urn:uuid:3ef65720-84d1-43c7-abfa-e813737b9e7e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inverse Matrix
February 26th 2009, 12:54 PM
Inverse Matrix
What would be the best way to find the inverse of this matrix?
[1 3 0 -1]
[-3 -5 -8 3]
[-4 -12 6 4]
[0 -1 2 1]
I have been told to use the identity-matrix method: form the augmented matrix [A | I], then
perform row reductions on A to transform it into I, applying the same row reductions to I along the way in order to find the inverse.
This seems to be more complicated than it has to be. Any insights or ideas as to how I should approach this question?
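A sketch of that [A | I] row-reduction procedure in Python (not from the thread; written from the description above), applied to the matrix in the question:

```python
def invert(matrix):
    """Invert a square matrix by row-reducing the augmented matrix [A | I]."""
    n = len(matrix)
    aug = [list(map(float, row)) + [float(i == j) for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular, no inverse exists")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]        # scale pivot row to 1
        for r in range(n):
            if r != col:                            # clear the rest of the column
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]                 # right half is now A^(-1)

A = [[1, 3, 0, -1],
     [-3, -5, -8, 3],
     [-4, -12, 6, 4],
     [0, -1, 2, 1]]
A_inv = invert(A)
```

If A were singular the pivot search would fail, which answers the determinant question too: the reduction only succeeds when an inverse exists.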
February 26th 2009, 12:59 PM
I know a few different ways to find inverses of matrices but the way you mention is the simplest.
February 26th 2009, 01:06 PM
I am unsure of how to find out if there is an actual inverse for this matrix. I know with smaller matrices the determinant can be found by taking ad - bc, and as long as it is not zero, it means there
is an inverse. How might I go about this with the larger 4 x 4 matrix?
February 27th 2009, 06:13 AM
To find out how, read a few online lessons on matrix inversion. :wink: | {"url":"http://mathhelpforum.com/advanced-algebra/75918-inverse-matrix-print.html","timestamp":"2014-04-17T08:39:04Z","content_type":null,"content_length":"5523","record_id":"<urn:uuid:b06c3af0-b391-4250-b32d-d23043151221>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic concepts
Let $X$ be a topological space and $\mathcal{U}$ an open cover thereof. A (continuous) path $\gamma \colon I \to X$ can pass through many of the elements of $\mathcal{U}$ as it winds its way around
$X$. We can decompose that path into segments such that each segment lies wholly inside one of the open sets in $\mathcal{U}$. The Schedule Theorem says that this can be done continuously over all
paths in $X$.
This was proved by Dyer and Eilenberg and applied to the question of fibrations over numerable spaces.
The idea of a schedule is that it is a way of decomposing a length into pieces and then assigning a label to each piece. This clearly fits with the stated purpose of these things since we wish to
decompose a path into pieces and assign an open set to each piece.
To make this precise, we start with a set of labels. Following Dyer and Eilenberg, let us write this as $A$. Lengths are positive real numbers and so we also need the set of such, Dyer and Eilenberg
denote this by $T$; thus $T \coloneqq \mathbb{R}_{\ge 0}$.
The schedule monoid of $A$ is the free monoid on the set $A \times T$. It is written $S A$. Its elements are schedules in $A$.
A schedule in $A$ is thus a finite ordered list of pairs $(a,t)$ where $a \in A$ and $t \in T$.
There are two notions of length for a schedule. There is the word length which simply counts the number of pairs. Then there is the function $l \colon S A \to T$ defined by $l((a_1,t_1) \cdots
(a_k,t_k)) = t_1 + \cdots + t_k$. There is also a right action of $T$ on $S A$ which simply multiplies all of the lengths: $((a_1,t_1) \cdots (a_k,t_k)) \cdot t = ((a_1,t_1 t) \cdots (a_k,t_k t))$.
Then $l(s t) = l(s) t$.
A schedule is said to be reduced if all of its terms, $(a,t)$, has non-zero length, i.e. $t \gt 0$. The set of reduced schedules forms a submonoid of $S A$ which is written $R S A$.
The empty schedule is reduced.
There is a retraction map $\rho \colon S A \to R S A$ defined by removing all terms with zero length part.
The schedule monoid is given a topology so that the labels are discrete and the lengths topologised as usual. More concretely, given a word $a_1 a_2 \dots a_k$ of elements in $A$, the set of
schedules of the form $(a_1,t_1) (a_2,t_2) \cdots (a_k,t_k)$ is in bijection with $T^k$ and we make that bijection a homeomorphism. Then $S A$ is topologised by taking the coproduct over the set of
words in $A$. The reduced schedule monoid is topologised as the quotient of this.
Let $X$ be a topological space. Let $P X$ denote its Moore path space. Suppose that we have a family $\mathcal{U}$ of subsets of $X$ indexed by some set $A$. Then we consider a schedule in $A$ as
giving an ordered list of these subsets together with the times to be spent in each. For a path in $X$ and a schedule of the appropriate length, we can ask whether or not the path fits (or
obeys) the schedule. We make that precise as follows.
Suppose that we have $\alpha \in P X$ and $s \in S A$, and suppose that $s = (a_1, t_1) \cdots (a_k,t_k)$. Then we say that $\alpha$ fits the schedule $s$, written $\alpha \Vert s$, if the following
conditions hold:
1. $l(\alpha) = l(s)$
2. We can split $\alpha$ into subpaths according to the times $\{t_i\}$. Let $\alpha_i$ be the $i$th segment. Then $\alpha_i \in P U_{a_i}$.
Here, $l \colon P X \to T$ is the function that assigns to a Moore path its length. The schedule designates a decomposition of $[0,l]$ into subintervals with $t_i$ being the length of the $i$th
subinterval. Then saying that $\alpha$ fits the schedule $s$ means that $\alpha$ spends the $i$th subinterval in the open set $U_{a_i}$.
Schedule Theorem
We can now state the main theorem.
Let $X$ be a topological space. Let $\mathcal{U}$ be a locally finite open covering of $X$ by numerable open sets with indexing set $A$. Then there is a covering $\mathcal{F}$ of $P X$ by closed sets
and a family of continuous functions $f_F \colon F \to S A$, indexed by $F \in \mathcal{F}$, such that:
1. for each $\alpha \in P X$, there is some finite subfamily $\{F_1, \dots, F_k\} \subseteq \mathcal{F}$ such that $\alpha$ is in the interior of $\bigcup F_j$,
2. for each $\alpha \in F$, $\alpha \Vert f_F(\alpha)$, and
3. for each $\alpha \in F \cap F'$, $\rho(f_F(\alpha)) = \rho(f_{F'}(\alpha))$
The first condition is purely about the covering. Dyer and Eilenberg use the term local covering for a covering by closed sets with this property.
There exists a continuous function $h \colon P X \to R S A$ such that $\alpha \Vert h(\alpha)$ if $l(\alpha) \gt 0$ and $h(\alpha) = \Lambda$ if $l(\alpha) = 0$.
Here, $\Lambda \in R S A$ is the empty word.
Globalisation Theorem
The original motivation for the notion of schedules was to prove the globalisation theorem for (Hurewicz) fibrations.
Let $p \colon Y \to B$ be a continuous function. Suppose that $\mathcal{U}$ is a locally finite covering of $B$ by numerable open sets with the property that for each $U \in \mathcal{U}$ then the
restriction $p_U \colon Y_U \to U$ is a fibration. Then $p$ is a fibration.
The link between the globalisation theorem and the schedule theorem is the characterisation of Hurewicz fibrations in terms of Hurewicz connections.
Proof of the Schedule Theorem
Let $X$ be a topological space. Let $\mathcal{U}$ be a locally finite open covering of $X$ by numerable open sets and indexing set $A$.
Let us write $A^*$ for the free monoid on $A$. Then there is a function $A^* \times T \to S A$ which takes $(a_1 a_2 \cdots a_k, t)$ to the schedule $(a_1,t/k)(a_2,t/k)\cdots (a_k,t/k)$. We say that
a path $\alpha \in P X$ evenly fits $s \in A^*$, and write this as $\alpha \Vert_e s$, if it fits the schedule corresponding to $(s,l(\alpha))$.
We need an initial technical result.
There is a locally finite covering $\mathcal{W} = \{W_s \mid s \in A^*\}$ of $P X$ by numerable open sets such that for $\alpha \in W_s$ then $\alpha$ evenly fits the word $s$.
As $\mathcal{W}$ is locally finite and its elements are numerable, we can choose a numeration that is also a partition of unity. That is, we can choose continuous functions $q_s \colon X \to [0,1]$
with the property that $q_s^{-1}((0,1]) = W_s$ and $\sum_s q_s = 1$.
Let $\mathcal{B}$ be the set of finite subsets of $A^* \setminus \Lambda$ (where $\Lambda$ is the empty word). For $b \in \mathcal{B}$ we define
$\begin{aligned} D_b &\coloneqq \{\alpha \in P X \mid \sum_{s \in b} q_s(\alpha) = 1 \} \\ &= \{ \alpha \in P X \mid q_s(\alpha) = 0 \; \text{for all}\; s \notin b\} \end{aligned}$
This is a covering of $P X$ by closed sets. As $\mathcal{W}$ is locally finite, for $\alpha \in P X$ there is some neighbourhood $V$ which meets only a finite number of the $\mathcal{W}$. These are
indexed by elements of $A^*$, indeed of $A^* \setminus \Lambda$, and so the set of indices is an element, say $b$, of $\mathcal{B}$. Then for $s \notin b$, $q_s \mid V = 0$ and so for $\beta \in V$, $\sum_{s \in b} q_s(\beta) = 1$, whence $V \subseteq D_b$. Thus each $\alpha$ is contained in the interior of some $D_b$.
Now let us put a total ordering on $A^*$. This induces a total ordering on each $b \in \mathcal{B}$ and thus allows us to define the partial sums of the summation $\sum_{s \in b} q_s$. Write these as
$Q_i$, with $Q_0$ as the zero function.
Fix $b \in \mathcal{B}$ and write it as $b = \{s_1,s_2,\dots,s_k\}$ in the inherited ordering. Let $e = (l_1,r_1,\dots,l_k,r_k)$ be a list of integers with the property that $1 \le l_i \le r_i \le \# s_i$ where $\# s_i$ is the word length of $s_i$. Define:
$D_{(b,e)} = \left\{ \alpha \in D_b \mid \frac{l_i -1}{\# s_i} \le Q_{i - 1}(\alpha) \le \frac{l_i}{\# s_i} \; \text{and} \; \frac{r_i - 1}{\# s_i} \le Q_i(\alpha) \le \frac{r_i}{\# s_i} \right\}.$
This is closed in $D_b$ and the collection $\{D_{(b,e)}\}$ is a finite cover of $D_b$. The family $\{D_{(b,e)}\}$ ranging over all $b \in \mathcal{B}$ and suitable $e$ is the family $\mathcal{F}$
that we are looking for. It has the required covering property since the interiors of the $D_b$ cover $P X$.
Define $f_{(b,e)} \colon D_{(b,e)} \to S A$ as follows:
$f_{(b,e)}(\alpha) = \sigma_1 \cdots \sigma_k l(\alpha)$
where $\sigma_i$ is the schedule with $\# \sigma_i = r_i - l_i + 1$ and $l(\sigma_i) = q_{s_i}(\alpha)$, and if $s_i = a_1 \cdots a_n$ then if $l_i \lt r_i$ we have
$\sigma_i = \left( a_{l_i}, \frac{l_i}{n} - Q_{i - 1}(\alpha)\right) \left(a_{l_i+1}, \frac{1}{n} \right) \cdots \left(a_{r_i - 1}, \frac{1}{n} \right) \left( a_{r_i}, Q_i(\alpha) - \frac{r_i - 1}{n} \right)$
otherwise, $\sigma_i = (a_{l_i}, Q_i(\alpha) - Q_{i-1}(\alpha))$.
This is continuous and for $\alpha \in D_{(b,e)}$ then $\alpha$ fits $f_{(b,e)}(\alpha)$. Moreover, for $\alpha \in D_{(b,e)} \cap D_{(b',e')}$ then $\rho f_{(b,e)}(\alpha) = \rho f_{(b',e')}(\alpha)$.
• Dyer, E. and Eilenberg, S. (1988). Globalizing fibrations by schedules. Fund. Math., 130, 125–136. MR0963792 | {"url":"http://ncatlab.org/nlab/show/schedule","timestamp":"2014-04-21T07:30:36Z","content_type":null,"content_length":"82141","record_id":"<urn:uuid:698fbcab-ef9e-4ce6-9974-50fde6c5722f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
definitions related to mathematics:-
I did not invent these, just collected them
Mathematics is made of 50 percent formulas, 50 percent proofs, and 50 percent imagination.
"A mathematician is a device for turning coffee into theorems" (P. Erdos)
Addendum: American coffee is good for lemmas.
An engineer thinks that his equations are an approximation to reality. A physicist thinks reality is an approximation to his equations. A mathematician doesn't care.
Old mathematicians never die; they just lose some of their functions.
Mathematicians are like Frenchmen: whatever you say to them, they translate it into their own language, and forthwith it means something entirely different. -- Goethe
Mathematics is the art of giving the same name to different things. -- J. H. Poincare
What is a rigorous definition of rigor?
There is no logical foundation of mathematics, and Gödel has proved it!
I do not think -- therefore I am not.
Here is the illustration of this principle:
One evening Rene Descartes went to relax at a local tavern. The tender approached and said, "Ah, good evening Monsieur Descartes! Shall I serve you the usual drink?". Descartes replied, "I think
not.", and promptly vanished.
A topologist is a person who doesn't know the difference between a coffee cup and a doughnut.
A mathematician is a blind man in a dark room looking for a black cat which isn't there. (Charles R Darwin)
A statistician is someone who is good with numbers but lacks the personality to be an accountant.
Classification of mathematical problems as linear and nonlinear is like classification of the Universe as bananas and non-bananas.
A law of conservation of difficulties: there is no easy way to prove a deep result.
A tragedy of mathematics is a beautiful conjecture ruined by an ugly fact.
Algebraic symbols are used when you do not know what you are talking about.
Philosophy is a game with objectives and no rules.
Mathematics is a game with rules and no objectives.
Math is like love; a simple idea, but it can get complicated.
"Let us realize that: the privilege to work is a gift, the power to work is a blessing, the love of work is success!"
- David O. McKay | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=12817","timestamp":"2014-04-20T18:51:34Z","content_type":null,"content_length":"14777","record_id":"<urn:uuid:4b57ecb7-b588-46fb-b04b-d1cfe60f7606>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
Running Aggregate Functions
Running aggregate functions are similar to functional aggregates in that they take a set of records as input, but instead of outputting the single aggregate for the entire set of records, they output
the aggregate based on records encountered so far.
This section describes the running aggregate functions supported by the Oracle BI Server.
Calculates a moving average (mean) for the last n rows of data in the result set, inclusive of the current row.
MAVG (n_expression, n)
n_expression Any expression that evaluates to a numerical value.
n Any positive integer. Represents the average of the last n rows of data.
The average for the first row is equal to the numeric expression for the first row. The average for the second row is calculated by taking the average of the first two rows of data. The average for
the third row is calculated by taking the average of the first three rows of data, and so on until you reach the nth row, where the average is calculated based on the last n rows of data.
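The same rule can be sketched outside the BI Server (illustrative Python, not Oracle syntax):

```python
def mavg(values, n):
    """MAVG: mean of the last n rows of data, inclusive of the current row.
    Before the nth row, the mean is taken over all rows seen so far."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - n + 1): i + 1]
        out.append(sum(window) / len(window))
    return out
```

For example, `mavg([3, 6, 9, 12], 2)` yields `[3.0, 4.5, 7.5, 10.5]`.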
This function calculates a moving sum for the last n rows of data, inclusive of the current row.
The sum for the first row is equal to the numeric expression for the first row. The sum for the second row is calculated by taking the sum of the first two rows of data. The sum for the third row is
calculated by taking the sum of the first three rows of data, and so on. When the nth row is reached, the sum is calculated based on the last n rows of data.
MSUM (n_expression, n)
n_expression Any expression that evaluates to a numerical value.
n Any positive integer. Represents the sum of the last n rows of data.
The following example shows a report that uses the MSUM function.
MONTH REVENUE 3_MO_SUM
JAN 100.00 100.00
FEB 200.00 300.00
MAR 100.00 400.00
APRIL 100.00 400.00
MAY 300.00 500.00
JUNE 400.00 800.00
JULY 500.00 1200.00
AUG 500.00 1400.00
SEPT 500.00 1500.00
OCT 300.00 1300.00
NOV 200.00 1000.00
DEC 100.00 600.00
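The table above can be reproduced with a short sketch (illustrative Python, not Oracle syntax):

```python
def msum(values, n):
    """MSUM: moving sum of the last n rows of data, inclusive of the current row."""
    return [sum(values[max(0, i - n + 1): i + 1]) for i in range(len(values))]

# The REVENUE column from the example table above:
revenue = [100, 200, 100, 100, 300, 400, 500, 500, 500, 300, 200, 100]
three_mo_sum = msum(revenue, 3)
```

`three_mo_sum` matches the 3_MO_SUM column of the example report.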
This function calculates a running sum based on records encountered so far. The sum for the first row is equal to the numeric expression for the first row. The sum for the second row is calculated by
taking the sum of the first two rows of data. The sum for the third row is calculated by taking the sum of the first three rows of data, and so on.
RSUM (n_expression)
n_expression Any expression that evaluates to a numerical value.
The following example shows a report that uses the RSUM function.
MONTH REVENUE RSUM
JAN 100.00 100.00
FEB 200.00 300.00
MAR 100.00 400.00
APRIL 100.00 500.00
MAY 300.00 800.00
JUNE 400.00 1200.00
JULY 500.00 1700.00
AUG 500.00 2200.00
SEPT 500.00 2700.00
OCT 300.00 3000.00
NOV 200.00 3200.00
DEC 100.00 3300.00
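A running sum is just a cumulative fold over the rows seen so far; a sketch reproducing the table (illustrative Python, not Oracle syntax):

```python
from itertools import accumulate

def rsum(values):
    """RSUM: running sum based on records encountered so far."""
    return list(accumulate(values))

revenue = [100, 200, 100, 100, 300, 400, 500, 500, 500, 300, 200, 100]
running = rsum(revenue)
```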
This function takes a set of records as input and counts the number of records encountered so far. It resets its value for each group in the query. If a sort order is defined on any column, then this
function does not get incremented for adjoining identical values for the sorted column. To avoid this issue, reports should either not contain a sort order on any column or contain sort orders on all
columns.
RCOUNT (Expr)
Expr An expression of any data type.
The following example shows a report that uses the RCOUNT function.
MONTH REVENUE RCOUNT
MAY 300.00 2
JUNE 400.00 3
JULY 500.00 4
AUG 500.00 5
SEPT 500.00 6
OCT 300.00 7
This function takes a set of records as input and shows the maximum value based on records encountered so far. The specified data type must be one that can be ordered.
RMAX (expression)
expression An expression of any data type. The data type must be one that has an associated sort order.
The following example shows a report that uses the RMAX function.
MONTH REVENUE RMAX
JAN 100.00 100.00
FEB 200.00 200.00
MAR 100.00 200.00
APRIL 100.00 200.00
MAY 300.00 300.00
JUNE 400.00 400.00
JULY 500.00 500.00
AUG 500.00 500.00
SEPT 500.00 500.00
OCT 300.00 500.00
NOV 200.00 500.00
DEC 100.00 500.00
This function takes a set of records as input and shows the minimum value based on records encountered so far. The specified data type must be one that can be ordered.
RMIN (expression)
expression An expression of any data type. The data type must be one that has an associated sort order.
The following example shows a report that uses the RMIN function.
JAN 400.00 400.00
FEB 200.00 200.00
MAR 100.00 100.00
APRIL 100.00 100.00
MAY 300.00 100.00
JUNE 400.00 100.00
JULY 500.00 100.00
AUG 500.00 100.00
SEPT 500.00 100.00
OCT 300.00 100.00
NOV 200.00 100.00
DEC 100.00 100.00
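RMAX and RMIN have a compact Python analogue in `itertools.accumulate`, which folds the rows seen so far with max or min (illustration only; the list below is the RMIN example's revenue column):

```python
from itertools import accumulate

revenue = [400, 200, 100, 100, 300, 400, 500, 500, 500, 300, 200, 100]

running_min = list(accumulate(revenue, min))
running_max = list(accumulate(revenue, max))

print(running_min)  # [400, 200, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100]
print(running_max)
```

`running_min` reproduces the RMIN column of the table above.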
Clear Language, Clear Mind
Actually I'm busy doing an exam paper for linguistics class, but it turned out to be not so difficult, so I spent some time on Khan Academy doing probability and statistics courses. I want to master that stuff, especially the things I don't currently know the details of, like regression.
Anyway, I stumbled into a comment asking about the way the standard deviation is calculated: why not just use the absolute value instead of squaring everything and taking the square root afterward? I actually tried that once, and it gives different results! I tried it out because the teacher's notes said that it would give the same results. Pretty neat discovery, IMO.
Anyway, the other measure has a name as well: en.wikipedia.org/wiki/Absolute_deviation
Here's a paper that argues that we should really return to the MD (mean deviation). I didn't understand all the math, but it sure is easier to calculate, and its meaning is easier to grasp, although it's probably too difficult to switch now that most of statistics is based on the SD. Still cool, though.
Revisiting a 90-year-old debate: the advantages of the mean deviation
ABSTRACT: This paper discusses the reliance of numerical analysis on the concept of the standard deviation, and its close relative the variance. It suggests that the original reasons why the standard deviation concept has permeated traditional statistics are no longer clearly valid, if they ever were. The absolute mean deviation, it is argued here, has many advantages over the standard deviation. It is more efficient as an estimate of a population parameter in the real-life situation where the data contain tiny errors, or do not form a completely perfect normal distribution. It is easier to use, and more tolerant of extreme values, in the majority of real-life situations where population parameters are not required. It is easier for new researchers to learn about and understand, and also closely linked to a number of arithmetic techniques already used in the sociology of education and elsewhere. We could continue to use the standard deviation instead, as we do presently, because so much of the rest of traditional statistics is based upon it (effect sizes, and the F-test, for example). However, we should weigh the convenience of this solution for some against the possibility of creating a much simpler and more widespread form of numeric analysis for many.

Keywords: variance, measuring variation, political arithmetic, mean deviation, standard deviation, social construction of statistics
The paper also has an odd new use of "social construction," which annoyed me while reading it.
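The commenter's point is easy to verify empirically: the mean (absolute) deviation and the standard deviation are genuinely different statistics, not two routes to the same number. A quick check in plain Python (illustrative only):

```python
def mean(xs):
    return sum(xs) / len(xs)

def std_dev(xs):
    # population standard deviation: root of the mean squared deviation
    m = mean(xs)
    return mean([(x - m) ** 2 for x in xs]) ** 0.5

def mean_dev(xs):
    # mean absolute deviation (the "MD" of the paper above)
    m = mean(xs)
    return mean([abs(x - m) for x in xs])

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(std_dev(data))   # 2.0
print(mean_dev(data))  # 1.5
```

The two agree only in special cases; in general the SD is at least as large as the MD.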
I was researching a different topic and came across this paper. I was rewatching the Everything is a Remix series, then looked up some more relevant links and came across a few videos; one of them mentioned this article.
Complex to the ear but simple to the mind (Nicholas J Hudson)
Background: The biological origin of music, its universal appeal across human cultures and the cause of its beauty remain mysteries. For example, why is Ludwig van Beethoven considered a musical genius but Kylie Minogue is not? Possible answers to these questions will be framed in the context of Information Theory.

Presentation of the Hypothesis: The entire life-long sensory data stream of a human is enormous. The adaptive solution to this problem of scale is information compression, thought to have evolved to better handle, interpret and store sensory data. In modern humans highly sophisticated information compression is clearly manifest in philosophical, mathematical and scientific insights. For example, the Laws of Physics explain apparently complex observations with simple rules. Deep cognitive insights are reported as intrinsically satisfying, implying that at some point in evolution, the practice of successful information compression became linked to the physiological reward system. I hypothesise that the establishment of this "compression and pleasure" connection paved the way for musical appreciation, which subsequently became free (perhaps even inevitable) to emerge once audio compression had become intrinsically pleasurable in its own right.

Testing the Hypothesis: For a range of compositions, empirically determine the relationship between the listener's pleasure and "lossless" audio compression. I hypothesise that enduring musical masterpieces will possess an interesting objective property: despite apparent complexity, they will also exhibit high compressibility.

Implications of the Hypothesis: Artistic masterpieces and deep scientific insights share the common process of data compression. Musical appreciation is a parasite on a much deeper information processing capacity. The coalescence of mathematical and musical talent in exceptional individuals has a parsimonious explanation. Musical geniuses are skilled in composing music that appears highly complex to the ear yet transpires to be highly simple to the mind. The listener's pleasure is influenced by the extent to which the auditory data can be resolved in the simplest terms possible.
Interesting, but it is way too short on data, and it's not that difficult to acquire some data to test this hypothesis. Various open-source lossless compressors are freely available (I'm thinking particularly of FLAC encoders). Then one needs a huge library of music and some sort of quality ranking of that music. If the hypothesis is correct, the best music should come out on top, at least relatively within genres, or within bands, etc. I think I will test this myself.
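As a rough sketch of that experiment, with zlib standing in for a FLAC encoder and synthetic byte strings standing in for audio, the measurable quantity is simply a compression ratio (this is an illustration of the method, not the study itself):

```python
import hashlib
import zlib

def compression_ratio(data: bytes) -> float:
    # fraction of the original size after lossless compression;
    # lower = more internal structure ("simpler to the mind", in Hudson's terms)
    return len(zlib.compress(data, 9)) / len(data)

# a highly repetitive stream vs. an effectively patternless one
structured = bytes(range(256)) * 64

block, chunks = b"seed", []
for _ in range(512):
    block = hashlib.sha256(block).digest()
    chunks.append(block)
patternless = b"".join(chunks)

print(compression_ratio(structured))   # far below 1.0
print(compression_ratio(patternless))  # about 1.0 (essentially incompressible)
```

For real music one would swap in a FLAC encoder and a corpus of tracks with quality rankings, then look for a correlation between ranking and compression ratio.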
This conversation followed me posting the post just before, and several people bringing up the same proof.
Aowpwtomsihermng = "Afraid of what people will think of me, so I had Emil remove my name" guy
[09:57:00] Emil – Deleet: mathbin.net/109013
[09:58:50] Aowpwtomsihermng: Your mates know their algebra.
[10:00:09] Emil – Deleet: this guy is a mathematician
[10:00:27] Emil – Deleet: fysicist ppl have not chimed in yet
[10:00:32] Emil – Deleet: they are having classes i think
[10:08:18] Aowpwtomsihermng: Have you worked out the inductive proof yet?
[10:09:33] Emil – Deleet: no
[10:09:40] Emil – Deleet: i dont know how they work in detail
[10:09:43] Emil – Deleet: and it takes time
[10:09:49] Emil – Deleet: and i already crowdsourced the problem
[10:10:00] Emil – Deleet: so… doesnt pay for me to look for it
[10:10:19] Aowpwtomsihermng: CBA, right?
[10:10:24] Emil – Deleet: i didnt even need any fancy math proof to begin with
[10:10:30] Emil – Deleet: since i already proved it to my satisfaction
[10:10:54] Aowpwtomsihermng: Induction in the logical rather than mathematical sense…
[10:11:00] Emil – Deleet: yes
[10:11:17] Aowpwtomsihermng: Not as rigorous, but useful anyway.
[10:11:23] Emil – Deleet: or abduction
[10:11:46] Emil – Deleet: mathematical certainty is overrated
[10:11:48] Emil – Deleet: ;)
[10:11:59] Emil – Deleet: just look at economics
[10:12:02] Emil – Deleet: :P
[10:12:27] Aowpwtomsihermng: You never know, it might have worked for the first twenty numbers then stopped working. Unlikely, but possible.
[10:12:48] Aowpwtomsihermng: At least now you know that’s not the case.
[10:12:49] Emil – Deleet: astronomically unlikely
[10:12:56] Emil – Deleet: and i also tried other random numbers
[10:13:02] Emil – Deleet: like 3242
[10:13:21] Emil – Deleet: IMO, not much certainty was gained
[10:13:50 | Edited 10:14:04] Emil – Deleet: its approximately as likely that we missed an error in the proof as it is that abduction/induction fails in this case
[10:14:26] Aowpwtomsihermng: But once you have two or three proofs, then that likelihood drops dramatically.
[10:14:46] Emil – Deleet: perhaps
[10:15:00] Aowpwtomsihermng: But I take your point, it’s not a *great* deal of extra certainty.
[10:15:15] Emil – Deleet: for practice, its an irrelevant increase
[10:15:34] Emil – Deleet: if it comes at a great time cost – not worth it
[10:15:41] Emil – Deleet: thats what mathematicians are for ;)
[10:15:50] Emil – Deleet: (with the implication that their time isnt worth much! :D)
[10:16:55 | Edited 10:17:14] Aowpwtomsihermng: Right, right. We programmers and mathematicians are mere cogs in the machinery of your grand device.
[10:17:19] Emil – Deleet: ^^
[10:17:36] Emil – Deleet: at least ure part of something great ^^
[10:17:37] Emil – Deleet: :P
I was once at a party, somewhat bored, and I found this way of calculating the next square. It works without multiplication, so it's suitable for mental calculation.
Seeing that I have recently learned Python, here's a Python version of it:
n = 10  # how many squares to print

def sq(x):
    return x * x

for y in range(1, n):
    print(sq(y))

def sqx(x):
    # base cases: the first two squares
    if x == 1:
        return 1
    if x == 2:
        return 4
    # next square = previous square + (gap between the two previous squares) + 2
    return (sqx(x - 1) - sqx(x - 2)) + sqx(x - 1) + 2

for y in range(1, n):
    print(sqx(y))
In English: first, take the first two squares, 1 and 4, since this method uses the two previous squares to calculate the next one. Calculate the absolute difference between them. Suppose we are looking for 3^2: the previous two squares are 1 and 4, so the difference is 3. Add 2 to this, giving 5, and add that 5 to the previous square: 4 + 5 = 9, which is 3^2.
I have no idea why this works; I just saw a pattern and confirmed it for the first 20 integers or so.
In the code above, I have also defined the function recursively (sqx). It is much slower than the direct version, and I suppose both are slower than the built-in pow(n, m). But it certainly is cool. :P
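As it happens, the pattern follows from a standard identity: consecutive squares differ by consecutive odd numbers.

```latex
(n+1)^2 - n^2 = 2n + 1, \qquad n^2 - (n-1)^2 = 2n - 1
```

So the gap between consecutive squares grows by exactly 2 at each step, which is the rule used above: next square = previous square + (previous gap) + 2. Expanding confirms it: n^2 + (2n - 1) + 2 = n^2 + 2n + 1 = (n+1)^2.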
I just wanted to look up some stuff on the questions a teacher had posed. Since I don't actually have the book, and since one can't search properly in paper books, I googled around instead, and of course ended up at Wikipedia…
and it took off as usual. Here are the tabs i ended up with (36 tabs):
and with three more longer texts to consume over the next day or so:
plato.stanford.edu/entries/compositionality/ (which i had discovered independently)
plato.stanford.edu/entries/meaning/ (long overdue)
And quite a few other longer texts in PDF form, also to be read in the next few days.
Learn about Decimal Equation by Addition and Subtraction

In this track, let's learn how to solve addition and subtraction equations. We're given two problems. The first asks us to solve an equation and check our answer. The equation is p - 3.2 = 7.8. Let's do this one first, and then we'll come back to the second one.

As we can see, p - 3.2 = 7.8 is a subtraction equation, because a value is subtracted from p. The first thing to remember is that subtraction and addition are inverse operations: if a number is subtracted, I can undo that by adding the same number. Since 3.2 is subtracted from p, I can undo it by adding 3.2 to both sides. On the left side, p - 3.2 + 3.2 gives me p + 0, which is p, and on the right side 7.8 + 3.2 gives 11.0. So p = 11.0 is the value that I get.

The problem also asks us to check our answer, so let's replace p with 11 in the equation: 11.0 - 3.2 = 7.8. Indeed, 11 - 3.2 does equal 7.8. The left side and the right side are equal, which means the answer is correct and the value of p is indeed 11.0.

The second problem says that Zack bought a pen and a pencil. His total bill was $12.50, and the pen cost $10.30. We need to figure out how much the pencil cost. Let q be the cost of the pencil. Since the pen plus the pencil cost $12.50 and the pen alone cost $10.30, the equation is 10.30 + q = 12.50. Notice that this is an addition equation, so I can undo it by subtracting 10.30 from both sides: 0 + q = 12.50 - 10.30 = 2.20. That means q = 2.20, or the cost of the pencil is $2.20.

We can double-check this answer too. If the pencil cost $2.20, substitute that back in: the pen at 10.30 plus the pencil at 2.20 should equal 12.50. The left side is 12.50 and the right side is 12.50, which is correct.

So, what have we learned? If we have a subtraction equation, we can solve for the variable by adding the subtracted value to both sides (remember to do it on the left side as well as the right side) and double-check by substituting the value back in. Similarly, if we have an addition equation, we can undo it by subtracting the added value from both sides, which here gives the value of q, the cost of the pencil.
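The two worked examples follow the same mechanical recipe: undo the operation on both sides, then substitute back. That recipe is easy to mirror in a few lines of Python, just as a check:

```python
# p - 3.2 = 7.8  ->  add 3.2 to both sides
p = 7.8 + 3.2
assert abs((p - 3.2) - 7.8) < 1e-9  # substitute back: the check from the lesson
print(p)  # 11.0

# pen + pencil = 12.50 with pen = 10.30  ->  subtract 10.30 from both sides
pencil = 12.50 - 10.30
assert abs((10.30 + pencil) - 12.50) < 1e-9
print(round(pencil, 2))  # 2.2
```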
Manalapan, FL Math Tutor
Find a Manalapan, FL Math Tutor
...I love to learn, and I like to think that in addition to lessons, I teach others to love learning, too! I received a minor in psychology at Western Washington University, for which I took the
following courses: Intro to Psychology, Psychology of Child Rearing, Cognition, Social Psychology, Lifes...
47 Subjects: including algebra 1, ACT Math, SAT math, English
- Economics major in undergrad at JMU who completed CPA requirements at FAU while working.- Worked for a Big 4 public accounting firm for 4 1/2 years as an auditor, included teaching staff and
clients proper accounting processes and GAAP. I have direct experience with Sarbanes-Oxley 404 (SOX), acco...
6 Subjects: including algebra 1, Microsoft Excel, SAT math, prealgebra
...I also have experience in Excel, PowerPoint, Word and can read, write and speak Spanish fluently. In addition, I have excellent reading, writing and spelling skills and have successfully
tutored many students in these subjects.I can read, write and speak Spanish fluently. I have taught many students Spanish in a fun and effective manner.
19 Subjects: including prealgebra, reading, English, Spanish
...Although I prefer teaching Biology and Math I am open to helping my students with whatever subject they need assistance with. In the past I have tutored Algebra 1 and 2, Precalculus, AP and
regular Biology, Beginners Hebrew, AP and regular US and World History, and honors Chemistry. When I tuto...
10 Subjects: including algebra 1, algebra 2, biology, chemistry
...I've taken math classes up to calculus 1, and I've always excelled at it. Therefore, I believe that I am more than able to teach pre-algebra in a clear and concise manner. As a Freshman in
college, I understand the plight of students in High School.
11 Subjects: including prealgebra, SAT math, algebra 1, algebra 2
Should I upgrade from an 83 to an 89 for next semester?
April 28th 2009, 12:37 PM #1
Right now I have a Ti 83 that I have had since eighth grade. At this point (wrapping up my first semester of Calculus) I use it infrequently -- mostly to find decimal values and to double check
my arithmetic before I submit work.
My university "recommends, but does not require, a calculator capable of advanced functions (such as a Ti 89) for use as a learning tool to aid students in familiarizing themselves with the
material and concepts presented. However, no student will be allowed access to a calculator of any kind during any test or assignment which contributes to the student's final grade."
Thus far I have put off upgrading in part because of the expense, and in part because I was worried about becoming dependent on the calculator's capabilities. However, at this point I am starting
to wonder if not having one is going to put me at a disadvantage as the material becomes harder. What is your opinion -- is a Ti 89 a valuable learning tool or an expensive crutch?
If the calculator is not used in the exam then you're not disadvantaged by not having one. Most things you might like to do (checking assignment work etc.) can be done using on-line freeware.
Definitely not. I have used a TI-82 since high school (I'm now a sophomore) and it has been more than enough so far. Furthermore, we are not allowed to use any calculator in maths classes, but in physics we are.
Hello sinewave85!
This is a very good question. I was in a similar boat, as I had a TI-83 for 9-11th grade and got my TI-89 my senior year of high school. The TI-89 definitely has some amazing features that can be
useful, if applied the right way. If not, like you said it becomes a crutch. Whether that would happen or not would be up to you. Like others have said, I agree that you would not be at a
disadvantage to other students if you didn't buy one. You would just need to find other ways of achieving the same things, which isn't too difficult.
1) I would buy one if it is affordable for you
2) If you don't want to or don't have the money now, it won't hurt you
3) It will only become a crutch if you allow it to
Thanks so much for all the advice!
My main interest in one is, as The Second Solution said, the ability to check over my work -- both major steps of complex problems and finished solutions. The cost is significant for me, but so
is the time. I find myself spending almost as much time going back over my work line by line as I do working the problems in the first place, and it adds up to a lot of time each week spent on my
one math course. With relatively low-intensity freshman courses, I have found the time, but I worry that may change as I get into higher-level work. If I could, for instance, enter something like
$\frac{d}{dx}\left(\sin^{-1}(xy) + \frac{\pi}{2} = \cos^{-1}(y)\right)$
and see quickly if I had the right answer, that would be great.
It is not so much that I worry about abusing it (putting questions through the calculator without first doing the work on paper) but rather becoming so attached to it psychologically that I feel
lost without it. Since I am a distance student, about 95% of my final grade is one four-hour test at the end of the semester -- a bad time to be switching habits.
As I said, thanks to all for the advice. I felt kind of silly asking this question, and I appreciate the thoughtful responses! Now, I suppose, it is just a matter of weighing my priorities.
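For what it's worth, one calculator-free way to sanity-check a derivative is a quick numeric comparison; this generic sketch is not tied to any TI model:

```python
import math

def numeric_deriv(f, x, h=1e-6):
    # central-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# e.g. confirm that d/dx sin(x) = cos(x) at a few sample points
for x in (0.0, 0.5, 1.0):
    assert abs(numeric_deriv(math.sin, x) - math.cos(x)) < 1e-8
print("derivative check passed")
```

A candidate derivative from homework can be checked the same way: compare the hand-computed expression against the numeric estimate at a handful of points.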
I agree this comes down to your priorities. I don't know your financial situation, but I think that in the end the cost of the calculator shouldn't be the determining factor. You could work part
time or live meekly for a while to earn the $100-something it costs. Remember your time spent learning is an investment and tools like this can pay out many times over in the future. That isn't
to tell you to buy it no matter what, but just something to think about.
Finally, I think you should think about this. After Calculus III (multi-variable) and Differential Equations I (introductory class), computations are less and less a part of math classes. Higher
level math classes deal with generalities and proofs and stop asking questions where the answer is a number. I think that it would be easily possible that you would stop using any calculator for
the last year or so of undergraduate math study.
That's about every possible thing I can think of on this topic, so good luck
An on-line integrator: Wolfram Mathematica Online Integrator
An on-line derivative calculator: Step-by-Step Derivatives
Ok, you've convinced me.
Thanks so much for the links. Those will help alot! I really appreciate all of the input.
I happen to own an 89.
In my calculus II class the exams are given in two parts. Anything involving integration of any type is on the non-calculator portion of the test. So I spent all that money on it and don't even get to take advantage of any of its features. I mean, I would just like to be able to use it for things like factoring or polynomial long division, or any other areas where I am likely to make a careless mistake, but no, I am not allowed a calculator at all. Do yourself a favor: save your money.
Thanks for the input, gammaman. Not being allowed to use the calculator on tests is on the con's list for me. I wonder, however, if you find the calculator helpful or enlightening on daily work.
Does it make it easier to work though the material efficiently or to understand the concepts presented? Or do you not use it at all, given its exclusion from tests?
Proof Of Mathematical Induction
March 19th 2007, 09:30 AM #1
The manner in which mathematical induction is introduced in my textbook has me wondering about its proof:
Theorem: The Principle of Mathematical Induction
Suppose that the following two conditions are satisfied with regard to a statement about natural numbers:
CONDITION I: The statement is true for the natural number 1.
CONDITION II: If the statement is true for some natural number k, it is also true for the next natural number k + 1.
Then the statement is true for all natural numbers.
The text then goes on to say that this principle will not be proven, and then provides an analogy of falling dominoes.
What does the proof of this theorem look like?
The proof of the validity of mathematical induction is by contradiction and depends upon the axiom of well-ordering. That axiom states: every nonempty subset of the positive integers contains a first, or least, integer. Using that axiom, suppose that the statement fails for some positive integer; then it fails for some first such integer J. We know that J is not 1, because the statement is true for 1. Therefore J - 1 is a positive integer, and the statement is true for J - 1, because J is the first integer for which it is not true. However, if it is true for J - 1, it must be true for (J - 1) + 1 = J. Thus there is a contradiction.
The formal proof of this is based on the Peano axioms.
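Plato's argument can be written out compactly (a sketch of the standard contradiction proof, with P(n) denoting the statement about n):

```latex
\text{Let } S = \{\, n \in \mathbb{N} : P(n) \text{ is false} \,\}, \text{ and suppose } S \neq \varnothing. \\
\text{By well-ordering, } S \text{ has a least element } J. \text{ Condition I gives } P(1), \text{ so } J \neq 1. \\
\text{Then } J - 1 \in \mathbb{N}, \text{ and } P(J-1) \text{ holds by minimality of } J. \\
\text{Condition II now gives } P\bigl((J-1)+1\bigr) = P(J), \text{ contradicting } J \in S. \\
\text{Hence } S = \varnothing, \text{ i.e. } P(n) \text{ holds for all } n.
```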
Thanks. It looks like the proof involves first supposing that the two conditions are true, then denying the second condition and using it to produce the contradiction of the condition being true
and not true at the same time. After which there is the image of the mathematician in a tophat and tap shoes, clicking his heels with a flourishing "ta da!"
In at least some developments of arithmetic the principle of mathematical
induction is an axiom.
Yes, that's what I thought makes more sense after seeing the proof.
| {"url":"http://mathhelpforum.com/discrete-math/12742-proof-mathematical-induction.html","timestamp":"2014-04-20T09:03:28Z","content_type":null,"content_length":"47303","record_id":"<urn:uuid:0686d611-9ebe-421a-bdac-e7b908f16ff8>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
The Irrationals: A Story of the Numbers You Can't Count On
Author:Julian Havil
Publisher: Princeton University Press
ISBN: 978-0691143422
Audience: Mathematicians, scientists, and engineers
Rating: 3.5
Reviewer: Mike James
Understanding how numbers work is essential for any scientist or engineer. The irrationals are the most confusing and fascinating type of number, so a book that might make things
seem easier is worth considering.
The subtitle of this book, A story of the numbers you can't count on, suggests, to me at least, that issues such as countability and orders of infinity might be at the heart of this
book and this might make it particularly interesting to the programmer and computer scientist.
Unfortunately this isn't really the case. The book is more concerned with explaining, in part, the history of the irrationals and with proofs that particular numbers are irrational and transcendental. This is a mathematician's view of irrational numbers, and there isn't much here for the general reader, even if they have technical competence in a mathematical subject.
Not only is the book math-oriented, it rarely manages to say anything in a simple way that a non-math expert could decode. This starts right at the beginning: in the introduction we have a "novel" definition of the irrationals -
"The set of all real numbers having different distances from all rational numbers"
After you have spent some time figuring out what this means, it is a "nice" definition, and if you already know some things about the irrationals it does make you think. However, this is in the introduction, where you might hope for a gentle guide to the irrationals and what is coming next, not a clever novel definition that is remarkably subtle.
From here we have a history of the irrationals as those numbers that upset Pythagoras. The problem is that, although the chapter is good, if you don't already have a firm grasp of irrational numbers a lot of it will simply make no sense. The next few chapters follow the history up to Fermat and then go on to the use of continued fractions to explore irrationals, mostly pi and e.
By Chapter 6, which starts to consider the transcendentals, we have spent a lot of time dealing with how irrationals occur, what problems they cause, and proving that particular numbers are irrational, but the reader hasn't been offered a good definition of an irrational, let alone any deep insights into what makes an irrational different from a rational. Yes, there is a definition that says that an irrational is not a ratio of two integers and that its decimal expansion never repeats or terminates, but these aren't deep insights and they are given almost in passing. I suppose my complaint is that there is no physics or philosophy in the discussion.
In the chapter on transcendentals we once again have no real discussion of what a transcendental number is. The chapter just continues from the previous chapter with a particular
problem. Chapter 7 goes on to show that pi and e are transcendental, but again the reader isn't provided with much of a clue as to what this might mean.
Chapter 8 returns to continued fractions to prove that the golden ratio is the most irrational of numbers - again, more speculation on the ideas would have been welcome. Chapter 9 deals with the issue of the randomness of the decimal expansion of the irrationals; this is about the only chapter, and it is very short, that a non-mathematician is likely to get something out of. Chapter 10 finally gets to the modern theory of the irrationals, and we meet the three rigorous theories of the irrationals - Weierstrass-Heine, Cantor-Heine-Méray and Dedekind.
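The continued-fraction theme the review keeps returning to is easy to see in action. A small sketch (my own illustration, not material from the book): the golden ratio's continued-fraction coefficients are all 1s, which is the sense in which it is "the most irrational" number.

```python
import math

def cf_coeffs(x, n):
    # first n continued-fraction coefficients of x: x = a0 + 1/(a1 + 1/(...))
    out = []
    for _ in range(n):
        a = math.floor(x)
        out.append(a)
        if x == a:
            break
        x = 1 / (x - a)
    return out

phi = (1 + math.sqrt(5)) / 2
print(cf_coeffs(phi, 8))       # -> [1, 1, 1, 1, 1, 1, 1, 1]
print(cf_coeffs(math.pi, 5))   # -> [3, 7, 15, 1, 292]
```

Small coefficients mean the rational approximations converge as slowly as possible, which is why all 1s makes the golden ratio the extreme case.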
The final chapter is on the question "does irrationality matter?" In many ways this could have been the most interesting chapter for the general reader - after all, there are many who question the whole idea of the continuum as a physical thing. But this is a math book, so any question of the "reality" of irrationals is avoided. Whether the chapter concludes that irrationals matter or not isn't clear to me, even after a number of readings, but given that they are the subject of the book I suppose the verdict must be that they do.
The final chapter also illustrates another problem with trying to read the book - the chapters don't really address the subjects of their titles directly at all. The chapter on "do the irrationals matter" starts off with no hint that this is the question being discussed. It goes off at a tangent and discusses tables, then a collection of problems, the approximation of pi, and so on. Never does it directly address the question, or discuss it, or explain what the examples have to do with it. This is fairly typical of the rest of the chapters, where it can be difficult to discover what the "plan of attack" is.
I am sad to say that I cannot recommend this book to a general audience. In places it is a very difficult read, and I'm not even sure about recommending it to the beginning mathematician unless they are interested in this particular branch of number theory and already know a lot of the background.
Last Updated ( Sunday, 16 September 2012 )
Copyright © 2014 i-programmer.info. All Rights Reserved. | {"url":"http://i-programmer.info/bookreviews/65-mathematics/4798-the-irrationals-a-story-of-the-numbers-you-cant-count-on.html","timestamp":"2014-04-18T08:02:51Z","content_type":null,"content_length":"39689","record_id":"<urn:uuid:32128bff-8265-43ef-beab-7c30d0080190>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Earth Particles
Each particle, or corpuscle, of earth is a cube (a six-sided geometrical solid). This is what earth particles look like, according to Plato's description in the Timaeus. In the center is the earth-particle Plato describes at 55b-c, with 4 isosceles triangles making up each square face of the cube. On the left is a simpler isotope with 2 isosceles triangles per face; on the right is a more complex isotope with 8 isosceles triangles per face.
Image taken from Friedländer, Plato, vol. 1, An Introduction.
Copyright © 2002, S. Marc Cohen | {"url":"http://faculty.washington.edu/smcohen/320/earth.htm","timestamp":"2014-04-18T18:54:24Z","content_type":null,"content_length":"1774","record_id":"<urn:uuid:31731a78-a82b-470e-aa95-c33d1b37b2ab>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00255-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prove line in same plane as triangle that intersects interior, intersects a side
October 5th 2010, 06:55 PM #1
Prove line in same plane as triangle that intersects interior, intersects a side
Please let me know if the following proof is correct or not. If not, a hint would be appreciated.
Given $\triangle ABC$ and line l lying in the same plane as $\triangle ABC$. Prove that if l intersects the INT( $\triangle ABC$) then it intersects at least one of the sides.
Proof: Let D and E be points on l and D,E $\in INT(\triangle ABC).$ Since D,E $\in INT(\triangle ABC)$ they are on the same side of $\overleftrightarrow{AC}$ as B. There are seven possibilities:
1) $\overleftrightarrow{DE}$ forms a line such that A,B,C are all on the same side of $\overleftrightarrow{DE}.$
2) A is on $\overleftrightarrow{DE}.$ Then we are done by the crossbar theorem.
3) B is on $\overleftrightarrow{DE}.$ Then we are done by the crossbar theorem.
4) C is on $\overleftrightarrow{DE}.$ Then we are done by the crossbar theorem.
5) A is on the opposite side of $\overleftrightarrow{DE}$ as B and C. Then by the plane separation theorem $\overleftrightarrow{DE}$ intersects $\overline{AB}$ and intersects $\overline{AC}.$
6) B is on the opposite side of $\overleftrightarrow{DE}$ as A and C. Then by the plane separation theorem $\overleftrightarrow{DE}$ intersects $\overline{AB}$ and intersects $\overline{BC}.$
7) C is on the opposite side of $\overleftrightarrow{DE}$ as A and B. Then by the plane separation theorem $\overleftrightarrow{DE}$ intersects $\overline{BC}$ and intersects $\overline{AC}.$
For 1), we have already determined that since D,E $\in INT(\triangle ABC)$ they are on the same side of $\overleftrightarrow{AC}$ as B. If A, B, and C were all on the same side of $\
overleftrightarrow{DE},$ this would be a contradiction. So, 1) cannot be true. Therefore one of the other conditions must be true. All of the other conditions have l intersecting a side of the
triangle so the theorem is proved.
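As a quick numerical sanity check of the statement (not a substitute for the synthetic proof; the point names mirror the argument above):

```python
import math
import random

def cross(o, a, b):
    # z-component of (a-o) x (b-o); its sign says which side of line o-a point b is on
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def line_hits_segment(p, q, a, b):
    # does the infinite line through p and q meet the segment ab?
    da, db = cross(p, q, a), cross(p, q, b)
    return da == 0 or db == 0 or (da > 0) != (db > 0)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
random.seed(0)
for _ in range(1000):
    u, v = sorted((random.random(), random.random()))
    w = (u, v - u, 1 - v)                    # barycentric weights -> interior point D
    D = (w[0]*A[0]+w[1]*B[0]+w[2]*C[0], w[0]*A[1]+w[1]*B[1]+w[2]*C[1])
    t = random.uniform(0.0, 2.0*math.pi)     # E fixes a random direction for line l
    E = (D[0]+math.cos(t), D[1]+math.sin(t))
    assert any(line_hits_segment(D, E, s, e) for s, e in ((A, B), (B, C), (C, A)))
print("every sampled line through an interior point met a side")
```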
October 6th 2010, 05:48 AM #2
I think that your proof works. Nonetheless, here are some comments.
I am not sure that you need two points on the line interior to the triangle.
If any one of A, B or C is on the line you are done.
If assuming that A, B, & C are on the same side of the line leads to a contradiction then you are also done.
Last edited by Plato; October 6th 2010 at 06:12 AM. | {"url":"http://mathhelpforum.com/geometry/158546-prove-line-same-plane-triangle-intersects-interior-intersects-side.html","timestamp":"2014-04-17T20:12:20Z","content_type":null,"content_length":"40792","record_id":"<urn:uuid:071cd5d8-2dd5-4e53-ae95-f7dd46cd956f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
calculation of material properties in transformation media
Hi everybody,
I'm focusing on meta-materials. I have recently read Schurig's paper "Calculation of material properties and ray tracing in transformation media" (OPTICS EXPRESS, Vol. 14 (21), 9794-9804, OCT 16 2006). In the article, the components of the permittivity tensor are given by
[tex]\varepsilon^{i'j'} = \left|\rm{det}(\Lambda^{i'}_{i})\right|^{-1} \Lambda^{i'}_{i} \Lambda^{j'}_{j} \varepsilon^{ij}[/tex]
where the Jacobian matrix
[tex] \Lambda_{\alpha}^{\alpha'} = \frac{\partial x^{\alpha'}}{\partial x^{\alpha}} [/tex]
and the roman indices run from 1 to 3, for the three spatial coordinates, as is standard practice.
Working out the algebra, the components of the permittivity (permeability) tensor can be obtained by
[tex] \left(\varepsilon^{i'j'}\right) = \left|\rm{det}\left(\Lambda\right)\right|^{-1}\Lambda \Lambda^T [/tex]
(taking the vacuum value $\varepsilon^{ij}=\delta^{ij}$), where [tex] \Lambda [/tex] is the matrix whose components are the contravariant coefficients [tex] \Lambda_{\alpha}^{\alpha'} [/tex].
For cylindrical cloak, the components of the transformation matrix are
[tex] \left(\Lambda^{i'}_{j}\right) = \left(\begin{array}{ccc}
\frac{\rho'}{\rho}-\frac{ax^2}{\rho^3} & -\frac{axy}{\rho^3} & 0 \\
-\frac{ayx}{\rho^3} & \frac{\rho'}{\rho}-\frac{ay^2}{\rho^3} & 0 \\
0 & 0 & 1
\end{array}\right) [/tex]
It is easy to find the material properties. For instance, the z component of the permittivity tensor is
[tex]\varepsilon_z = \varepsilon^{3,3} = \frac{\rho^2}{\rho'(\rho'-a)} = \frac{1}{\left|\rm{det}\left(\Lambda\right)\right|}[/tex]
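The z-component claim can also be checked numerically. The sketch below hard-codes the map $\rho' = a + \rho(b-a)/b$ (an assumption on my part, since the post does not write the map out) and compares $1/|\det\Lambda|$ with $\rho^2/(\rho'(\rho'-a))$:

```python
import math

def eps_z(x, y, a, b):
    # Jacobian entries exactly as in the matrix above, for rho' = a + rho*(b-a)/b
    rho = math.hypot(x, y)
    rhop = a + rho * (b - a) / b
    l11 = rhop/rho - a*x*x/rho**3
    l22 = rhop/rho - a*y*y/rho**3
    l12 = -a*x*y/rho**3
    det = l11*l22 - l12*l12          # the z-z entry of Lambda is 1
    return 1.0/abs(det), rho, rhop

a, b = 1.0, 2.0
val, rho, rhop = eps_z(0.9, 0.7, a, b)
print(val, rho**2/(rhop*(rhop - a)))   # the two numbers agree
```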
However, in the paper "Full-wave simulations of electromagnetic cloaking structures" (PHYSICAL REVIEW E, 74, 036621 (2006)), the components of the relative permittivity and permeability tensors specified in cylindrical coordinates are given by
[tex]\varepsilon_z = \mu_z = \left(\frac{b}{b-a}\right)^2 \frac{\rho-a}{\rho}[/tex]
It can be seen that the two formulas are apparently not equal. And the other nonzero components of the permittivity and permeability tensors are not equal either.
I have deduced the formulas many times. Depressingly, I cannot figure out the problem. Could somebody please give me some comments on the calculation of material properties in transformation media? | {"url":"http://www.physicsforums.com/showthread.php?p=2685117","timestamp":"2014-04-16T07:36:52Z","content_type":null,"content_length":"28397","record_id":"<urn:uuid:6c2b679e-3e54-47a4-a532-4c280cd5ef1c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Surprise at my failure to resolve an issue in an elementary paper by Rado
Replies: 44 Last Post: Nov 10, 2013 12:23 PM
Paul
Surprise at my failure to resolve an issue in an elementary paper by Rado
Posted: Nov 3, 2013 6:31 PM
Posts: 385
Registered: 7/12/10
About three days ago, I got stuck when reading a paper by the great combinatorist, Rado. So I posted the question on sci.math and on stackexchange, although it's only been on stackexchange about 36 hours.
No one responded on stack exchange. On sci.math, fom was very kind and spent a lot of time and effort trying to resolve the issue. I am genuinely appreciative of fom's efforts. However,
despite fom's help and support, I am absolutely no nearer to resolving this point than I was when I began this thread.
Is it possible for anyone else to help? It's a very elementary point and could be trivially resolved by David Ullrich, quasi, Fred etc. (Just naming 3 active and well-informed posters at
random here.)
I refer to the post: http://mathforum.org/kb/thread.jspa?threadID=2604214
I stand by the question at the beginning of the thread -- I don't feel that question has been properly answered at all.
For convenience, the post is repeated here:
Many thanks for any help or insights anyone can provide.
Problem understanding Rado's proof of the canonical Ramsey theorem
I am having trouble understanding the paper with the URL: http://www.cs.umd.edu/~gasarch/TOPICS/canramsey/Rado.pdf
I get stuck around the middle of page 2 where it says:
f(z0, ..., z_r-1) = f(y0,..., y_r-1)
This assertion doesn't seem to follow from the quantifiers defining L.
I do see that there exists _some_ y0, y1,... and _some_ y0', y1' , ...
to make the above equality true but that's not enough because here the yi and yi' are arbitrary.
We are given that rho_0 does not belong to L. However, L is defined by a "for all" statement. So, for rho_0, the for-all statement is false and we can find some yi and yi' to make f(z0,
..., z_r-1) = f(y0,..., y_r-1) true.
But the author is stating something much stronger -- that we can deduce the equality for an arbitrary yi and yi'.
My ultimate goal is to understand _any_ proof of the Canonical Ramsey theorem. (I'm limiting my search to free web sources for now). The original Erdos/Rado paper does seem somewhat
convoluted, which is presumably why Rado felt a need to rewrite the proof. Imre Leader proves it too. However, he only details a simple case, and leaves the rest to the reader.
Many thanks for any help or insights.
Paul Epstein | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2604565","timestamp":"2014-04-20T19:16:47Z","content_type":null,"content_length":"72445","record_id":"<urn:uuid:ea09380c-9cf5-41fe-af07-3a58f2cfbabb>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
A complete system of theoretical and mercantile arithmetic
A complete system of theoretical and mercantile arithmetic: Comprehending a full view of the various rules necessary in calculation. With a practical illustrations of the most material regulations
and transactions that occur in commerce. Particularly, interest, stocks, annuities, marine insurance, exchange, &c., &c. Comp. for the use of the students at the Commercial institution, Woodford
(Google eBook)
George G. Carey
Definition Characters c 3
Subtraction II 11
Division 19
Simple Proportion 38
Reduction of Decimals 70
Addition of Decimals 83
Discount 263
Profit and Loss 270
? 278
Alligation 286
Terminable Annuities 305
Division of Decimals 96
Arithmetical Scales 104
Single Position 118
Involution 124
Duodecimals 133
Proportion by Logarithms 144
Reduction 202
Simple Proportion 210
Popular passages
RULE. Divide as in whole numbers, and from the right hand of the quotient point off as many places for decimals as the decimal places in the dividend exceed those in the divisor.
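The decimal-division rule quoted above, traced on a small example (an illustration of the rule, not from the book):

```python
from decimal import Decimal

# 1.25 / 0.5: divide as whole numbers, then point off
# (places in dividend) - (places in divisor) decimal places.
dividend, divisor = Decimal("1.25"), Decimal("0.5")
whole_quotient = 125 // 5            # divide as whole numbers -> 25
places = 2 - 1                       # 2 places in the dividend, 1 in the divisor
result = Decimal(whole_quotient).scaleb(-places)
print(result, dividend / divisor)    # both print 2.5
```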
Operations with Fractions A) To change a mixed number to an improper fraction, simply multiply the whole number by the denominator of the fraction and add the numerator.
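Rule A) above, done mechanically (the function name is mine):

```python
from fractions import Fraction

def mixed_to_improper(whole, num, den):
    # whole num/den -> (whole*den + num)/den, per the rule quoted above
    return Fraction(whole * den + num, den)

print(mixed_to_improper(3, 2, 5))   # 3 2/5 -> 17/5
```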
Subtract the square of this figure from the left-hand period, and to the remainder annex the next period for a dividend.
... 10 per cent per month, until the whole is paid,) he will receive three receipts, which separately contain an engagement to transfer to the person possessing them, £10,000 stock in the 3 per
cents, £5,000 stock in the 4 per cents, and £31.
Exchequer bills are issued for different hundreds or thousands of pounds, and bear an interest of 2±d . per cent. per diem, from the day of their date, to the time when they are advertised to be paid
off. Navy bills are merely bills of exchange, drawn at 90 days...
Subtract the logarithm of the divisor from the logarithm of the dividend, and obtain the antilogarithm of the difference.
Multiply the divisor, thus augmented, by the last figure of the root, and subtract the product from the dividend, and to the remainder bring down the next period for a new dividend.
Rule. — Multiply each numerator by all the denominators except its own for the new numerators, and multiply all the denominators together for a common denominator.* Example.
Then multiply the second and third terms together, and divide the product by the first term: the quotient will be the fourth term, or answer.
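The Rule of Three quoted just above, as a one-line sketch (the yards/shillings numbers are made up for illustration):

```python
from fractions import Fraction

def rule_of_three(first, second, third):
    # "multiply the second and third terms together, and divide the product
    # by the first term" -- the quotient is the fourth term, or answer
    return Fraction(second) * Fraction(third) / Fraction(first)

# if 4 yards of cloth cost 12 shillings, then 7 yards cost:
print(rule_of_three(4, 12, 7))   # -> 21
```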
And if the given number be a proper vulgar fraction ; subtract the logarithm of the denominator from the logarithm of the numerator, and the remainder will be the logarithm sought ; which, being that
of a decimal fraction, must always have a negative index.
Bibliographic information | {"url":"http://books.google.ca/books?id=-sw2AAAAMAAJ&redir_esc=y","timestamp":"2014-04-24T14:44:09Z","content_type":null,"content_length":"139719","record_id":"<urn:uuid:70eb9226-aef3-4f60-8780-12c9f5280ac5>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00160-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unconstrained Optimization in Engineering Design
Main.UnconstrainedOptimization History
Hide minor edits - Show changes to output
Changed lines 7-26 from:
In this section we will examine some theory for the optimization of unconstrained functions. We will assume all functions are continuous and differentiable. Although most engineering problems are
constrained, much of constrained optimization theory is built upon the concepts and theory presented in this section.
In this section we will examine some theory for the optimization of unconstrained functions. We will assume all functions are continuous and differentiable. Although most engineering problems are
constrained, much of constrained optimization theory is built upon the concepts and theory presented in this section. | {"url":"http://apmonitor.com/me575/index.php/Main/UnconstrainedOptimization?action=diff","timestamp":"2014-04-17T06:41:27Z","content_type":null,"content_length":"16949","record_id":"<urn:uuid:b65d85c9-12c1-45ff-8bac-07dca3069619>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Retraction math help
The HNN-theorem says that every countable group can be embedded into a group with two generators. Could this be helpful?
In other words would the group
be a countable group with countably many retracts? (In light of the previous question this may be the right way to go) | {"url":"http://www.physicsforums.com/showpost.php?p=1155561&postcount=29","timestamp":"2014-04-17T07:31:27Z","content_type":null,"content_length":"7065","record_id":"<urn:uuid:c1ef4937-cbb9-4495-a4cd-8be613fc2904>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the lifetime of a memoized value in a functional language like Haskell?
In a pure functional language with lazy semantics (such as Haskell), results of computations are memoized so that further evaluations of a function with the same inputs do not recompute the value but
get it directly from the cache of memoized values.
I am wondering if these memoized values get recycled at some point in time?
1. If so, it means that the memoized values must be recomputed at a later time, and the benefits of memoization are not so exciting IMHO.
2. If not, then ok, this is clever to cache everything... but does it mean that a program - if run for a sufficiently long period of time - will always consume more and more memory?
Imagine a program performing intensive numerical analysis: for example, to find roots of a list of hundreds of thousands of mathematical functions using a dichotomy algorithm.
Every time the program evaluates a mathematical function with a specific Real Number, the result will be memoized. But there is only a really small probability that exactly the same Real Number will
appear again during the algorithm, leading to memory leakage (or at least, really bad usage).
My idea is that maybe memoized values are simply "scoped" to something in the program (for example to the current continuation, call stack, etc.), but I was unable to find anything practical on the subject.
I admit I haven't looked deeply at the Haskell compiler implementation (lazy?), but please, could someone explain to me how it works in practice?
EDIT: Ok, I understand my mistake from the first few answers: Pure semantics implies Referential Transparency which in turn does not imply automatic Memoization, but just guarantees that there will
be no problem with it.
I think that a few articles on the web are misleading about this, because from a beginner's point of view, it seems that the Referential Transparency property is so cool because it allows implicit memoization.
haskell functional-programming memoization
See stackoverflow.com/questions/3951012/… – Don Stewart May 22 '11 at 16:11
2 Answers
Haskell does not automatically memoize function calls, precisely because this would quickly consume tons of memory. If you do memoization yourself, you get to choose at what scope the function is memoized. For example, let's say you have the fibonacci function defined like this:
fib n = fibs !! n
where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Here the memoization is only done within one call to fib, whereas if you leave fibs at the top level
fib n = fibs !! n
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
then the memoized list is kept until the garbage collector can determine that there are no more ways to reach fibs from any part of your program.
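The scoping point carries over to other languages; a rough Python analogy (this mirrors only where the cache lives, not Haskell's laziness):

```python
def fib_local(n):
    cache = {0: 0, 1: 1}       # dies when this call to fib_local returns
    def go(k):
        if k not in cache:
            cache[k] = go(k - 1) + go(k - 2)
        return cache[k]
    return go(n)

_cache = {0: 0, 1: 1}          # module level: survives across calls,
                               # like the top-level fibs above
def fib_global(n):
    if n not in _cache:
        _cache[n] = fib_global(n - 1) + fib_global(n - 2)
    return _cache[n]

print(fib_local(30), fib_global(30))   # -> 832040 832040
```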
+1 For the suggestive example - though I bet that in this case actually the compiler will see that fibs does not need to be a local value and lift it silently to the top. –
Ingo May 5 '11 at 13:45
2 @Ingo: I was under the impression that the compiler does not float a let outside a lambda, precisely because this can have major consequences for memory usage, but I might be
wrong. – hammar May 5 '11 at 13:57
2 Note that even in the case of the memoized fib above the fibs value may get garbage collected when the function fib is no longer reachable (ghc does this). – augustss May 5 '11
at 15:47
1 The word you guys are looking for is CAF, see also this question. – barsoap May 6 '11 at 19:23
1 Is this behavior mandated by the Haskell standard? Or, does it just so happen that GHC does things this way? – Jamer Jun 2 '12 at 4:29
My idea is that maybe memoized values are simply "scoped" to something in the program (for example to the current continuation, call stack, etc.), but I was unable to find something
practical on the subject.
This is correct. Specifically, when you see something like:
fun x y = let
a = .....
b = ....
c = ....
in ....
or an equivalent where clause, the values a, b and c may not be computed until actually used (or they may be computed right away because the strictness analyser can prove that the values would be evaluated later anyway). But when those values depend on the current function parameters (here x and y), the runtime will most likely not remember every combination of x and y and the resulting a, b and c.
Note also that in a pure language it is also okay not to remember the values at all - this is only practical insofar as memory is cheaper than CPU time.
So the answer to your question is: there is no such thing as lifetime of intermediary results in Haskell. All one can say is that the evaluated value will be there when needed.
| {"url":"http://stackoverflow.com/questions/5898162/what-is-the-lifetime-of-a-memoized-value-in-a-functional-language-like-haskell/5898410","timestamp":"2014-04-23T15:38:05Z","content_type":null,"content_length":"82115","record_id":"<urn:uuid:eb97c099-b250-4679-86e1-288df4839e59>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Definition of Denominator
● The number below the bar in a fraction is called the Denominator.
● We can also define denominator as the number of parts that a whole is divided into.
More about Denominator
• A fraction with zero (0) in the denominator is undefined.
Example of Denominator
• The given circle is divided into 3 parts.
Out of the 3 parts, one part is shaded.
So, we write this model as
In this fraction 3 is the denominator whereas 1 is the numerator.
Solved Example on Denominator
Identify the denominator in the fraction
A. 39
B. 19
C. 40
D. 20
Correct Answer: A
Step 1: In a fraction, the number below the bar is called the denominator.
Step 2: In the given fraction, the number below the bar is 39.
Step 3: So, the denominator is 39.
Related Terms for Denominator | {"url":"http://www.icoachmath.com/math_dictionary/Denominator.html","timestamp":"2014-04-19T19:53:36Z","content_type":null,"content_length":"8037","record_id":"<urn:uuid:a6b72f56-8645-41e9-aefe-ababc299b3b8>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00416-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proof Builders of the Great Pyramid had access to a modern computer , page 4
I typically agree with the idea that we, as modern humans, have become so ego-centric that we believe everything about us, and our civilization are the greatest, most advanced in all of history. I
really do believe that, in general, society doesn't give our ancestors enough credit. In short, while I don't dismiss the OP's opinion, I believe the Egyptians were advanced enough, both
mathematically and technically, to build the pyramids on their own... no aliens (sorry Ancient Aliens), no "future tech"... just brains and ingenuity. Well... maybe not JUST, but you get the idea. | {"url":"http://www.abovetopsecret.com/forum/thread930216/pg4","timestamp":"2014-04-16T19:03:50Z","content_type":null,"content_length":"57189","record_id":"<urn:uuid:45f5be7d-8caf-413b-b418-de4e1180ea97>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
A shortcut to E=mc^2
Have you ever considered how to actually derive the famous E=mc^2 formula? Does it require knowledge of Special Relativity or even worse: an understanding of the theory?
Well, no!
The only skill needed is some high-school physics and a bit of curiosity.
There is a box of mass M and horizontal length L in the picture below. The center of mass of the box at rest hangs somewhere above point x[c].
To derive the formula it is enough to answer the question: What will happen to the center of mass position if a photon is emitted from the left wall of the box, picture a), and caught by the right
wall, as in the picture b)?
Let's assume that the photon has energy E. It is known^1 that photons propagate with the speed of light c and carry momentum

p = E/c.
From the momentum conservation principle, after the photon emission, the box should move leftwards with some velocity v. After the photon is absorbed, the box must stop, again due to momentum conservation.
The velocity v is calculated in the following manner. The box moves the distance Δx to the left before the photon hits the right wall. The time of flight of the photon Δt is given by

Δt = L/c (to lowest order, since Δx ≪ L).
Thus, the mean velocity v of the box is

v = Δx/Δt = cΔx/L.
So far, so good. But what has happened to the box's center of mass? Initially it was positioned over the point x[c]; then the box moved leftwards and stopped. Did the center of mass move as well? Of course not. There was no external force acting on the system, so the center of mass remained at the initial position.
Summarizing, there are two undeniable facts:
1. The box moved,
2. Its center of mass did not.
The only way out of the apparent contradiction is to admit that, besides the energy and momentum, the photon carried some mass m from the left to the right wall. (Notice that m is not the rest mass of the photon. Photons have zero rest mass.)
From the momentum conservation, the magnitudes of the momenta of the box and photon are equal:

Mv = E/c.
Substituting the previously evaluated velocity v in the expression above one gets

McΔx/L = E/c, and hence Δx = EL/(Mc^2).
For the center of mass to remain still, the mass displacement of the box must be balanced by the displacement of the mass carried by the photon:

MΔx = mL.
From the two last equations one obtains

m = E/c^2.
Or, in more recognizable form: E = mc^2.
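The chain of substitutions above is easy to check numerically. The following sketch (my addition; the values of M, L and E are arbitrary illustrative choices, not from the article) traces Δt, v, Δx and m and confirms that m comes out as E/c^2:

```python
# Numerical check of the photon-in-a-box derivation.
# Symbols follow the article: box mass M, length L, photon energy E.
c = 299_792_458.0   # speed of light, m/s
M = 1.0             # box mass, kg (illustrative value)
L = 1.0             # box length, m (illustrative value)
E = 1.0e3           # photon energy, J (illustrative value)

p = E / c           # photon momentum
v = p / M           # box recoil speed, from M*v = E/c
dt = L / c          # photon time of flight (Delta x << L)
dx = v * dt         # box displacement while the photon is in flight

m = M * dx / L      # mass carried by the photon, from M*dx = m*L
print(m, E / c**2)  # the two values agree: m = E/c^2
```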
Sanity check: Has any Special Relativity been used in this derivation?
The original idea, similar to the box example, comes from A. Einstein (1906).
^1 The dispersion relation for light is derived from Maxwell's equations in vacuum: first convert them to a wave equation for the magnetic field B, and then try a solution B = B_0 cos(kx - ωt).
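Spelling out the footnote's hint (my sketch of the standard computation): inserting the trial solution into the vacuum wave equation gives the dispersion relation, and with it the photon momentum p = E/c used above.

```latex
\nabla^2 B = \frac{1}{c^2}\,\frac{\partial^2 B}{\partial t^2},
\qquad
B = B_0 \cos(kx - \omega t)
\;\Rightarrow\;
-k^2 B = -\frac{\omega^2}{c^2}\,B
\;\Rightarrow\;
\omega = c\,k .
```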
Zbigniew Karkuszewski, March 14^th, 2008
Does such a subgroup exist?
I am looking for a certain masa in a $II_1$ factor which is singular and has nontrivial Takesaki invariant. For this I am looking for an example of an inclusion of groups $H\subset G$ such that:
• $G$ is a countable icc (infinite conjugacy class) group
• $H$ is abelian
• $\forall g\in G-H,\{ hgh^{-1} |h\in H \}$ is infinite
• $ |H\backslash G/H| \geq 3$
• there exists $g\in G-H$ and $h_1\neq h_2\in H$ with $h_1 g=gh_2$.
Does such an example exist?
gr.group-theory oa.operator-algebras von-neumann-algebras
4 Answers
[generalization of Agol's answer]
Take $H$ a group and let $K$ act on $H$ by automorphisms (write the action as $\sigma$) and consider $G=H\rtimes_\sigma K$. Then
• condition 2 is satisfied if $H$ is abelian
• condition 4 is satisfied if $K$ contains at least 3 elements
• condition 5 is satisfied if $K$ acts non-trivially
• condition 3 is satisfied if $\{h^{-1}\sigma_k(h) : h\in H\}$ is infinite for all $k \in K-\{e\}$
• condition 1 is satisfied if $K$ acts with infinite orbits on $H$ and condition 3 is satisfied.
In fact, both examples (once Yemon's is modified) are of this type. – Richard Kent Nov 24 '09 at 13:16
I think a lattice in the 3-dimensional solvable Lie group Sol works. For any 2x2 matrix A ∈ SL₂(ℤ) with tr(A) > 2, take the extension G of H=ℤ^2 by ℤ, where 1 acts by A on ℤ^2. We may write elements of G as (k, h), k ∈ ℤ, h ∈ ℤ^2. The subgroups (k,0) and (0,h) are additive in the coordinates, and (k,0)(0,h)=(k,h). We have the relation (0,h)(1,0) = (1,0)(0, A(h)) (so the 5th condition holds). For example, the matrix
$$\begin{pmatrix} 2 & 1 \\\\ 1 & 1\end{pmatrix}$$
gives rise to the fundamental group of 0-framed surgery on the figure 8 knot. G is countable icc, and H=ℤ^2 is a normal subgroup with G/H = ℤ, so the 2nd and 4th conditions are satisfied. The 3rd condition is satisfied, since for h ∈ H = ℤ^2, (0,h)(k,g)(0,-h) = (k, g + A^k(h) - h), which one can see is infinite as one varies h (for k ∈ ℤ - 0).
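The relations in this answer can be checked by brute force. Below is a small sketch (mine, not part of the answer) that models G as pairs (k, h) with the product (k, h)(k', h') = (k + k', A^{k'} h + h'); it reproduces the relation (0,h)(1,0) = (1,0)(0, A(h)) and shows the H-conjugates of a fixed element varying with h:

```python
# Brute-force check of the extension G of Z^2 by Z, with A = [[2,1],[1,1]].
A = ((2, 1), (1, 1))
A_INV = ((1, -1), (-1, 2))  # det A = 1, so the inverse is integral

def apply(mat, h):
    (a, b), (c, d) = mat
    return (a * h[0] + b * h[1], c * h[0] + d * h[1])

def apply_pow(k, h):
    """Apply A^k to h; negative k uses the inverse matrix."""
    mat = A if k >= 0 else A_INV
    for _ in range(abs(k)):
        h = apply(mat, h)
    return h

def mul(x, y):
    """(k, h)(k', h') = (k + k', A^{k'} h + h')."""
    (k1, h1), (k2, h2) = x, y
    t = apply_pow(k2, h1)
    return (k1 + k2, (t[0] + h2[0], t[1] + h2[1]))

def inv(x):
    k, h = x
    t = apply_pow(-k, h)
    return (-k, (-t[0], -t[1]))

# The 5th condition: (0, h)(1, 0) == (1, 0)(0, A(h))
h = (1, 0)
lhs = mul((0, h), (1, (0, 0)))
rhs = mul((1, (0, 0)), (0, apply(A, h)))
print(lhs == rhs)  # True

# The 3rd condition, sampled: the conjugates (0,h)(k,g)(0,-h) = (k, g + A^k(h) - h)
# are pairwise distinct as h ranges over a 7x7 patch of Z^2 (here k = 1, g = 0).
g_el = (1, (0, 0))
conjugates = {mul(mul((0, (i, j)), g_el), inv((0, (i, j))))
              for i in range(-3, 4) for j in range(-3, 4)}
print(len(conjugates))  # 49
```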
This is off the cuff, and I'm very much a dilettante when it comes to group theory, so I hope there isn't an error in what follows. Corrections welcome, of course.
[EDIT: it has been pointed out below that the group given below doesn't quite work. I'm leaving the bulk of this "answer" here, in case it suggests a correct solution or warns people off
the same mistake I made.]
I think the group $G$ with presentation $\langle g, h \mid hg = gh^n \rangle$, where $n\geq 2$, will do the job, with $H$ being the subgroup generated by $h$. [Conditions 2, 5]

Elements of this group have a normal form with all the $g$s on the left and all the $h$s on the right. [EDIT: this is not quite right; one has to take care over negative powers of $g$.] Multiplying on the left or on the right by an element of $H$ should, once we bring to normal form, not change the index of $g$ in the normal form, and so there are infinitely many double $H$-cosets, taking care of Condition 4.

Also, given an element of the form $g^ah^b$ where $a\neq 0$, some back-of-the-envelope scribbling indicates that repeated conjugation by $h$ ought to increase the absolute value of the index of $h$ in the resulting normal form, so that conjugation by $h$ cannot be an operation of finite order. That would take care of Condition 3. [EDIT: this is incorrect/insufficient, see comments below.]

Finally, I think Condition 1 should follow from some further case-by-case analysis (given a non-identity element in $H$, conjugate repeatedly by $g$; all the elements in $G-H$ are taken care of by condition 3).

(The group $G$ is an example of a Baumslag-Solitar group, and these beasts have been quite well studied over the years, I'm told. I don't know if you can play similar games with other B-S groups.)
Those Baumslag-Solitar groups are almost good: icc with a lot of cosets. But the problem is that condition 3 is not verified (look at the element $ghg^{-1}$). Thanks anyway, we are not so far from an example, I hope. – Arnaud Brot Nov 24 '09 at 1:50
There is a map $G \to Z$ given by killing $h$. The kernel of this map is abelian. Can you use the same $G$ and let $H$ be this kernel? Anything not in the kernel will have nonzero exponent sum in $g$, which might help you with condition 3. – Richard Kent Nov 24 '09 at 2:14
It seems to work with the kernel of your map. The elements of $G$ have the form $g^k h^l g^{-s}$ and those in the kernel are the $g^k h^l g^{-k}$. Then we have condition 3, and we still have an infinity of cosets. Thank you a lot for your help. – Arnaud Brot Nov 24 '09 at 4:02
In case it isn't obvious to the reader, all credit belongs to Richard; I just made some false statements, which is easy as any fule kno. Richard, do you want to write this up as an
answer rather than a comment, so we can vote that up and let my error sink? – Yemon Choi Nov 24 '09 at 4:26
Oh, I don't deserve much credit, I just knew that bigger subgroup was abelian, Yemon had the really good idea of using G and its normal form. – Richard Kent Nov 24 '09 at 13:01
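The kernel idea in these comments can be made concrete with a standard model (my sketch, under my own conventions, not from the thread): realize $G=\langle g, h \mid hg = gh^n \rangle$ as affine maps of the line, with $g: x \mapsto x/n$ and $h: x \mapsto x+1$. Then $g^{-1}hg = h^n$, and each $g^k h^l g^{-k}$ is the translation $x \mapsto x + l/n^k$, so the kernel of the map to $\mathbb{Z}$ is the abelian group of such translations:

```python
# Model BS(1, n) = <g, h | hg = gh^n> by affine maps x -> a*x + b,
# stored as pairs (a, b) of exact rationals.
from fractions import Fraction as F

n = 2
g = (F(1, n), F(0))   # x -> x/n
h = (F(1), F(1))      # x -> x + 1
g_inv = (F(n), F(0))  # x -> n*x

def comp(p, q):
    """Composition p o q: apply q first, then p."""
    (a1, b1), (a2, b2) = p, q
    return (a1 * a2, a1 * b2 + b1)

def power(p, k):
    r = (F(1), F(0))  # identity map
    for _ in range(k):
        r = comp(r, p)
    return r

# The defining relation, read as composition: h o g == g o h^n.
print(comp(h, g) == comp(g, power(h, n)))  # True

# A kernel element g^k h^l g^{-k}: a pure translation by l / n^k.
k, l = 3, 5
elem = comp(power(g, k), comp(power(h, l), power(g_inv, k)))
print(elem)  # (Fraction(1, 1), Fraction(5, 8))
```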
A more singular example:
Take an infinite index inclusion of abelian groups $K\subset H$. Let a non-trivial group $L$ act on $K$ by automorphisms. Then the amalgamated free product $G=H\underset{K}{\ast} (K\rtimes L)$ satisfies the conditions. Moreover, $L(H)\subset L(G)$ is a singular masa. One can use the results of Ioana, Peterson and Popa for this, but maybe there are more elementary ways to see it.
Math Help

September 20th 2010, 08:41 PM    #1
Joined: Apr 2010

If a point C lies between points A and B, can it be proven from Euclid's postulates that C lies on line AB?

September 20th 2010, 10:02 PM    #2
Joined: Dec 2008
From: The Netherlands

This is not an area of mathematics that I'm particularly good at, but I'm pretty sure the answer is "no", unless you can define "lies between". For example, you could argue that "lies between" means "has equal distance to", in which case the statement isn't even true, let alone provable from Euclid's postulates.