content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
BE540 Topic - Hypothesis Testing
Topic 7 - Hypothesis Testing
Scroll down for (1) PubHlth 540 2008, (2) Links to Additional Readings, (3) Links to Illustrative Applets, (4) Links to Calculators, and (5) Computer Illustrations, and (6) Miscellaneous.
Download adobe reader (click here)
updated 12-1-2008
(1) PubHlth 540 2008
Lecture Notes Topic 7. Hypothesis Testing (pdf, 55 pp)
Reading in Text (Rosner, 6th Edition) Logic of Testing pp 226-230
Reading in Text (Rosner, 6th Edition) One Sample Tests pp 230-245, 267-270, 270-274
Reading in Text (Rosner, 6th Edition) Paired Data Tests pp 298-302
Reading in Text (Rosner, 6th Edition) Two Sample Tests pp 304-310, 310-317, 317-322
Reading in Text (Rosner, 6th Edition) Sample Size & Power pp 245-252, 253-259, 331-334
Week 11 Practice Problems optional, full credit already given (pdf) Solutions (pdf)
Self Evaluation Quiz forthcoming Solutions forthcoming
(2) Links to Additional Readings (articles, lectures, tutorials, powerpoint, etc)
(Source: Stanford University) Powerpoint introduction to confidence intervals and hypothesis testing at Stanford (html)
(Source: University of Michigan)Powerpoint introduction to confidence intervals and hypothesis testing at University of Michigan (html)
(3) Links to Illustrative Applets
(Source: R. Hale Penn State) Educational Psychology 9. One Sample z and t tests (html)
(Source: R. Hale Penn State) Educational Psychology 10. Two Sample tests (html)
(4) Links to Calculators
(Source: Texas A&M Cybernostics Projects) Lots of nice calculators (html)
(Source: Gary H. McClelland Univ. Colorado, Boulder) Normal Distribution Calculator(html)
(Source: Texas A&M) Student t Distribution calculator (html)
(Source: Texas A&M) Chi Square Distribution Calculator (html)
Source: Texas A&M) F Distribution Calculator (html)
(Source: Stat Trek Online Statistical Table) F Distribution Calculator (html)
(5) Computer Illustrations
(6) Miscellaneous | {"url":"http://people.umass.edu/~biep540w/webpages/testing.htm","timestamp":"2014-04-18T18:11:25Z","content_type":null,"content_length":"14619","record_id":"<urn:uuid:25fcbc78-3137-4b40-b341-68c9fcd0a44f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00009-ip-10-147-4-33.ec2.internal.warc.gz"} |
Printing Floats
22 Printer 22.1 The Lisp Printer 22.1.3 Default Print-Object Methods 22.1.3.1 Printing Numbers
22.1.3.1.3 Printing Floats
If the magnitude of the float is either zero or between 10^-3 (inclusive) and 10^7 (exclusive), it is printed as the integer part of the number, then a decimal point, followed by the fractional part
of the number; there is always at least one digit on each side of the decimal point. If the sign of the number (as determined by float-sign) is negative, then a minus sign is printed before the
number. If the format of the number does not match that specified by *read-default-float-format*, then the exponent marker for that format and the digit 0 are also printed. For example, the base of
the natural logarithms as a short float might be printed as 2.71828S0.
For non-zero magnitudes outside of the range 10^-3 to 10^7, a float is printed in computerized scientific notation. The representation of the number is scaled to be between 1 (inclusive) and 10
(exclusive) and then printed, with one digit before the decimal point and at least one digit after the decimal point. Next the exponent marker for the format is printed, except that if the format of
the number matches that specified by *read-default-float-format*, then the exponent marker E is used. Finally, the power of ten by which the fraction must be multiplied to equal the original number
is printed as a decimal integer. For example, Avogadro's number as a short float is printed as 6.02S23.
For related information about the syntax of a float, see Section 2.3.2.2 Syntax of a Float. | {"url":"http://franz.com/support/documentation/current/ansicl/subsubsu/printin0.htm","timestamp":"2014-04-17T06:55:25Z","content_type":null,"content_length":"6528","record_id":"<urn:uuid:2d5d3696-0cf7-4d7e-9cb3-d39afbc7f68e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coppell Algebra 2 Tutor
...Outside of classes I took, I have a lot of experience with genetics in the practical setting of the laboratory. My research at SMU focused on genetic pathways. On one of my projects I had to
construct a very specific fly strain, a fly strain that didn't already exist.
30 Subjects: including algebra 2, reading, chemistry, English
...I am currently a senior mathematics major at UT Dallas with a cumulative GPA of 3.78. I completed advanced math courses in high school, culminating with AP Calculus and AP Statistics. I
achieved a 5 on the AP Calculus BC exam and a 4 on the AP Statistics exam.
7 Subjects: including algebra 2, algebra 1, SAT math, ACT Math
...I do not have experience with Microsoft ACCESS. I can help with query strategies and with syntax of most query statements. I am a certified tutor in all math topics covered by the ASVAB.
15 Subjects: including algebra 2, chemistry, physics, calculus
...I do not have another job besides helping you. On top of languages, I decided to put together an SAT/ACT course after studying for my GRE (Graduate School Exam). I based my SAT/ACT course on
how my GRE course was structured. I have had 100% success on raising scores.
29 Subjects: including algebra 2, reading, Spanish, GRE
...I know I can help you reach a solid understanding of the concepts and methods involved! With practice and step by step discussions, you can lay a foundation that will help you far beyond a
single Geometry class! Pre-Algebra can be fun!
23 Subjects: including algebra 2, English, writing, calculus | {"url":"http://www.purplemath.com/coppell_tx_algebra_2_tutors.php","timestamp":"2014-04-16T10:56:47Z","content_type":null,"content_length":"23676","record_id":"<urn:uuid:0d302ab2-fe35-4c32-ad11-9a79b47a3b4d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brainfuck algorithms
From Esolang
This article presents a number of algorithms for use with the Brainfuck language.
In the interest of generality, the algorithms will use variable names in place of the < and > instructions. Temporary cells are denoted "temp". When using an algorithm in a program, replace the
variable names with the correct number of < or > instructions to position the pointer at the desired memory cell.
If "a" is designated to be cell 1, "b" is cell 4, and the pointer is currently at cell 0, then:
If a particular algorithm requires cell value wrapping, this will be noted, along with a non-wrapping version, if known. Certain assumptions, such as that a temporary memory cell is already zero, or
that a variable used for computation can be left zeroed, are not made. Some optimizations can therefore be performed when exact conditions are known.
Header comment[edit]
The usefulness of this type of comment is that instructions commonly used for punctuation (such as "," and ".") may be used freely. The use of "[" and "]" inside a comment should be avoided, unless
they are matched. This commenting style does not work well for internal code comments, unless strategically placed where the cell value is known to be zero (or can be modified to be zero and
To make no assumption about the initial cell value, use:
Since loops only terminate when / if the current cell is zeroed, comments can safely be placed directly behind any other loop.
EsoAPI 1.0 installation check[edit]
Attribution: Jeffry Johnston
.-.>-.---.>>]<[ EsoAPI compatible program -]
Read all characters into memory[edit]
Attribution: User:quintopia
This will clear a cell no matter its sign in an unbounded signed implementation. Also clobbers the cell to the left of x and the cell to the right of temp.
[† Each of these lines should have their polarities reversed if temp, the cell to the right of temp, and the cell to the left of x, respectively, contain a negative value.]
Note that rather than just clearing the two cells at temp, one could condition upon the sign that x had.
x = x + y[edit]
x = x - y[edit]
x = x * y[edit]
x = x / y[edit]
Attribution: Jeffry Johnston
swap x, y[edit]
Requires another variable signx, where 0 = positive, 1 = negative.
x = not x (bitwise)[edit]
Produces an answer for 8-bit cells. For other sized cells, set temp1 to 2^(bits)-1.
Find a zeroed cell[edit]
To the right[edit]
To the left[edit]
x(y) = z (1-d array) (2 cells/array element)[edit]
Attribution: Jeffry Johnston
The cells representing x, temp0, and temp1 must be contiguous, with x being the leftmost cell and temp1 the rightmost, followed by adequate memory for the array. Each array element requires 2 memory
cells. The pointer ends at x.
The code up through "-]+" creates a trail of 1's that the later loops will use to find the destination cell. The cells are grouped as "data cell, 1-cell". The destination cell is "data cell, 0-cell",
so that the "[>>]" stops in a useful place. The x cell is always 0, and serves as the left-side stop for the "[<<]" statements (notice that t1 is cleared by the first loop, but the loop's trailing
"+" converts it to the first 1-cell in the trail). Next, the trail is followed and "[-]" clears the destination cell. The array is now prepared, so an add-to loop of the form "temp0[dest+temp0-]"
moves the value in temp0 to the destination cell. Finally, with ">[>>]" the trail of 1's is followed one last time forward, and cleared on the way back, ending at the left stop, x. Contiguous memory
required for the array is 3 + 2 * number of array elements.
x = y(z) (1-d array) (2 cells/array element)[edit]
Attribution: Jeffry Johnston
The cells representing y, temp0, and temp1 must be contiguous, with x being the leftmost cell and temp1 the rightmost, followed by adequate memory for the array. Each array element requires 2 memory
cells. The pointer ends at y.
y>>[[>>]+[<<]>>-]+[>>]<[<[<<]>+< (pointer is at y)
x(y) = z (1-d array) (1 cell/array element)[edit]
Attribution: Konstantinos Asimakis
The cells representing space, index1, index2 and Data must be contiguous and initially empty (zeroed), with space being the leftmost cell and Data the rightmost, followed by adequate memory for the
array. Each array element requires 1 memory cell. The pointer ends at space. index1, index2 and Data are zeroed at the end.
For an explanation on how this algorithm works read this article.
x = y(z) (1-d array) (1 cell/array element)[edit]
Attribution: Konstantinos Asimakis
The cells representing space, index1, index2 and Data must be contiguous and initially empty (zeroed), with space being the leftmost cell and Data the rightmost, followed by adequate memory for the
array. Each array element requires 1 memory cell. The pointer ends at data. index1, index2 and Data are zeroed at the end.
x = x == y[edit]
Attribution: Jeffry Johnston
The algorithm returns either 0 (false) or 1 (true).
And if you don't need to preserve x or y, the following does the task without requiring any temporary blocks. Returns 0 (false) or 1 (true).
x = x != y[edit]
Attribution: Jeffry Johnston
The algorithm returns either 0 (false) or 1 (true).
x = x <= y[edit]
Attribution: Ian Kelly
x and y are unsigned. temp1 is the first of three consecutive temporary cells. The algorithm returns either 0 (false) or 1 (true).
temp1[-] >[-]+ >[-] <<
y[temp0+ temp1+ y-]
temp0[y+ temp0-]
x[temp0+ x-]+
temp1[>-]> [< x- temp0[-] temp1>->]<+<
temp0[temp1- [>-]> [< x- temp0[-]+ temp1>->]<+< temp0-]
x = x < y[edit]
Attribution: Ian Kelly
x and y are unsigned. temp1 is the first of three consecutive temporary cells. The algorithm returns either 0 (false) or 1 (true).
temp1[-] >[-]+ >[-] <<
y[temp0+ temp1+ y-]
temp1[y+ temp1-]
x[temp1+ x-]
temp1[>-]> [< x+ temp0[-] temp1>->]<+<
temp0[temp1- [>-]> [< x+ temp0[-]+ temp1>->]<+< temp0-]
z = x ≥ y[edit]
Attribution: User:ais523
This uses balanced loops only, and requires a wrapping implementation (and will be very slow with large numbers of bits, although the number of bits otherwise doesn't matter.) The temporaries and x
are left at 0; y is set to x-y. (You could make a temporary copy of x via using another temporary that's incremented during the loop.)
x[ temp0+
y[- temp0[-] temp1+ y]
temp0[- z+ temp0]
temp1[- y+ temp1]
y- x- ]
z = sign(x-y)[edit]
Attribution: User:quintopia
This is a comparison of two numbers for non-wrapping implementations. The signs of the two numbers must be known. Part of it can also be used to find the sign of an unknown number if both it and its
opposite are available. The four cells to the right of z must be free and clear (and will be again when the algorithm terminates), an assumption that must be made in a non-wrapping implementation, as
the direction to clear these cells could not be known to this algorithm. The code blocks indicated by parenthetical comments could contain code which depends on the result of the comparison; there is
no particular reason in practice to wait for the value of z to be set to one of {-1,0,1}.
†The polarity of these lines should be reversed if x and y are negative.
x = not x (boolean, logical)[edit]
Attribution: Jeffry Johnston
The algorithm returns either 0 (false) or 1 (true).
x = x and y (boolean, logical)[edit]
Attribution: Jeffry Johnston
The algorithm returns either 0 (false) or 1 (true).
x = x or y (boolean, logical)[edit]
Attribution: Jeffry Johnston
The algorithm returns either 0 (false) or 255 (true).
if (x) { code }[edit]
or alternatively:
if (x == 0) { code }[edit]
Attribution: Jeffry Johnston (39 OPs)
Daniel Marschall (33 ops):
if (x) { code1 } else { code2 }[edit]
Attribution: Jeffry Johnston (39 OPs)
Attribution: Daniel Marschall (32 OPs)
This is an alternate approach. It's more efficient since it doesn't require copying x, but it does require that temp0 and temp1 follow x consecutively in memory.
Attribution: Ben-Arba (25 OPs)
x = pseudo-random number[edit]
Attribution: Jeffry Johnston
This algorithm employs a linear congruential generator of the form:
V = (A * V + B) % M
A = 31821, B = 13849, M = period = 65536, V = initial seed
A and B values were obtained from the book:
Texas Instruments TMS320 DSP DESIGNER'S NOTEBOOK Number 43 Random Number Generation on a TMS320C5x, by Eric Wilbur
Assumes 8-bit cells. After the code is executed, the variable "x" holds a pseudo-random number from 0 to 255 (the high byte of V, above). The variable cells "randomh" and "randoml" are the internal
random number seed and should not be altered while random numbers are being generated.
Divmod algorithm[edit]
A clever algorithm to compute div and mod at the same time:
# >n 0 d
# >0 n d-n%d n%d n/d
If one does not need to preserve n, use this variant:
# >n d
# >0 d-n%d n%d n/d
Print value of cell x as number[edit]
x >>++++++++++<<[->+>-[>+>>]>[+[-<+>]>+>>]<<<<<<]>>[-]>>>++++++++++<[->-[>+>>]>[+[-
Solution from Stack Overflow.
See also[edit] | {"url":"http://esolangs.org/wiki/brainfuck_algorithms","timestamp":"2014-04-18T10:34:08Z","content_type":null,"content_length":"53050","record_id":"<urn:uuid:239906ae-6315-4018-bb74-a09fbf9d81f5>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
little-o notation
Definition: A theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items. Informally, saying some equation
f(n) = o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity. The notation is read, "f of n is little oh of g of n".
Formal Definition: f(n) = o(g(n)) means for all c > 0 there exists some k > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ k. The value of k must not depend on n, but may depend on c.
Generalization (I am a kind of ...)
big-O notation.
See also ω(n).
Note: As an example, 3n + 4 is o(n²) since for any c we can choose k > (3+ √(9+16c))/2c. 3n + 4 is not o(n). o(f(n)) is an upper bound, but is not an asymptotically tight bound.
Strictly, the character is the lower-case Greek letter omicron.
Author: PEB
Little o is a Landau Symbol.
Go to the Dictionary of Algorithms and Data Structures home page.
If you have suggestions, corrections, or comments, please get in touch with Paul E. Black.
Entry modified 17 December 2004.
HTML page formatted Tue Dec 6 16:16:32 2011.
Cite this as:
Paul E. Black, "little-o notation", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. (accessed
TODAY) Available from: http://www.nist.gov/dads/HTML/littleOnotation.html | {"url":"http://www.darkridge.com/~jpr5/mirror/dads/HTML/littleOnotation.html","timestamp":"2014-04-19T04:38:42Z","content_type":null,"content_length":"3240","record_id":"<urn:uuid:f367c285-596f-481f-884a-735cf739da81>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
Life on the lattice
This post is intended as the first in a series about techniques for the extraction of information on excited states of hadrons from lattice QCD calculations.
As a reminder, what we measure in lattice QCD are correlation functions
of composite fields
. From Feynman's functional integral formula, these are equal to the vacuum expectation value of the corresponding products of operators. Changing from the Heisenberg to the Schrödinger picture, it
is straightforward to show that (for infinite temporal extent of the lattice) these have a spectral representation
C(t)=Σ[n] |ψ[n]|^2 e^-E[n]t
, which in principle contains all information about the energies
and matrix elements
of all states in the theory.
The problem with getting that information from the theory is twofold: Firstly, we only measure the correlator on a finite number of timeslices; the task of inferring an infinite number of
from a finite number of
is therefore infinitely ill-conditioned. Secondly, and more importantly, the measured correlation functions have associated statistical errors, and the number of timeslices on which the excited
states' (
) contributions are larger than the error is often rather small. We are therefore faced with a difficult data analysis task.
The simplest idea of how to extract information beyond the ground state would be to just perform a multi-exponential fit with a given number of exponentials on the measured correlator. This approach
fails spectacularly, because multi-exponential fits are rather ill-conditioned. One finds that changing the number of fitted exponentials will affect the best fit values found rather strongly,
leading to a large and unknown systematic error; moreover, the fits will often tend to wander off into unphysical regions (negative energies, unreasonablely large matrix elements for excited states).
This instability therefore needs addressing if one wishes to use a
-based method for the analysis of excited state masses.
The first such stabilisation that has been proposed and is widely used is known as
constrained fitting
. The idea here is to augment the χ
functional by prior information that one has about the spectrum of the theory (such as that energies are positive and less than the cutoff, but if one wishes also perhaps more stringent constraints
coming e.g. from effective field theories or models). The reason one may do this is
Bayes' theorem
, which can be read as stating that the probability distribution of the parameters
given the data
is the product of the probability distribution of the data given the parameters times the probability distribution of the parameters absent any data:
P(M|D)=P(D|M)/P(D) P(M)
; taking the logarithm of both sides and maximising of
, we then want to maximise
log(P(D|M)) + log(P(M))
. Now
is known to be proportional to
, so if
was completely flat, we would end up minimizing
. If we take
to be Gaussian instead, we end up with an augmented
that contains an additional term
Σ[n] (M[n]-I[n])^2/σ[n]^2
that forces the parameters
towards their initial guesses ("priors")
, and hence stabilises the fit -- in principle even with an infinite number of fit parameters. The widths
are arbitrary in principle; fitted values
that noticeably depend on
are determined by the priors and not the data and must be discarded. In practice the lowest few energies and matrix elements do not show a significant dependence on
or on the number of higher states included in the fit, and may therefore be taken to have been determined by the data.
Bayesian fitting is a very powerful tool, but not everyone is happy with it. One objection is that adding any external information, even as a constraint, compromises the status of lattice QCD as a
first-principles determination of physical quantities. Another common worry is the GIGO (garbage in-garbage out) principle with regards to the priors.
A way to address the former concern that has been proposed is the
Sequential Empirical Bayes Method
(SEBM). Here, one first performs an unstabilised single-exponential fit at large times
, where the ground state is known to dominate. Then one performs a constrained two-exponential fit over a larger range of
using the first fit result as a prior (with its error as the width). The result of this fit is then used as the prior in another three-exponential fit over an even larger time range, and so forth.
(There is some variation as to the exact procedure followed, but this is the basic idea). In this way, all priors have been determined by the data themselves.
In the next post of this series we will look at a completely different approach to extracting excited state masses and matrix elements that does not rely on
at all.
There was a time when the only textbooks on lattice QCD were Montvay&Münster and Creutz. Not so any more. Now the new textbook "Quantum Chromodynamicson the Lattice: An Introductory Presentation" by
Christof Gattringer and Christian Lang (Lecture Notes in Physics 788, Springer) offers a thorough and accessible introduction for beginners.
Gattringer and Lang start from a derivation of the path integral in the context of Quantum Mechanics, and after deriving the naive discretisation of lattice fermions and the Wilson gauge action
present first the lattice formulation of pure gauge theory, including the Haar measure and gauge fixing, with Wilson and Polyakov loops and the static quark potential as the observables of interest.
Numerical simulation techniques for pure gauge theory are discussed along with the most important data analysis methods. Then fermions are introduced properly, starting from the properties of
Grassmann variables and a discussion of the doubling problem and the Wilson fermion action, followed by chapters on hadron spectroscopy (including some discussion of methods for extracting excited
states), chiral symmetry on the lattice (leading through the Nielsen-Ninomiya theorem and the Ginsparg-Wilson relation to the overlap operator) and methods for dynamical fermions. Chapters on
Symanzik improvement and the renormalisation group, on lattice fermion formulations other than Wilson and overlap, on matrix elements and renormalisation, and on finite temperature and density round
off the volume.
The book is intended as an introduction, and as such it is expected that more advanced topics are treated briefly or only hinted at. Whether the total omission of lattice perturbation theory (apart
from a reference to the review by Capitani) is justified probably depends on your personal point of view -- the book clearly intends to treat lattice QCD as a fully non-perturbative theory in all
respects. There are some other choices leading to the omission or near-omission of various topics of interest: The Wilson action is used both for gluons and quarks, although staggered, domain wall
and twisted mass fermions, as well as NRQCD/HQET, are discussed in a separate chapter. The calculation of the spectrum takes the front seat, whereas the extraction of Standard Model parameters and
other issues related to renormalisation are relegated to a more marginal position.
All of these choices are, however, very suitable for a book aimed at beginning lattice theorists who will benefit from the very detailed derivations of many important relations that are given with
many intermediate steps shown explicitly. Very little prior knowledge of field theory is assumed, although some knowledge of continuum QFT is very helpful, and a good understanding of general
particle physics is essential. The bibliographies at the end of each chapter are up to date on recent developments and should give readers an easy way into more advanced topics and into the research
In short, this book is a gentle, but thorough introduction to the field for beginners which may also serve as a useful reference for more advanced students. It definitely represents a nice addition
to your QCD bookshelf. | {"url":"http://latticeqcd.blogspot.com/2010_01_01_archive.html","timestamp":"2014-04-20T21:27:52Z","content_type":null,"content_length":"93161","record_id":"<urn:uuid:05e5c0a6-3d43-4da1-b09c-d9b04bcf1846>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
Aliso Viejo Accounting Tutor
...I help you with homework and on-line tests. I prefer to teach you the concepts, but I understand when you mostly just want to get the tests and homework done / passed. Many of my students noted
I am able to very quickly dive into their curriculum and understand and explain how to derive the (ri...
3 Subjects: including accounting, finance, Microsoft Excel
...We take the time necessary to discuss the definitions of the terms we use and the principles of the subject area in which we're working. We work together through as many specific questions and
problems as necessary to make the client comfortable enough to work without me. The best tutor is the one who works to make his help unnecessary.
13 Subjects: including accounting, English, physics, calculus
...Please let me know what other information you may need. I hold a 2nd dan black belt in Tae Kwon Do. I have 15 years of teaching experience including group classes and private lessons.
24 Subjects: including accounting, statistics, finance, economics
...I have worked in both public and private firm with 9+ years of industry experience. I have strong background in accounting, marketing, business, and with the government. Currently work as an
auditor for the state.
8 Subjects: including accounting, algebra 1, finance, business
I am a CPA looking to help students learn accounting. I graduated with a Bachelors degree in accounting from California State University Northridge in May 2008. While I was in college, I was a
part of the accounting association, Beta Alpha Psi, which has high grade point standards and requires members to tutor other college students.
1 Subject: accounting | {"url":"http://www.purplemath.com/aliso_viejo_accounting_tutors.php","timestamp":"2014-04-16T22:21:30Z","content_type":null,"content_length":"23927","record_id":"<urn:uuid:7fdd4a0d-9e9a-415a-a247-51cb5dfccbaf>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
code that produces all possible trees with n nodes.
up vote 1 down vote favorite
I'm looking for code that produces all possible trees with no self edges (or their adjacent matrices) with n nodes, anyone have any idea if this is written anywhere?
graph-theory graph-colorings
1 See stackoverflow.com. – Ricky Demer Jan 18 '11 at 2:46
2 To prevent comments such as the one above, ask "An algorithm that..." instead of "code that...". Usually, you'll also get an implementation of that algorithm. – Derrick Stolee Jan 18 '11 at 3:52
3 Why is this not appropriate? There are plenty of contexts (often involving operads) where it is useful to have lists of trees to test conjectures and so on. – Neil Strickland Jan 18 '11 at 11:35
2 Dear Neil, if marvin wants to use it for some mathematical reason (for operads, for instance), then he should give background on his application. The more background one gives, the less likely one
will be sent to SO. – Harry Gindi Jan 18 '11 at 11:42
The question seems completely fine. Trees are a basic mathematical object; maybe the poster is just interested in properties of the set of trees on n nodes. For the purpose of asking for the code,
2 he really doesn't need to tell us precisely which properties. I don't think being pointed to stackoverflow is helpful: stackoverflow is for questions about programming. It seems just as likely
that professional mathematicians will know of a tool for generating lists of trees than that professional programmers will, so mathoverflow seems at least as suitable as stackoverflow, probably
more so. – James Martin Jan 18 '11 at 13:01
show 1 more comment
closed as off topic by Harry Gindi, Igor Pak, Felipe Voloch, Mark Sapir, Andy Putman Jan 18 '11 at 17:14
Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you
believe the question can be reworded to fit within the scope. Read more about reopening questions here.If this question can be reworded to fit the rules in the help center, please edit the question.
2 Answers
active oldest votes
In sage the command
produces a list of all trees on 9 vertices. As sage is open source, the code is available for inspection. The command
up vote 14 down vote
[tt.am() for tt in graphs.trees(9)]
will provide the adjacency matrices.
add comment
It is well known that there is a bijection between the set of trees on $n$ nodes and sequences of length $n-2$ with values in $[n]$. These sequences are called Prüfer sequences. Indeed,
up vote 5 the wikipedia page has code which will convert any Prüfer sequence into a tree. So a naïve algorithm would be to run the wikipedia algorithm over all Prüfer sequences.
down vote
add comment
Not the answer you're looking for? Browse other questions tagged graph-theory graph-colorings or ask your own question. | {"url":"http://mathoverflow.net/questions/52371/code-that-produces-all-possible-trees-with-n-nodes/52374","timestamp":"2014-04-19T17:29:44Z","content_type":null,"content_length":"53182","record_id":"<urn:uuid:d30dfc75-be57-4af6-adec-4e9183c83951>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brevet US3229388 - Educational device
Jan. 18, 1966 F. M. SMITH EDUCATIONAL DEVICE 2 Sheets-Sheet 1 Filed July 51, 1963 Hg 6 INVENTOR.
FRANK M. SMITH Jan. 18, 1966 SMITH 3,229,388
EDUCAT IONAL DEVI CE Filed July 31, 1963 2 Sheets-Sheet 2 A no e In.
[a o o 0 7 o o 0 0 o 0 1 INVENTOR. H/ FRANK M. SMITH BY 9,4 Fig. 9 2
AT TORNEY United States Patent 3,229,388 EDUCATIONAL DEVICE Frank M. Smith, 2328 Hollywood Blvd., Hollywood, Fla. Filed July 31, 1963, Ser. No. 298,)29 3 Claims. (CI. 35-70) This invention relates to
an educational device, and, in particular, to a set of blocks, bearing numerical indicia, and peculiarly adapted to teaching the fundamental operations of arithmetic to the very young. It is
appreciated that blocks in various forms and designs for this general purpose have been proposed heretofore, but have apparently failed to find favor or acceptance, due to serious shortcomings, it
is, therefore, a general object of the present inven tion to provide a block system with features representing improvements over prior art devices, and not subject to shortcomings as aforesaid.
Other objects include the following: To provide a universally symmetrical basic unit in the block system, whereby the only variable to be observed is one of length, with consequent simplification of
the object example to the student; to further avoid confusion by distributing the various forms of number indicia about the block, so that, when properly oriented, one at a time is dominantly
viewable; specifically, to provide blocks of square cross section, with indicia on all four sides, and further, to provide such blocks in duplicate, so that all four indicia may be viewed
simultaneously, if desired; to provide for a subtracton process by superposition, as in the case of addition; to provide for objective viewing of the arithmetic operations by surface-to-surface
association of the blocks, whether in side-by-side relation, or upon superposition; and to provide a system which is conveniently boxed and easily removed from its container.
These and other objects, which will be apparent, are attained by the present invention, a preferred form of which is described in the following specification, as illustrated in the drawing, in which:
FIG. 1 is a bracketed view, in perspective, of the set of the two layers of blocks, their storage box, and the cover therefor, shown in exploded form;
FIG. 2 is a front elevational view of the box, on reduced scale;
FIG. 3 is a sectional view through the box and contents, taken on the plane of the line 33 of FIG. 2, on the general scale of that of FIG. 1;
FIG. 4 is a perspective view of one of the blocks;
FIG. 5 is a perspective view, showing two other sides of the block of FIG. 4;
FIG. 6 is a sectional view, transversely of the block of FIG. 4, taken on the plane of the line 6-6 thereof;
FIG. 7 is a bracketed view, which may be considered as either in top plan or side elevation, showing a particular arrangement of two duplicate sets of blocks;
FIG. 8 is a top plan view of the placement of a group of blocks in an elementary problem in addition;
FIG. 9 is a top plan view of an arrangement of a panel of blocks, showing examples of operations of addition and alternative subtraction;
FIG. 10 is a bracketed view in perspective, showing a pair of blocks in position preliminary to an operation of subtraction by superposition; and
FIG. 11 is a perspective view of the blocks of FIG. 10, in final placement.
Referring to the drawings by characters of reference,
3,229,388 Patented Jan. 18, 1966 there is shown a rectangular storage box 10 of generally conventional, rectangular construction, with a slide cover 12. For ready access to the contents, for removal,
the front wall of the box is provided with an inwardly, downwardly slanted, bevelled, upper edge 14. As seen in FIG. 3, the box is also constructed so that the length of its chamber is somewhat in
excess of the maximum length of block, for further reasons of access, as indicated in the phantom line showing in FIG. 3.
With reference to the example shown in FIG. 7, the blocks are in ten different lengths, representing consecutive multiples of a unit block 16, up to a block 18, of a maximum length equal to ten
units, and in the entire complement of blocks, these are provided in at least two, complete, identical sets, for reasons to be set forth hereinafter.
The blocks are in the form of rectangular prisms, of square cross section, and each of the four sides is provided with a form of indicia, which is different on each of the four sides, but is similar,
as a group, on the respective blocks of different length. This is illustrated in FIGS. 4 and 5, showing the four different sides of a block 18 in two different views, one side 20 bearing the cardinal
numeral corresponding to the length of the block, as expressed in the number of multiples of the dimension of the unit cube; another side 22, bearing the word name of the cardinal numeral, a third
side 24-, bearing the sequence of cardinal numbers, from 1 to the maximum representing the length of the block, and a fourth side 26, hearing a series of dots spaced for registering with the sequence
cardinals, lengthwise of the block. To avoid confusion, reference characters have not been applied to all of the blocks, of various sizes, but it will be understood that each has a makeup
corresponding to that of block 18, the only difference being in the name or numeral of the cardinals, and the number of dots. Where reference characters are needed for reference to the other lengths
of blocks, those of block 18 will be employed.
Among people who have matured beyond the stage of early childhood, there is a tendency to become oblivious to the early problems of grasping the significance of the operations of simple arithmetic,
and to take the various rules for granted. For instance, most people take the act of counting, in sequence, as a natural fact of life, without realizing that it is actually the statement of a series
of sums obtained by adding 1 to the preceding number at each step. Although this was once unobvious to most individuals, they have long since lost sight of the fact. FIGURE 8 illustrates a
demonstration of this operation, wherein a block carrying a designation of 2. has been positioned in juxtaposition to a block of length 3. By interposing the block of unit length in the corner space
at the end, the student provides himself with a graphic and forceful illustration of the fact that 1, when added to 2 is the same as 3. He may also go further and place the 2 block at the right end
of the 3 block and the 1 block at the left end thereof, in illustration of another mathematical fact, usually taken for granted, that 2 plus 1 is the same as 1 plus 2. Pursuing this concrete
procedure, the student may pro ceed to find that 1, added to any number produces the next higher number in a sequence which he ultimately understands as the counting process.
The operation illustrated in FIG. 7 is even more elementary than the concept of counting, since it directly illustrates the physical meaning of any cardinal number,
by impressing on the mind that a certain number of dots corresponds to a given numeral, the common ground of recognition being the identical lengths of the two blocks under comparison.
It is also possible to compare the dots with the cardinal number by simultaneously observing two sides of a single block, viewed as in FIG. 5, for instance, or some other rotated position of the
The general problems of addition may involve the use of more than two blocks in association with a single block representing the sum to be determined, or illustrated. However, as a first step, it is
preferable to work with three blocks, as in the case of FIG. 8, but varying the value of the added component. For instance, a rectangular panel of blocks may be arranged as in FIG. 9, so as to add up
to 10, in series of two in each row. Thus, removal of the 10 block will illustrate the problem of subtracting 10 from 10, which leaves 0. Conversely, replacing the 10 block illustrates the addition
of 10 to 0, equalling 10. In the same manner, 1 plus 9 and 2 plus 8 are shown as equalling 10, and so on, in sequence down through 8 plus 2 and 9 plus 1 to 10 plus again, and the converse operations
of withdrawal will illustrate subtraction. Following this stage, the students education may be advanced by the more complicated problem of matching a length with more than two components, such as 2
plus 3 plus 5, to equal 10, and at this stage the students interest will probably have been aroused to a state of independent thinking, due to the challenges involved, and as nourished by the degree
of recreational aspects inherent in the activity. For these advanced problems, certain of the blocks of lesser length, especially length 1, are provided in plurality, beyond the duplicate sets of
blocks shown in FIG. 7, and the array of such blocks, in addition to those shown in in FIG. 9, is shown grouped in the lower layer of the box, in FIG. 1.
Subtraction by superposition is illustrated in FIGS. 10 and 11, wherein the problem consists in subtracting 5 from 7. In this case the student selects the 7 block, and places it so that the surface
24 with the cardinal numbers in sequence, is facing upwardly. Preferably, the sequence will run from right to left, and since the student has learned to count at this stage, he will be able to
understand and accept a mere change in direction. This reversal, constitutes a preferred form because it caters to a natural, and strong inclination to work from the left end ,in superposing the
block constituting the subtrahend, and
while the operation may also be carried out with the cardinal sequence in the normal, left-to-right order this will demand caution and alertness, to superpose at the right end.
Assuming then, that the problem is to subtract 5 from 7, the student will select the 5 block, and lay it on the 7 block, in such manner that their left ends are flush. This leaves exposed the first
two units on upper face 24 of the 7 block, which the student immediately interprets as the equivalent of a block 2 units long, immediately leading to the answer that the remainder, or difference, is
2. This single example adequately illustrates the case of subtraction for any combination of block pairs.
After subtraction by superposition the student may be indoctrinated into the general concept of checking results by selecting a block having the value of his remainder, and performing the addition by
placing it on the vacant corner.
It will be understood that in lieu of facing upwardly, the sequence surface 24 may be arranged to face sidewise, in which case the subtrahend, or short block is placed in front of the surface 24.
Although the surface 20, containing the cardinal numeral is turned upward, as the'working face, in the problem illustrated in FIGS. 10 and 11, any of the other three faces may be used as the
operator, and as in all other problems involving the blocks, two faces will, in general, be either plainly exposed, or easily viewable simultaneously, so that the intercomparisons, which are part of
the educational problem are of frequent occur rence and, therefore, conducive to efiicient results in exploiting the memory process.
Many of the hereinbefore-enumerated objects flow from a construction wherein the blocks are square in cross section, and arranged in lengths which are multiples of the common dimension of the unit
cube. This entails a universality of symmetry in the unit block, which narrows the degree of required perception by the student to a simple comparison of multiples of length, all other things being
equal and, therefore, cancelling out any differences in the basic unit from the array of factors to be considered. This reduction of the problem to simple, and basic factors, is an important
consideration in a first introduction of the very young to such a complicated subject as mathematics.
Another feature of blocks of this construction is the placing thereon of four different forms of indicia of the number representing the length, or number of multiples of the unit cube, and a related
feature is that two of these may be plainly exposed at any position of use or examination.
Yet another important feature resides in the provision of the multiples, in any block, in sequence, on one face. This enables the very competently instructive system of subtraction by superposition,
which is present in addition to the system of subtraction by withdrawal.
In still another feature, the provision of the sequence of blocks, from 1 to 10, in duplicate, leads to the ready comparison study of numbers and physical objects, as illustrated in FIG. 7, and this
same duplication makes possible a complete decimal system of two-block addition or subtraction, as illustrated in FIG. 9.
Preferably, the blocks will be constructed of maple wood, with an outer coating of plastic 28 (FIG. 6), of non-toxic material, which is durable, attractive, comfortable to the feel, and easily
cleaned, and the indicia, shown generally by the numeral 30 in FIG. 6, may be impressed, intaglio, into the plastic in the molding operation, with or without underlying recesses in the wood, or may
be imprinted by a decal process, or other means.
Generally speaking while a certain, preferred embodiment has been shown and described, various modifications will be apparent, in the light of this disclosure, and the invention should not,
therefore, be deemed as limited, except insofar as shall appear from the spirit and scope of the appended claims.
I claim:
1. A complement of educational blocks, each block comprising a rectangular solid of square cross section, and having a length corresponding to a multiple of the dimension of a side of said cross
section, and the lengths of said blocks covering a range from 1 to 10, inclusive of said multiples, and each carrying indicia corresponding to said multiple, said indicia comprising a symbol on at
least one side, and ranging from 1 to the value represented by said symbol, said sequential arrangement running from right to left, in the direction of increasing values; and a rectangular open-top
box having side walls, said complement of blocksin said box, one dimension of said box being slightly in excess of the length of the longest of the blocks, for finger clearance, and one of said walls
having a beveled inner corner on its top edge, three of the side walls of said box being provided with a common groove parallel to the outer edge of the top of said box and the top edge of the other
of said side walls terminates in a line coincident with the fioor of said groove and said top being sized for slideable positioning in said groove to provide a roof for said box, said top being of
planiform construction and including an exteriorly mounted handle to slide the top relative to the side walls.
2. An educational device comprising, a complement of educational blocks, each block having a plastic exterior coating and comprising a rectangular solid, of square cross section, and having a
four-sided length corresponding to a multiple of the dimenison of a side of said cross section, and the respective lengths of said blocks covering a range from 1-10, inclusive, of said multiples,
said blocks having indicia on all four sides, and said indicia comprising on each side respectively, (1) a chief cardinal numeral representative of said multiple, (2) the word for the chief cardinal
numeral, (3) a continuous ascending sequential arrangement of equidistantly-spaced cardinal numerals from one to the chief cardinal numeral and (4) a plurality of equally spaced defined areas
repeated continuously along the block length a number of times equal to the chief cardinal numeral, and a rectangular open top box having side walls for holding said blocks arranged therein in two
layers so that the said lower layer may be used as a counting board, one dimension of said box being slightly in excess of the length of the longest of the blocks for finger clearance and one of said
walls having a beveled inner edge.
3. An educational device as set forth in claim 2 wherein the indicia are in relief so that a user may manipulate the blocks by feel without actually seeing the indicia.
Reiereuces Cited by the Examiner UNITED STATES PATENTS 2,494,497 1/ 1950 Trapnell 3 1.4 2,795,863 6/1957 Warwick 35-73 X 2,950,542 8/1960 Steelman 353 1.8 3,002,295 10/ 1961 Armstrong 3531 3,094,792
6/1963 Morgan et a1 35----31 FOREIGN PATENTS 321,251 5/ 1902 France.
EUGENE R. CAPOZIO, Primary Examiner.
LAWRENCE CHARLES, Examiner.
W. GRIEB, Assistant Examiner. | {"url":"http://www.google.fr/patents/US3229388","timestamp":"2014-04-18T15:50:44Z","content_type":null,"content_length":"70969","record_id":"<urn:uuid:172a1685-243c-4d7e-b542-f9a7494f3b4e>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
Programming Praxis - Minimum Spanning Tree: Prim’s Algorithm
Programming Praxis – Minimum Spanning Tree: Prim’s Algorithm
In today’s Programming Praxis exercise we have to implement an algorithm for finding the minimum spanning tree of a graph. The Scheme solution weighs in at 15 lines, so let’s see if we can do better.
As usual, some imports:
import Data.List
import qualified Data.List.Key as K
Both the Scheme solution and the pseudocode algorithm on Wikipedia take both a list of vertices and a list of edges as input, but since the list of vertices is implicitly defined in the edges there’s
really no point in specifying it separately. To get the starting vertex, we just take the first vertex of the first edge. Other than that, we do pretty much what the pseudocode algorithm says: check
if there’s an edge with one connected and one unconnected point. If multiple exist, add the shortest. Add the unconnected vertex to the list of connected ones. Stop if there are no more edges with
unconnected vertices.
prim :: (Eq a, Ord b) => [(a, a, b)] -> [(a, a, b)]
prim es = f [(\(v,_,_) -> v) $ head es] [] where
f vs t = if null r then t else f (union vs [x,y]) (m:t) where
r = filter (\(a,b,_) -> elem a vs /= elem b vs) es
m@(x,y,_) = K.minimum (\(_,_,c) -> c) r
A quick test shows we get the same result as the Scheme version:
main :: IO ()
main = print $ prim [('A', 'D', 5), ('A', 'B', 7), ('B', 'D', 9)
,('B', 'C', 8), ('C', 'E', 5), ('B', 'E', 7)
,('D', 'E', 15), ('D', 'F', 6), ('E', 'F', 8)
,('F', 'G', 11), ('E', 'G', 9)]
And with that we’ve reduced a 15-line solution to four lines. Not bad.
Tags: algorithm, bonsai, code, Haskell, kata, minimum, praxis, prim, programming, spanning, tree
JackeLee Says:
April 24, 2010 at 9:09 am | Reply
Yea, your solution is 4-line length, but runs in O(|V| * |E|^2), where E is set of edges and V set of verticles. This is not good.
Remco Niemeijer Says:
April 24, 2010 at 11:10 am | Reply
Oh, I have no doubt that more efficient algorithms exist. However, in my programming in general and this blog in particular, my primary focus is simple (which often equates to short) code. Only when
performance becomes a problem do I start looking at more efficient algorithms, which have a tendency to be longer and more complicated. In this case, the test data was so small that the extra effort
was not warranted.
JackeLee Says:
April 27, 2010 at 6:24 pm | Reply
No, there is nothing wrong with Jarnik-Prim algorithm, in imperative language one can implement it in O(|E| + |V| * log(|V|)). You just don’t use any data structure to increase speed, instead you
search the same list again and again to find what you need ;-). | {"url":"http://bonsaicode.wordpress.com/2010/04/09/programming-praxis-minimum-spanning-tree-prims-algorithm/","timestamp":"2014-04-20T10:47:11Z","content_type":null,"content_length":"55280","record_id":"<urn:uuid:c6d6e72a-48f6-41a1-8e65-f68905f19502>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00009-ip-10-147-4-33.ec2.internal.warc.gz"} |
density of states
solid state physics
quantum mechanics
, a function quantifying how many distinct
energy states
exist at (or very near) a given
energy level
A high density of states indicates either very small energy gaps or a high degree of degeneracy. Statistical mechanics allows calculation of internal energy from the density of states for electrons
and phonons. This in turn allows calculation of such thermodynamic functions as heat capacity, entropy, Gibbs free energy and Helmholtz free energy of the material.
The electron density of states is also somewhat useful in calculating the Fermi level and electrical conductivity of a material. | {"url":"http://everything2.com/title/density+of+states","timestamp":"2014-04-16T14:16:20Z","content_type":null,"content_length":"27588","record_id":"<urn:uuid:f3da9550-df1f-44ac-b416-8c0d398ab699>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem 9
9. Linda took a multiple choice test. Each questionhad 4 possible answers from which to choose.There were 6 questions where she had to guess. Make up a probability distribution for this exeriment.
• a) Make up a probability disrtrbution for this experiment.
• b) Draw the graph.
• c) What is the probability that she got them all right?
• d) What is the probabity that she got them all wrong?
• e) What is the probability that she got at least 4 right?
• f) What is the probability that she got at least 4 wrong?
• g) What is the expected number of correct guesses?
The experiment is guessing on an answer, and, in this case, it is being repeated 6 times. If ech quiestion has 4 possible answers, the probability of getting an answer correct by randomly guessing is
1/4 or 25% on each question. The probaility of getting x answers righ by guessing is
P(x) = C(n, x)p^xq^n-x
where q = 1 - p. The probability distribution, then, looks like
b) So the graph looks like
c) The probability of getting them all right is 1/4096, which from the graph, we see is what they call a vanishingly small probability.
d) The probability that she got them all wrong is 729/4096.
e) To find the probability that she got at least 4 right, add up the probabilities of all the outcomes in the event. The event that she got at least 4 right is
{4, 5, 6}
Look up their probabilities in the distribution.
and add them up. The probability is 154/4096.
f) The event of getting at least 4 wrong is
{0, 1, 2, 3}
Notice that getting at least 4 wrong is the complementary event from getting from getting at least 4 right, so the probabilities of these two events add up to one.
g) To get the expectede number of correct guesses, multiply the probability of getting that many correct by the number of correct.
6144/4096 = 2048x3/2048x2 = 3/2 = 1.5
Or one could use the formula that the expected number of successes is np = 6x(1/4) = 3/2.
Notice that the graph peaks out above x = 1.5. | {"url":"http://www.sonoma.edu/users/w/wilsonst/Courses/Math_131/MT2/SMT2p9.html","timestamp":"2014-04-21T14:46:45Z","content_type":null,"content_length":"3500","record_id":"<urn:uuid:464c6b1d-4540-45d3-8df9-0202eb7f6597>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |
Genetic Algorithm Systematic Trading Development -- Part 1
I want to start with a brief introduction to what I consider one of the most powerful learning methodologies to come out of Artificial Intelligence in the last several decades-- the Genetic
Algorithm. Although it was originally developed to model evolutionary biology in the late 50s, most give credit to John Holland for his detailed contributions to the development of the field. A
professor of adaptive systems at the University of Michigan, he wrote a text, titled "Adaptation in Natural and Artificial Systems" in 1975, that is considered a landmark book in the field.
Although GAs are designed to borrow from our Genetic Biology, I want to quickly describe why they are powerful with respect to trading systems development. As a trader, you might often develop
systems using creative ideas borrowed from Technical Analysis books you may have read. One of the problems with earlier TA books in general, IMO, is that they often have "cherry picked" examples of
parameter sets, without much explanation as to how they arrived at the parameters, nor how well they might perform against other parameters. In statistics, we are often interested in gathering many
different samples to build a distribution profile of the outcomes as an estimate of the true population of all possible candidate solutions. We often look at these distributions of models to gather a
quantitative deduction about whether our particular system (with the parameters we selected) has performed better than any other potential system in the universe of all possible candidate solutions.
If the system performed better than some designated percentage of the sample distribution area of 100% (often set at 1% or 5% in common literature), then we can say that the result compared to the
universe of candidates is "statistically significant". Using different parameters for the same set of systematic rules will give different sample outcomes that make up that distribution. For
instance, using moving average crossovers, we might end up selecting one pair of moving average values to determine entry and exit with a resulting profit of .1%, while another combination over the
same period yielded 2.3%. Ultimately we want to find the set of pairs that performs the best, or at least significantly better than Buy and Hold, otherwise there's typically not much incentive to
trade in and out as commission costs and other negative effects make it prohibitive. We could try to run various parameters by guessing or enumerating over the search space of potential solutions,
but at a certain point, the number of combinations becomes unwieldy and is not computationally efficient. The first step might be to evaluate the parameters of our system and look for those
parameters that yield statistically significant results, the next might be to compare that candidate to buy and hold or other potential system candidates using a t-test of the separate distributions.
Let's take an example of a potential set of rules to illustrate this idea. Suppose we sat down one day and decided upon a rule that said to buy if the m period moving average was greater or less than
the n period moving average. First, we need to decide upon what range of values to use for the averages. If we discretize the range of values to integer values, i.e. 1 to 512 steps each, we would
have 512X512x2 (where 2 represents greater or less than)= 542,288 different parameters to enumerate through (or try). Although that doesn't seem too large of a number of combinations to try with
today's computational power, as we begin to make the rules more complex, the number of combinations will begin to run into the millions. It's just not feasible to try all of them, so we want to find
some method to reduce the number of potential candidates, while at the same time finding the best possible results. What we are trying to do is find an 'optimal' algorithm to converge to the best
solutions quickly. There are numerous algorithms employed in the field of machine learning, under the category of optimization algorithms that exist to achieve this goal. The genetic algorithm is one
such optimization algorithm that borrows directly from our own evolutionary genetic system to find the best potential candidate, without having to literally try out every single possible combination.
Fig1. Example of searching for statistically superior parameters.
Above, we see an example distribution of possible candidate solutions in the population of potential parameter pairs with the x-axis representing binned ranges of potential gain for the system, and y
representing the frequency of parameter pair outcomes corresponding to that gain. Our Genetic Algorithm will help us to find those solutions that are statistically significant compared to potential
Next: Genetic Algorithm Systematic Trading Development -- Part 2
1 comment:
1. I really enjoy your posts and blog.
If you're interested, we would like to talk with you. Head over to onlineinvestingai.com/blog for contact info! | {"url":"http://intelligenttradingtech.blogspot.com/2010/02/genetic-algorithm-systematic-trading.html","timestamp":"2014-04-19T11:58:40Z","content_type":null,"content_length":"69012","record_id":"<urn:uuid:35e3c6ee-5e59-416e-bc3a-69743840f099>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00497-ip-10-147-4-33.ec2.internal.warc.gz"} |
So after yesterday’s post on Simple Simulation using Copulas I got a very nice email that basically begged the question, “Dude, why are you making this so hard?” The author pointed out that if what I
really want is a Gaussian correlation structure for Gaussian distributions then I could simply use the mvrnorm() function from
Stochastic Simulation With Copulas in R
A friend of mine gave me a call last week and was wondering if I had a little R code that could illustrate how to do a Cholesky decomposition. He ultimately wanted to build a Monte Carlo model with
correlated variables. I pointed him to a number of packages that do Cholesky decomp but then
Starting an EC2 Machine Then Setting Up a Socks Proxy… From R!
I do some work from home, some work from an office in Chicago and some work on the road. It’s not uncommon for me to want to tunnel all my web traffic through a VPN tunnel. In one of my previous blog
posts I alluded to using Amazon EC2 as a way to get around
Bootstrapping the latest R into Amazon Elastic Map Reduce
I’ve been continuing to muck around with using R inside of Amazon Elastic Map reduce jobs. I’ve been working on abstracting the lapply() logic so that R will farm the pieces out to Amazon EMR. This
is coming along really well, thanks in no small part to the Stack Overflow community. I have no
Chicago R Meetup: Healthier than Drinking Alone
I’m kinda blown away by the number of folks who have joined the Chicago R User Group (RUG) in the last few weeks. As of this morning we have 65 people signed up for the group and 25 who have said
that they are planning on attending the meetup this Thursday (yes, only 3 days
Virtual Conference: R the Language
On Tuesday May 4th at 9:30 PM central, 10:30 eastern, I’ll be giving a live online presentation as part of the Vconf.org open conference series. I’ll be speaking about R and why I started using R a
couple years ago. This is NOT going to be a technical presentation but rather an illustration of how
Simulating Dart Throws in R
Back in November 2009 Wired wrote an article about some grad students who decided to try to stochastically model throwing darts. Because I don’t actually read printed material I didn’t see the
article until a couple of months ago. My immediate thought was, “hey, I drink beer. I throw darts. I build stochastic models. Why
Chicago R User Group… It’s for the sexy people!
I think we all know that Morris Day was talking about when he wrote the lyrics to “The Bird”: Yes! Hold on now, this dance ain’t for everybody. Just the sexy people. White folks, you’re much too
tight. You gotta shake your head like the black folks. You might get some tonight. Look out! That’s right, he was talking about the new
The Future of Math is Statistics
The future of math is statistics… and the language of that future is R:I’ve often thought there was way too little “statistical intuition” in the workplace. I think Author Benjamin would agree.
Lookup Performance in R
Rumor has it that Joe Adler, author of the O’Reilly Book R in a Nutshell, has joined Linked In as a data scientist. But that does not keep him from still pumping out some interesting content over at
OReilly.com. His latest article is about lookup performance in R. He does a great job giving code | {"url":"http://www.r-bloggers.com/author/jd-long/page/2/","timestamp":"2014-04-16T04:13:46Z","content_type":null,"content_length":"37496","record_id":"<urn:uuid:f5f1d728-8339-41a1-abea-0fe7356ebbbf>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: 74 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. AC-32. NO. 1, JANUARY 1987
dx, f)=zIle(x, 0, 0 1 1 .
a) A symmetrical control structure for p(x, t) is implied by (10). This
more restrictive constraint, in comparing with [7] and [8], arises since
only some functional properties on +(.) are assumed and utilized. A
particular example of (10) is
b) The condition on the uncertainty as shown in (3) is sometimes
referred toas a matching condition [8]. Discussions on mismatched
uncertainty are in [4], [131, [141.
c) The practicality of (A.3) is preempted by the matching condition.
This is so, since one can always choose an asymptotically stable nominal
part f1(x, t ) and then assume
f(x, t1-h (X, t ) E ~ ( B ( x ,t)) (12)
where Ui @(x, t))denotes the range space of B(x, t).
d) As an example, if y(llull) = b llull q, b > 0, q > 1, then $(p) =
Proof of Theorem: As a consequence of the Carathedory assump-
tions on the functions on the right-hand side of (3), one canreadily show.
using elementary results from the theories of continuous and measurable
functions, that | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/566/3037861.html","timestamp":"2014-04-19T14:40:50Z","content_type":null,"content_length":"8313","record_id":"<urn:uuid:4c30d757-3719-4088-8aea-98110f11667d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: I need help - derivative
Replies: 0
Maciej I need help - derivative
Posted: Nov 5, 2007 5:25 AM
Posts: 8
Registered: 11/5/07 Hi
I need to count derivative (with respect for a, then with respect for b) and find value of parameters (a and b) of that function
dy/da= [b*(a^-b)*(x1^b-1)*[e^(-(x1/a)^b)]] *
* [b*(a^-b)*(x2^b-1)*[e^(-(x2/a)^b)]] *
* ... *
* [b*(a^-b)*(xn^b-1)*[e^(-(xn/a)^b)]]
dy/db= [b*(a^-b)*(x1^b-1)*[e^(-(x1/a)^b)]] *
* [b*(a^-b)*(x2^b-1)*[e^(-(x2/a)^b)]] *
* ... *
* [b*(a^-b)*(xn^b-1)*[e^(-(xn/a)^b)]] | {"url":"http://mathforum.org/kb/thread.jspa?threadID=1649278","timestamp":"2014-04-21T07:32:16Z","content_type":null,"content_length":"13954","record_id":"<urn:uuid:0b1cb3a2-98b6-46a7-8b72-882973c03323>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
Middle Grades Math: T
Middle Grades Math
Quick Links
Unit Downloads
The Number System
In this unit, students will explore different representations of rational numbers, such as fractions, decimals (with finite or repeating digits) and percents. Students will extend addition,
subtraction, multiplication and division to all rational numbers. Students will use rational approximations of irrational numbers to estimate their values, compare sizes and approximate their
locations on a number line.
This unit was developed from the Common Core State Standards. For more information download the unit guide (on the right) where the specific content standard is identified. There may be other CCSS
that align to the activities as well that are not referenced on the guide. | {"url":"http://education.ti.com/en/timathnspired/us/middle-grades-math/the-number-system","timestamp":"2014-04-18T13:21:11Z","content_type":null,"content_length":"79746","record_id":"<urn:uuid:f90df6c7-95ae-45cb-9e78-592a31cd4c52>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
Time Scaling of Signals
1. 13th January 2013, 08:43 #1
Junior Member level 1
Join Date
Jun 2011
0 / 0
Time Scaling of Signals
I have a very basic question to ask! I have just started a course on signals and systems and I am finding time shifting and time scaling very confusing. I don't understand how by multiplying
by a factor >1 the original signal gets compressed i.e. why is x(4t) is a compressed version of x(t). Please can any1 help regarding this. Can any1 give me a sound explanation regarding this
question! Thanks in advance...
2. 13th January 2013, 16:23 #2
Full Member level 3
Join Date
Jun 2012
18 / 18
Re: Time Scaling of Signals
let explain the problem with your example: x(t)=t so x(4t)=4t. if we want calculate two function in 1, x(1)=1 but in second function we should use t=1/4 because we want 1 in parentheses and
if you draw these functions you can see second function is compress.
3. 13th January 2013, 16:35 #3
Junior Member level 1
Join Date
Dec 2012
Madurai, India
1 / 1
Re: Time Scaling of Signals
Refer the below link for the clear concept on your doubt. I hope it would be more helpful and easy to understand
4. 15th January 2013, 04:11 #4
Junior Member level 1
Join Date
Jun 2011
0 / 0
Re: Time Scaling of Signals
Thanks for your answer ahmad1954!!!
I didn't quite understand what you meant by "if we want calculate two function in 1"...what i did understand is that in the transformed graph the time axis is 1/a times (say t') the original
graph's time axis (say t)...but how does the transformed graph become x(2t') i cannot understand. Can you please elaborate a bit. Thanks in advance...!!!!
5. 18th January 2013, 14:26 #5
Full Member level 3
Join Date
Jun 2012
18 / 18
Re: Time Scaling of Signals
x(t)=t and x(4t)=4t. suppose y(t)=x(4t)=4t. so x(t)=t and y(t)=4t. now if draw these function you can see scaling on y(t). do you understand?
Last edited by ahmad1954; 18th January 2013 at 14:36.
6. 20th January 2013, 10:22 #6
Junior Member level 1
Join Date
Feb 2010
0 / 0
Re: Time Scaling of Signals
If t becomes 2t means actually the FREQUENCY is multiplied twice not the time.
Hence time period automatically contracts. Hope you got it. | {"url":"http://www.edaboard.com/thread277010.html","timestamp":"2014-04-20T00:39:44Z","content_type":null,"content_length":"73656","record_id":"<urn:uuid:4a6410b6-9abf-4c97-8e83-15179416701c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Godel's First Incompleteness Theorem as it possibly relatesto Physics
Vaughan Pratt pratt at cs.stanford.edu
Thu Oct 16 13:09:12 EDT 2008
Andrej Bauer wrote:
> For example Pour-El and Richards have an example of a wave equation
> whose initial condition (time t=0) is computable but the solution is not
> computable at time t=1.
In 1983 John Baez showed that this negative result necessarily depends
on the physics being classical, by showing that both the n-particle
Coulombic Hamiltonian itself and the time evolution of n-particle
systems governed by it are recursive, see Baez, J.C., Recursivity in
quantum mechanics, Trans. Am. Math. Soc., 280:1, 339-350 (Nov. 1983),
online at http://math.ucr.edu/home/baez/recursivity.pdf or (higher
resolution) at http://www.jstor.org/stable/1999617.pdf
Vaughan Pratt
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2008-October/013136.html","timestamp":"2014-04-16T17:32:39Z","content_type":null,"content_length":"3557","record_id":"<urn:uuid:8a3c8a4e-ad8e-45d0-bab1-79c0cd235397>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cosmic Variance
Today is the much celebrated pi-day . Ok, perhaps it’s not that big a holiday – I don’t think Hallmark is selling any pi-day cards yet – but anyone who uses google today knows that something
mathematical and geeky is being honored. I promise not to go into diatribes about calculations of the first few million digits of pi, or how many digits one needs to keep in order to calculate the
radius of the universe to atomic accuracy. Instead, I merely want to relay a simple short story a colleague of mine recounted to me years ago.
Several years ago, before pi-day was famous, a student called the phone number associated with the digits in pi that appear after the decimal point, i.e., 1-415-926-5358. Apparently this is rather
common now, and in fact, appears to be promoted as a mnemonic for the first 10 decimal places for those folks we need to have those numbers handy at all times. But this story happened in earlier
times, back before the Bay Area split into several area codes. And, as the clever reader has already guessed, that student reached the SLAC main gate. How cool to phone pi and reach the main gate of
a major national scientific research laboratory!
Alas, time and phone numbers march on, and nowadays phoning pi yields a “your call cannot be completed as dialed” message. (And I’m told that I cannot publish this post without noting that 3-14-15
will be a more accurate pi day.)
Great, now I want to know how many digits one needs to keep in order to calculate the radius of the universe to atomic accuracy, and I’LL NEVER KNOW! ;;_;;
Another phone-number confusion story:
Years ago, a new branch of Bank of America got a new phone number that was very close (small Hamming distance) to the number that had been used by a biophysics lab at UC Berkeley for years. So
the lab started to get a lot of phone calls for BoA, through customer misdial.
The lab folks called BoA to see if they could get them to change their number: No dice.
So they devised the following action plan: Whenever anyone called for BoA, they answered: “Bank of America is out of money, call Wells Fargo.”
The BoA number was changed within a week.
3-14-16 would be even more accurate (btw, ’rounding the Pi’ sounds so redundant
• http://www.imperativocientifico.blogspot.com
I’m Brazilian and crazy about science.
I’d like to congratulate you for the very interesting contents of this blog.
I have a science blog, written in Portuguese (soon it’ll have posts in English as well). Take a look at this link:
I hope we can change information, or develop a good “blogger” relationship. My e-mail address is: rtmatosribeiro@yahoo.com.br.
How can I keep contact with you?
Rafael Tadeu de Matos Ribeiro – Brazil
It’s only pi day in the United States and places that use their date notation. The rest of us don’t get pi day, although 14th January 2059 will be a sop to younger readers.
We got e-day though, 27/1/82 (or 83, depending how you feel about rounding).
Wouldn’t it be more correct to say: “3-14-15 will be a more precise pi day” ??
3-14-15 may be a more precise pi day if you choose to truncate pi.
rounding pi would make it 3-14-16, which might be just a bit more precise.
• http://www.savory.de/blog.htm
3-14 of any american year, at 1:59:27 (33 seconds before 2 in the morning) surely
I will rather celebrate PI day on 22nd of July
The most precise pi day (rounding up) would have been in 1593, the year in which the Vatican opened its trial of Giordano Bruno, Dominican cosmologist. | {"url":"http://blogs.discovermagazine.com/cosmicvariance/2010/03/14/phone-pi/","timestamp":"2014-04-20T19:15:03Z","content_type":null,"content_length":"101754","record_id":"<urn:uuid:42a40dc2-b570-4ede-82ad-027e99680dd7>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kids.Net.Au - Encyclopedia > Electromagnetic field
The electromagnetic field (EMF[?]) is composed of two related vectorial fields, the electric field and the magnetic field. This means that the vectors (E and B) that characterize the field each have
a value defined at each point of space and time. If only E, the electric field, is nonzero and is constant in time, the field is said to be an electrostatic field[?].
The electromagnetic field generates a force F on a charged particle, is given by the Lorentz equation
= q (\mathbf{E} + \mathbf{v} \times \mathbf{B}),</math>
where <math>q</math> is the charge of the particle, and v is its current velocity (expressed as a vector.)
The behaviour of electromagnetic fields can be described with Maxwell's equations, and their quantum basis by quantum electrodynamics.
All Wikipedia text is available under the terms of the GNU Free Documentation License | {"url":"http://encyclopedia.kids.net.au/page/el/Electromagnetic_field","timestamp":"2014-04-19T04:33:28Z","content_type":null,"content_length":"13093","record_id":"<urn:uuid:43a158a3-13d8-4e94-9358-7f20a3747e88>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lincoln University
Web Content Display
Mathematics Summer Camp
Summer Enrichment Program
July 2nd through July 16th, 2012
The Lincoln University Mathematics Summer Camp Program focuses on developing problem solving abilities, critical thinking and logical reasoning by exposing the students to a level of mathematics that
they will not encounter in their high school classrooms, and as a result, motivating them to pursue a degree in math or a math-related field. Lincoln University professors, invited professors and
graduate students from University of Missouri will present appropriate topics from advanced level mathematics. These presentations will touch on various branches of mathematics including
combinatorics, number theory, probability, topology, and dynamical systems. The sessions will be conducted in a fun, interactive way through projects, games, puzzles, and applications. As the
students work in small groups, they will be guided by Lincoln University mathematics professors through the process of thinking logically and creatively to solve problems. In addition to working on
problems, participants will also enjoy recreational breaks and trips to local museums and research centers exposing them to a wider application of mathematics.
This summer program will be conducted on Lincoln University campus from Monday to Thursday for a two week period. We begin at 9:00AM and end at 4:00PM every day.
There will be no application fee for applying and registration fee for students accepted into Math Circle program. Each student participant will receive a $100.00 stipend. This program is open to all
high school students.
Online applications are now being accepted at bluetigerportal.lincolnu.edu/web/dept-of-computer-science-mathematics-and-technology/registration-form
You’ll never see math the same way again… | {"url":"http://www.lincolnu.edu/web/cstm/lu-mathcircle-program","timestamp":"2014-04-17T18:37:25Z","content_type":null,"content_length":"53128","record_id":"<urn:uuid:c5dc4850-cbe6-4dda-bcc0-7482972d45bc>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spectral gap of tempered distributions
up vote 2 down vote favorite
Hi, Let $\Lambda\subset\mathbb{R}$ be an infinite discrete set of finite density (for simplicity one may take the density equals 1) and $\delta_{\lambda}$ is a unit mass located at the point $\lambda
\in\Lambda$. Define the tempered distribution $\delta_{\Lambda}=\underset{\lambda\in\Lambda}{\sum}\delta_{\lambda}$. It is known that if there exists a finite set $\Sigma$ such that for every two
successive elements of $\Lambda$, $\lambda,\mu$ we have $\lambda-\mu\in\Sigma$ and $\delta_{\Lambda}$ has a spectral gap then $\Lambda$ must be periodic, i.e. a finite union of copies of a translated
I am trying to understand if I can drop the condition of having a finite set of differences. In other words I am trying to construct a set $\Lambda$ so that $\delta_{\Lambda}$ will have a spectral
gap but $\Lambda$ will have an infinite set of differences. Obviously for periodicity of $\Lambda$ I cannot drop this condition altogether because $\Lambda$ cannot be periodic if its set of
differences is inifinite, but constructing such a distribution will show that the two conditions are separate. And in a more general tone, How can one get intuition regarding whether a tempered
distribution of the kind $\delta_{\Lambda}$ has a spectral gap at all?
fourier-analysis ca.analysis-and-odes schwartz-distributions
1 Could you please clarify what do you exactly mean by Spectral gap? – Rami Feb 5 '12 at 1:04
1 I believe he is referring to the recent result of Alex Iosevich, Mihail N. Kolountzakis ``Periodicity of the spectrum in dimension one'' arxiv.org/abs/1108.5689 where they prove such a result. In
the context of that paper `spectral gap' would mean the existence of a > 0 such that supp (\widehat{\delta_\Lambda}) \cap (0,a) = \phi . – Vagabond Feb 5 '12 at 6:02
see equation 10 page 6 of the above mentioned paper. – Vagabond Feb 5 '12 at 6:05
That is exactly what I meant. – Itay Feb 5 '12 at 6:36
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged fourier-analysis ca.analysis-and-odes schwartz-distributions or ask your own question. | {"url":"http://mathoverflow.net/questions/87541/spectral-gap-of-tempered-distributions","timestamp":"2014-04-16T14:12:07Z","content_type":null,"content_length":"50628","record_id":"<urn:uuid:ac10374c-ef46-431a-82cf-5f44a69938a3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] Orthogonal polynomial evaluation
Pauli Virtanen pav@iki...
Mon Apr 13 12:35:19 CDT 2009
Mon, 13 Apr 2009 06:52:16 -0700, Lou Pecora wrote:
> I recall that stable routines to calculate Bessel functions use
> descending recurrence relations (descending in indices), rather than
> ascending recurrence which is unstable. Only trick is that the values
> have to be renormalized at the end (or as you go to keep the numbers in
> floating point range, I think) since descending recurrence does not set
> the scale initially. I wonder if this is the case for other recurrence
> relations. That is, if one recurrence is unstable (e.g. ascending),
> then the opposite (descending) will be stable. Perhaps the scipy
> routines can be easily reworked, if so. Just a thought.
Yes, the errors in polynomial evaluation come from a source somewhat
similar to the reason why Bessel recurrence relations have to be
evaluated in a certaion direction: loss of precision.
The problem here is that how the routines in Scipy currently work is that
they compute the polynomial coefficients and produce numpy.poly1d
objects. All poly1d objects evaluate the polynomials using the *same*
Horner recurrence scheme, but it is not numerically stable for high-order
polynomials. The problem is that each of the orthogonal polynomial would
need to be evaluated using a specialized recurrence relation.
We don't currently have implementations of stable evaluation algorithms.
So, a complete rewrite of the orthogonal polynomial routines is required.
Ticket: http://projects.scipy.org/scipy/ticket/921
Pauli Virtanen
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2009-April/020705.html","timestamp":"2014-04-20T21:08:00Z","content_type":null,"content_length":"4118","record_id":"<urn:uuid:4c505681-41f0-4b02-8772-bd1242c9554c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sunnyvale, TX Geometry Tutor
Find a Sunnyvale, TX Geometry Tutor
...I focus on comprehension of the concepts and application of knowledge. I utilize teaching techniques such as flashcards and visualization association to increase the student's ability to
comprehend the material. Each technique that I use builds upon the previous one and allows for a more compre...
27 Subjects: including geometry, reading, statistics, accounting
...Many of my undergraduate courses were in Computer Science. I know 10+ programming languages and have taught C/C++ in several colleges. I know data structures and algorithms.
48 Subjects: including geometry, chemistry, physics, calculus
...I also taught college algebra and basic math at the university level for several semesters. I have taught precalculus at the university level while in graduate school. I have a bachelor's and
master's degree in mathematics.
10 Subjects: including geometry, calculus, statistics, algebra 1
...The basic concepts to be learned are: a) the scientific method, b) safety in the science lab, c) study of i. Earth science (rock cycle, landforms, energy resources, fossils and evidence of
extinct life), ii. life science (life cycle, food chain, ecosystems, carbon-oxygen-nitrogen cycles, physica...
14 Subjects: including geometry, Spanish, TOEFL, algebra 1
I am a current medical student in the DFW area. I have had a lot of experience with tutoring in both high school and at a college level. I have also had experience teaching science and math in
high schools through Teach for America. I am very enthusiastic and patient!
13 Subjects: including geometry, reading, biology, algebra 1
Related Sunnyvale, TX Tutors
Sunnyvale, TX Accounting Tutors
Sunnyvale, TX ACT Tutors
Sunnyvale, TX Algebra Tutors
Sunnyvale, TX Algebra 2 Tutors
Sunnyvale, TX Calculus Tutors
Sunnyvale, TX Geometry Tutors
Sunnyvale, TX Math Tutors
Sunnyvale, TX Prealgebra Tutors
Sunnyvale, TX Precalculus Tutors
Sunnyvale, TX SAT Tutors
Sunnyvale, TX SAT Math Tutors
Sunnyvale, TX Science Tutors
Sunnyvale, TX Statistics Tutors
Sunnyvale, TX Trigonometry Tutors | {"url":"http://www.purplemath.com/Sunnyvale_TX_geometry_tutors.php","timestamp":"2014-04-20T16:26:51Z","content_type":null,"content_length":"23831","record_id":"<urn:uuid:16107817-8175-489f-a467-fd6c8b8ed42e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help! Related rates with the chain rule
April 18th 2009, 08:14 PM
Help! Related rates with the chain rule
Here is the question
I inflate a spherical balloon by blowing 1000cm^3 of air into it every 5 seconds.
a/ what is the rate of change of the volume of the balloon (in cm^3/sec), at the moment when the radius of the balloon is 5cm?
b/ what is the rate of change of the radius of the balloon (in cm/sec), at the moment when the radius of the balloon is 5cm?
So the Dv/Dt is (-200cm^3/sec) right? Then what?
Please Help!!
April 18th 2009, 08:35 PM
V =4/3pi^r^3
dV/dt = 4pi*r^2*dr/dt
Now use your data | {"url":"http://mathhelpforum.com/calculus/84381-help-related-rates-chain-rule-print.html","timestamp":"2014-04-20T16:08:22Z","content_type":null,"content_length":"3783","record_id":"<urn:uuid:cecf0f64-c59e-42ef-a028-44e13d81dbf0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
Diffeomorphisms between spheres and hyperplanes in infinite-dimensional Banach spaces
Azagra Rueda, Daniel (1997) Diffeomorphisms between spheres and hyperplanes in infinite-dimensional Banach spaces. Studia Mathematica, 125 (2). pp. 179-186. ISSN 0039-3223
Official URL: http://journals.impan.gov.pl/sm/
We prove that for every infinite-dimensional Banach space X with a Frechet differentiable norm, the sphere S-X is diffeomorphic to each closed hyperplane in X. We also prove that every
infinite-dimensional Banach space Y having a (not necessarily equivalent) C-p norm (with p is an element of N boolean OR {infinity}) is C-p diffeomorphic to Y \ {0}.
Item Type: Article
Uncontrolled Keywords: Infinite-dimensional Banach space; Unit sphere; Hyperplane; Diffeomorphism
Subjects: Sciences > Mathematics > Functions
ID Code: 12280
Deposited On: 23 Feb 2011 10:43
Last Modified: 06 Feb 2014 09:21
Repository Staff Only: item control page | {"url":"http://eprints.ucm.es/12280/","timestamp":"2014-04-21T07:17:08Z","content_type":null,"content_length":"22291","record_id":"<urn:uuid:92011f17-2c45-4335-9a9c-58501f71b664>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tiburon, CA Science Tutor
Find a Tiburon, CA Science Tutor
...I am currently employed as an undergraduate research assistant for the head of the Chemistry department, as well as a peer-tutor for the Learning and Writing Center. I have been tutoring
General and Organic Chemistry for about a year now with the university, both in groups and individually. I h...
2 Subjects: including chemistry, organic chemistry
...I am a trained engineer, with a M.S. from UC Berkeley, and a B.S. from the University of Illinois at Urbana-Champaign. At UC Berkeley I taught CE100, an introductory fluid mechanics course,
for which I obtained outstanding student reviews. In the past I have also independently tutored engineering graduate students in physics, water chemistry, calculus, and fluid mechanics.
15 Subjects: including civil engineering, Spanish, calculus, ESL/ESOL
...Lastly, the first session is always free. You can read everything I write here and hopefully learn something about me, but there's no substitute for actually trying something first hand! If
you've made it this far, thanks for reading and I hope to see you soon!I received a Bachelor's degree in Biology from the University of California, Santa Cruz.
5 Subjects: including chemistry, biology, physical science, physics
...I passed the WyzAnt Spanish test with flying colors - 100%. I have a Bachelor's degree in Physical education and a Master's degree in Nutritional Science. I am a Registered Dietitian licensed
by the Commission on Dietetic Registration in 2005. My experience includes 9 years working as a dietitian and 15 total years in nutrition and dietary.
17 Subjects: including chemistry, elementary (k-6th), vocabulary, Microsoft Word
...I can teach both American and British English. I have access to a wide variety of resources and I can focus on any of the skills (Grammar, Listening, Reading, Writing, Speaking, Pronunciation)
that a student might need. I've taught study skills to 7th, 8th, 11th and 12th graders over the course of my teaching career.
31 Subjects: including psychology, biology, philosophy, algebra 1
Related Tiburon, CA Tutors
Tiburon, CA Accounting Tutors
Tiburon, CA ACT Tutors
Tiburon, CA Algebra Tutors
Tiburon, CA Algebra 2 Tutors
Tiburon, CA Calculus Tutors
Tiburon, CA Geometry Tutors
Tiburon, CA Math Tutors
Tiburon, CA Prealgebra Tutors
Tiburon, CA Precalculus Tutors
Tiburon, CA SAT Tutors
Tiburon, CA SAT Math Tutors
Tiburon, CA Science Tutors
Tiburon, CA Statistics Tutors
Tiburon, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Tiburon_CA_Science_tutors.php","timestamp":"2014-04-21T02:31:46Z","content_type":null,"content_length":"24138","record_id":"<urn:uuid:372b7ab6-959a-47d6-8992-6a5284fea6b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Chapt15.54 generalization of the Maxwell Eq and deriving Darwin
Evolution #1301 New Physics #1504 ATOM TOTALITY 5th ed
Replies: 8 Last Post: Apr 21, 2013 5:16 PM
Messages: [ Previous | Next ]
resolving magnetic monopole with dipole Chapt15.55 the 5th Maxwell
Equation #1306 New Physics #1509 ATOM TOTALITY 5th ed
Posted: Apr 20, 2013 3:43 PM
Marvelous how a good night's sleep helps solve a physics and math
problem. The problem was, as you may well remember is that magnetic
monopoles are attraction only, and that magnetic dipoles are both
attraction and repulsion the same as electric charge.
So I need to have the Maxwell Equations have the feature of magnetic
monopoles attract only, and never repel.
These are the 4 Maxwell Equations with magnetic monopoles and the
Dirac Equation.
div*E = r_E
div*B = r_B
- curlxE = dB + J_B
curlxB = dE + J_E
div*E + div*B + (-1)curlxE + curlxB = r_E + r_B + dB + dE + J_E + J_B
Now Wikipedia has a good description of how Dirac derived his famous
equation which gives this:
(Ad_x + Bd_y + Cd_z + (i/c)Dd_t - mc/h) p = 0
And we can see that by summation of the Maxwell Equations we derive
the Dirac Equation.
Let me modify my terms so that the wave function as p is denoted as
(Ad_x + Bd_y + Cd_z + (i/c)Dd_t - mc/h) f_w(p) = 0
Now I think the solution is going to have to be a 5th Maxwell Equation
that recognizes magnetic dipoles and magnetic monopoles where the
monopoles are attract only, yet the dipoles like electric charge are
both attract and repel. I am guided by the Dirac Equation, which may
turn out to be an error on my part, but I instinctively would assume
that Dirac, looking blindly for a relativistic Schrodinger Equation,
would stumble upon, not a complex equation but a equation of most
primitive form. So I am going to assume Dirac would have found a most
primitive relativistic Schrodinger Equation, not a polished one.
The Dirac Equation has 5 terms in all. That is asymmetrical, but it
would not be asymmetrical if it had 6 terms in all. And the Maxwell
Equations have an asymmetry of the negative sign in Faraday's law. So
in my sleep and rest, I found the most easiest solution and way out of
this problem. We have a 5th Maxwell Equation of -div*M = 0. This
equation is similar to the no magnetic monopole equation of Gauss's
law of magnetism except it has M for monopole instead of B for dipole.
And it has a negative sign that allows attraction force only and no
div*E = r_E
div*B = r_B
- div*M = 0
- curlxE = dB + J_B
curlxB = dE + J_E
And now we modify the Dirac Equation with a total of 6 terms rather
than 5 terms:
(Ad_x + Bd_y + Cd_z + (i/c)Dd_t - mc/h - (div*M)) f_w(p) = 0
Now let us look back and reflect on this. There are asymmetries still,
but we should not look bad or harsh on asymmetry because the Universe
is growing and in growing we need asymmetry. Mind you, most asymmetry
means a mistake is made, but we need some asymmetry when it is the
Universe in question. The reason that pi or "e" are not rational is
because the Universe is growing and not standing still. So asymmetry
is needed if an object is growing.
Approximately 90 percent of AP's posts are missing in the Google
newsgroups author search starting May 2012. They call it indexing; I
call it censor discrimination. Whatever the case, what is needed now
is for science newsgroups like sci.physics, sci.chem, sci.bio,
sci.geo.geology, sci.med, sci.paleontology, sci.astro,
sci.physics.electromag to?be hosted by a University the same as what
Drexel?University hosts sci.math as the Math Forum. Science needs to
be in education?not in the hands of corporations chasing after the
next dollar bill. Only Drexel's Math Forum has done a excellent,
simple and fair author-archiving of AP sci.math posts since May 2012
as seen here :
Archimedes Plutonium
whole entire Universe is just one big atom
where dots of the electron-dot-cloud are galaxies | {"url":"http://mathforum.org/kb/message.jspa?messageID=8896442","timestamp":"2014-04-20T21:38:42Z","content_type":null,"content_length":"30730","record_id":"<urn:uuid:1217731b-4b59-4bbc-afd2-9b9cffbe4bce>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00427-ip-10-147-4-33.ec2.internal.warc.gz"} |
Local to global flatness question
up vote 1 down vote favorite
This is a bit of a bizzare question, but I'm going to ask it anyway.
If $X={\rm Spec}R$ is an affine variety, and $\mathfrak{m}$ a closed point, then the localization $R\to R_{\mathfrak{m}}$ gives a morphism
$f:{\rm Spec}R_{\mathfrak{m}}\to{\rm Spec}R$,
yielding an adjunction between ${\rm Qcoh} R$ and ${\rm Qcoh} R_{\mathfrak{m}}$. This is just a very long-winded way of saying localization and extension of scalars between module categories. Note
that $f_*(R_{\mathfrak{m}})$ is flat as an $R$-module, since localization is exact.
I'm wondering if this just generalizes as is to the scheme world. Take a scheme $X$ (with nice properties as you like, if necessary), and a closed point $x\in X$. We still have a map
$f:{\rm Spec }\mathcal{O}_{X,x}\to X$,
given as factorization through an open affine. Is $f_*(\mathcal{O}_{X,x})$ flat as a $\mathcal{O}_X$-module? I can see that it is flat at all points $y$ for which $x$ and $y$ both live in a common
open affine (basically by above), but does this extend to all points? Morally, I'd like to think so, but I can't write down a proof.
2 It seems to me that you're saying your map factors through an open affine $U \subset X$ such that the map to $U$ is flat. But open immersions are flat and a composition of flat maps is flat, so
that would seem to do it. – Mike Skirvin Jun 3 '11 at 16:35
@Mike: This argument is not quite correct, see the link to the counterexample in Lei's answer. – Martin Brandenburg Jun 4 '11 at 9:13
add comment
1 Answer
active oldest votes
This is of course true, for any semi-separated scheme (i.e. the diagonal is affine), or maybe you assume $X$ is separated if you like, and you can take any point (not necessarily
closed). The reason that the sheaf is flat is that ${\rm Spec}(O_{X,x})\to X$ is affine and flat. In general if $f: X\to Y$ is flat and affine then $f_*O_X$ is $O_Y-$flat. This is
up vote 3 down But I don't think it is true that $f: X\to Y$ is flat implies $f_*O_X$ is $O_Y-$flat. See link text
vote accepted
In the answer of Jason Starr, the map is flat, while the 0-th direct image is not flat at all.
Lei: Such an example was given by Jason Starr under mathoverflow.net/questions/65267/… – Philipp Hartwig Jun 4 '11 at 7:21
Thanks Philipp, I just find it:) – Lei Jun 4 '11 at 8:07
Lei, many thanks for this. I was aware of the subtlety that flat sheaves are not preserved under flat maps (see also mathforum.org/kb/…) but was unsure of what assumptions were
needed in my situation to make it work. – Michael Jun 6 '11 at 7:23
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/66833/local-to-global-flatness-question","timestamp":"2014-04-16T22:50:31Z","content_type":null,"content_length":"55851","record_id":"<urn:uuid:05c9e135-eccd-4376-886b-00fe628c36c7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
Superstition Mountain, AZ Math Tutor
Find a Superstition Mountain, AZ Math Tutor
...In the fall I will be heading off to graduate school to obtain my PhD but in the mean time I would love the opportunity to explore teaching. My sincerest hope is that my quirky, enthusiastic
personality can breathe a little bit of life into what people often consider to be very difficult or bori...
13 Subjects: including algebra 1, English, prealgebra, writing
...Reputation as very patient teacher with the ability to make difficult concepts easy to understand. Qualifications for tutoring Geometry students include: 1. Master's Degree in Mathematics from
Youngstown State University. 2.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...The students used textbooks or workbooks that were based upon phonics, spelling rules, and word origins. The methodology I used consisted of word study, practice tests, sentence dictation,
word analysis exercises, and final tests. Most of my students became excellent spellers.
43 Subjects: including trigonometry, geometry, prealgebra, algebra 2
...Currently, I tutor any math subject up to the calculus level. I am confident in my skills and your potential to succeed. Please feel free to contact me with any questions you might have.
9 Subjects: including prealgebra, trigonometry, statistics, Vietnamese
...I take the time to understand your knowledge or gaps in the subject, and fill those in. I have helped students create organizational systems to help them track homework, quizzes, tests and
major assignments. I have a fingerprint card that shows I have passed a criminal background check.
24 Subjects: including SAT math, grammar, reading, writing
Related Superstition Mountain, AZ Tutors
Superstition Mountain, AZ Accounting Tutors
Superstition Mountain, AZ ACT Tutors
Superstition Mountain, AZ Algebra Tutors
Superstition Mountain, AZ Algebra 2 Tutors
Superstition Mountain, AZ Calculus Tutors
Superstition Mountain, AZ Geometry Tutors
Superstition Mountain, AZ Math Tutors
Superstition Mountain, AZ Prealgebra Tutors
Superstition Mountain, AZ Precalculus Tutors
Superstition Mountain, AZ SAT Tutors
Superstition Mountain, AZ SAT Math Tutors
Superstition Mountain, AZ Science Tutors
Superstition Mountain, AZ Statistics Tutors
Superstition Mountain, AZ Trigonometry Tutors
Nearby Cities With Math Tutor
Bensch Ranch, AZ Math Tutors
Carefree Math Tutors
Cordes Lakes, AZ Math Tutors
Dudleyville Math Tutors
Eleven Mile Corner, AZ Math Tutors
Eleven Mile, AZ Math Tutors
Groom Creek, AZ Math Tutors
Iron Springs Math Tutors
Mobile, AZ Math Tutors
Peeples Valley, AZ Math Tutors
Rio Verde Math Tutors
Saddlebrooke, AZ Math Tutors
Spring Valley, AZ Math Tutors
Strawberry, AZ Math Tutors
Toltec, AZ Math Tutors | {"url":"http://www.purplemath.com/Superstition_Mountain_AZ_Math_tutors.php","timestamp":"2014-04-16T22:03:28Z","content_type":null,"content_length":"24318","record_id":"<urn:uuid:4e141738-0b68-4825-89d4-e98ea2bbb95d>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Area of Polygon
Date: 08/20/99 at 14:31:06
From: Ed Bachmann
Subject: Area of a polygon
Dr. Math,
I found it stated in a popular little paperback on math that a
surveyor can calculate the area of a polygon this way:
1. Lay out and measure a line of any length within the polygon (the
"base line").
2. Measure the angle to each vertex of the polygon from each end of
the line.
The resulting length of the single line and the angles supposedly
contain sufficient information to compute the area.
In trying to solve this I've been able to compute the areas of
triangles that have as one of their sides the base line, and I've been
able to solve some, but not all, of the other triangles. I'm stuck on
a few peripheral triangles for which I can't seem to get enough
information to solve.
Is there a general algorithm to solve this problem for polygons of any
number of sides; convex, concave, or when you do not have the lengths
of any sides? If I can ever figure this out, I want to write a program
to do the calculations.
Thanks very much,
Date: 08/20/99 at 23:02:37
From: Doctor Peterson
Subject: Re: Area of a polygon
Hi, Ed.
It's obvious that there is enough information in the angles and the
baseline, because they are sufficient to draw the figure (using ASA
repeatedly), and therefore the area is determined by these numbers.
What you want to know, of course, is HOW?
I'm not familiar with any traditional method used by surveyors, but I
can at least come up with a method that will work. It takes two steps.
First, if we choose a coordinate system with one end of the baseline
as the origin and the other as point (d,0), we can find the
coordinates of each vertex. I used this figure:
/| \
/ | \
/ | \
/ |y \
/ | \
/ | \
/a | b\
A x d-x B
to write two equations
y/x = tan(a)
y/(d-x) = tan(b)
from which I got the formulas
d d
x = ---------- and y = ---------------
tan(a) 1 1
------ + 1 ------ + ------
tan(b) tan(a) tan(b)
You can check those; I did it pretty quickly, but it gives the idea.
The cases where the tangents are 0 or infinite can be handled easily.
The second step is to use these coordinates for each point (x_n,y_n)
to find the area. There is a standard formula for this:
A = [(x1y2 + x2y3 + ... + xny1) - (y1x2 + y2x3 + ... + ynx1)]/2
I think that should be enough - maybe not as elegant as it could be,
but pretty straightforward.
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/55186.html","timestamp":"2014-04-19T14:38:08Z","content_type":null,"content_length":"7811","record_id":"<urn:uuid:3657fbd8-7dba-40a1-8f64-646e3c59a7ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quiver varieties and cluster algebras
Motivated by a recent conjecture by Hernandez and Leclerc, we embed a Fomin-Zelevinsky cluster algebra into the Grothendieck ring $R$ of the category of representations of quantum loop algebras $U_q
(Lg)$ of a symmetric Kac-Moody Lie algebra, studied earlier by the author via perverse sheaves on graded quiver varieties. Graded quiver varieties controlling the image can be identified with
varieties which Lusztig used to define the canonical base. The cluster monomials form a subset of the base given by the classes of simple modules in $R$, or Lusztig's dual canonical base. The
positivity and linearly independence (and probably many other) conjectures of cluster monomials follow as consequences, when there is a seed with a bipartite quiver. Simple modules corresponding to
cluster monomials factorize into tensor products of `prime' simple ones according to the cluster expansion. nakajima@math.kyoto-u.ac.jp | {"url":"http://www.kurims.kyoto-u.ac.jp/~nakajima/Talks/20090618.html","timestamp":"2014-04-20T06:18:06Z","content_type":null,"content_length":"1336","record_id":"<urn:uuid:27bc625b-46c9-4fbb-ad6a-1938f96058e1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
Breadth-first search trace path
up vote 10 down vote favorite
How do you trace the path of a Breadth-first search such that in the following example:
If searching for key 11, return the shortest list connecting 1 to 11.
return [1,4,7,11]
python algorithm graph
9 Why does this feel like homework? – Peter Rowell Jan 19 '12 at 6:55
It was actually an old assignment I was helping a friend on months ago, based on the Kevin Bacon Law. My final solution was very sloppy, I basically did another Breadth-first search to "rewind"
and backtrack. I wan't to find a better solution. – Christopher Markieta Jan 19 '12 at 6:58
3 Excellent. I consider revisiting an old problem in an attempt to find a better answer to be an admirable trait in an engineer. I wish you well in your studies and career. – Peter Rowell Jan 19 '12
at 17:14
Thanks for the praise, I just believe if I don't learn it now, I will be faced with the same problem again. – Christopher Markieta Jan 20 '12 at 6:23
add comment
2 Answers
active oldest votes
You should have look at http://en.wikipedia.org/wiki/Breadth-first_search first.
Below is a quick implementation, in which I used a list of list to represent the queue of paths.
# graph is in adjacent list representation
graph = {
'1': ['2', '3', '4'],
'2': ['5', '6'],
'5': ['9', '10'],
'4': ['7', '8'],
'7': ['11', '12']
def bfs(graph, start, end):
# maintain a queue of paths
queue = []
# push the first path into the queue
while queue:
# get the first path from the queue
path = queue.pop(0)
# get the last node from the path
node = path[-1]
# path found
if node == end:
return path
# enumerate all adjacent nodes, construct a new path and push it into the queue
for adjacent in graph.get(node, []):
new_path = list(path)
print bfs(graph, '1', '11')
up vote 37 down Another approach would be maintaining a mapping from each node to its parent, and when inspecting the ajacent node, record its parent. When the search is done, simply backtrace
vote accepted according the parent mapping.
graph = {
'1': ['2', '3', '4'],
'2': ['5', '6'],
'5': ['9', '10'],
'4': ['7', '8'],
'7': ['11', '12']
def backtrace(parent, start, end):
path = [end]
while path[-1] != start:
return path
def bfs(graph, start, end):
parent = {}
queue = []
while queue:
node = queue.pop(0)
if node == end:
return backtrace(parent, start, end)
for adjacent in graph.get(node, []):
parent[adjacent] = node # <<<<< record its parent
print bfs(graph, '1', '11')
EDIT: the above codes are based on the assumption that there's no cycles.(thanks to @roberking).
This is excellent! My thought process lead me to believe in creating some type of table or matrix, I have yet to learn about graphs. Thank you. – Christopher Markieta Jan 19 '12
at 7:04
I also tried using a back tracing approach although this seems much cleaner. Would it be possible to make a graph if you only know the start and the end but none of the nodes
in-between? Or even another approach besides graphs? – Christopher Markieta Jan 19 '12 at 7:19
@ChristopherM I failed to understand your question :( – qiao Jan 19 '12 at 7:30
If we know the start node is "1" and the end node is "11" but the other nodes are unknown until we begin to search them; such that the graph is dynamic and will still work even
if the tree had 100 nodes. For ex: { '1': [a, b, c], a: [d, e], d: [f, g], c: [h, i], h: [11, k] } – Christopher Markieta Jan 19 '12 at 7:45
@ChristopherM As long as the adjacent nodes are being determined when a node is reached, then the above algorithm will be fine. – qiao Jan 19 '12 at 8:01
show 1 more comment
I thought i'd try code this up for fun:
graph = {
'1': ['2', '3', '4'],
'2': ['5', '6'],
'5': ['9', '10'],
'4': ['7', '8'],
'7': ['11', '12']
def bfs(graph,for_front,end):
#assumes no cycles.
next_for_front=[(node,path+','+node) for i,path in for_front if i in graph for node in graph[i]]
for node,path in next_for_front:
if node==end:
up vote 5 down return path
vote else:
return bfs(graph,next_for_front,end)
print bfs(graph,[('1','1')],'11')
if you want cycles you could add this:
for i,j in for_front: #allow cycles, add this code
if i in graph:
del graph[i]
2 +1, you reminded me of one thing: assume no cycles :) – qiao Jan 19 '12 at 8:35
Where would you add this piece of code? – Liondancer Dec 3 '13 at 13:25
after you have built the next_for_front. A follow on question, what if the graph contains loops? E.g. if node 1 had an edge connecting back to itself? What if the graph has multiple
edges going between two nodes? – robert king Dec 3 '13 at 21:33
add comment
Not the answer you're looking for? Browse other questions tagged python algorithm graph or ask your own question. | {"url":"http://stackoverflow.com/questions/8922060/breadth-first-search-trace-path","timestamp":"2014-04-20T15:21:52Z","content_type":null,"content_length":"85296","record_id":"<urn:uuid:aef92e6f-77cf-4881-9179-f7dd5096061c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
How do I use Stoke's Theorem to evaluate given this vector?
February 24th 2013, 11:14 PM #1
Feb 2013
How do I use Stoke's Theorem to evaluate given this vector?
Using Stoke's Theorem, evaluate:
∫∫ (∇⋅V)⋅ dA
over any surface whose bounding curve is the (x,y) plane, where
V = (x - x^2z)î + (yz^3 - y^2)jˆ + (x^2 - xz)ˆk
I'm really stuck on this particular problem, so any and all help is much appreciated. Thank you!
Last edited by JohnZ; February 24th 2013 at 11:19 PM.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/calculus/213762-how-do-i-use-stoke-s-theorem-evaluate-given-vector.html","timestamp":"2014-04-18T01:18:17Z","content_type":null,"content_length":"30109","record_id":"<urn:uuid:e3164f0c-ca50-4428-bf73-1cf42fe341f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
HowStuffWorks "Elastic Collisions and Friction"
Elastic Collisions and Friction
There are two final things at play here, and the first is the elastic collision. An elastic collision occurs when two objects run into each other, and the combined kinetic energy of the objects is
the same before and after the collision. Imagine for a moment a Newton's cradle with only two balls. If Ball One had 10 joules of energy and it hit Ball Two in an elastic collision, Ball Two would
swing away with 10 joules. The balls in a Newton's cradle hit each other in a series of elastic collisions, transferring the energy of Ball One through the line on to Ball Five, losing no energy
along the way.
At least, that's how it would work in an "ideal" Newton's cradle, which is to say, one in an environment where only energy, momentum and gravity are acting on the balls, all the collisions are
perfectly elastic, and the construction of the cradle is perfect. In that situation, the balls would continue to swing forever.
But it's impossible to have an ideal Newton's cradle, because one force will always conspire to slow things to a stop: friction. Friction robs the system of energy, slowly bringing the balls to a
Though a small amount of friction comes from air resistance, the main source is from within the balls themselves. So what you see in a Newton's cradle aren't really elastic collisions but rather
inelastic collisions, in which the kinetic energy after the collision is less than the kinetic energy beforehand. This happens because the balls themselves are not perfectly elastic -- they can't
escape the effect of friction. But due to the conservation of energy, the total amount of energy stays the same. As the balls are compressed and return to their original shape, the friction between
the molecules inside the ball converts the kinetic energy into heat. The balls also vibrate, which dissipates energy into the air and creates the clicking sound that is the signature of the Newton's
Imperfections in the construction of the cradle also slow the balls. If the balls aren't perfectly aligned or aren't exactly the same density, that will change the amount of energy it takes to move a
given ball. These deviations from the ideal Newton's cradle slow down the swinging of the balls on either end, and eventually result in all the balls swinging together, in unison.
For more details on Newton's cradles, physics, metals and other related subjects, take a look at the links on the next page. | {"url":"http://science.howstuffworks.com/newtons-cradle6.htm","timestamp":"2014-04-21T12:10:04Z","content_type":null,"content_length":"125569","record_id":"<urn:uuid:0585d996-2424-4d28-b908-f8ae4ed47b64>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
Memoirs of the AMS Editorial Board
[Past Editorial Board Members]
The AMS uses Centralized Manuscript Processing for initial submissions to Memoirs. Authors should follow instructions listed on the initial submission for peer review page.
Managing Editor:
Alejandro Adem
Department of Mathematics
The University of British Columbia
Room 121, 1984 Mathematics Road
Vancouver, B.C., Canada V6T 1Z2
Alexander Kleshchev
Department of Mathematics
University of Oregon
Eugene, OR 97403-1222 USA
Algebraic geometry
Dan Abramovich
Department of Mathematics
Brown University
Box 1917
Providence, RI 02912 USA
Algebraic topology
Soren Galatius
Department of Mathematics
Stanford University
Stanford, CA 94305 USA
Editor's Web page:http://www.math.stanford.edu/~galatius
Arithmetic geometry
Ted Chinburg
Department of Mathematics
University of Pennsylvania
Philadelphia, PA 19104-6395
Editor's Web page:http://www.math.upenn.edu/~ted
Automorphic forms, representation theory and combinatorics
Daniel Bump
Department of Mathematics
Stanford University
Building 380, Sloan Hall
Stanford, California 94305
Combinatorics and discrete geometry
Igor Pak
Department of Mathematics
University of California
Los Angeles, CA 90095 USA
Editor's Web page:http://www.math.ucla.edu/~pak/
Commutative and homological algebra
Luchezar L. Avramov
Department of Mathematics
University of Nebraska
Lincoln, NE 68588-0130 USA
Differential geometry and global analysis
Chris Woodward
Department of Mathematics
Rutgers University
110 Frelinghuysen Road
Piscataway, NJ 08854 USA
Dynamical systems and ergodic theory and complex analysis
Yunping Jiang
Department of Mathematics
CUNY Queens College and Graduate Center
65-30 Kissena Boulevard
Flushing, NY 11367 USA
Ergodic theory and combinatorics
Vitaly Bergelson
Department of Mathematics
Ohio State University
231 W. 18th Avenue
Columbus, OH 43210
Functional analysis and operator algebras
Nathaniel Brown
Department of Mathematics
Penn State University
320 McAllister Building
University Park, PA 16802 USA
Geometric analysis
William P. Minicozzi II
Department of Mathematics
Johns Hopkins University
3400 N. Charles Street
Baltimore, MD 21218 USA
Geometric topology
Mark Feighn
Department of Mathematics
Rutgers University
Newark, NJ 07102 USA
Harmonic analysis, complex analysis
Malabika Pramanik
Department of Mathematics
University of British Columbia
1984 Mathematics Road
Vancouver, BC
Canada V6T 1Z2
Harmonic analysis, representation theory, and Lie theory
E.P. van den Ban
Department of Mathematics
Utrecht University
P. O. Box 80 010
3508 TA Utrecht
The Netherlands
Editor's Web page:http://www.math.uu.nl/people/ban
Antonio Montalban
Department of Mathematics
The University of California, Berkeley
Evans Hall #3840
Berkeley, CA 94720 USA
Number theory
Shankar Sen
Department of Mathematics
505 Malott Hall
Cornell University
Ithaca, NY 14853 USA
Partial differential equations
Markus Keel
School of Mathematics
University of Minnesota
Minneapolis, MN 55455 USA
Partial differential equations and functional analysis
Alexander Kisilev
Department of Mathematics
University of Wisconsin-Madison
480 Lincoln Drive
Madison, WI 53706 USA
Probability and statistics
Patrick Fitzsimmons
Department of Mathematics
University of California, San Diego
La Jolla, CA 92093-0112
Real analysis and partial differential equations
Wilhelm Schlag
Department of Mathematics
The University of Chicago
5734 South University Avenue
Chicago, IL 60615 USA | {"url":"http://cust-serv@ams.org/publications/ebooks/memoedit","timestamp":"2014-04-17T16:28:18Z","content_type":null,"content_length":"24979","record_id":"<urn:uuid:06ef2dbf-9363-4798-8d80-826291b19652>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elmwood, MA Science Tutor
Find an Elmwood, MA Science Tutor
...I taught an informal introduction to linear algebra class and also have experience tutoring undergraduate-level linear algebra. Classes I have tutored: MIT 18.06 - Linear Algebra University of
Phoenix MTH360 - Linear Algebra I aced a course in electromagnetism at MIT and passed the Fundamentals ...
8 Subjects: including mechanical engineering, electrical engineering, physics, calculus
...Poems C. Song lyrics I have been a university chemistry instructor and therefore, have tutored students, both as preparation for Organic Chemistry (the interim between General Chemistry 2 and
Organic 1), and the two semesters of undergraduate-level Organic Chemistry for over a decade. Standar...
6 Subjects: including chemistry, writing, organic chemistry, physical science
...My schedule is extremely flexible and am willing to meet you wherever is most convenient for you.I graduated from the University of Connecticut with a B.S. in Physics and minor in Mathematics
before attending graduate school at Brandeis University and Northeastern University, where I received a M...
9 Subjects: including physics, calculus, geometry, algebra 1
...My goal is to pass on that fascination, and to bring scientific concepts to a level students can relate to, understand and recall. I strive to teach a deep understanding of the concepts taught
at the Middle School level, rather than rote learning, which is necessary for a student's success in th...
3 Subjects: including biology, physical science, geology
...Teachers and professors can get caught up using too much jargon which can confuse students. I find real life examples and a crystal clear explanation are crucial for success. My schedule is
flexible as I am a part time graduate student.
19 Subjects: including chemistry, physics, physical science, biology
Related Elmwood, MA Tutors
Elmwood, MA Accounting Tutors
Elmwood, MA ACT Tutors
Elmwood, MA Algebra Tutors
Elmwood, MA Algebra 2 Tutors
Elmwood, MA Calculus Tutors
Elmwood, MA Geometry Tutors
Elmwood, MA Math Tutors
Elmwood, MA Prealgebra Tutors
Elmwood, MA Precalculus Tutors
Elmwood, MA SAT Tutors
Elmwood, MA SAT Math Tutors
Elmwood, MA Science Tutors
Elmwood, MA Statistics Tutors
Elmwood, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/elmwood_ma_science_tutors.php","timestamp":"2014-04-17T04:46:35Z","content_type":null,"content_length":"23953","record_id":"<urn:uuid:9f7a16a0-3af9-4a05-bea5-0ddb7482c851>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
Natural and Orthogonal Interaction framework for modeling gene-environment interactions with application to lung cancer
We aimed at extending the natural and orthogonal interaction (NOIA) framework, developed for modeling gene-gene interactions in the analysis of quantitative traits, to allow for reduced genetic
models, dichotomous traits, and gene-environment interactions. We evaluate the performance of the NOIA statistical models using simulated data and lung cancer data.
The NOIA statistical models are developed for the additive, dominant, recessive genetic models, and a binary environmental exposure. Using the Kronecker product rule, a NOIA statistical model is
built to model gene-environment interactions. By treating the genotypic values as the logarithm of odds, the NOIA statistical models are extended to the analysis of case-control data.
Our simulations showed that power for testing associations while allowing for interaction using the statistical model is much higher than using functional models for most of the scenarios we
simulated. When applied to the lung cancer data, much smaller P-values were obtained using the NOIA statistical model for either the main effects or the SNP-smoking interactions for some of the SNPs
The NOIA statistical models are usually more powerful than the functional models in detecting main effects and interaction effects for both quantitative traits and binary traits.
Keywords: Statistical power, Genetic association studies, Case-control association analysis, Gene-environment interaction, Environmental risk factor, Association mapping, Orthogonal modeling | {"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC3534768/?lang=en-ca","timestamp":"2014-04-20T15:03:29Z","content_type":null,"content_length":"109779","record_id":"<urn:uuid:39e815a9-b0b1-4771-ab24-dde6b952c0f6>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
The last decade has seen an explosive increase in both the volume and the accuracy of data obtained from cosmological observations. The number of techniques available to probe and cross-check these
data has similarly proliferated in recent years.
Theoretical cosmologists have not been slouches during this time, either. However, it is fair to say that we have not made comparable progress in connecting the wonderful ideas we have to explain the
early universe to concrete fundamental physics models. One of our hopes in these lectures is to encourage the dialogue between cosmology, particle physics, and string theory that will be needed to
develop such a connection.
In this paper, we have combined material from two sets of TASI lectures (given by SMC in 2002 and MT in 2003). We have taken the opportunity to add more detail than was originally presented, as well
as to include some topics that were originally excluded for reasons of time. Our intent is to provide a concise introduction to the basics of modern cosmology as given by the standard "
In Lecture 1 we present the fundamentals of the standard cosmology, introducing evidence for homogeneity and isotropy and the Friedmann-Robertson-Walker models that these make possible. In Lecture 2
we consider the actual state of our current universe, which leads naturally to a discussion of its most surprising and problematic feature: the existence of dark energy. In Lecture 3 we consider the
implications of the cosmological solutions obtained in Lecture 1 for early times in the universe. In particular, we discuss thermodynamics in the expanding universe, finite-temperature phase
transitions, and baryogenesis. Finally, Lecture 4 contains a discussion of the problems of the standard cosmology and an introduction to our best-formulated approach to solving them - the
inflationary universe.
Our review is necessarily superficial, given the large number of topics relevant to modern cosmology. More detail can be found in several excellent textbooks [1, 2, 3, 4, 5, 6, 7]. Throughout the
lectures we have borrowed liberally (and sometimes verbatim) from earlier reviews of our own [8, 9, 10, 11, 12, 13, 14, 15].
Our metric signature is - + + +. We use units in which c = 1, and define the reduced Planck mass by M[P] G)^-1/2 ^18 GeV. | {"url":"http://ned.ipac.caltech.edu/level5/Sept03/Trodden/Trodden1.html","timestamp":"2014-04-19T15:06:33Z","content_type":null,"content_length":"4825","record_id":"<urn:uuid:832034d2-4cb0-4c2c-aa3f-b3e74f47a76a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
Holden, MA Algebra 2 Tutor
Find a Holden, MA Algebra 2 Tutor
...I won the Bronze medal in the National Olympic Math Contest (primary section) in China. I taught my niece math and English by chance in the summer of 2004. What surprised me was she really
made big progress.
11 Subjects: including algebra 2, geometry, accounting, Chinese
...Focusing on the subject matter, I would strive to have students understand the fundamentals of the subject, with the inclusion of real world (and personal) experience and discussion to
maximize their focus. I am enthusiastic and patient, and will let it show in all areas of my teaching. Thank you for your consideration.
19 Subjects: including algebra 2, physics, precalculus, geometry
...To love writing, reading, doing math, studying science, for the sake of learning the material, as well as for the real benefits of mastering each subject. I do not believe there are any “C”
students, only students who have not been taught in a way they can understand. Math, for example, is one of my strengths.
25 Subjects: including algebra 2, reading, English, ESL/ESOL
...Thus I have an informed perspective regarding both teaching and application of these disciplines. Recently I have been accepting some on-line tutoring requests in order to evaluate the Wyzant
on-line tutoring facility, which is in beta development, and assess its feasibility for my content. It ...
7 Subjects: including algebra 2, calculus, physics, algebra 1
...I enjoy helping people with basic math skills and algebra. I believe everyone learns in a different way and I will tailor the tutoring experience to each student and their needs. With
children, I will aim to make it fun with games and everyday things to help with math and English.
10 Subjects: including algebra 2, English, reading, writing
Related Holden, MA Tutors
Holden, MA Accounting Tutors
Holden, MA ACT Tutors
Holden, MA Algebra Tutors
Holden, MA Algebra 2 Tutors
Holden, MA Calculus Tutors
Holden, MA Geometry Tutors
Holden, MA Math Tutors
Holden, MA Prealgebra Tutors
Holden, MA Precalculus Tutors
Holden, MA SAT Tutors
Holden, MA SAT Math Tutors
Holden, MA Science Tutors
Holden, MA Statistics Tutors
Holden, MA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Auburn, MA algebra 2 Tutors
Boylston algebra 2 Tutors
Clinton, MA algebra 2 Tutors
Gardner, MA algebra 2 Tutors
Jefferson, MA algebra 2 Tutors
Leicester, MA algebra 2 Tutors
Millbury, MA algebra 2 Tutors
Northborough algebra 2 Tutors
Paxton, MA algebra 2 Tutors
Princeton, MA algebra 2 Tutors
Rutland, MA algebra 2 Tutors
Shrewsbury, MA algebra 2 Tutors
Spencer, MA algebra 2 Tutors
Sterling, MA algebra 2 Tutors
West Boylston algebra 2 Tutors | {"url":"http://www.purplemath.com/Holden_MA_algebra_2_tutors.php","timestamp":"2014-04-18T00:39:20Z","content_type":null,"content_length":"23976","record_id":"<urn:uuid:ef2f92bc-8f0b-4d96-be39-52763380676f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Entropy
Replies: 4 Last Post: Dec 13, 2008 6:57 PM
Messages: [ Previous | Next ]
Frozz Entropy
Posted: Oct 5, 2008 6:23 PM
Posts: 8
Registered: 10/5/08 Hi. I am a little confused about the entropy problem.
How many bits of information are required to express 32 equally probable alternatives? Is it
2^(5*32) = 2^160
160 bits of information
Is the entropy 5 or 2.32? | {"url":"http://mathforum.org/kb/thread.jspa?threadID=1841874","timestamp":"2014-04-18T00:51:13Z","content_type":null,"content_length":"20508","record_id":"<urn:uuid:161ad5b5-73c6-4918-8282-160db400fb58>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00113-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 1. Structure of the network matrix
Plate 1. Diagnostic flow chart for resolving computational difficulties with FEQ
Many different conditions may cause computational failure or difficulty in FEQ simulations. Computational failure means FEQ or FEQUTL stops running before all the computations are completed as
directed. The failure may result from the time step decreasing to a size less than the specified minimum time step, so that FEQ stops running. FEQ also will stop if there is severe computational
problem, such as a division by zero. FEQ will stop if a time-series table or file does not extend far enough for the iterative computations. Computational difficulty is less well defined. However,
there are cases in which the FEQ simulation continues, but the time step, although greater than the user-established minimum value, is too small to make reasonable progress toward completion of the
computations. The run time will be unreasonably long for the simulated problem. More seriously, long run times are almost always a sign of some problem in the model representation that needs to be
The unsteady-flow simulation of every new flood wave has the potential to encounter problems with hydraulic features that were not apparent with earlier flood waves. A model that has successfully
computed 25 different flood waves may fail to complete the computations for the 26th flood wave. The reason may be that no two flood waves are exactly alike and, therefore, each new flood wave has
the potential of making use of some function in a new argument range. Also the pattern of flows for the new flood wave may differ across the hydraulic system, which may introduce computational
difficulties not experienced in previous flow simulations. These effects can be minimized by attempting to reduce the frequency of abrupt changes in values in any function involved in the
computations. The more abrupt the change, the more likely are computational problems at that change; therefore, unnecessary abrupt changes should not be introduced.
A stepwise development of the unsteady-flow model representation of the simulated stream is assumed. This means that the complete model input has not been prepared before making test runs with a
simplified version. This greatly reduces the probability of encountering so many problems in the initial runs that the causes cannot be determined. The model should start out simple even though it is
not fully representative of the problem being considered. Once the simple model is debugged, a series of refinements is begun, with one or more test runs made for each refinement to determine the
problems. This makes it easier to find the source or sources of the problems.
A step-by-step process is presented here to help find and solve the problem. These are guidelines only, and there is no guarantee that the solution will be found immediately. It is rare for a model
to run to completion on the first try. Unsteady-flow computations are highly complex, and not tolerant of even minor errors in the model representation. Small and simple models sometimes do run to
completion after the input-processing errors are fixed, but it is normal to make several runs before the desired computations are completed. Even after much user experience in unsteady-flow
applications, there will still be computational problems that present a challenge. However, there is always a solution for every problem: not always as elegant or exact a solution as wanted, but one
that will be acceptable given the design constraints.
[Begin] [Table of Contents] [Back to FEQ Web Resources] | {"url":"http://il.water.usgs.gov/proj/feq/appendix_a/trouble.html","timestamp":"2014-04-16T22:22:30Z","content_type":null,"content_length":"6063","record_id":"<urn:uuid:5aa7214c-1956-4188-b470-e6f1484678d7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00371-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polynomial Graphing help
Join Date
Dec 2011
Rep Power
Hey everyone. I'm working on a graphing program for my class that is supposed to graph a polynomial based on user input. What I'm having trouble with is calling my constructor class into my
test class correctly. I just know I'm not doing it correctly, but I'm stuck and need some help. If it helps, here's everything that my programs supposed to do
The PolyGrapher class
This class must contain methods with the following signatures:
public void plot(): When invoked, this method
Clears all existing graphs off the screen
prompts the user for the values of the coefficients for a polynomial
instantiates a polynomial with the data obtained in the previous step
prompts the user for the range [a, b] over which the polynomial is to be evaluated
invokes getGraphData() with the polynomial, range and number of points to plot and returns a data array with yMin, yMax and sf.
invokes graph() with the polynomial, data array with yMin, yMax and sf, and a color not used for the grid lines or axes.
public double[] getGraphData(Polynomial p, double a, double b, int nPoints) When invoked, this method determines the values nPoints, yMin, yMax, and sf
Evaluates the polynomial at the appropriate values for x compute maximum and minimum values of the function on [a, b].
Computes the scaling factor sf. sf = graphHeight / ( max - min ).
public void plotAndDifferentiate(): When invoked, this method
invokes plot()
invokes the current polynomial's differentiate method to get a new polynomial, the derivative of the current polynomial
invokes graph() with the new polynomial, same data, and a new color.
private double[] graph(Polynomial p, double[] data, Color c): When invoked, this method displays the line graph of the polynomial in an 800x600 image plotted from a through b on nPoints using
the color c.
Evaluates the polynomial at the appropriate values for x
Displays the line graph of the polynomial in an 800x600 image.
Returns the scale factor and minimum value of the plot.
Other methods as desired to make your code more readable/useable.
The Polynomial class
This class models a polynomial and must contain methods with the following signatures:
A constructor: public Polynomial(double [] coefficients)
public double evaluate(double x) - returns the value of the polynomial at the specified value for x.
public Polynomial differentiate() - returns a new polynomial, the derivative of this polynomial.
public String toString() - returns a string representation of this polynomial
Here's part of my main class
Java Code:
import algoritharium.*;
import java.util.Scanner;
import java.awt.Color;
public class Polygrapher
public static void main(String[] args)
new ImageViewer();
double aRange, bRange, x;
int function;
double[] c = new double[4];
public void createGraph(int w, int h)
Image img = ImageViewer.getImage();
Color[][] c = new Color[h][w];
for(int ci=0;ci<w;ci++)
for(int cj=0;cj<h;cj++)
int si=ci*img.getWidth()/w;
int sj=cj*img.getHeight()/h;
Color color = img.getPixelColor(si,sj);
public void plot()
Scanner input = new Scanner(System.in);
System.out.print("Coefficient for x^3: ");
c[0] = input.nextInt();
System.out.print("Coefficient for x^2: ");
c[1] = input.nextInt();
System.out.print("Coefficient for x: ");
c[2] = input.nextInt();
System.out.print("Constant: ");
c[3] = input.nextInt();
function = (int)(c[0]*(x*x*x) + c[1]*(x*x) + c[2]*x+c[3]);
//Polynomial[] p = new Polynomial[function]; ?????
System.out.print("Starting x: ");
aRange = input.nextInt();
System.out.print("Ending x: ");
bRange = input.nextInt();
graph(p, aRange, Color.GREEN);
public double[] getGraphData(Polynomial p, double a, double b, int nPoints)
double max;
double min;
double graphHeight=600;
double sf=graphHeight/(max-min);
//nPoints=(int)((max-min)*sf); ????????
I know the getGraphData method is all wrong right now so don't mind that. But since the created data type "Polynomial" is one of the parameters I need to know how to invoke it correctly.
and here is my constructor class:
Java Code:
public class Polynomial {
private double[] c;
private int n;
public Polynomial (double [] c){ // constructor
c = new double [c.length];
n = c.length -1;
for (int i =0; i < c.length; i++)
c[i] = c[i];
public String toString() {
if (n ==0)
return "" + c[0];
if (n==1)
return c[1] + "x + " + c[0];
String s = c[n] + "x^" + n;
for (int i = n-1; i >= 0; i--) {
if (c[i] == 0)
else if (c[i] > 0)
s += " + " + (c[i]);
else if (c[i] < 0)
s += " - " + (-c[i]);
if (i == 1)
s += "x";
else if (i > 1)
s += "x^" + i;
return s;
public double evaluate(double x){
double y=0.0;
int i;
for (i = n; i >= 0; i--)
y = c[i] + x * y;
return y;
public Polynomial differentiate(){
double [] deriv;
if (n == 0) {
deriv = new double [1];
deriv [0] = 0;
else {
deriv = new double [c.length - 1];
for (int i = 0 ; i < deriv.length; i++)
deriv [i] = (i + 1) * c[i + 1];
return new Polynomial(deriv);
Last edited by captain; 12-01-2011 at 09:23 AM. | {"url":"http://www.java-forums.org/new-java/52041-polynomial-graphing-help.html","timestamp":"2014-04-21T00:41:03Z","content_type":null,"content_length":"77372","record_id":"<urn:uuid:55ac57f7-5e51-4aa4-87b2-8bb5d0998605>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
FEBRIANI . T , CANDRA (2009) MATRIKS TOEPLITZ NORMAL BERSERTA KLASIFIKASINYA. Other thesis, University of Muhammadiyah Malang.
MATRIKS_TOEPLITZ__NORMAL_BERSERTA_KLASIFIKASINYA.pdf - Published Version
Download (49kB) | Preview
On Toeplitz matrix every entry of the diagonal is constant. One of division of Toeplitz matrix that interest to learn is normal matrix Toeplitz. Toeplitz matrix that have order told normal, if ,where
is transpose conjugate of. The evidance of theory indicated that every normal Toeplitz matrix classified to type I and type II. Every normal real toeplitz matrix must fulfill one of four types:
Symmetry, Skew-Symmetry, Circulate or Skew-Circulated. Here used polynomial trigonometry at complex case and polynomial of algebra at real case.
Actions (login required) | {"url":"http://eprints.umm.ac.id/2353/","timestamp":"2014-04-16T16:15:21Z","content_type":null,"content_length":"18170","record_id":"<urn:uuid:03bd474f-bc47-4482-865b-327d1a3d9aad>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tensor Calculus in Sage
This page arose out of a thread at sage-devel on the use of differential forms in Sage. Differential forms have been mentioned on the mailing list a few times before, and in the current discussion a
number of interesting packages for tensor calculus were mentioned, which are listed here.
This list is by no means complete, so please feel free to edit.
As tensor calculus is a vast subject, at some stage we will want to have a roadmap of which tasks to handle first, benchmarks, and useful applications. See this paper for some real-world
Forms/Tensor packages
Related code
Sage code
Related Sage discussions
There are a few Sage projects in the works that might be interesting in the context of differential forms and tensor calculus. A quick search brings up the following.
Other discussions: | {"url":"http://wiki.sagemath.org/tensorcalc","timestamp":"2014-04-18T08:49:39Z","content_type":null,"content_length":"18143","record_id":"<urn:uuid:9383dbec-2f37-4472-8c88-1dbd78dbc911>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Robust Coordinated Formation for Multiple Surface Vessels Based on Backstepping Sliding Mode Control
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 681319, 10 pages
Research Article
Robust Coordinated Formation for Multiple Surface Vessels Based on Backstepping Sliding Mode Control
^1College of Automation, Harbin Engineering University, Harbin, Heilongjiang 150001, China
^2University of Duisburg-Essen, 47057 Duisburg, Germany
Received 13 June 2013; Accepted 25 July 2013
Academic Editor: Lixian Zhang
Copyright © 2013 Mingyu Fu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
We investigate the problem of coordinated formation control for multiple surface vessels in the presence of unknown external disturbances. In order to realize the leaderless coordinated formation and
achieve the robustness against unknown external disturbances, a new robust coordinated formation control algorithm based on backstepping sliding mode control is proposed. The proposed coordinated
control algorithm is achieved by defining a new switched function using the combination of position tracking error and cross-coupling error. Particularly, the cross-coupling error is defined using
velocity tracking error and velocity synchronization error so as to be applicable for sliding mode controller design. Furthermore, the adaptive control law is proposed to estimate unknown
disturbances for each vessel. The globally asymptotically stability is proved using the Lyapunov direct method. Finally, the effectiveness of the proposed coordinated formation control algorithm is
demonstrated by corresponding simulations.
1. Introduction
Recently, coordination and consensus of multiagent systems have received much attention, and the network-induced constraints are discussed [1]. Many studies on coordination control issues have been
widely reported in the existing literature due to the widespread applications, such as multiple mobile robots, spacecraft formation, networked sensors, vehicles coordination [2]. Particulary, the
applications of formation control at sea are increasing, which can be found within both civil and military operations. For example, a group of surface vessels travel in formation structures to
perform the seabed mapping operations, the collective range of sensor equipment can be maximized, ensuring that larger areas can be covered in shorter time compared with a single vessel. And another
example is underway replenishment of vessels which is typically performed by one or more supply vessels lining up at the side of the receiving vessel, after which all vessels strive to maintain equal
and constant forward speed and bearing while supplies are being transferred across messenger lines [3]. The complicated operations often cannot be carried out through only one vessel.
In a word, multiple vessels work together to improve performance and reduce fatigue and difficulty for the people involved. What is more, compared with a single vessel, multiple vessels perform the
complicated tasks with less time and cost in practical maritime. So this paper will focus on the coordinated formation of multiple surface vessels; And the formation control in this paper aims at the
surface vessels, then the robustness to environmental disturbances is highly important when considering the marine and offshore applications. So the study on the robust coordination control algorithm
for multiple surface vessels is significative.
With respect to the coordination control issues of multiple surface vessels, abundant studies have been widely reported in the existing literature. There are several typical approaches used to design
the coordination formation controller. For example, Lagrangian formation method [4], null-space-based behavioral control [5], nonlinear model predictive control [6], and graph theory [7]. Meanwhile
the problem of coordinated path following for multiple vessels also has been discussed in the following references [8–10]. In recent years, the formation control of multiple vessels is researched
using the passivity-based control method included in the synchronization control approach [11–13]. A common trait in the aforementioned results is that the coordinated tracking controller is designed
by assuming that motions of vessels are disturbance-free. However, surface vessels often encounter external disturbances such as wind, wave, and current. The external environment disturbances are
difficult to model well and truly because of the disturbances varying with weather conditions and depth of water area change. So the coordinated formation algorithm for multiple surface vessels
should be robust to unknown disturbances. The adaptive control is helpful for solving this problem [14, 15].
This paper develops a coordinated formation control algorithm based on backstepping sliding mode control approach. The sliding mode control possesses the robustness to system uncertainty and external
disturbance as a result of the definition of the switched function [16]. And the stabilization for the switched systems are discussed in [17, 18]. For a single surface vessel, the sliding mode
control method can achieve robustness to the environment disturbances [19, 20]. Furthermore, the sliding mode control is also used to design the coordinated formation controller as in reference [21,
22]. And the backstepping method can determine the appropriate Lyapunov function systematically and simply, which makes the distributed robust and adaptive redesign implementable. The integration of
backstepping and sliding mode control will centralize both the advantages of the two control schemes [23]. Backstepping sliding mode control is used to solve the coordinated formation problem for
multi-agent systems in [24, 25]. The aforementioned studies on sliding mode formation control are all based on the leader-follower strategy. However, the leaderless strategy is applicable widely
because of most practical task without emphasizing which vessel is important [26]. The cross-coupling synchronization approach is propitious to the leaderless formation controller design. In
addition, the synchronization control approach has a simple control structure and convenient implementation capability compared to existing formation control approach. The cross-coupling
synchronization error can be convenient for defining switched function in the sliding mode controller design [27]. However, the cross-coupling error of [27] is composed of position tracking error and
integral of synchronization error, which cannot be applicable to design sliding mode controller. This motivates that the cross-coupling error is redefined to be applicable to the whole coordinated
controller design.
In this paper, a robust coordinated formation controller based on backstepping sliding mode control is proposed for multiple surface vessels. And a new switched function is defined using a new
cross-coupling error to achieve leaderless coordination between these vessels. And the cross-coupling error is defined using the velocity error and the synchronization velocity error. This will be
applicable to the sliding mode controller design. Furthermore, with respect to the unknown disturbances, the adaptive control law is proposed to improve the robust coordinated formation control
algorithm. The rest of this paper is organized as follows. Section 2 introduces the basic notations for the graph theory and establishes the vessel model. Section 3 describes the proposed
coordination control algorithm in detail. Section 4 describes robust coordinated tracking controller. The simulation of the proposed coordination algorithm for five vessels is carried out, and the
validity of the proposed coordinated control algorithm is demonstrated in Section 5. At last, we draw conclusions in Section 6.
2. Preliminaries
2.1. Notations
In this section, several basic concepts about the directed connected graph are given. A directed graph G consists of the pair . is a set of vertices; is a set of edges. if information flows from
vertex to vertex . If any two distinct vertices of a directed graph can be connected through a directed path, then the directed graph is called strongly connected. The adjacent matrix of the directed
graph is denoted as , which is defined as diagonal entries 0 and off-diagonal entries if and 0 otherwise. And the degree matrix is defined with off-diagonal entries 0 and diagonal entries . Then the
Laplacian matrix is obtained as . That is , and , .
In this paper, the communication topology between these vessels is described by a strongly connected graph. Each vertex in the graph represents a vessel in the group. The edges represent information
exchange links by available communication.
Define the Kronecker product of two matrices and as
Theorem 1 (see [28]). Let be an equilibrium point for , and let be a domain containing . Let be a continuously differentiable function such that for all and for all , where , , and are continuous
positive definite functions on . Then, is uniformly asymptotically stable. If and is radially unbounded, then is globally uniformly asymptotically stable.
Lemma 2 (Barbalat's lemma [28]). Let be a uniformly continuous function on . Suppose that exists and is finite. Then, as .
2.2. Mathematical Vessel Model
The vessel model can be divided into two parts: the kinematics and nonlinear dynamics. Generally, only the motion in the horizontal plane is considered for the surface vessel. The elements
corresponding to heave, roll, and pitch are neglected. The dynamic model for the th surface vessel can be represented by the following 3 degrees of freedom (DOF) [29]: where denotes the north
position, east position, and orientation which are decomposed in the earth-fixed reference frame, and denotes the linear surge velocity, sway velocity and angular velocity, which are decomposed in
the body-fixed reference frame. is the transformations matrix from the body-fixed reference frame to the earth-fixed reference frame, the form of which is as follows: Then we can know that the
transformations matrix satisfies the following properties as , for all . denotes the system inertia mass matrix including added mass which is positive definite. and denote the Corioliscentripetal
matrix and damping matrix, respectively. The detailed representation of the previous three-system matrix can be found in [29]. is the vector of forces and torques input from the thruster system. is
the vector of external environment forces and torques input which are generating by wind, wave and current.
In order to design the backstepping sliding mode controller, we transform the vessel model as following. Equation (3) can be transformed as By differentiating , we have , where where is the angular
velocity in the body-fixed reference frame. Take the derivative of (6), we can obtain that: Taking (6) and (8) into the vessel dynamic model (4) can yield The above equation can be written as where
3. Coordinated Formation Controller Design
In this section, we will design the coordinated trajectory tracking controller for multiple surface vessels based on backstepping sliding mode control method. And we assume that all the vessels are
disturbance-free. The detailed procedure of controller design is as follows.
3.1. Formation Setup
This paper considers a fleet of vessels to perform the desired coordination formation task. And each vessel in the formation is identified by the index set . As in [12], the desired formation is
established by defining the formation reference vector for each vessel, which is denoted as , where , , are constant, respectively, and . Then the formation reference point for each vessel is given
by: . If we assume the desired trajectory of the formation point is denoted as , where , , are sufficiently smooth functions, and . That means that the vessel direction is chosen as the tangential
vector of the respective desired trajectory. We can know that the coordinated formation is achieved if all the formation reference points of the group of vessels are synchronized; that is .
3.2. Formation Controller Design
The proposed coordinated formation controller for multiple surface vessels is designed using the backstepping sliding mode control approach, and a new switched function is defined to accomplish the
sliding mode controller.
Define the position tracking error of the formation reference point for each vessel as If we define the new variable as , take the derivative of , we can obtain that Define the following stabilizing
function for each vessel as where is a diagonal positive definite matrix.
Define the velocity error as ; the form of which is Then we can obtain With the form of vessel model, then we have In light of (16), we can obtain that Then (17) can be calculated as follows For
representing conveniently, we define ; ; ; ; ; ; ; So the error dynamics of multiple surface vessels can be written in terms of matrix and vector: Define the synchronization velocity error vector for
these vessels as where is the Laplacian matrix of the communication topology graph for these vessels.
Define the cross-coupling error using the velocity error and the synchronization velocity error; the form of which is where is a diagonal positive definite matrix.
The switched function is defined using the position tracking error and the cross-coupling error. The detailed form is as follows where is a diagonal positive definite matrix.
If we assume these vessels are disturbance-free; that is , then the control input can be chosen as follows where is the equivalent control input, and is the switch control part of the backstepping
sliding mode control input. The detailed expressions are as follows: where , are a diagonal positive definite matrix, respectively. And is sign function. And .
Remark 3. The control input vector for each vessel which is disturbance-free can be written as Define The detailed expressions of the control input for each vessel are as follows: where the switched
function is defined as where is the adjacency matrix for the communication topology graph. and if information flow from vessel to vessel and 0 otherwise, for all .
Theorem 4. Consider a group of vessels to perform the coordination formation task; the vessel model is described as (3) and (4), and we assume that these vessels are disturbance-free. With the
distributed coordinated formation control law (26) and (27), the following conditions are satisfied;(i)all the matrices , , and are diagonal positive definite;(ii), where ;(iii)the matrix is diagonal
positive definite and small enough. Then the position tracking error, the velocity error and their synchronization error, approach zero asymptoticallyl; that is, these vessels can realize the
coordination formation asymptotically.
Proof. Define the first Lyapunov function as Differentiating with respect to time, then we have Define the second Lyapunov function as Take the time derivative of the above equation as With the
control input (26) and (27), we can obtain that where denotes the absolute value of the variable of .
If we define , then we choose matrix as If we define , then we can obtain that Then (36) can be written as If we choose the matrix as small enough, then will be positive definite. All the matrices ,
, and are diagonal positive definite, if they are chosen to satisfy ; then we can guarantee that the matrix is positive definite. If we define , it is obvious that the bounded limit of exists. From
the front definition, we can know that , , and are bounded. With (16), we can know that is bounded. In terms of (22), (26), and (27), is bounded. Then and are bounded. Due to the definition of , and
are bounded. Then we can get that is also bounded; then is uniformly continuous. With Barbalats lemma, as . Then , , as . With (23), we can get . Then the synchronization error for each vessel is
Because is constant, then we can obtain that .
Though the following calculation where . From the above equation, we can know that it is a linear exponential system with the input as , and , and is bounded; then we can obtain that ; that is .
According to , then . Finally, we can know that . So the group of vessels achieve the coordinated tracking while holding the desired formation.
4. Robust Formation Controller Design
In this section, we will design the robust coordinated controller for multiple vessels in the presence of external disturbances. This section will be divided into two parts. The first part is that
the upper bounded of the disturbance is known in advance. The second part is that the upper bounded is unknown. For the second part, the adaptive control law is designed to estimate the external
disturbances. In this section, for a vector , the absolute value of the vector is denoted as .
If the external disturbance for each vessel is bounded and the upper bounded satisfy , where is a positive constant vector, and for multiple vessels, the vector form is . If we choose the control
input as where and are the same with the definition in the front section and is the control input to compensate for the external disturbances, the detailed representation is For each vessel, the
control input to compensate the external disturbance is as follows
Theorem 5. Consider vessels to perform the coordination formation task, the vessel model is described as (3) and (4). with the distributed coordinated formation control law (42), (27), and (43), and
the conditions in Theorem 4 are satisfied, then the position tracking error, the velocity tracking error and their synchronization error, are asymptotically stable; that is, these vessels can realize
the coordination formation asymptotically.
Proof. The proof procedure in this section is similar to the front section. The main difference is that the external disturbances are considered in this section.
Define the same Lyapunov function with the front section as Taking the time derivative of the above equation as with the control input (42), (27), and (43) yield: From the above inequality, we can
see that the same results with the front section are achieved; then we can prove that the position tracking error, velocity tracking error, and their synchronization error are asymptotically stable
according to the proof procedure in the front section.
If the external disturbances are unknown in advance, and we assume that the external disturbances vary slowly; that is , then we adopt the adaptive control to estimate the disturbances. Then control
law also includes three parts, which is written as where and is the estimate value of the external disturbances. The adaptive control law is chosen as follows where is a positive real number.
Theorem 6. Consider a group of vessels to perform the coordination formation task, the vessel model is described as (3) and (4). With the coordinated formation control law (42), (27), and (49), and
the adaptive control law (50) and the conditions in the Theorem 4 are satisfied, then the position tracking error, the velocity tracking error, and their synchronization error are asymptotically
stable; that is these vessels can realize the coordination formation asymptotically.
Proof. Define the new Lyapunov function as where the definition of is same as the front one, and is defined as the estimated error, the form of which is , where is the estimated value of the
Take the time derivative of the previous equation as From the above equality, we can see that the same results with the front section are achieved; then we can prove that the position tracking error,
velocity tracking error and their synchronization error, are asymptotically stable according to the proof procedure in the front section.
5. Simulation Results
In this section, experimental simulations are carried out to evaluate the effectiveness of the proposed coordinated formation control algorithm. Five marine vessels are considered to perform the
coordinated tracking task. Detailed parameters of these vessels are presented in [12]. The proposed algorithm has achieved the leaderless coordination based on the graph theory. In this experiment,
the topology graph of communication among these five vessels is chosen as Figure 1.
And the Laplacian matrix of the communication topology graph is as follows
The initial positions of the five vessels are , , , , and , respectively. In order to evaluate the performance of the coordinated tracking, the desired formation pattern of the coordinated formation
controller is described by , , , , and . The desired trajectory for all the formation points is chosen as , and the detailed forms are , , and .
Although the proposed coordination algorithm is robust to unknown external disturbance, the disturbance model in the simulation can be chosen as a fixed term instead of the unknown disturbance. We
assume that the vessels encounter the wind, wave, and current. And the wind is assumed to be fixed direction and fixed velocity; then the disturbance of wind is a constant; the wave and current is
assumed to be the sine wave with a fixed frequency at one time. So the external disturbances can be chosen as
The control parameters of the coordinated formation controller are chosen as , , , and .
The simulation results are shown from Figure 2 to Figure 9. Figure 2 shows the movements for these vessels in the plane. The heading change curve of each vessel is shown in Figure 3. Figures 4, 5,
and 6 show the surge velocity, the sway velocity, and the angular velocity of each vessel during the coordinated control process, respectively. Figures 7, 8, and 9 show the surge force, sway forces,
and the heave torques applied to the vessels, respectively.
We can see that these vessels realized the coordinated tracking task from Figures 2 and 3. From Figures 4, 5, and 6, the velocities of these vessels achieve consensus as a whole, and the velocities
cannot achieve consensus when the vessels move to the inflexion of the respective path curve. The phenomenon of the surge velocity is obvious in the experimental simulations due to the vessels move
along the tangent directions of the desired trajectory. This phenomenon appears as a result of vessel speed regulated to maintain the desired formation pattern at the inflexion of the respective path
curve. From Figures 7, 8, and 9, we know that the forces and torques for the group of vessels approach consensus, in a way. This is the result of that these vessels are uniform in these experimental
simulations. With the analysis of the simulation results, we can conclude that these vessels can accomplish coordinated trajectory tracking task while keeping the desired formation. It means that the
proposed coordination control algorithm is effective.
6. Conclusion
This paper has proposed a new backstepping sliding mode coordinated formation control algorithm for multiple surface vessels in the presence of external environmental disturbances. The proposed
coordinated formation controller for these vessels is designed by defining a new switched function. And a new cross-coupling error is defined using the velocity error and velocity synchronization
error to be applicable for the backstepping sliding mode controller. In addition, the adaptive control law is also designed to compensate for the external disturbances and then achieve the
robustness. Finally, the effectiveness of the proposed coordination control algorithm is demonstrated by experimental simulations.
1. L. Zhang, H. Gao, and O. Kaynak, “Network-induced constraints in networked control system-a survey,” IEEE Transactions on Industrial Informatics, vol. 9, no. 1, pp. 403–416, 2013.
2. W. Ren, R. W. Beard, and E. M. Atkins, “A survey of consensus problems in multi-agent coordination,” in Proceedings of the American Control Conference (ACC '05), pp. 1859–1864, June 2005. View at
3. A. Aguiar, J. Almeida, and M. Bayat, “Cooperative autonomous marine vehicle motion control in the scope of the eugrex project: theory and practice,” in Proceedings of the IEEE/MTS Conference on
Oceans, pp. 1–10, 2009.
4. I.-A. F. Ihle, J. Jouffroy, and T. I. Fossen, “Formation control of marine surface craft: a lagrangian approach,” IEEE Journal of Oceanic Engineering, vol. 31, no. 4, pp. 922–934, 2006. View at
Publisher · View at Google Scholar · View at Scopus
5. F. Arrichiello, S. Chiaverini, and T. I. Fossen, “Formation control of underactuated surface vessels using the null-spaee-based behavioral control,” in Proceedings of the IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS '06), pp. 5942–5947, October 2006. View at Publisher · View at Google Scholar · View at Scopus
6. F. Fahimi, “Non-linear model predictive formation control for groups of autonomous surface vessels,” International Journal of Control, vol. 80, no. 8, pp. 1248–1259, 2007. View at Publisher ·
View at Google Scholar · View at MathSciNet
7. J. Almeida, C. Silvestre, and A. M. Pascoal, “Cooperative control of multiple surface vessels with discrete-time periodic communications,” International Journal of Robust and Nonlinear Control,
vol. 22, no. 4, pp. 398–419, 2012. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
8. I.-A. F. Ihle, M. Arcak, and T. I. Fossen, “Passivity-based designs for synchronized path-following,” Automatica, vol. 43, no. 9, pp. 1508–1518, 2007. View at Publisher · View at Google Scholar ·
View at Zentralblatt MATH · View at MathSciNet
9. E. Børhaug, A. Pavlov, E. Panteley, and K. Y. Pettersen, “Straight line path following for formations of underactuated marine surface vessels,” IEEE Transactions on Control Systems Technology,
vol. 19, no. 3, pp. 493–506, 2011. View at Publisher · View at Google Scholar · View at Scopus
10. J. Ghommam and F. Mnif, “Coordinated path-following control for a group of underactuated surface vessels,” IEEE Transactions on Industrial Electronics, vol. 56, no. 10, pp. 3951–3963, 2009. View
at Publisher · View at Google Scholar · View at Scopus
11. Y. Wang, W. Yan, and J. Li, “Passivity-based formation control of autonomous underwater vehicles,” IET Control Theory & Applications, vol. 6, no. 4, pp. 518–525, 2012. View at Publisher · View at
Google Scholar · View at MathSciNet
12. C. Thorvaldsen and R. Skjetne, “Formation control of fully-actuated marine vessels using group agreement protocols,” in Proceedings of the 50th IEEE Conference on Decision and Control and
European Control Conference, pp. 4132–4139, 2011.
13. M. Fu and J. Jiao, “A hybrid approach for coordinated formation control of multiple surface vessels,” Mathematical Problems in Engineering, vol. 2013, Article ID 794284, 8 pages, 2013. View at
Publisher · View at Google Scholar
14. I. A. Gravagne, J. M. Davis, and J. J. DaCunha, “A unified approach to high-gain adaptive controllers,” Abstract and Applied Analysis, vol. 2009, Article ID 198353, 13 pages, 2009. View at
Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
15. S. Yin, S. Ding, and H. Luo, “Real-time implementtion of fault tolerant control system with performance optimization,” IEEE Transactions on Industrial Electronics, 2013. View at Publisher · View
at Google Scholar
16. J. Huang, H. Li, Y. Chen, and Q. Xu, “Robust position control of PMSM using fractional-order sliding mode controller,” Abstract and Applied Analysis, vol. 2012, Article ID 512703, 33 pages, 2012.
View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
17. L. Zhang and E.-K. Boukas, “Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities,” Automatica, vol. 45, no. 2, pp. 463–468, 2009. View at
Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
18. L. Zhang, P. Shi, E.-K. Boukas, and C. Wang, “${\text{H}}_{\infty }$ model reduction for uncertain switched linear discrete-time systems,” Automatica, vol. 44, no. 11, pp. 2944–2949, 2008. View
at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
19. E. A. Tannuri, A. C. Agostinho, H. M. Morishita, and L. Moratelli, “Dynamic positioning systems: an experimental analysis of sliding mode control,” Control Engineering Practice, vol. 18, no. 10,
pp. 1121–1132, 2010. View at Publisher · View at Google Scholar · View at Scopus
20. H. Ashrafiuon, K. R. Muske, L. C. McNinch, and R. A. Soltan, “Sliding-mode tracking control of surface vessels,” IEEE Transactions on Industrial Electronics, vol. 55, no. 11, pp. 4004–4012, 2008.
View at Publisher · View at Google Scholar · View at Scopus
21. F. Fahimi, “Sliding-mode formation control for underactuated surface vessels,” IEEE Transactions on Robotics, vol. 23, no. 3, pp. 617–622, 2007. View at Publisher · View at Google Scholar · View
at Scopus
22. M. Defoort, T. Floquet, A. Kökösy, and W. Perruquetti, “Sliding-mode formation control for cooperative autonomous mobile robots,” IEEE Transactions on Industrial Electronics, vol. 55, no. 11, pp.
3944–3953, 2008. View at Publisher · View at Google Scholar · View at Scopus
23. Y. Xia, Z. Zhu, and M. Fu, “Back-stepping sliding mode control for missile systems based on an extended state observer,” IET Control Theory & Applications, vol. 5, no. 1, pp. 93–102, 2011. View
at Publisher · View at Google Scholar · View at MathSciNet
24. D. Zhao and T. Zou, “A finite-time approach to formation control of multiple mobile robots with terminal sliding mode,” International Journal of Systems Science, vol. 43, no. 11, pp. 1998–2014,
2012. View at Publisher · View at Google Scholar · View at MathSciNet
25. D. Zhao, T. Zou, S. Li, and Q. Zhu, “Adaptive backstepping sliding mode control for leader-follower multi-agent systems,” IET Control Theory & Applications, vol. 6, no. 8, pp. 1109–1117, 2012.
View at Publisher · View at Google Scholar · View at MathSciNet
26. W. Ren, “Distributed leaderless consensus algorithms for networked Euler-Lagrange systems,” International Journal of Control, vol. 82, no. 11, pp. 2137–2149, 2009. View at Publisher · View at
Google Scholar · View at Zentralblatt MATH · View at MathSciNet
27. D. Sun, C. Wang, W. Shang, and G. Feng, “A synchronization approach to trajectory tracking of multiple mobile robots while maintaining time-varying formations,” IEEE Transactions on Robotics,
vol. 25, no. 5, pp. 1074–1086, 2009. View at Publisher · View at Google Scholar · View at Scopus
28. H. Khalil, Nonlinear Systems, Prentice Hall, Upper Saddle River, NJ, USA, 3rd edition, 2002.
29. T. Fossen, Marine Control Systems: Guidance, Navigation and Control of Ships, Rigs and Underwater Vehicles, Marine Cybernetics, Trondheim, Norway, 2002. | {"url":"http://www.hindawi.com/journals/aaa/2013/681319/","timestamp":"2014-04-16T22:48:57Z","content_type":null,"content_length":"616440","record_id":"<urn:uuid:9109384e-5ca6-4fe0-905a-2e0f350d6d86>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relative and Absolute Extrema of a Function
Date: 01/07/2004 at 12:27:31
From: barbara
Subject: Relative and absolute extrema
What is the difference between the absolute extrema and the relative
extrema in calculus?
Date: 01/07/2004 at 16:09:24
From: Doctor Peterson
Subject: Re: Relative and absolute extrema
Hi, Barbara.
A relative maximum is the greatest IN ITS NEIGHBORHOOD. An absolute
maximum is the greatest ANYWHERE (in the domain).
Suppose you wanted to find the highest point in your state. Every
mountain peak would be a relative maximum; it is a place where the
slope is zero (or undefined), and every point nearby is lower. The
highest of those mountains MIGHT be the highest point or absolute
miximum in the state--but not necessarily. In some states, the
boundary is on the slope of a mountain range whose peaks are in a
neighboring state; so the highest point might be on the slope of one
of those mountains. This is in fact true of my home state,
Connecticut, where a nearby mountain peak within the state is almost
as high, but is not the absolute maximum:
Here is a picture of two relative maxima and the absolute maximum in
such a case, with a finite domain:
| *
| * |
| * | * * |
| * | * * |
| | |
| *|* * | |
| * | * | |
| * | * * | |
* | * * | |
| | * * | |
| | | |
relative relative absolute
max max max
To find the absolute maximum, you have to find all relative maxima,
as well as boundary points, and determine which of these candidate
maxima is the highest.
All the same things can be said about minima.
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum
Date: 01/08/2004 at 09:35:21
From: barbara
Subject: Thank you
Thank you very much for helping me with my math question. Your answer
was very helpful, and I appreciate your time. | {"url":"http://mathforum.org/library/drmath/view/64504.html","timestamp":"2014-04-17T16:36:58Z","content_type":null,"content_length":"7622","record_id":"<urn:uuid:0f036029-a99b-4170-aa18-6bd1718d74ad>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00660-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Polynomial Algorithm for the Inference of Context Free Languages
Alexander Clark, Rémi Eyraud and Amaury Habrard
In: 9th International Colloquium on Grammatical Inference (ICGI 2008), Saint Malo, France(2008).
We present a polynomial algorithm for the inductive inference of a large class of context free languages, that includes all regular languages. The algorithm uses a representation which we call Binary
Feature Grammars based on a set of features, capable of representing richly structured context free languages as well as some context sensitive languages. More precisely, we focus on a particular
case of this representation where the features correspond to contexts appearing in the language. Using the paradigm of positive data and a membership oracle, we can establish that all context free
languages that satisfy two constraints on the context distributions can be identified in the limit by this approach. The polynomial time algorithm we propose is based on a generalisation of
distributional learning and uses the lattice of context occurrences. The formalism and the algorithm seem well suited to natural language and in particular to the modelling of first language
PDF - PASCAL Members only - Requires Adobe Acrobat Reader or other PDF viewer. | {"url":"http://eprints.pascal-network.org/archive/00004477/","timestamp":"2014-04-16T13:28:30Z","content_type":null,"content_length":"7898","record_id":"<urn:uuid:4769a717-227f-4f0b-87b0-aebc60716944>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Traffic Flow Modelling? On a stretch of single-lane road with no entrances or exits the traffic density ρ(x,t) is a continuous function of distance x and time t, for all t > 0, and the traffic
velocity ) u( ρ) is a function of density alone. Two alternative models are proposed to represent u: i)u = u_(SL)*(1- ρ^n/ρ^n_max ), where n is a postive constant ii) u = u_(SL)* In (ρ_max / ρ) Where
u_SL represents the maximum speed limit on the road and p_max represents maximum density of traffic possible on the road(meaning bumper-to-bumper traffic) a) Compare the realism of the 2 models for u
above. You should consider in particular the variations of velocity with density for each model, and the velocities for high and low densities in each case. State which model you prefer, giving
reasons. b)It is assumed that a model of the form given in case (i) with n = 2 is a reasonable representation of actual traffic behaviour on a particular road, for which the maximum speed limit is 40
m.p.h. Initially traffic is flowing smoothly along the road with a constant density (everywhere) equal to half the maximum possible density. An accident occurs – which immediately blocks the road.
Find where a car which was initially half a mile back from the accident (when the accident occurred) will come to a halt.
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50e62970e4b04bcb151680ba","timestamp":"2014-04-20T18:47:03Z","content_type":null,"content_length":"26480","record_id":"<urn:uuid:96fe320c-ce4f-4bce-880f-569d19993c97>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
Manassas, VA Algebra 1 Tutor
Find a Manassas, VA Algebra 1 Tutor
...As well as making sure that all of my students are standing on a firm mathematical foundation, I make sure to link the skills I am teaching to as many concrete applications as possible, which
often helps students process them more completely. I have been helping students, from middle school thro...
37 Subjects: including algebra 1, chemistry, reading, English
I am a current student at George Mason University studying Biology which allows me to connect to other students struggling with certain subjects. I tutor students in reading, chemistry, anatomy,
and math on a high school level and lower. I hope to help students understand the subject they are working with by repetition, memorization, and individualized instruction.
9 Subjects: including algebra 1, English, reading, anatomy
...The GED Math test includes number computations, Algebra, and Geometry. You probably remember more than you think you do. Together we can review concepts and then practice in test-like
conditions so you feel completely confident walking into your test.
10 Subjects: including algebra 1, geometry, algebra 2, GED
...I do not have any official training in cooking. However, I have been cooking on my own since I was very young. I have taught basic cooking skills and principles to friends and colleagues.
21 Subjects: including algebra 1, reading, writing, English
I am a recent college graduate currently working in biomedical research. I have experience tutoring in many subjects, but my specialty is test prep for college and medical school. I took the MCAT
in March of 2013, scoring in the 95th percentile, and I took both the SAT and ACT with scores at or above the 95th percentile.
39 Subjects: including algebra 1, Spanish, chemistry, writing
Related Manassas, VA Tutors
Manassas, VA Accounting Tutors
Manassas, VA ACT Tutors
Manassas, VA Algebra Tutors
Manassas, VA Algebra 2 Tutors
Manassas, VA Calculus Tutors
Manassas, VA Geometry Tutors
Manassas, VA Math Tutors
Manassas, VA Prealgebra Tutors
Manassas, VA Precalculus Tutors
Manassas, VA SAT Tutors
Manassas, VA SAT Math Tutors
Manassas, VA Science Tutors
Manassas, VA Statistics Tutors
Manassas, VA Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Annandale, VA algebra 1 Tutors
Burke, VA algebra 1 Tutors
Centreville, VA algebra 1 Tutors
Chantilly algebra 1 Tutors
Fairfax, VA algebra 1 Tutors
Falls Church algebra 1 Tutors
Herndon, VA algebra 1 Tutors
Manassas Park, VA algebra 1 Tutors
Mc Lean, VA algebra 1 Tutors
Reston algebra 1 Tutors
Springfield, VA algebra 1 Tutors
Stafford, VA algebra 1 Tutors
Sterling, VA algebra 1 Tutors
Vienna, VA algebra 1 Tutors
Woodbridge, VA algebra 1 Tutors | {"url":"http://www.purplemath.com/Manassas_VA_algebra_1_tutors.php","timestamp":"2014-04-18T13:37:48Z","content_type":null,"content_length":"24059","record_id":"<urn:uuid:1822e234-aa63-4d90-aad3-6aa009dda77b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
Series and Functions: Real Life Examples of Arithmetic and Geometric Series; Relationship of Series to Functions
1. 71857
Series and Functions: Real Life Examples of Arithmetic and Geometric Series; Relationship of Series to Functions
Using the index of a series as the domain and the value of the series as the range, is a series a function?
Include the following in your answer:
Which one of the basic functions (linear, quadratic, rational, or exponential) is related to the arithmetic series?
Which one of the basic functions (linear, quadratic, rational, or exponential) is related to the geometric series?
Give real-life examples of both arithmetic and geometric sequences and series. Explain how these examples might affect you personally.
Real life examples of arithmetic and geometric series and relationship of series to functions are investigated. Brief responses are given along with calculation. | {"url":"https://brainmass.com/math/algebra/71857","timestamp":"2014-04-19T09:33:48Z","content_type":null,"content_length":"25713","record_id":"<urn:uuid:24ef720e-3387-455e-a5f4-446a710f2dff>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 1995 [00020]
[Date Index] [Thread Index] [Author Index]
Re: A simple swap function
• To: mathgroup at smc.vnet.net
• Subject: [mg2226] Re: [mg2132] A simple swap function
• From: wagner at goober.cs.colorado.edu (Dave Wagner)
• Date: Tue, 17 Oct 1995 02:34:35 -0400
• Organization: University of Colorado, Boulder
In article <45cvu4$5tc at ralph.vnet.net>,
Richard Mercer <richard at seuss.math.wright.edu> wrote:
> > I think I must be missing something straightforward.
>> I want a function to swap the values of 2 lists. I
>> defined ab={a,b}, and ba={b,a}, and
>> swap[v1_,v2_]:=Module[{temp=v1},v1=v2;v2=temp;], and I
>> get an Iteration limit exceeded error. When I look at
>> ab, it looks like {Hold[a],Hold[b]} (or maybe
>> {Hold[b],Hold[a], I don't remember), and ba looks
>> opposite. When I tried to use the same function on
>> numbers, it didn't work either. What's wrong with what
>> I'm doing, and how can I do what I want to do?
>All of this is unnecessary.
>Assuming the variables a and b have been assigned values,
>the assignment
>{a,b} = {b,a}
>works. Try it! I do NOT recommend this if a and b have not been
>assigned values!
Following up to my earlier posting on this topic, which was done while
on the road and in some haste, I can now elaborate. Fleck showed that
it was impossible to write a correct swap function in Algol using the
call-by-name parameter transmission mechanism:
A.C. Fleck, 1976. On the impossibility of content exchange through
the by-name parameter transmission mechanism. ACM SIGPLAN Notices
Here is an example of what kinds of problems occur, using the
Mathematica-specific mechanism suggested by Richard Mercer above:
s = {3,1,2};
i = 2;
{i, s}
{i, s[[i]]} = {s[[i]], i};
{i, s}
The result ought to be {1, {3, 2, 2}}.
After some reflection I think that may be oversimplifying to conclude
that something that's impossible to do in Algol is also impossible to
do in Mathematica. I've thought about trying to write a correct swap
function using constructs such as Hold and With to "freeze" the
expressions being swapped. Unfortunately, you can only scope symbols,
not entire expressions, so my strategy would fail when trying to swap
s[[i]] with s[[s[[i]]]], for example. So while I won't go so far as to
state that this is impossible to do in Mathematica, it seems to be very
hard, and given Fleck's result I wouldn't waste too much time on it.
This is not to say, of course, that the {a,b} = {b,a} trick isn't useful,
as long as you realize its shortcomings.
Dave Wagner
Principia Consulting
(303) 786-8371
dbwagner at princon.com | {"url":"http://forums.wolfram.com/mathgroup/archive/1995/Oct/msg00020.html","timestamp":"2014-04-20T00:52:22Z","content_type":null,"content_length":"36667","record_id":"<urn:uuid:e0230c83-3e06-45cf-af10-f6f12b2a6f6c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
A theoretical and experimental investigation of the H_3 system
The H_3 system is the simplest triatomic neutral molecular species. It possesses only three electrons and three protons. As a result of its simplicity, the H_3 system has received a great deal of
attention in ab initio quantum mechanical as well as experimental studies.
This dissertation consists of two parts. The first part is a theoretical investigation of the H_3 molecular system. Results of the ab initio quantum mechanical calculations for the lowest three
electronic potential energy surfaces are given, as well as electronically nonadiabatic coupling elements between these states. The calculated nonadiabatic coupling elements compare well in some
regions of configuration space with previous calculations performed on this system. Discrepancies in other regions can be attributed to the method of calculation. In our study these coupling elements
were calculated by an ab initio method whereas analytic continuation was used in previous work. Calculation of the nonadiabatic coupling surfaces represents notable progress and will improve the
fidelity of dynamics calculations of the H_3 system. All 3-D quantum mechanical theoretical investigations to date invoke the Born-Oppenheimer approximation and neglect nonadiabatic coupling of the
nearby states. Although this is justified in many cases, the H_3 system exhibits a conical intersection near which this approximation breaks down. To obtain theoretical estimates of predissociative
lifetimes of excited states of the H_3 system, accurate bound state wavefunctions and energies of the excited states of H_3 and accurate differential and integral cross sections in quantum mechanical
scattering studies of the H + H_2 system above 2.75 eV, these nonadiabatic terms must be included.
The second part of this dissertation involves the development and characterization of an intense source of trihydrogen molecules. The ultimate goal of this work is to fully characterize the
metastable H_3 molecules formed in this beam and to create a source of monoenergetic trihydrogen molecules whose translational energy would be continuously tunable from ~1-12 eV. Once developed, it
could be utilized in crossed beam experiments and would enable many reactions to be studied that might not otherwise take place due to low reaction probability. The H_3 molecule in its 2p^2_zA^˝_2
electronic state is 5.85 eV higher and the 2p^2_x,_yE' repulsive ground state is 2.65 eV higher in energy than H + H_2 .^(17-20) Therefore, upon a vertical transition to the ground state, the 2p^2_zA
^˝_2 state of H_3 will liberate about 3 eV of electronic energy with the remaining energy being channeled into vibration and rotation of the H + H_2 dissociated system. In a collision with another
molecule, this energy could become available for reaction along with some fraction of the translational energy of these molecules (1-12 eV). This species can be expected to exhibit unusual dynamics,
in that it may undergo novel chemical reactions as well as unique partitioning of the available energy into electronic, vibrational, rotational and translational energy of the products.
Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: Chemistry
Degree Grantor: California Institute of Technology
Division: Chemistry and Chemical Engineering
Major Option: Chemistry
Thesis Availability: Restricted to Caltech community only
Research Advisor(s): • Kuppermann, Aron
Thesis Committee: • Unknown, Unknown
Defense Date: 24 February 1992
Record Number: CaltechTHESIS:09122011-095236680
Persistent URL: http://resolver.caltech.edu/CaltechTHESIS:09122011-095236680
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 6664
Collection: CaltechTHESIS
Deposited By: John Wade
Deposited On: 13 Sep 2011 22:20
Last Modified: 26 Dec 2012 04:38
Thesis Files
PDF - Final Version
Restricted to Caltech community only
See Usage Policy.
Repository Staff Only: item control page | {"url":"http://thesis.library.caltech.edu/6664/","timestamp":"2014-04-16T13:08:15Z","content_type":null,"content_length":"26576","record_id":"<urn:uuid:39d02a57-bfd7-47b1-a99f-3e3f6ddaa26e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: Multiple linear regression the right approach?
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: RE: Multiple linear regression the right approach?
From "William Buchanan" <william@williambuchanan.net>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Multiple linear regression the right approach?
Date Tue, 25 Jun 2013 10:48:16 -0700
Hi Simon,
You could probably get a much better response if you provided some basic
information about your data to the listserv (e.g., descriptive statistics,
what the data are, etc...). If one of your independent variables is
categorical it wouldn't have a scale that would be interpretable (e.g., if
you coded Black = 3; White = 2; Asian = 1 it doesn't mean that Whites or
Asians have less of a racial property and the numbers signify nothing); I
assume that you meant that the variable is ordinal in nature (e.g., the
numbers convey magnitude but are not necessarily proportional).
Which variable do you assume is not linearly related to your dependent
variable? Are your dependent variable and independent variables measured on
similar scales (in terms of orders of magnitude)? What does the non-linear
relationship appear to be (e.g., quadratic, cubic, quartic, something else,
It is difficult to provide any useful feedback without knowing more of these
details and any feedback at this point could lead you to the same place.
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Simon Hauburger
Sent: Monday, June 24, 2013 11:47 AM
To: statalist@hsphsun2.harvard.edu
Subject: st: Multiple linear regression the right approach?
Dear potential helpers,
I have a problem figuring out the right regression for my model:
- It has a interval dependent variable (costs in $) that looks normally
distributed, but according to shapiro-wilk test isn't
- a number of independent variables which are categorial (scale from
1-6) and interval (assets in $)
My first guess was to use a multiple linear regression, but not all of the
independent variables are linearly related to the dependent variable (tested
with cprplot lowess), even after having tried the common transformation
techniques (log, square...)
Any reommendations for my next steps? Keep trying to transform the variables
and use the multiple linear regression or try an alternative method? If so,
which method could it be? Logistic regression?
(Transformation of the dependent variable to a binary variable is
I am really confused, statistics will never become my best friend....
Thank you for your help!
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2013-06/msg01185.html","timestamp":"2014-04-21T07:15:29Z","content_type":null,"content_length":"10236","record_id":"<urn:uuid:776c480c-fb20-43ba-ad3c-aa03d87538ae>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
5 Data Layouts on Distributed Memory Machines
High Performance Fortran (HPF) [27] permits the user to define a virtual array of processors, align actual data structures like matrices and arrays with this virtual array (and so with respect to
each other), and then to layout the virtual processor array on an actual machine. We describe the layout functions f offered for this last step. The range of f is a rectangular array of processors
numbered from f can be parameterized by two integer parameters
Suppose the matrix A (or virtual processor array) is A. Choosing blocked layout, where A is broken into QR decomposition, reduction to tridiagonal form, and so on), the leftmost processors will
become idle early in the computation and make load balance poor. | {"url":"http://www.phy.ornl.gov/csep/CSEP/LA/NODE7A.html","timestamp":"2014-04-19T17:52:21Z","content_type":null,"content_length":"3529","record_id":"<urn:uuid:a1975cf1-48d0-438f-ba07-e519e956643f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Manassas, VA Algebra 1 Tutor
Find a Manassas, VA Algebra 1 Tutor
...As well as making sure that all of my students are standing on a firm mathematical foundation, I make sure to link the skills I am teaching to as many concrete applications as possible, which
often helps students process them more completely. I have been helping students, from middle school thro...
37 Subjects: including algebra 1, chemistry, reading, English
I am a current student at George Mason University studying Biology which allows me to connect to other students struggling with certain subjects. I tutor students in reading, chemistry, anatomy,
and math on a high school level and lower. I hope to help students understand the subject they are working with by repetition, memorization, and individualized instruction.
9 Subjects: including algebra 1, English, reading, anatomy
...The GED Math test includes number computations, Algebra, and Geometry. You probably remember more than you think you do. Together we can review concepts and then practice in test-like
conditions so you feel completely confident walking into your test.
10 Subjects: including algebra 1, geometry, algebra 2, GED
...I do not have any official training in cooking. However, I have been cooking on my own since I was very young. I have taught basic cooking skills and principles to friends and colleagues.
21 Subjects: including algebra 1, reading, writing, English
I am a recent college graduate currently working in biomedical research. I have experience tutoring in many subjects, but my specialty is test prep for college and medical school. I took the MCAT
in March of 2013, scoring in the 95th percentile, and I took both the SAT and ACT with scores at or above the 95th percentile.
39 Subjects: including algebra 1, Spanish, chemistry, writing
Related Manassas, VA Tutors
Manassas, VA Accounting Tutors
Manassas, VA ACT Tutors
Manassas, VA Algebra Tutors
Manassas, VA Algebra 2 Tutors
Manassas, VA Calculus Tutors
Manassas, VA Geometry Tutors
Manassas, VA Math Tutors
Manassas, VA Prealgebra Tutors
Manassas, VA Precalculus Tutors
Manassas, VA SAT Tutors
Manassas, VA SAT Math Tutors
Manassas, VA Science Tutors
Manassas, VA Statistics Tutors
Manassas, VA Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Annandale, VA algebra 1 Tutors
Burke, VA algebra 1 Tutors
Centreville, VA algebra 1 Tutors
Chantilly algebra 1 Tutors
Fairfax, VA algebra 1 Tutors
Falls Church algebra 1 Tutors
Herndon, VA algebra 1 Tutors
Manassas Park, VA algebra 1 Tutors
Mc Lean, VA algebra 1 Tutors
Reston algebra 1 Tutors
Springfield, VA algebra 1 Tutors
Stafford, VA algebra 1 Tutors
Sterling, VA algebra 1 Tutors
Vienna, VA algebra 1 Tutors
Woodbridge, VA algebra 1 Tutors | {"url":"http://www.purplemath.com/Manassas_VA_algebra_1_tutors.php","timestamp":"2014-04-18T13:37:48Z","content_type":null,"content_length":"24059","record_id":"<urn:uuid:1822e234-aa63-4d90-aad3-6aa009dda77b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about facebook on Lucky's Notes
To all Digsby users (ignore this post if you don’t use Digsby):
If you use Digsby with Facebook, you might have noticed that things behave strangely — the program pops up a window looking like this when it tries to connect to Facebook:
Then after you give it your credentials, Digsby still thinks you’re not logged in, and so on.
If you found this page via a google search, there’s a simple hack / workaround you can use to patch up this problem. Basically, instead of using the Facebook protocol to connect, we let Digsby use
the Jabber protocol as a ‘proxy’ to connect to Facebook:
1. Go to Digsby -> My Accounts and in the Add Accounts section at the top, select the Jabber icon.
2. In the Jabber ID box, put your.id@chat.facebook.com, and in the password field, put your facebook password. For example, if your facebook page is at facebook.com/yourname, your Jabber id is
3. Remove the facebook account from Digsby
At this point, you’re done: Digsby should give you no more problems about Facebook.
Warning: the following is unnecessary and experimental! It might screw up the entire Digsby installation, forcing you to reinstall!
However, you can replace the Jabber icon with the Facebook one (this is for purely cosmetic purposes):
1. Go to C:\Program Files (x86)\Digsby\res\skins\default\serviceicons (that’s the default installation path on my machine, yours may be different)
2. Delete jabber.png, duplicate facebook.png, and rename it jabber.png
3. Restart Digsby
There you have it — hack accomplished:
Facebook: Simon Says – How to set up Thrift on Windows
May 15, 2010
I’ve been doing some of the Facebook Engineering Puzzles recently, and in a previous blog post I described how to solve the problem User Bin Crash.
The problem User Bin Crash would be considered a batch problem. Such problem requires a program that takes some input, and produces some output. All the judge does is feed your program the input and
check its output.
Thrift problems are a bit different. Such a problem would be an interactive problem, where the judge program has to interact with your program.
In the facebook puzzles, Simon Says, Rush Hour, Battleship, and Dinosaur island are interactive problems. All of the rest of the problems are batch.
Interactive problems are submitted in a different way from regular problems, as well. You, the programmer, are required to make a Thrift program, and instead of sending it to run on their server, you
run your program on your own computer. The program connects to the facebook servers via Thrift. A successful program will return a special code which can be used to claim credit for a problem.
Because of the nature of interactive problems, they are somewhat difficult to set up. Thrift itself is rather new and its documentation lacking. When making my first Thrift submission, I found a few
difficulties. No adequate tutorial exists on this topic, so this post may be of some use to future puzzlers.
Simon Says is, after all, a simple test problem to help you get set up with Thrift.
This tutorial is relevant most if you are using the Java programming language, and are using Windows with Cygwin. If you’re using a different language, or a different platform, parts of this document
may not apply to you.
Cygwin is required because the thrift compiler itself does not run natively on windows. Only the base cygwin is necessary to run the thrift compiler.
Step 1: Setting up the Thrift compiler
The official Thrift package comes only with the source code, and no linux nor cygwin binaries. As building anything from source is a tricky and error-prone process, we’re not going to compile Thrift
from source (even though the official documentation recommends us to do so).
I wasn’t able to compile Thrift from source, but I found someone else’s cygwin Thrift binaries. I’ll put a link to it:
To run this in Cygwin, extract thrift.exe to /cygwin/bin/.
This Thrift binary is of version 0.20, which is not backwards compatible with 0.10. The Thrift wiki provides cygwin binaries to 0.10, which isn’t very useful for us since the code generated by 0.10
can’t be used to solve Facebook problems.
Anyways, this compiler will likely be obsolete soon, and 0.3x will likely break backwards compatability again.
The Thrift compiler compiles your thrift source into source code into a language of your choice. The outputted source code contains much of the code handling networking, protocols, etc.
To run the thrift compiler, navigate to the directory containing simonsays.thrift (while in cygwin), and run,
thrift --gen java simonsays.thrift
You should now have a directory called gen-java. At this point we just need the stuff in gen-java, and we don’t need the thrift compiler anymore. We can also switch back from the cygwin shell.
Part 2: Setting up the Java libraries
To build and run a Thrift Java program, we need to link a few libraries. The first library is the Java thrift library, which I’ve named thrift.jar.
Another dependency is SLF4J, a logging framework required by the Thrift libraries. I’m not sure why it really needs this library, but we need this to compile the code. I’ve named this library
Here’s a download link to the two jar files:
Now we’re ready to begin coding. In a new directory, place the two jar files and the gen-java folder; create your source file in this folder as well.
Part 3: Coding the problem
The problem itself is very simple. On each round, you receive a list of colors via startTurn(), and send the colors back one by one in order using chooseTurn(). When you’re done, call endTurn()
(which also tells you if there’s more rounds).
In the beginning, call registerClient() with your facebook email address; at the end, call winGame() to receive the special password.
Here’s my implementation (simonsays.java):
import org.apache.thrift.*;
import org.apache.thrift.transport.*;
import org.apache.thrift.protocol.*;
import java.util.*;
public class simonsays{
public static void main(String[] args) throws Throwable {
// Set up the thrift connection to the Facebook servers
TTransport transport =
new TSocket("thriftpuzzle.facebook.com", 9030);
TProtocol protocol = new TBinaryProtocol(transport);
SimonSays.Client client = new SimonSays.Client(protocol);
// Register client
boolean done = false;
// Retrieve color list
List<Color> listColors = client.startTurn();
// Some debug information
// Play back colors list
for(Color color : listColors){
// If we're done, endTurn() will return true.
done = client.endTurn();
String pwd = client.winGame();
Compiling the code:
javac -cp .;./thrift.jar;./slf4j.jar;./gen-java simonsays.java
Running the code:
java -cp .;./thrift.jar;./slf4j.jar;./gen-java simonsays
If you did everything correctly, the program should output the numbers 1 to 31 and finally a line like this:
Part 4: Submitting the code
Be sure to replace my email with your own email if you wish to receive credit for the problem.
From the email you provided in registerClient(), send an email to puzzles@facebook.com, with the subject as the line outputted by the program.
If you want, you can also attach your source code, but that’s not needed. We’re done!
Facebook: User Bin Crash
April 26, 2010
I’ve started doing some of the Facebook engineering puzzles, which are programming challenges that Facebook supposedly uses to recruit people.
This is a little bit similar to Project Euler and various online judges like SPOJ, but it’s somewhat different too.
Instead of submitting solutions in a web form, solutions are sent via email, in the form of an attachment. After several hours, the automated grader would send you back a response: whether it was
successful or not, and if it is, how long your program ran.
The Facebook system differs from SPOJ in that results are not available immediately, and also that an incorrect submission would return a generic error message (whether it produced incorrect output,
crashed, ran out of time, failed to compile, whatever).
This made it a bit annoying to write solutions as it was difficult to figure out exactly what was wrong with the submission.
The problems are grouped into four groups in order of difficulty: Hors d’oeuvre, Snack, Meal, and Buffet. So far I’ve only solved the Hors d’oeuvre problems and one snack.
User Bin Crash
The problem goes something like this:
You’re on a plane, and somehow you have to dump some cargo or else it will crash. You know how much you need to dump (in pounds), and also an inventory of the items you can dump.
Each item in the inventory has a certain weight, and a certain value. By dumping a combination of the items, you have to dump at minimum the given weight, while minimizing the loss.
You can only dump integral amounts of an item (you can’t dump three quarters of a package), but you’re allowed to dump any amount of the item.
The program you write takes the parameters and outputs the minimum loss (value of items that are dumped).
An example
Suppose that the plane needs to dump 13 pounds. To simplify things, suppose that there are only two types of items we are allowed to dump.
Item A weighs 5 pounds, and costs 14.
Item B weighs 3 pounds, and costs 10.
The optimal solution is to dump two of item A, and one of item B. The cost for this is $2 \cdot 14 + 1 \cdot 10$ or 38.
Any other combination either costs more than 38, or does not weigh at least 13 pounds.
A solution using dynamic programming
This problem is a variation of the fairly well known knapsack problem. A dynamic programming solution exists for that, and can be modified to work for this problem.
Dynamic programming is just a way to speed up recursion. So in order to form a dynamic programming algorithm, we should first make a recursive algorithm.
Working backwards, suppose that we have a weight W, and a set of items $S = [e_1,e_2, \cdots e_n]$ such that S is the optimal set for weight W.
Now remove one item from the set:
$S' = S - e$
This new set $S'$ must be the optimal set for $W-w_e$ (I’m going to use the notation $w_e$ to denote the weight of an item e)
The converse is not necessarily true, however. Just because you have an optimal set for some subproblem, adding any arbitrary element to your set does not make your new set optimal.
But adding an element to an optimal subset may create another optimal subset. This depends on which is smaller- the cost obtained by adding the element, and the cost obtained by not adding the
It’s probably better to express this idea recursively:
Here $F()$ is a function taking a list of items we can use, L, and the minimum weight, W, and returning the minimum cost. e is an element of L (for convenience, e is always the first element).
Obviously when $W \leq 0$, we don’t have to dump any more items.
The first path, $F(L, W-w_e) + v_e$ is the path taken when we decide to dump e.
The second path, $F(L-e, W)$ is the path taken when we don’t dump any more of e. In order to avoid recursing indefinitely, once we decide not to dump any more of something we never go back and dump
more of it.
The code doesn’t handle all the edge cases. For example if we only have one possible item to dump, we are forced to dump it until the weight is reached; we can’t just decide not to dump any more of
it and end up with an empty list.
If we use this code, the program will start unfolding the recursion:
Being a recursive function, we can go further:
Now having reached the end of the recursion, we can fill up the tree from bottom up:
Whenever we reach a node that is the parent of the two nodes, we fill it up with the smaller of the two child nodes.
Now let’s transform this recursive function into a dynamic programming routine.
Instead of a function taking two parameters, we have the intermediate results stored in a two-dimensional array:
Here the first row is the optimal solutions for each weight using only A, while the second row is by using A and B.
Filling up the first row is very obvious, as we basically only have one item to choose from:
The first few cells of the second row is equally easy:
However we now reach a point where it’s uncertain what we should put in the cell with the question mark. If we follow our pattern, it should be 20.
But the cell above it is 14. How can the optimal solution when we’re allowed to use both A and B be 20, if the optimal solution when we’re not allowed to use B is 14?
Instead, the cell should be 14:
We continue this to fill up the entire array:
The bottom right corner of this array is our answer.
The computational complexity of this algorithm is the size of the array, or $O(nW)$. This complexity is not actually polynomial, but this algorithm is considered to run in pseudo-polynomial time.
This is because as W‘s length increases, the running time increases exponentially. If, in our example, W was actually 13000000 instead of 13, and A weighed 5000000 instead of 5, and B weighed 3000000
instead of 3, this algorithm might run out of space.
There is no real way around this problem. The knapsack problem is NP-complete to solve exactly.
However there’s something we can do about it. We can divide all weights by a common factor, and the result would be the same. In the example we can simply divide 13000000, 5000000, and 3000000 all by
a million.
Needless to say, this would fail badly if W had been, for instance, 13000001.
The implementation
With the algorithm, it’s pretty straightforward to implement it.
Here’s my implementation in Java (don’t cheat though):
import java.io.*;
import java.util.*;
public class usrbincrash{
public static void main(String[] args) throws Exception{
BufferedReader in = new BufferedReader(
new FileReader(args[0]));
// Minimum weight to prevent crash
int crashw = Integer.parseInt(in.readLine());
// List containing weights of items
List<Integer> itemW = new ArrayList<Integer>();
// List containing values of items
List<Integer> itemV = new ArrayList<Integer>();
String parse;
while( (parse = in.readLine()) != null){
Scanner scn = new Scanner(parse);
// Take the GCD's before starting the DP
int gcd = crashw;
for(int i : itemW) gcd = gcd(gcd, i);
// Divide all weights by gcd
crashw /= gcd;
for(int i=0; i<itemW.size(); i++)
itemW.set(i, itemW.get(i)/gcd);
// Calculate optimal fit using dynamic programming
int[][] dp = new int[itemW.size()][crashw+1];
// First row of DP array done separately
dp[0][0] = 0;
for(int j=1; j<=crashw; j++){
int aW = itemW.get(0);
int aV = itemV.get(0);
if(aW > j) dp[0][j] = aV;
else dp[0][j] = aV + dp[0][j-aW];
// Filling up the rest of the DP array
for(int i=1; i<dp.length; i++){
dp[i][0] = 0;
for(int j=1; j<=crashw; j++){
int iW = itemW.get(i);
int iV = itemV.get(i);
// Cell directly up from current
int imUp = dp[i-1][j];
// Cell left of it by iW
int imLeft = 0;
if(iW > j) imLeft = iV;
else imLeft = iV + dp[i][j-iW];
// Smallest of the two
dp[i][j] = imUp<imLeft? imUp: imLeft;
// GCD using the Euclid algorithm
static int gcd(int a, int b){
if(b == 0) return a;
return gcd(b, a%b);
When submitting it, it’s necessary to use the -Xmx1024m option. Otherwise the program will run out of memory and fail. According to the robot, the longest test case took 2911.623 ms to run.
A rant about Facebook Fan pages
March 10, 2010
So I’m on Facebook one day, and in the news feed I see a fan page, “I LOL’D at “. Being used to these fan pages, I click “Become a Fan” before going to the fan page and looking at what it’s actually
Going to the fan page, it looks something like this, with a big arrow pointing to the right, with something like “Click X tab”:
I apologize for having to scale down the images. My blog’s width is only 450 pixels.
Fair enough, I go to the CLICK HERE tab. The page looks something like this:
Sometimes it’s even worse, requiring you to invite all your friends before proceeding.
Often it will go one step further and offer you a piece of javascript to invite all your friends (usually without you knowing what it actually does). The code they want you to copy and paste into the
browser looks something like this:
javascript:elms=document.getElementById('friends').getElementsByTagName('li'); for(var fid in elms){if(typeof elms[fid] === 'object'){fs.click(elms[fid]);}}
I’m pretty sure it’s not technically possible for them to detect whether you have actually invited any of your friends.
Anyways, supposing we click the next “Click here” button:
Now they link you to some random blogspot page, or whatever hosting site. There’s some javascript popup of some survey you have to complete, before the popup supposedly disappears.
The website owner gets paid a considerable amount for a survey. $1-3 the last time I checked (much more than you could expect to gain from ads).
Of course the surveys themselves try to rip you off, usually demanding your cell phone number to give you a ‘pin code’. You don’t actually get anything, except perhaps a worthless monthly
subscription and a bill. Zynga does it too.
Back to the blogspot page. Notice the text:
Depending on your screen configuration, you may or may not be able to see the text. Adjusting the contrast in Photoshop:
Scroll down. Except you can’t scroll down, because there’s some javascript that sets you back to the start of the page when you try to.
By this point, I’ve forgotten what I’m trying to do. Oh yea, there’s supposed to be a funny picture at the end of all this.
And unless you manually remove yourself from the fanpage, others will see that you’ve became a fan of something. This is a chain reaction, seeing that the page currently has 287k fans.
I don’t mind really stupid fan pages. I don’t mind pages like this if there aren’t too many of them. The problem is, there’s such a huge number of similar fan pages.
Advertising is fine, but this is beyond advertising. They are actively scamming the less informed members of Facebook. It’s incredible that this is allowed to happen, and at such an extreme rate. | {"url":"http://luckytoilet.wordpress.com/tag/facebook/","timestamp":"2014-04-16T13:02:43Z","content_type":null,"content_length":"61772","record_id":"<urn:uuid:3807f3e1-5f5c-42a5-a63f-4525a2704239>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Producing a Histogram When Bins Are Known
Christopher Barker Chris.Barker@noaa....
Fri Nov 27 11:41:50 CST 2009
Wayne Watson wrote:
> Yes, I'm just beginning to deal with the contents of NumPy, SciLab, and
> SciPy. They all have seemed part of one another, but I think I see how
> they've divided up the game.
For the record:
I know this is a bit confusing, particularly for someone used to an
integrated package like Matlab, etc, but there is a lot of power an
flexibility gained by the divisions:
Python: is a general-purpose, extensible programming language
Numpy: is a set of package of classes, functions, etc that provide
facilities for numeric computation -- primarily a n-d array class and
the utilities to use it.
Matplotlib (MPL): is a plotting package, built on top of numpy -- it was
originally designed to somewhat mimic the plotting interface of Matlab.
MPL is the most commonly used plotting package for numpy, but by no
means the only one.
Pylab: Is a package that integrates matplotlib and numpy and an
assortment of other utilities into one namespace, making it more like
Matlab -- personally, I think you should avoid using it, it makes it a
bit easier to type code, but harder to know where the heck what you are
doing is coming from.
SciPy: Is a broad collection of assorted utilities that facilitate
scientific computing, built on Numpy -- it is also sometimes used as an
umbrella term for anything connected to scientific computing with Python
(i.e. the SciPy conferences)
These distinctions are a bit confusing (particularly MPL-numpy), because
MPL includes a number of utility functions that combine computation and
plotting: like "hist", which both computes a histogram, and plots it as
bar chart in one call -- it's a convenient way to perform a common
operation, but it does blur the lines a bit!
By the way -- there is also potentially a bit of confusion as to how MPL
uses/interacts with the command line and GUI toolkits. This is because
MPL can be used with a number of different GUI front-ends (or none), and
they tend to take over control from the command line. Which brings up to:
iPython: an enhanced python interactive interpreter command line system.
It adds many nice features that make using python in interactive mode
nicer. IN particularly, it adds a "--pylab" mode that helps it play well
with MPL. You won't regret using it!
> I thought I'd look through Amazon
> for books on Python and scientific uses. I found almost all were written
> by authors outside the US, and none seemed to talk about items like
> matplotlib.
FWIW, a book about MPL has just been published -- I don't know any more
about it, but I'm sure google will tell you.
> Is there a matplotlib or Pylab mailing list?
There certainly is:
And yes, that is the place for such questions.
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-November/046974.html","timestamp":"2014-04-21T03:16:05Z","content_type":null,"content_length":"5994","record_id":"<urn:uuid:c77f3b53-e22b-4be9-91f4-b25b8d663742>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
Operation F means “take the square root,” operation G means
Author Message
Operation F means “take the square root,” operation G means [#permalink] 19 Apr 2013, 23:56
This post received
25% (low)
Question Stats:
(02:12) correct
Joined: 09 Feb 2013
23% (01:00)
Posts: 121
Followers: 1
based on 60 sessions
Kudos [?]: 137 [5] ,
given: 17 Operation F means “take the square root,” operation G means “multiply by constant c,” and operation H means “take the reciprocal.” For which value of c is the result of applying the three operations
to any positive x the same for all of the possible orders in which the operations are applied?
(A) –1
(B) –½
(C) 0
(D) ½
(E) 1
Express appreciation by pressing KUDOS.
Spoiler: OA
Kudos will encourage many others, like me.
Good Questions also deserve few KUDOS.
Re: Operation F means “take the square root,” operation G means [#permalink] 20 Apr 2013, 00:10
Joined: 03 Mar 2013
This post received
Posts: 44 KUDOS
Location: India The contention is between 0 and 1
Concentration: for zero you get two values 0 and \frac{1}{0}
Technology, Strategy
for one you get only one answer \frac{1}{\sqrt{x}}
GMAT 1: 730 Q49 V41
Hence the answer is E i.e 1
GPA: 3
WE: Information
Technology KUDOS is always a good way to thank anyone.
(Telecommunications) It encourages someone to post more questions and also answers.
Followers: 3 KUDOS Please if My post helps you in anyway. Your Kudos keep me alive
Kudos [?]: 32 [2] ,
given: 30
Re: Operation F means “take the square root,” operation G means [#permalink] 20 Apr 2013, 00:14
coolpintu This post received
Well...for this Q...I would look at values of C which change when a particular operation is applied.
Joined: 05 Apr 2010 All except E change the value. Hence, E.
Posts: 16 Following are a few combinations FCH, CHF, CFH:
Followers: 0 x becomes sqrt[x] -> c * sqrt[x] -> 1/c * sqrt[x]
If c = -1, result = -1/c * sqrt[x]
Kudos [?]: 6 [1] ,
given: 4 CHF
x becomes cx -> 1/cx -> sqrt[1/cx]
Value gets changed for FCH and CHF. Hence A is eliminated.
Only E stands because no matter any operation, the value remains same...be it multiplying or dividing.
Re: Operation F means “take the square root,” operation G means [#permalink] 20 Apr 2013, 00:16
This post received
emmak wrote:
Operation F means “take the square root,” operation G means “multiply by constant c,” and operation H means “take the reciprocal.” For which value of c is the result of applying the three operations
to any positive x the same for all of the possible orders in which the operations are applied?
BangOn (A) –1
Manager (B) –½
Joined: 27 Feb 2012 (C) 0
Posts: 139 (D) ½
Followers: 1 (E) 1
Kudos [?]: 12 [1] , Express appreciation by pressing KUDOS.
given: 22
Ok. Looking at options will give you what we are looking at. A and B are out because square root will always be positive. So, we are not dealing with negative numbers.
C is out because of 1/0 --> infinity. We can check D and E.
E satisfy in first shot.
Please +1 KUDO if my post helps. Thank you.
Joined: 20 Apr 2013 Re: Operation F means “take the square root,” operation G means [#permalink] 04 May 2013, 09:10
Posts: 24 1
Concentration: This post received
Finance, Finance KUDOS
GMAT Date: Operation G means “multiply by constant C
GPA: 3.3
WE: Accounting
Followers: 0
Kudos [?]: 1 [1] ,
given: 99
Re: Operation F means “take the square root,” operation G means [#permalink] 06 May 2013, 22:25
goh4n 1
Intern This post received
Joined: 31 Mar 2013
assume that all the operations give a constant answer "k".
Posts: 6 take 1st order for
eg. HGF which gives sqroot{c/x} = k
Followers: 0 2nd order
eg. HFG which gives c*sqroot{1/x} = k
Kudos [?]: 1 [1] ,
given: 11 Therefore, sqr{c/x} = c*sqr{1/x}
this gives, c = sqr{c}
=> sqr{c} = 1
=> c = 1
nimishag Re: Operation F means “take the square root,” operation G means [#permalink] 06 May 2013, 23:36
Intern 1
Joined: 05 Apr 2013 This post received
Posts: 1
I approached this question by going through the answers.
Followers: 0 The value of C changes with values in the option A,B,C,D when different combination is applied.
The value of E stands out in any operation, the value remains same whether it is multiplication or division.
Kudos [?]: 1 [1] , E is the answer strainght away.
given: 3
Re: Operation F means “take the square root,” operation G means [#permalink] 07 May 2013, 01:22
Virgilius This post received
In such a problem, we should consider an example. For instance, x = 2.
Status: Currently
Preparing the GMAT Considering the operations we have :
Joined: 15 Feb 2013 - F : square root ;
- G : multiply by a constant c ;
Posts: 31 - H : reciprocal ;
Location: United We want to know which value of the constant "c" that would allow us to get the same result no matter the order of operations applied. Since we have three operations, we get to have 6 possibilities
States (to be even more accurate 3! = 6) :
GMAT 1: 550 Q47 V23 F - G - H : \frac{1}{c*\sqrt{2}}
F - H - G : \frac{c}{\sqrt{2}}
GPA: 3.7 G - F - H : \frac{1}{\sqrt{2*c}}
G - H - F : \sqrt{\frac{1}{2*c}}
WE: Analyst H - F - G : c*\sqrt{\frac{1}{2}}
(Consulting) H - G - F : \sqrt{\frac{c}{2}}
Followers: 1 As you can see, there are instances where the constant c is within a square root, or in the denominator of a fraction. Which means that zero and non-zero negative values can't be considered,
therefore answers A, B and C are excluded.
Kudos [?]: 3 [1] ,
given: 11 Answer choice D is a fraction and seeing as the constant c can be either in the numerator or the denominator of the resulting fraction, c can either become \frac{1}{2} or 2. Which means we get
different values everytime we change the order of the operations. Therefore, answer choice D is excluded as well.
Finally, we're left with one possible answer, E, which gives us c = 1.
Hope that helped.
Re: Operation F means “take the square root,” operation G means [#permalink] 07 May 2013, 15:58
Joined: 24 Apr 2013
Man i got a half D I was sooo close. it was a guestimate though hehe
Posts: 58
Schools: Duke '16
Followers: 0
Kudos [?]: 2 [0],
given: 76
Re: Operation F means “take the square root,” operation G means [#permalink] 07 May 2013, 16:43
Operation F means “take the square root,” operation G means “multiply by constant c,” and operation H means “take the reciprocal.” For which value of c is the result of applying the three operations
to any positive x the same for all of the possible orders in which the operations are applied?
yezz (A) –1
SVP (B) –½
Joined: 05 Jul 2006 (C) 0
Posts: 1543 (D) ½
Followers: 4 (E) 1
Kudos [?]: 51 [0],
given: 39 it is simple if we work our way from answer choices
A) if we multiplied c=-1 by x in one of the operations .. there exist no real square root...out
b) same with the -ve sign like above ..........out
c) there is no reciprocal for x when multiplied by zero i ..out
d) just try 2 orders (c : x/2 ,f: sqrt x/2 , h: sqrt x/2) (sqrtx , 2sqrtx 1/2 sqrt x)...out
E) ... the only one left intact ... bingo
gmatclubot Re: Operation F means “take the square root,” operation G means [#permalink] 07 May 2013, 16:43 | {"url":"http://gmatclub.com/forum/operation-f-means-take-the-square-root-operation-g-means-151252.html","timestamp":"2014-04-18T21:06:58Z","content_type":null,"content_length":"183704","record_id":"<urn:uuid:63d6c42b-6378-4815-a1c9-e508cd9c45a7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: [ap-calculus] ellipsoid
Replies: 1 Last Post: May 25, 2012 10:20 AM
RE: [ap-calculus] ellipsoid
Posted: May 25, 2012 10:20 AM
I see that I misread your question. Can you clarify what you (or your
student) means by "the integral of the area of an ellipse?" The area of an
ellipse is a number. Do you mean to integrate this constant?
Alan Lipp
19 Payson Avenue
Easthampton, MA 01026
The Williston Northampton School inspires students to live with passion,
purpose, and integrity.
-----Original Message-----
From: Lenore Horner [mailto:Lenore.Horner@7hills.org]
Sent: Friday, May 25, 2012 9:43 AM
To: AP Calculus
Cc: AP Calculus
Subject: Re: [ap-calculus] ellipsoid
I'm not sure what the student is trying to integrate. If integral (pi*a*b)
dx then this is the volume of an elliptical cylinder because all the slices
that the integral stacks up are the same size rather than becoming smaller
as an ellipsoid should.
On May 24, 2012, at 17:17:26, Kevin Regardie wrote:
> A student thinks that the integral of the area of an ellipse (pi*ab) is
the same as the volume of an ellipsoid (4/3pi*abc). I told him it is not
the same but had a hard time proving it. Can anyone shed some light on this
> ====
> Course related websites:
> http://apcentral.collegeboard.com/calculusab
> http://apcentral.collegeboard.com/calculusbc
> ---
> To search the list archives for previous posts go to
> http://lyris.collegeboard.com/read/?forum=ap-calculus
> To unsubscribe click here:
> http://lyris.collegeboard.com/read/my_forums/
> To change your subscription address or other settings click here:
> http://lyris.collegeboard.com/read/my_account/edit
Course related websites:
To search the list archives for previous posts go to
To unsubscribe click here:
To change your subscription address or other settings click here:
Course related websites:
To search the list archives for previous posts go to
To unsubscribe click here:
To change your subscription address or other settings click here: | {"url":"http://mathforum.org/kb/message.jspa?messageID=7828268","timestamp":"2014-04-21T03:20:55Z","content_type":null,"content_length":"17769","record_id":"<urn:uuid:8560ef93-60b3-45e3-9623-58178a7d36cf>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Multiply seven-eighths times twenty-eight
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/503b086ae4b02c1631bd8d2a","timestamp":"2014-04-16T10:42:31Z","content_type":null,"content_length":"65286","record_id":"<urn:uuid:01fd5cc2-fe8d-4482-a003-8d0e592964bc>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00390-ip-10-147-4-33.ec2.internal.warc.gz"} |
Data Analysis with Open Source Tools
Data Analysis with Open Source Tools Follow
Author: Philipp K. Janert
Publisher: O'Reilly, 2010
Aimed at: Beginning dataists
Rating: 1
Pros: Some idosyncratic topics
Cons: Misleading and dangerous
Reviewed by: Mike James
The title suggests this could be a useful book - but do the contents live up to it?
Author: Philipp K. Janert
Publisher: O'Reilly, 2010
Aimed at: Beginning dataists
Rating: 1
Pros: Some idosyncratic topics
Cons: Misleading and dangerous
Reviewed by: Mike James
This is a very strange book about data analysis - because it is not written by a statistician but a physicist. This could be good - I was a physicist before I studied statistics and
AI, and I know from experience the subject requires intellectual vigour.
In this case, however, the result reads very much like an outsider's point of view and in its attempts to be "new wave" it encourages you to view the problem of data analysis in
ways that are potentially dangerous.
For example, in Chapter 10 the basic ideas of classical statistics are explained with the background idea that they were developed in a time without computers and not really that
relevant to today's situation. The argument is put forward that today we don't need to worry about statistical tests because we have so much data and can draw charts and graphs.
This would be laughable if it were not so dangerous and misleading.
So, for example, after explaining the ideas of significance, an example is given where a chart clearly shows that two groups of data don't overlap and so no significance test was
actually ever necessary. The author even suggests that the orginal statisticans were a little misguided to even have bothered to do so much work when the evidence is clear for all
to see.
This would be fine but the size of the sample is small and judging the separation of small datasets is not something to be done by eye. Small data sets are heavily influenced by the
random component of the signal. Data that appears to be different might only differ because of random fluctuations and the significance test takes this into account by giving you a
probability that the difference is due to just this random fluctuation. In short you do need significance tests even if you have a computer that can draw charts.
Needless to say, the whole idea that big data makes graphical comparisons more reasonable is dangerous unless you know what you are doing. It is good to encourage people to look at
the data and use as much data visualization as possible, but not to use it as an excuse or encouragement not to use statistical testing. The point is that you cannot rely on visual
inspection to tell you all you need to know about the data.
Later in the same chapter trendy Baysian statistics is promoted as being so much better than significance testing. However what the author fails to mention is that the Baysian
approach is still controversial with many unsolved theoretical problems. In many cases whatever it is that the Bayesians are computing, it isn't probability but some sort of measure
of belief, and the theory of belief measures is much more sophisticated and complicated than this account. Of course a Baysian statistician would take me to task for being so
simplistic but at least we would have a discussion of the difficulties.
The book also has huge blind spots. There is no mention of many modern statistical techniques that are core to the new science of "big data". The topics selected are indeed the sort
of thing a physicist getting into statistics would pick out based on what extends the sort of physical modelling found in that subject.
Part I of the book is all about graphics and here there is little chance of the author misleading the innocent reader. After all what can go wrong with graphics? The chapters do
indeed form a good foundation and show how to use NumPy, matplotlib, gnuplot and some others. But why not R which is arguably the simplest and most flexible way of creating
statistical charts?
Part II is more technical and deals with analytics and in essence data modelling - and this is where the book is most dangerous. It starts very strangely with a chapter on
guesstimation - i.e. the art of getting a ballpark figure. This is an art that most physicists are taught and it's nice to see it get a wider audience, but it is hardly mainstream
The same is true of the next chapter which deals with working out plausible models from scaling arguments - this includes dimensional analysis which is another of the physicist’s
favourite tools, but rarely of use in other subject areas. For example, you can work out that the period of a pendulum has to be proportional to the square root of its length just
by knowing that the units have to work out to be pure time. Try the same exercise with IQ and household income and you will find that dimensions don't really help. The same is true
for scaling arguments. Even so it's a nice idea and certainly one worth knowing about - but mainstream statistics it isn't and it needs careful application to any real world topic.
Then we have Chapter 10 which is the most dangerous and gives a potted overview of statistics that suggests that it was all made up because they didn't have a computer back then. If
you now think that "classical" statistics is made redundant because there are computers - think again.
Finally we have a chapter on a mixture of topics including parameter estimation by least squares. Why stop at this method alone when you are in the middle of a discussion of the how
and why of estimation? Why not at least mention the principle of maximum likelihood - i.e. the best estimate of a set of parameters is the one that maximises the probability of the
observed data given the model? It isn't that difficult to explain.
Even though this section is on modelling there is no mention of linear models beyond simple regression. Certainly no mention of generalised linear models or anything even slightly
Part III is all about Data Mining but in practice it is just more modelling. Chapter 12 is on simulation and includes Monte Carlo simulation and resampling. Then we have a chapter
on cluster analysis and one on Principle Components analysis with some AI thrown in.
The final part is a collection of applications but these are not case studies. In fact the remaining chapters are just filling in some missing techniques and mostly use trivial data
sets to explain more modeling techniques. The book is very light on any real world examples at all and certainly there are no "big data" relevant examples.
The main problem with this book is that it seems to be written from an outsider's viewpoint. It includes many topics which are usually left out and it leaves out many topics which
are included. There is little coverage of categorical data - no contingency tables, no chi squared, no classical factor analysis, time series analysis is hardly touched on,
discriminant analysis is introduced as a form of PCA (which it is when used in feature detection). Although the book mentions non-parametric methods often these don't really make an
appearance. Where novel techniques are introduced such as AI techniques the selection is similarly biased - Kohonen maps but not neural networks or GA.
I could go on... and on...
You could argue that one book isn't sufficient for all of these methods but there are other less important methods covered and just to mention that they exist and complement and
extend the methods discussed would take little extra space. The real problem is that this book is like giving the key of the car to a non-driver after telling them where the
accelerator is and that's all.
This is a deeply flawed book and a dangerous one in the hands of a novice.
As you can guess I can't recommend it and if you do choose to read it make sure you read a good book on statistics before committing yourself to a conclusion drawn from real data.
And whatever you do don't belive that statistical tests are now redundant because we have computers...
HTML5 Canvas
Author: Steve & Jeff Fulton
Publisher: O'Reilly, 2011
ISBN: 978-1449393908
Aimed at: Javascript programmers
Rating: 3.5
Pros: Extensive treatment of games and beyond
Cons: Not focused on Canvas
Reviewed by: Ian Elliot
This is a big book and you can't help but wonder what it has to say about [ ... ]
+ Full Review
How Google Tests Software
Author:James A. Whittaker, Jason Arbon & Jeff Carollo
Publisher: Addison-Wesley
ISBN: 978-0321803023
Audience: Testers, managers and executives
Rating: 3.5
Reviewer: Mike James
The trouble with this book's title is that invites the humorous response "Google tests software?" which said [ ... ]
+ Full Review
More Reviews
Last Updated ( Thursday, 16 December 2010 )
RSS feed of book reviews only
RSS feed of all content
Copyright © 2014 i-programmer.info. All Rights Reserved. | {"url":"http://www.i-programmer.info/bookreviews/65-mathematics/1718-data-analysis-with-open-source-tools.html","timestamp":"2014-04-17T13:23:45Z","content_type":null,"content_length":"42732","record_id":"<urn:uuid:f45c2fb2-064d-4af1-8d3c-1561277c686d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sign confusion in partial differentiation of a summation.
June 18th 2012, 03:50 PM #1
Jun 2012
United States
Sign confusion in partial differentiation of a summation.
Hello all, thanks for taking the time to read this.
So, I am looking over a document about least squares fitting, and right in the beginning is a summation function: This function is partially differentiated w/respect to b to get:Which is fine.
However, here’s where I am confused: the simplified function appears as:
Shouldn’t the last term be positive, as in:
or am I missing something?
Thanks for your help!
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/calculus/200159-sign-confusion-partial-differentiation-summation.html","timestamp":"2014-04-21T00:17:08Z","content_type":null,"content_length":"30964","record_id":"<urn:uuid:2b43de6a-1730-4e2d-9090-6060bc600666>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Twenty marbles are used to spell a word—9 blue ones, 5 red ones, 3 yellow ones and 3 green ones.? Twenty marbles are used to spell a word—9 blue ones, 5 red ones, 3 yellow ones and 3 green ones.? If
two marbles are drawn from the set of twenty marbles at random in succession and without replacement, what is the probability (as a reduced fraction) of choosing a marble other than green each time
• 7 months ago
• 7 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/521aad34e4b0750826df922f","timestamp":"2014-04-21T04:36:51Z","content_type":null,"content_length":"54870","record_id":"<urn:uuid:ba471db5-a904-4875-9f8e-8c23f0045cb4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 565.05040
Autor: Chung, F.R.K.; Erdös, Paul; Graham, Ronald L.
Title: Minimal decomposition of all graphs with equinumerous vertices and edges into mutually isomorphic subgraphs. (In English)
Source: Finite and infinite sets, 6th Hung. Combin. Colloq., Eger/Hung. 1981, Vol. I, Colloq. Math. Soc. János Bolyai 37, 171-179 (1984).
Review: [For the entire collection see Zbl 559.00001.]
Let G = {G[1],G[2],...,G[k]} be a set of graphs, all of the same size. A U-decomposition of G is a set of partitions of the edge sets E[i] of the G[i]'s, E[i] = \cup^r[j = 1]E[ij] such that for each
fixed j = 1,...,r, all the E[ij](1 \leq i \leq k) induce isomorphic graphs. Denote by U(G) the least value of r any U-decomposition of G can have, and by U[k](n) the largest value of U(G) over all
sets G of k graphs of order n (and the same size).
It was shown by the authors, S.M.Ulam, and F.F.Yao [Congr. Numerantium 23, 3-18 (1979; Zbl 434.05046)] that U[2](n) = 2/3n+o(n), and by the authors [Combinatorica 1, 13-24 (1981; Zbl 491.05049)] that
U[k](n) = 3/4n+o(n) for any fixed k \geq 3.
In the present paper, the family G = G(n,e) of all graphs of order n and size e is investigated. Let U(n) be the maximum value of U(G(n,e)) over all values of e; clearly U[k](n) \leq U(n). The main
result states that U(n) = 3/4n+o(1); in particular, U(G(n,e)) = o(n) if n/e = o(1).
Reviewer: J.Sirán
Classif.: * 05C35 Extremal problems (graph theory)
Keywords: decomposition of edge sets of a collection of graphs; decomposition of graphs into mutually isomorphic subgraphs
Citations: Zbl 559.00001; Zbl 434.05046; Zbl 491.05049
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │
│Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │ | {"url":"http://www.emis.de/classics/Erdos/cit/56505040.htm","timestamp":"2014-04-21T14:52:13Z","content_type":null,"content_length":"5317","record_id":"<urn:uuid:ac615944-4aa2-4eaf-9621-3a9852152687>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exact Solutions of the Kudryashov-Sinelshchikov Equation Using
the Multiple
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 708049, 7 pages
Research Article
Exact Solutions of the Kudryashov-Sinelshchikov Equation Using the Multiple -Expansion Method
Department of Mathematics, Honghe University, Mengzi, Yunnan 661100, China
Received 3 December 2012; Accepted 29 January 2013
Academic Editor: Claude Lamarque
Copyright © 2013 Yinghui He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Exact traveling wave solutions of the Kudryashov-Sinelshchikov equation are studied by the -expansion method and its variants. The solutions obtained include the form of Jacobi elliptic functions,
hyperbolic functions, and trigonometric and rational functions. Many new exact traveling wave solutions can easily be derived from the general results under certain conditions. These methods are
effective, simple, and many types of solutions can be obtained at the same time.
1. Introduction
The investigation of the traveling wave solutions to nonlinear evolution equations (NLEEs) plays an important role in mathematical physics. A lot of physical models have supported a wide variety of
solitary wave solutions. Here, we study the Kudryashov-Sinelshchikov equation. In 2010, Kudryashov and Sinelshchikov [1] obtained a more common nonlinear partial differential equation for describing
the pressure waves in a mixture liquid and gas bubbles taking into consideration the viscosity of liquid and the heat transfer, that is, where , are real parameters. In [2], they derived partial
cases of nonlinear evolution equations of the fourth order for describing nonlinear pressure waves in a mixture liquid and gas bubbles. Some exact solutions are found and properties of nonlinear
waves in a liquid with gas bubbles are discussed. Equation (1) is called Kudryashov-Sinelshchikov equation; it is generalization of the KdV and the BKdV equations and similar but not identical to the
Camassa-Holm (CH) equation; it has been studied by some authors [1, 3–5]. Undistorted waves are governed by a corresponding ordinary differential equation which, for special values of some
integration constant, is solved analytically in [1]. Solutions are derived in a more straightforward manner and cast into a simpler form, and some new types of solutions which contain solitary wave
and periodic wave solutions are presented in [4]. Ryabov [5] obtained some exact solutions for and using a modification of the truncated expansion method [6, 7]. Li and He discussed the equation by
the bifurcation method of dynamical systems and the method of phase portraits analysis [8–10]. In [11], the equation is studied by the Lie symmetry method.
The -expansion method proposed by Wang et al. [12] is one of the most effective direct methods to obtain travelling wave solutions of a large number of nonlinear evolution equations, such as the KdV
equation, the mKdV equation, the variant Boussinesq equations, and the Hirota-Satsuma equations. Later, the further developed methods named the generalized -expansion method, the modified -expansion
method, the extended -expansion method, and the improved -expansion method have been proposed in [13–15], respectively. The aim of this paper is to derive more and new traveling wave solutions of the
Kudryashov-Sinelshchikov equation by the -expansion method and its variants.
The organization of the paper is as follows: in Section 2, a brief account of the -expansion and its variants that is, the generalized, improved, and extended versions, for finding the traveling wave
solutions of nonlinear equations, is given. In Section 3, we will study the Kudryashov-Sinelshchikov equation by these methods. Finally conclusions are given in Section 4.
2. Description of Methods
2.1. The -Expansion Method
Step 1. Consider a general nonlinear PDE in the form Using , , we can rewrite (2) as the following nonlinear ODE: where the prime denotes differentiation with respect to .
Step 2. Suppose that the solution of ODE (3) can be written as follows: where , are constants to be determined later, is a positive integer, and satisfies the following second-order linear ordinary
differential equation: where , are real constants. The general solutions of (5) can be listed as follows.
When , we obtain the hyperbolic function solution of (5)
When , we obtain the trigonometric function solution of (5)
When , we obtain the solution of (5) where and are arbitrary constants.
Step 3. Determine the positive integer by balancing the highest order derivatives and nonlinear terms in (3).
Step 4. Substituting (4) along with (5) into (3) and then setting all the coefficients of of the resulting system's numerator to zero yields a set of over-determined nonlinear algebraic equations for
and .
Step 5. Assuming that the constants and can be obtained by solving the algebraic equations in Step 4 and then substituting these constants and the known general solutions of (5) into (4), we can
obtain the explicit solutions of (2) immediately.
2.2. The Generalized -Expansion Method
In generalized version [13], one makes an ansatz for the solution as where satisfies the following Jacobi elliptic equation: where , , and are the arbitrary constants to be determined later and .
Substituting (9) into (3) and using (10), we obtain a polynomial in , . Equating each coefficient of the resulted polynomials to zero yields a set of algebraic equations for , , , and . Now,
substituting and the general solutions of (10) (see Table 1) into (9), we obtain many new traveling wave solutions in terms of Jacobi elliptic functions of the nonlinear PDE (2).
2.3. The Extended -Expansion Method
In the extended form of this method [15], the solution of (3) can be expressed as where , , are constants to be determined later, , is a positive integer, and satisfies the following second order
linear ODE: where is a constant. Substituting (11) into (3), using (12), collecting all terms with the same order of and together, and then equating each coefficient of the resulting polynomial to
zero yield a set of algebraic equations for , , , . On solving these algebraic equations, we obtain the values of the constants , , , , and then substituting these constants and the known general
solutions of (12) into (11), we obtain the explicit solutions of nonlinear differential equation (2).
After the brief description of the methods, we now apply these for solving the Kudryashov-Sinelshchikov equation.
3. The Exact Solutions of the Kudryashov-Sinelshchikov Equation
3.1. Using -Expansion Method
Let , with , , that is, , where is the wave speed. Under this transformation, (1) can be reduced to the following ordinary differential equation (ODE): Integrating (13) once with respect to and
setting the constant of integration to zero, we have
Balancing with in (10) we find that , so is an arbitrary positive integer. For simplify, we take . Suppose that (14) owns the solutions in the form Substituting (15) along with (5) into (14) and then
setting all the coefficients of of the resulting system's numerator to zero yield a set of overdetermined nonlinear algebraic equations about , , , , , . Solving the overdetermined algebraic
equations, we can obtain the following results.
Case 1. We have
where , are arbitrary constants and .
Case 2. We have
where , are arbitrary constants and .
Case 3. We have
where , are arbitrary constants, , .
Using Case 3, (15) and the general solutions of (5), we can find the following travelling wave solutions of Kudryashov-Sinelshchikov equation (1).
Subcase 3.1. When , , we obtain the hyperbolic function solutions of (1) as follows: where , , , are arbitrary constants.
It is easy to see that the hyperbolic function solution can be rewritten at and as follows: where, .
Subcase 3.2. When , , the trigonometric function solution of (1) can be rewritten at and as follows: where, , where, .
Subcase 3.3. When , , we obtain the rational function solutions of (1) as follows:
Using other two cases, (15), and the general solutions of (5), we could obtain other exact solutions of (1), and here we do not list all of them.
3.2. Using Generalized -Expansion Method
Suppose that (13) owns the solutions in the form in this case, satisfies the Jacobi elliptic equation (10).
Substituting (24) along with (10) into (14) and then setting all the coefficients of , of the resulting system's numerator to zero yield a set of overdetermined nonlinear algebraic equations about ,
, , , , . Solving the overdetermined algebraic equations, we can obtain the following results.
Case 1. We have
Case 2. We have
where, .
Thus using (24) and (26), the following solutions of (1) are obtained: where, . Now, with the aid of Table 1, we get the following set of exact solutions of (1).
Using Case 1, (24), and the general solutions of (10), we can find the following travelling wave solutions of Kudryashov-Sinelshchikov equation (1).
Set 1.1, if , , , , or , then we obtain where, .
When , , solution (28) becomes where, .
Set 1.2, if , , , , then we obtain where, .
When , , solution (31) becomes where, .
Set 1.3, if , , , , then we obtain where, .
When , , solution (33) becomes where, . It is the same with the solution (34).
Set 1.4, if , , , , or , then we obtain where, .
When , , , solution (35) becomes where, .
When , , solution (34) becomes It is the same with solution (30). where, .
Set 1.5, if , , , , then we obtain where, .
Set 1.6 if , , , , then we obtain where, .
Set 1.7 if , , , , solution (39) becomes where, .
Similarly, we can write down the other sets of exact solutions of (1) with the help of Table 1 and the Case 2, which are omitted for convenience. Thus using the generalized form of the -expansion
method, we can obtain families of the exact traveling wave solutions of (1) in terms of Jacobi elliptic functions. Under some conditions, these solutions change into hyperbolic and trigonometric
functional forms.
3.3. Using Extended -Expansion Method
Suppose that (14) owns the solutions in the form where , , , , are constants to be determined later, , is a positive integer, and satisfies the second-order linear ODE (12).
Substituting (42) along with (12) into (14) and then setting all the coefficients of and of the resulting system to zero yield a set of overdetermined nonlinear algebraic equations about , , , , , ,
, . Solving the overdetermined algebraic equations, we can obtain the following results.
Case 1. We have
Case 2. We have
Case 3. We have
where, .
Case 4. We have
Using Case 1, (42), and the general solutions of (12), we can find the following travelling wave solutions of Kudryashov-Sinelshchikov equation (1).
Subcase 1.1. When , we have the hyperbolic function solution as where, .
In particular, setting , , then (47) can be written as
Setting , , then (47) can be written as
Subcase 1.2. When , we have the trigonometric function solution as
In particular, setting , , then (50) can be written as setting , , then (50) can be written as where, .
Using Case 2, (42), and the general solutions of (12), we can find the following travelling wave solutions of Kudryashov-Sinelshchikov equation (1).
Subcase2.1. When , we have the hyperbolic function solution as
In particular, setting , then (53) can be written as Setting , , then (53) can be written as where, .
Subcase2.2. When , we have the trigonometric function solution as
In particular, setting , , then (56) can be written as Setting , , then (56) can be written as where, .
Similarly, we can get the other exact solutions of (1) in Cases 3 and 4, which are omitted for convenience.
Remark 1. The validity of the solutions we obtained is verified.
Remark 2. The solutions expressed by Jacobi elliptic functions are not given in the related literature. So, the solutions we obtained are new.
Remark 3. The solutions we got are general involving various arbitrary parameters. If we set the parameters to special values, some results in the literature can be obtained.
4. Conclusions
In the present work, we successfully obtained exact traveling wave solutions of the Kudryashov-Sinelshchikov equation using the -expansion method and its variants. some obtained new exact and
explicit analytic solutions are in general forms involving various arbitrary parameters. These solutions are expressed by the hyperbolic functions, the trigonometric functions, the rational
functions, and the Jacobi elliptic functions. The results of [1–11] have been enriched.
This research is supported by the Natural Science Foundation of of china (11161020), the National Natural Science Foundation of Yunnan Province (2011FZ193), and Research Foundation of Honghe
university (10XJY120).
1. N. A. Kudryashov and D. I. Sinelshchikov, “Nonlinear waves in bubbly liquids with consideration for viscosity and heat transfer,” Physics Letters A, vol. 374, no. 19-20, pp. 2011–2016, 2010. View
at Publisher · View at Google Scholar · View at Scopus
2. N. A. Kudryashov and D. I. Sinelshchikov, “Nonlinear evolution equations for describing waves in bubbly liquids with viscosity and heat transfer consideration,” Applied Mathematics and
Computation, vol. 217, no. 1, pp. 414–421, 2010. View at Publisher · View at Google Scholar · View at MathSciNet
3. N. A. Kudryashov and D. I. Sinel'shchikov, “Nonlinear waves in liquids with gas bubbles with account of viscosity and heat transfer,” Fluid Dynamics, vol. 45, no. 1, pp. 96–112, 2010. View at
Publisher · View at Google Scholar · View at Scopus
4. M. Randruut, “On the Kudryashov-Sinelshchikov equation for waves in bubbly liquids,” Physics Letters A, vol. 375, no. 42, pp. 3687–3692, 2011.
5. P. N. Ryabov, “Exact solutions of the Kudryashov-Sinelshchikov equation,” Applied Mathematics and Computation, vol. 217, no. 7, pp. 3585–3590, 2010. View at Publisher · View at Google Scholar ·
View at Zentralblatt MATH · View at MathSciNet
6. N. A. Kudryashov, “One method for finding exact solutions of nonlinear differential equations,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 6, pp. 2248–2253, 2012.
View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
7. P. N. Ryabov, D. I. Sinelshchikov, and M. B. Kochanov, “Application of the Kudryashov method for finding exact solutions of the high order nonlinear evolution equations,” Applied Mathematics and
Computation, vol. 218, no. 7, pp. 3965–3972, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
8. J. Li, “Exact traveling wave solutions and their bifurcations for the Kudryashov-Sinelshchikov equation,” International Journal of Bifurcation and Chaos, vol. 22, no. 5, pp. 12501181–125011819,
9. B. He, “The bifurcation and exact peakons solitary and periodic wave solutions for the Kudryashov-Sinelshchikov equation,” Communications in Nonlinear Science and Numerical Simulation, vol. 17,
no. 11, pp. 4137–4148, 2012. View at Publisher · View at Google Scholar
10. B. He, Q. Meng, J. Zhang, and Y. Long, “Periodic loop solutions and their limit forms for the Kudryashov-Sinelshchikov equation,” Mathematical Problems in Engineering, vol. 2012, Article ID
320163, 10 pages, 2012. View at Publisher · View at Google Scholar · View at MathSciNet
11. M. Nadjafikhah and V. Shirvani-Sh, “Lie symmetry analysis of Kudryashov-Sinelshchikov equation,” Mathematical Problems in Engineering, vol. 2011, Article ID 457697, 9 pages, 2011. View at
Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
12. M. Wang, X. Li, and J. Zhang, “The ${G}^{’}/G$-expansion method and travelling wave solutions of nonlinear evolution equations in mathematical physics,” Physics Letters A, vol. 372, no. 4, pp.
417–423, 2008. View at Publisher · View at Google Scholar · View at MathSciNet
13. S. Zhang, J. L. Tong, and W. Wang, “A generalized ${G}^{’}/G$-expansion method for the mKdV equation with variable coefficients,” Physics Letters A, vol. 372, no. 13, pp. 2254–2257, 2008. View at
Publisher · View at Google Scholar · View at Scopus
14. Y.-B. Zhou and C. Li, “Application of modified ${G}^{’}/G$-expansion method to traveling wave solutions for Whitham-Broer-Kaup-like equations,” Communications in Theoretical Physics, vol. 51, no.
4, pp. 664–670, 2009. View at Publisher · View at Google Scholar · View at MathSciNet
15. S. Guo and Y. Zhou, “The extended ${G}^{’}/G$-expansion method and its applications to the Whitham-Broer-Kaup-like equations and coupled Hirota-Satsuma KdV equations,” Applied Mathematics and
Computation, vol. 215, no. 9, pp. 3214–3221, 2010. View at Publisher · View at Google Scholar · View at MathSciNet | {"url":"http://www.hindawi.com/journals/mpe/2013/708049/","timestamp":"2014-04-16T22:44:34Z","content_type":null,"content_length":"583789","record_id":"<urn:uuid:d82af251-6bf8-44ba-b2fd-5be5a5713771>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Impredicativity entails untypedness
- University of Edinburgh , 2000
"... Toposes and quasi-toposes have been shown to be useful in mathematics, logic and computer science. Because of this, it is important to understand the di#erent ways in which they can be
constructed. Realizability toposes and presheaf toposes are two important classes of toposes. All of the former and ..."
Cited by 13 (4 self)
Add to MetaCart
Toposes and quasi-toposes have been shown to be useful in mathematics, logic and computer science. Because of this, it is important to understand the di#erent ways in which they can be constructed.
Realizability toposes and presheaf toposes are two important classes of toposes. All of the former and many of the latter arise by adding "good " quotients of equivalence relations to a
simple category with finite limits. This construction is called the exact completion of the original category. Exact completions are not always toposes and it was not known, not even in the
realizability and presheaf cases, when or why toposes arise in this way. Exact completions can be obtained as the composition of two related constructions. The first one assigns to a category with
finite limits, the "best " regular category (called its regular completion) that embeds it. The second assigns to
- IN: FROM SETS AND TYPES TO TOPOLOGY AND ANALYSIS , 2005
"... We advocate a pragmatic approach to constructive set theory, using axioms based solely on set-theoretic principles that are directly relevant to (constructive) mathematical practice. Following
this approach, we present theories ranging in power from weaker predicative theories to stronger impredicat ..."
Cited by 9 (0 self)
Add to MetaCart
We advocate a pragmatic approach to constructive set theory, using axioms based solely on set-theoretic principles that are directly relevant to (constructive) mathematical practice. Following this
approach, we present theories ranging in power from weaker predicative theories to stronger impredicative ones. The theories we consider all have sound and complete classes of category-theoretic
models, obtained by axiomatizing the structure of an ambient category of classes together with its subcategory of sets. In certain special cases, the categories of sets have independent
characterizations in familiar category-theoretic terms, and one thereby obtains a rich source of naturally occurring mathematical models for (both predicative and impredicative) constructive set
- UNDER CONSIDERATION FOR PUBLICATION IN MATH. STRUCT. IN COMP. SCIENCE , 2007
"... It is a fact of experience from the study of higher type computability that a wide range of approaches to defining a class of (hereditarily) total functionals over N leads in practice to a
relatively small handful of distinct type structures. Among these are the type structure C of Kleene-Kreisel co ..."
Cited by 4 (2 self)
Add to MetaCart
It is a fact of experience from the study of higher type computability that a wide range of approaches to defining a class of (hereditarily) total functionals over N leads in practice to a relatively
small handful of distinct type structures. Among these are the type structure C of Kleene-Kreisel continuous functionals, its effective substructure C eff, and the type structure HEO of the
hereditarily effective operations. However, the proofs of the relevant equivalences are often non-trivial, and it is not immediately clear why these particular type structures should arise so
ubiquitously. In this paper we present some new results which go some way towards explaining this phenomenon. Our results show that a large class of extensional collapse constructions always give
rise to C, C eff or HEO (as appropriate). We obtain versions of our results for both the “standard” and “modified” extensional collapse constructions. The proofs make essential use of a technique due
to Normann. Many new results, as well as some previously known ones, can be obtained as instances of our theorems, but more importantly, the proofs apply uniformly to a whole family of constructions,
and provide strong evidence that the above three type structures are highly canonical mathematical objects.
, 2011
"... We generalize the standard construction of realizability models (specifically, of categories of assemblies) to a very wide class of computability structures, broad enough to embrace models of
computation such as labelled transition systems and process algebras. We also discuss a general notion of si ..."
Cited by 1 (0 self)
Add to MetaCart
We generalize the standard construction of realizability models (specifically, of categories of assemblies) to a very wide class of computability structures, broad enough to embrace models of
computation such as labelled transition systems and process algebras. We also discuss a general notion of simulation between such computability structures, and show that such simulations correspond
precisely to certain functors between the realizability models. Furthermore, we show that our class of computability structures has good closure properties — in particular, it is ‘cartesian closed ’
in a slightly relaxed sense. We also investigate some important subclasses of computability structures and of simulations between them. We suggest that our 2-category of computability structures and
simulations may offer a framework for a general investigation of questions of computational power, abstraction and simulability for a wide range of computation models from across computer science.
, 2000
"... This paper is about purely categorical approaches to realizability, and contrasts with recent work particularly by Longley [14] and Lietz and Streicher [13], in which the basis is taken as a
typed generalisation of a partial combinatory algebra. We, like they, will be interested in when the construc ..."
Add to MetaCart
This paper is about purely categorical approaches to realizability, and contrasts with recent work particularly by Longley [14] and Lietz and Streicher [13], in which the basis is taken as a typed
generalisation of a partial combinatory algebra. We, like they, will be interested in when the construction yields a topos, and hence gives a full interpretation of higher-order logic. This is also a
theme of Birkedal's work, see [1, 2], and his joint work in [3]. Birkedal makes considerable use of the construction we study. We present realizability toposes as the product of two constructions.
First one takes a category (which corresponds to the typed partial combinatory algebra), and then one glues Set to it in a variant of the comma construction. This, as we shall see, has the eect of
improving the categorical properties of the algebra category. Then one takes an exact completion of the result. This also has the eect of improving the categorical properties. Formally the main
result of the paper is that the result is a topos just (modulo some technical conditions) when the original category has a universal object. Early work on realizability (e.g.[12, 22], or see [23]) is
characterised by its largely syntactic nature. The core denition is when a sentence of some formal logic is realised, and the main interest is in when certain deductive principles (such as Markov's
rule) are validated. Martin Hyland's invention y The authors wish to acknowledge the support of the EPSRC, EU Working Group 26142 APPSEM, and MURST 1 2 of realizability toposes [10] advances on this,
not only in the simplicity of the construction, but by providing a semantic framework in which the formal logics can naturally be interpreted. Hyland was strongly motivated in his work by a then
recent approach...
, 1999
"... In program synthesis, we transform a specification into a system that is guaranteed to satisfy the specification. When the system is open, then at each moment it reads input signals and writes
output signals, which depend on the input signals and the history of the computation so far. The specifi ..."
Add to MetaCart
In program synthesis, we transform a specification into a system that is guaranteed to satisfy the specification. When the system is open, then at each moment it reads input signals and writes output
signals, which depend on the input signals and the history of the computation so far. The specification considers all possible input sequences. Thus, if the specification is linear, it should hold in
every computation generated by the interaction, and if the specification is branching, it should hold in the tree that embodies all possible input sequences.
"... We define a constructive topos to be a locally cartesian closed pretopos. The terminology is supported by the fact that constructive toposes enjoy a relationship with constructive set theory
similar to the relationship between elementary toposes and (impredicative) intuitionistic set theory. This pa ..."
Add to MetaCart
We define a constructive topos to be a locally cartesian closed pretopos. The terminology is supported by the fact that constructive toposes enjoy a relationship with constructive set theory similar
to the relationship between elementary toposes and (impredicative) intuitionistic set theory. This paper elaborates upon one aspect of the relationship between constructive toposes and constructive
set theory. We show that any constructive topos with countable coproducts provides a model of a standard constructive set theory, CZFExp (that is, the variant of Aczel’s Constructive Zermelo-Fraenkel
set theory obtained by weakening Subset Collection to the Exponentiation axiom). The model is constructed as a category of classes, using ideas derived from Joyal and Moerdijk’s programme of
algebraic set theory. A curiosity is that our model always validates the axiom V = Vω1 (in an appropriate formulation). Hence the full Separation schema is always refuted. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1994383","timestamp":"2014-04-19T14:55:11Z","content_type":null,"content_length":"30250","record_id":"<urn:uuid:3abfb643-c4fd-466d-b585-194f572d6225>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Irrationality of e+pi and e*pi
Date: 09/24/2001 at 00:38:32
From: Andy Weck
Subject: Irrationality of e+pi and e*pi
I have read that it is unknown if either E + Pi or E * Pi is an
irrational number. However, it is provable that at most one of the
two numbers is rational. How do you prove this? I thought that the
proof would involve summing the two numbers in some manner and then
proving that the result is irrational. (This way you know at least one
of the addends must be irrational.) But I can't figure out how to
prove that any combination of the two numbers is irrational.
Date: 09/24/2001 at 10:59:22
From: Doctor Luis
Subject: Re: Irrationality of e+pi and e*pi
Hi Andy,
Thanks for the very interesting question. Like you, I was unable to
find a combination e+pi and e*pi that proved immediately the
irrationality of either number. There's hardly anything that can
easily be proved by using that approach. However, given further
knowledge about both both pi and e, your claim can be proved rather
Now, e and pi are rather peculiar numbers. It turns out that, in
addition to being irrational numbers, they are also transcendental
numbers. Basically, a number is transcendental if there are no
polynomials with rational coefficients that have that number as a
Clearly, p(x) = (x-e)*(x-pi) is a polynomial whose roots are e and pi,
so its coefficients cannot all be rational, by the definition of
transcendental numbers. Expanding that expression, we get
(x-e)*(x-pi) = x^2 - (e+pi)*x + e*pi
This means that 1, -(e+pi), e*pi cannot all be rational. If all the
coefficients were rational, we would have found a polynomial with
rational coefficients that had e and pi as roots, and that has been
proven impossible already. Hermite proved that e is transcendental in
1873, and Lindemann proved that pi is transcendental in 1882. In fact,
Lindemann's proof was similar to Hermite's proof and was based on the
fact that e is also transcendental.
In other words, at most one of e+pi and e*pi is rational. (We know
that they cannot both be rational, so that's the most we can say).
Great question. I hope this explanation helped!
- Doctor Luis, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/51617.html","timestamp":"2014-04-17T21:29:33Z","content_type":null,"content_length":"7250","record_id":"<urn:uuid:a39cc710-60c1-4aac-86cc-08399badf836>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lakeville, MA Math Tutor
Find a Lakeville, MA Math Tutor
...Some students are heavy visual learners, some learn best through audio, some students benefit most by tactile learning (learning by doing.) Most students learn best with a mix of the three
styles. I have taken courses at Bridgewater State in student learning, and have attended a number of profe...
15 Subjects: including geometry, algebra 1, algebra 2, biology
...Besides my teaching experience, I also have an undergraduate degree in linguistics, and I write professionally for a fashion start-up. Send me a message, and we can see if I'm a good match for
you or your child. Cheers, DianaI have a degree from the University of Chicago in linguistics where I took in-depth courses on phonetics and phonology.
21 Subjects: including algebra 2, English, algebra 1, prealgebra
...When I tutor high school students, I usually choose the library as a setting. This allows for few distractions and greater concentration. I prefer to tutor in 2 hour segments, as it ensures
greater understanding of a subject.
15 Subjects: including prealgebra, algebra 1, reading, writing
...I can help you improve your understanding of written and conversational English. If you are fluent in English, I can help you improve the impact and clarity of your language. If you are
less-than-fluent in English, and especially if you are trying to write a manuscript in English, I can definitely help you.
18 Subjects: including prealgebra, trigonometry, SPSS, English
I am a professional, experienced tutor. I am so effective that many of my students call me the "miracle tutor"! That is why I am one of the busiest tutors in Massachusetts and the United States
(top 1% across the country). I provide 1-on-1 instruction in all levels of math and English, including...
67 Subjects: including precalculus, marketing, logic, geography
Related Lakeville, MA Tutors
Lakeville, MA Accounting Tutors
Lakeville, MA ACT Tutors
Lakeville, MA Algebra Tutors
Lakeville, MA Algebra 2 Tutors
Lakeville, MA Calculus Tutors
Lakeville, MA Geometry Tutors
Lakeville, MA Math Tutors
Lakeville, MA Prealgebra Tutors
Lakeville, MA Precalculus Tutors
Lakeville, MA SAT Tutors
Lakeville, MA SAT Math Tutors
Lakeville, MA Science Tutors
Lakeville, MA Statistics Tutors
Lakeville, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/lakeville_ma_math_tutors.php","timestamp":"2014-04-19T15:16:59Z","content_type":null,"content_length":"23837","record_id":"<urn:uuid:aa01154a-01b9-4244-a8a6-9d91aa1bda4b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00463-ip-10-147-4-33.ec2.internal.warc.gz"} |
Products of prime powers less than a number.
up vote 3 down vote favorite
Given a number n. Suppose $p_1< p_2<....< p_k$ be any k distinct primes less than n. How do i find the cardinality of the set, S={$(e_1,e_2,...,e_k): p_1^{e_1}p_2^{e_2}...p_k^{e_k} < n$}. Assume that
$(1,1,...,1)\in S$ and $e_i\geq 1\;\;\forall i$. It will be great if one could give a non-trivial upper bound for the cardinality of this set.
P.S. I have a way of getting the cardinality of the set approximately for four primes but things get complicated when the number of primes are increased.
add comment
2 Answers
active oldest votes
Take a logarithm: so the condition is $\sum_{j=1}^ke_j \log p_j < \log n$. Geometrically, it is the number of points of the lattice $\prod_{j=1}^k (\log p_j)\mathbb{Z}$ within the
up vote 11 orthogonal simplex with edge $\log n$. This gives lower and upper bounds, and an asymptotics, $$|S_n|=\frac{(\log n)^k}{k!\prod_ {j=1}^k\log p _ j }\big(1+o(1)\big)\, .$$
down vote
2 And because you insist that each $e_i\ge1$, I believe that $(\log n)^k/k!\prod_{j=1}^k \log p_j$ is an actual upper bound for $\#S$. – Greg Martin Oct 27 '12 at 23:28
add comment
What I did was the following,
Suppose $p_1p_2.....p_k^{e_k} < n$. Then the maximum value that $e_k$ can take would be $\leq$ $\frac{\log \left(\frac{n}{p_1p_2...p_{k-1}}\right)}{\log p_k}$. Then I tried to find the
maximum power $e_{k-1}$ of $p_{k-1}$ for which $p_{k-1}^{e_{k-1}} < {p_k}^{l_1}$, for $1\leq l_1\leq\frac{\log \left(\frac{n}{p_1p_2...p_{k-1}}\right)}{\log p_k}-1=\frac{\log \left(\frac{n}
{p_1p_2...p_{k-1}p_k}\right)}{\log p_k}$. Then the maximum value of $e_{k-1}$ would be $\leq l_1\frac{\log p_k}{\log p_{k-1}}$, for $1\leq l_1\leq \frac{\log \left(\frac{n}{p_1p_2...p_{k-1}
p_k}\right)}{\log p_k}$. Similarly I proceeded this way for getting the maximum power of $p_{k-2}$ for which $p_{k-2}^{e_{k-2}} < p_{k-1}^{l_2}$, for $1 \leq l_2\leq l_1\frac{\log p_k}{\log
p_{k-1}}-1$. And so on. Finally I had to calculate the following sum for getting an upper bound for |S|, $$\sum_{l_1=1}^{\frac{\log \left(\frac{n}{p_1p_2...p_{k-1}p_k}\right)}{\log p_k}}\sum_
up vote {l_2=1}^{l_1\frac{\log p_k}{\log p_{k-1}}-1}.....\sum_{l_{k-1}=1}^{l_{k-2}\frac{\log p_3}{\log p_2}-1}l_{k-1}\left(\frac{\log p_2}{\log p_1}\right)$$.
0 down
vote Now these sums are easy to calculate for two, or three primes. Indeed for 4 primes things get complicated. I hope I was going correctly. Infact if I try taking the most trivial upper bound of
each sum, considering the upper limit of each sum. Then I find the sum bounded by, $$\frac{\left(\log \left(\frac{n}{p_1p_2...p_{k-1}p_k}\right)\right)^k}{\prod_{j=1}^{k}\log p_j}$$ That's
not a good bound since the numerator can be negative sometimes as well. Hopefully absolute value of this fraction would help since i understand why such negative sign might be coming while i
calculated the sum exactly for three primes.
@ Pietro: Thanks for your help. It was simple for an upper bound.
add comment
Not the answer you're looking for? Browse other questions tagged nt.number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/110814/products-of-prime-powers-less-than-a-number","timestamp":"2014-04-16T07:15:56Z","content_type":null,"content_length":"54785","record_id":"<urn:uuid:eab16c1c-b6e6-40cb-82f2-89d8672772fc>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
We're pleased to announce that (second RacketCon) will take place on October 13, 2012, at Northeastern University in Boston. This year, RacketCon will feature 3 2-hour tutorial sessions, as well as a
series of short talks about development in and of Racket over the last year.
Potential tutorial sessions include:
• Building a new domain-specific language using syntactic extension
• Using contracts in application development
• Adding types to an existing application with Typed Racket
• Parallelizing Racket applications with futures and places
Potential talks include:
• submodules
• WhaleSong
• futures and visualization
• distributed places
• optimization coaching
• Dracula and ACL2
• PLT Redex
Lunch will be provided.
On Sunday after RacketCon, we plan to hold a hackathon to work as a group on various Racket projects such as documentation improvements and FFI bindings. This will be organized by
Asumu Takikawa
To register, fill out
this form
. The
conference website
has more information.
[Edit on 2012-08-27, 12:31EDT: added code and pictures below. 2012-08-27, 13:10EDT: also incorporated some comments.]
I wrote this on the Racket educators' mailing list, and Eli Barzilay
suggested I post it here as well.
The article is about the difference between memoization and dynamic programming (DP). Before you read on, you should stop and ask yourself: Do I think these two are the same concept?; if you think
they are different, How do I think they differ?; and for that matter, Do I even think of them as related?
Did you think? Okay, then read on.
They most certainly are related, because they are both mechanisms for optimizing a computation by replacing repeated sub-computations with the storage and reuse of the result of those
sub-computations. (That is, both trade off space for time.) In that description is already implicit an assumption: that the sub-computation will return the same result every time (or else you can't
replace the computation with its value on subsequent invocations). You've almost certainly heard of DP from an algorithms class. You've probably heard of memoization if you're a member of this
language's community, but many undergrads simply never see it because algorithms textbooks ignore it; and when they do mention it they demonstrate fundamental misunderstandings (as Algorithms by
Dasgupta, Papadimitriou, and Vazirani does).
Therefore, let's set aside precedent. I'll tell you how to think about them.
Memoization is fundamentally a top-down computation and DP is fundamentally bottom-up. In memoization, we observe that a computational tree can actually be represented as a computational DAG (a
directed acyclic graph: the single most underrated data structure in computer science); we then use a black-box to turn the tree into a DAG. But it allows the top-down description of the problem to
remain unchanged. (As I left unstated originally but commenter23 below rightly intuited, the nodes are function calls, edges are call dependencies, and the arrows are directed from caller to callee.
See the pictures later in this article.)
In DP, we make the same observation, but construct the DAG from the bottom-up. That means we have to rewrite the computation to express the delta from each computational tree/DAG node to its parents.
We also need a means for addressing/naming those parents (which we did not need in the top-down case, since this was implicit in the recursive call stack). This leads to inventions like DP tables,
but people often fail to understand why they exist: it's primarily as a naming mechanism (and while we're at it, why not make it efficient to find a named element, ergo arrays and matrices).
In both cases, there is the potential for space wastage. In memoization, it is very difficult to get rid of this waste (you could have custom, space-saving memoizers, as Václav Pech points out in his
comment below, but then the programmer risks using the wrong one...which to me destroys the beauty of memoization in the first place). In contrast, in DP it's easier to save space because you can
just look at the delta function to see how far “back” it reaches; beyond there lies garbage, and you can come up with a cleverer representation that stores just the relevant part (the “fringe”). Once
you understand this, you realize that the classic textbook linear, iterative computation of the fibonacci is just an extreme example of DP, where the entire “table” has been reduced to two iteration
variables. (Did your algorithms textbook tell you that?)
In my class, we work through some of the canonical DP algorithms as memoization problems instead, just so when students later encounter these as “DP problems” in algorithms classes, they (a) realize
there is nothing canonical about this presentation, and (b) can be wise-asses about it.
There are many trade-offs between memoization and DP that should drive the choice of which one to use.
• leaves computational description unchanged (black-box)
• avoids unnecessary sub-computations (i.e., saves time, and some space with it)
• hard to save space absent a strategy for what sub-computations to dispose of
• must alway check whether a sub-computation has already been done before doing it (which incurs a small cost)
• has a time complexity that depends on picking a smart computation name lookup strategy
In direct contrast, DP:
• forces change in desription of the algorithm, which may introduce errors and certainly introduces some maintenance overhead
• cannot avoid unnecessary sub-computations (and may waste the space associated with storing those results)
• can more easily save space by disposing of unnecessary sub-computation results
• has no need to check whether a computation has been done before doing it—the computation is rewritten to ensure this isn't necessary
• has a space complexity that depends on picking a smart data storage strategy
[NB: Small edits to the above list thanks to an exchange with Prabhakar Ragde.]
I therefore tell my students: first write the computation and observe whether it fits the DAG pattern; if it does, use memoization. Only if the space proves to be a problem and a specialized memo
strategy won't help—or, even less likely, the cost of “has it already been computed” is also a problem—should you think about converting to DP. And when you do, do so in a methodical way, retaining
structural similarity to the original. Every subsequent programmer who has to maintain your code will thank you.
I'll end with a short quiz that I always pose to my class.
Memoization is an optimization of a top-down, depth-first computation for an answer. DP is an optimization of a bottom-up, breadth-first computation for an answer. We should naturally ask, what about
• top-down, breadth-first
• bottom-up, depth-first
Where do they fit into the space of techniques for avoiding recomputation by trading off space for time?
1. Do we already have names for them? If so, what?, or
2. Have we been missing one or two important tricks?, or
3. Is there a reason we don't have names for these?
Where's the Code?
I've been criticized for not including code, which is a fair complaint. First, please see the comment number 4 below by simli. For another, let me contrast the two versions of computing Levenshtein
distance. For the dynamic programming version, see Wikipedia, which provides pseudocode and memo tables as of this date (2012-08-27). Here's the Racket version:
(define levenshtein
(lambda (s t)
[(and (empty? s) (empty? t)) 0]
[(empty? s) (length t)]
[(empty? t) (length s)]
(if (equal? (first s) (first t))
(levenshtein (rest s) (rest t))
(min (add1 (levenshtein (rest s) t))
(add1 (levenshtein s (rest t)))
(add1 (levenshtein (rest s) (rest t)))))])))
The fact that this is not considered the more straightforward, reference implementation by the Wikipedia author is, I think, symptomatic of the lack of understanding that this post is about.
Now let's memoize it (assuming a two-argument memoize):
(define levenshtein
(lambda (s t)
[(and (empty? s) (empty? t)) 0]
[(empty? s) (length t)]
[(empty? t) (length s)]
(if (equal? (first s) (first t))
(levenshtein (rest s) (rest t))
(min (add1 (levenshtein (rest s) t))
(add1 (levenshtein s (rest t)))
(add1 (levenshtein (rest s) (rest t)))))]))))
All that changed is the insertion of the second line.
Bring on the Pitchers!
The easiest way to illustrate the tree-to-DAG conversion visually is via the Fibonacci computation. Here's a picture of the computational tree:
Now let's see it with memoization. The calls are still the same, but the dashed ovals are the ones that don't compute but whose values are instead looked up, and their emergent arrows show which
computation's value was returned by the memoizer.
Important: The above example is misleading because it suggests that memoization linearizes the computation, which in general it does not. If you want to truly understand the process, I suggest
hand-tracing the Levenshtein computation with memoization. And to truly understand the relationship to DP, compare that hand-traced Levenshtein computation with the DP version. (Hint: you can save
some manual tracing effort by lightly instrumenting your memoizer to print inputs and outputs. Also, make the memo table a global variable so you can observe it grow.)
While writing the code for the triangular distribution in the upcoming math library, I found that I needed a function that sorts exactly three numbers. This kind of code is annoying to write and to
get right. But it comes up rarely enough, and it seems simple enough, that I’ve never felt like making a library function for it.
But what if I wrote a macro that generated code to sort n numbers very quickly, where n is known at expansion time, but the numbers themselves aren’t? I think I could justify putting that in a
Here’s code that correctly sorts three numbers a, b and c:
(if (< b c)
(if (< a b)
(values a b c)
(if (< a c)
(values b a c)
(values b c a)))
(if (< a c)
(values a c b)
(if (< a b)
(values c a b)
(values c b a))))
It’s an if tree. Notice that there are 6 leaf expressions, for the 3! = 6 possible permutations. Also, it never compares more than it has to. It’s optimal.
The optimality came from my reasoning about transitivity. For example, only two comparisons are needed before returning (values a b c). I knew that both (< a b) and (< b c), so (< a b c) must be true
by transitivity.
It would be nice if the macro generated optimal code by explicitly reasoning about transitivity, or as an emergent property of the sorting algorithm it uses.
We’ll write a macro that does the latter, by generating a fully inlined merge sort.
[Edit: The final inline sort macro is here.]
Racket version 5.3 is now available from
Release Highlights:
• Submodules are nested module declarations that can be loaded and run independently from the enclosing module. See also the overview of submodules.
• The futures visualizer is a graphical profiling tool for parallel programs using futures. The tool shows a detailed execution timeline depicting the migration of futures between threads, and
gives detailed information about each runtime synchronization that occurred during program execution. In addition, would-be-future is a special type of future that always executes sequentially
and records all potential barricades a regular future would encounter.
• Optimization Coach (formerly Performance Report) reports information about Racket's inlining optimizations. Optimization Coach can be launched in any language through the View menu.
• The new images/flomap library defines floating-point bitmaps and fast image processing operations on them. It is written in Typed Racket, so Typed Racket code may use it without the cost of
contract checks.
• The new json library supports parsing and generating JSON. (Originally based on Dave Herman's planet library.)
• racket/string is extended with a set of simplified string manipulation functions that are more convenient than using regexps. regexp-match* and friends can now be used with new keyword arguments
to return specific matched regexp group/s and gaps between matches.
• The new racket/generic library allows generic function definitions, which dispatch to methods added to a structure type via the new #:methods keyword.
• The class form supports declaring a method abstract. An abstract method prevents a class from being instantiated unless it is overridden.
• The contract library comes with support for interfaces, generics, prompts, continuation-marks, and structs.
• Most error messages use a new multi-line format that is more consistent with contract errors and accommodates more information.
• Typed Racket supports function definitions with keyword arguments; the startup time of Typed Racket programs has been sharply reduced.
• The new ffi/com library replaces MysterX; a compatibility mysterx library remains, but without ActiveX support. The new ffi/unsafe/com library offers a more primitive and direct way to use COM
classes and methods.
• There is now a very complete completion code for zsh. It is not included in the distribution though; get it at http://goo.gl/DU8JK (This script and the bash completions will be included in the
standard installers in future versions.)
Effective this release:
• The tex2page and combinator-parser libraries have been moved from the Racket distribution to PLaneT:
(require (planet plt/tex2page))
(require (planet plt/combinator-parser))
• The following has been deprecated and will be removed in the January 2013 release:
the planet command-line tool; use raco planet instead.
• The following has been deprecated and will be removed in the August 2013 release:
the mzlib/class100 library; use racket/class instead. | {"url":"http://blog.racket-lang.org/2012_08_01_archive.html","timestamp":"2014-04-16T07:31:51Z","content_type":null,"content_length":"86944","record_id":"<urn:uuid:6ef0249f-134c-469a-bb58-c35436b8efc4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Following Equation Describes A Certain... | Chegg.com
The following equation describes a certain dilution process, where y(t) is the concentration of salt in a tank of fresh water to which salt brine is being added.
Suppose that y(0) = 0.
a. Use MATLAB to solve this equation for y(t) and to plot y(t) for 0 ≤ t ≤ 10.
b. Check your results by using an approximation that converts the differential equation into one having constant coefficients. | {"url":"http://www.chegg.com/homework-help/following-equation-describes-certain-dilution-process-y-t-co-chapter-7-problem-62p-solution-9780073529271-exc","timestamp":"2014-04-20T01:51:21Z","content_type":null,"content_length":"30982","record_id":"<urn:uuid:f95fb792-8786-4eb7-8e76-4d1b74a3a80b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - light ray paths near schwarzschild blackhole
George Jones
Are you interested in what an observer would see with his eyes, or in plotting the trajectory of a photon on a coordinate map, or maybe both? The first thing is somewhat hard, and the second thing is
Let me use an example to illustrate what I mean. Suppose a ship makes a long ocean voyage. The voyage could be watched from a telescope on a satellite in geosynchronous orbit, or the voyage could be
plotted as a moving, glowing dot on a map.
My end goal will be to render what an observer would see , but for now I would like to simply plot the trajectory of several photons on a coordinate map.
I've been studying geodesics (not with much luck) and reading up on equations for the impact parameter and deflection angle in the schwarzschild metric which I have as :
[tex]\Delta\phi = \int^{r_{observed}}_{r_{omitted}} \stackrel{dr}{r\sqrt{r^{2}/b^{2} - 1 + R_{s}/r}} [/tex] | {"url":"http://www.physicsforums.com/showpost.php?p=2409536&postcount=7","timestamp":"2014-04-21T09:56:50Z","content_type":null,"content_length":"8412","record_id":"<urn:uuid:abf6be48-440f-4ecb-bf10-2bd7be3891a2>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATCH Formula Excel – How to use Excel MATCH Function
Syntax of MATCH Formula
Example of MATCH Formula
Possible Errors returned by the MATCH Formula
MATCH formula in Excel returns the relative position of a value in an array or a range
MATCH Formula Syntax
MATCH Formula has three parts:
MATCH (value_to_find, range_to_find_value_in, match_type)
value_to_find is the value which we are trying to find. It can be a number or a string. If the value_to_find is specified as a string, you can use special wildcard characters to specify the string.
(For Single character use ? and for multiple characters use *)
range_to_find_value_in is the range in which we would like to locate the value in. This range has to consist of cells in a single continuous row of column. A 2-dimensional range will not generate the
correct result.
rmatch_type is an optional parameter that specifies how we would like the match function to operate. This parameters can have three potential values -1, 0 and 1. If the value is specified as -1 then
the function would try to find the smallest value in the range that is greater than or equal to the value we are trying to find. For this parameter to work correctly, the range has to be sorted in
the descending order. If the value is specified as 1 then the function would try to find the largest value in the range that is less than or equal to the value we are trying to find. For this
parameter to work correctly, the range has to be sorted in the ascending order. If the match_type parameter is omitted, it is assumed to be 1. Finally if the parameter is specified as 0, the first
exact match to the value is returned, irrespective of the sort order of the range. (Please note that this parameter can also be a boolean with TRUE for an approximate match and FALSE for an exact
Also worth noting is that if a match is found, the formula returns the relative position of the value in the range which means that the position is specified with respect the first cell of the range
specified and not the first cell in the spreadsheet (A1). So incase you plan to use this formula with any other function, one needs to be aware of this.
Example of a MATCH Formula
Let’s us take a look at an example of the MATCH formula. Suppose we had a list of managers (names) for a business as shown in the above example. And now let’s say that we wanted to find the position
at which a specific manager’s name (Say “Bill”) appears in the list. We could simply enter =MATCH(“Bill”,A1:A10,0) or=MATCH(“Bill”,A1:A10,-1). The formula mentioned first will go ahead and look for
an exact match for “Bill” and if an exact match is found, will return the position (relative to the first cell in the range). The second would have tried searching for a match which is the closest
value less than or equal to the value specified. Hence the chances of finding a match, all else being equal, are higher when using the later function albeit at the cost of lower accuracy.
MATCH formula with the single character (?) and multiple character (*) wildcard operators
Let’s now take the example where you would like to know the position of the first occurrence of name starting with a ‘j’ with an ‘m’ as the third character. We have two such names in the list – Jim
and James. Aplhabetically James appears earlier but in our list Jim is listed at position 3 and James at 5. Now if we were to use the formula = =MATCH(“j?m”,A2:A10,0) we would get the 3 as the
result. Why? Because we specified the single character wildcard (?) and 0 as the third parameter. The ? wildcard character will give a match for any single character at that position. The 0, as we’ve
seen earlier, works to find and exact match.
Now say we wanted to find any name with the substring “rri” in it. We have two such names “Terri” and “Sherri” with Terri appearing first. We could use the multiple character wildcard operator (*)
and write something like =MATCH(“*rri”,A2:A10,0). What we’ve done here is to specify the function to find the first occurrence where the substring “rri” occurs anywhere in the name. In this case the
result will be 1 since “Terri” is a name that occurs in the first cell with the given substring. You can combine both the wildcard characters and write something like =MATCH(“?e*ri”,A2:A10,0) as a
How to enter the MATCH formula in an Excel Sheet
1. Select the cell in which you want to place the formula
2. Type the formula as =MATCH(
3. Then enter the value that you would like to find. This can be a number, a string, a cell address or a combined expression with a wildcard character as shown above.
4. Press the comma key (,)
5. Then using the mouse up-down and left-right keys, move to the first cell in range in which you would like to locate this value.
6. Keeping the SHIFT key pressed, move the cursor to the last cell of the range.
7. At this point you can either close the formula by entering the closing bracket ) enter the optional third parameter. If you want to enter the third parameter, read on.
8. Press the comma key (,)
8. Enter a value from (-1,0,1) as the match_type parameters. (0 for an exact match, -1 and 1 for approximate matches as described above)
9. Close the formula by entering the closing bracket ).
Check out the clip above for knowing if the values you’ve entered are in the same order. In the end your formula should look something like this =MATCH(“jim”,A2:A10,0)
Possible Errors with the MATCH Formula
The MATCH formula can result in the following error values:
MATCH Formula #NAME? Error
If the range or a parameter has been incorrectly specified, the MATCH formula can result in the #NAME? error. For Example specifying =MATCH(“?e*ri”,A2:AAAA10,1) would lead to this error since the
cell AAAA10 does not exist in the spreadsheet. Similarly specifying the match_type parameter as a string can lead to this error.
MATCH Formula #NA Error
If the MATCH formula does not find a match, it will generate the #N/A error value.
You can download an example of MATCH formula here or click on the button below: | {"url":"http://www.databison.com/match-formula-excel-how-to-use-excel-match-function/","timestamp":"2014-04-17T21:22:34Z","content_type":null,"content_length":"51207","record_id":"<urn:uuid:26920bc8-4edf-4973-bdc4-954348844cb2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00262-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differential operators preserving the space of harmonic functions (aka higher symmetries of the Laplacian)
up vote 4 down vote favorite
The article http://arxiv.org/abs/hep-th/0206233 (published in Ann. of Math. (2) 161 (2005), no. 3) deals with linear differential operators $D$ for which there exists another linear differential
operator $\delta$ such that $\Delta D = \delta \Delta$. Obviously these operators preserve the kernel of $\Delta$, i.e. the space of harmonic functions. The mentioned article finds essentially all
such operators $D$. The result is that up to trivial operators $D = P\Delta$ all the operators $D$ have polynomial coefficients and are generated by sums of compositions of first order operators of
this kind.
First question: Let $D$ be any differential operator preserving the space of harmonic functions. It is easy to see that the operator $\delta = \Delta D (\Delta)^{-1}$ is well defined and satisfies $\
Delta D = \delta \Delta$. Is $\delta$ also a differential operator?
Second question: Is it true that all differential operators, which preserve the space of harmonic functions, are generated by first order ones with this property?
One can also ask these questions only for linear differential operators or for operators from the Weyl algebra (i.e. linear differential operators with polynomial coefficients). For example, by a
theorem of Peetre, the answer to the first question is affirmative if the operator $\delta = \Delta D (\Delta)^{-1}$ is local (i.e. the support of $\delta u$ is contained in the support of $u$).
Third question: What makes the linked article so interesting that it was published in Annals?
The concept of the higher (or generalized) symmetry is much broader than you appear to imply in your title, see e.g. Olver's book from my answer. So I suggest editing the title: e.g. change the aka
part to, say, Higher Symmetries of the Laplacian, to allude to the title of the paper you refer to. – mathphysicist Jun 15 '10 at 21:00
add comment
1 Answer
active oldest votes
The answer to your second question (unless I somehow misread it) is yes precisely because of the result of the paper you refer to (you may also wish to look at this paper and the preprint
math-ph/0506002 which address the same subject). This is the case because if $D$ is a differential operator that preserves the space of harmonic functions then there indeed exists a
differential operator $\delta$ such that $\Delta D = \delta \Delta$. The latter holds (see e.g. the discussion at p.290 near Eq.(5.5) of the book Applications of Lie groups to Differential
Equations by P.J. Olver) because the equation $\Delta f=0$ is totally nondegenerate in the sense of Definition 2.83 of the same book. In spite of the rather technical language the idea behind
up all this is very simple: if you have a submanifold $N$ of an manifold $M$ defined by the equations $F_1=0, \dots, F_k=0$ with smooth $F$'s and $k<\mathrm{dim}\ M$, then a smooth function $h$
vote 1 vanishes on $N$ iff there exist smooth functions h[j] on $M$ such that $$h=h_{1} F_1+\cdots+h_k F_k$$ provided $dF_1\wedge \dots \wedge dF_k\neq 0$ on $N$ (see Proposition 2.10 of the same
down book). In a sense, this is a smooth counterpart of the famous Hilbert's Nullstellensatz in the form stated e.g. here. This result is then applied to the case when $M$ is a jet bundle and $N$
vote is a submanifold thereof defined by a system of differential equations and all its differential consequences (more precisely, one should rather consider the consequences only up to a certain
order, to avoid dealing with infinitely many equations), et voila.
add comment
Not the answer you're looking for? Browse other questions tagged ap.analysis-of-pdes symmetry ca.analysis-and-odes rt.representation-theory weyl-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/28277/differential-operators-preserving-the-space-of-harmonic-functions-aka-higher-sy","timestamp":"2014-04-19T02:07:57Z","content_type":null,"content_length":"56792","record_id":"<urn:uuid:d215f23b-ac7c-4be5-b1a5-4506636be50f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
DNA molecule from sweets
Build your own DNA molecule from sweets
• 2 different coloured long candy or liquorice cords such as:
│ │ │ │ │
│ │ Red │Gummi Worms│ │
│Black Licorice │Licorice│ │Sour Gummi │
│ │ │ │ Worms │
• 1 large packet of similar sized soft sweets such as:
│ │ │ │ │ │ Gummi Sour │
│Assorted Licorice │Gummi│Gummi Raspberries │ │ │Coke Bottles│
│ │Bears│ │Red Licorice│Wine Gum│ │
• 1 box of cocktails sticks
• Needle and thread
Procedure: (Download instructions)
1. Take the two candy cords - assign one colour to represent the pentose sugar molecules and the other to represent the phosphate molecules. Cut the candy cords into 2 - 3 centimetre pieces.
2. Using the needle and thread string half the candy pieces together lengthwise alternating the two colours to form a chain.
3. Repeat step 2 with the remaining half of candy pieces to form a second chain of the same length.
4. Lay the two chains down side by side so pieces of the same colour are opposite one another.
5. Count the number of pentose sugar molecules you have in one chain (you should have the same number in both chains). Obtain this number of cocktail sticks. These represent hydrogen bonds that hold
the base pairs together.
6. Divide the sweets into four different colours. Assign names to the each of the four colours to represent the nucleotide bases - adenine, cytosine, guanine or thymine.
For example:
adenine thymine
cytosine guanine
7. The bases have to be paired up on the cocktail sticks. Adenine always pairs with thymine and guanine with cytosine, so make sure you get the right colours matching.
8. Push each end of a cocktail stick into candy pieces representing pentose sugar molecules lying opposite one another - the cocktail sticks should join the two chains together so they look like the
rungs of a ladder.
9. Hold the end of each chain and twist slightly to get the double helix effect for your DNA model.
Back to Home Page | {"url":"http://nobel.scas.bcit.ca/resource/dna/dna_sweets.htm","timestamp":"2014-04-18T21:29:13Z","content_type":null,"content_length":"10100","record_id":"<urn:uuid:6ece722c-2e5b-4566-a2e9-735b31ccf1b7>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00387-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
If I have 9764 pencils in 5 years what is the probability of the number pencils that I will have in the next 8 years.
• 11 months ago
• 11 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/51963d43e4b0c719b63f47bb","timestamp":"2014-04-16T20:02:44Z","content_type":null,"content_length":"39752","record_id":"<urn:uuid:29bf223b-d2c5-402a-9c5a-785b3efb3e37>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Return of the Weekend Challenge
I didn’t do a Weekend Challenge last weekend. It’s not like I didn’t want to, y’all. Things just got totally cray-cray. My use of “cray-cray” in the previous sentence should be enough of an
indication to you that things remain squarely thus.
I actually wrote two questions for this weekend, but I’m only going to use one of them. The other one is going into the strategic reserve. For emergencies.
If you’re the first to answer this correctly (and not anonymously), I’ll bestow upon you coveted access to the PWN the SAT Math Guide Beta.
Other ways you can gain access to my Magnum Opus:
• Buy it. It’s $5. You get about 300 pages of useful SAT math help. I get a footlong meatball sub at Subway.
• Send me a question of your own making. Added bonus: this is a fantastic way to solidify your knowledge of the test.
On to the question:
In the figure above A is the center of the circle, A and D lie on BF and CE, respectively, and B, D, F, and G lie on the circle. If BC = 3, and DG (not shown) bisects BF, what is the total area
of the shaded regions?
Good luck, and have a great weekend. I’ll post the solution early next week.
UPDATE: Commenter Katieluvgold got it first. Nice work Katie! Solution posted below the cut.
This is a shaded region question, and the usual shaded region technique applies, but with a twist. Since only parts of the top half of the circle are shaded, let’s just look use the top semicircle as
We know AD is perpendicular to CE because it’s a radius connecting to a tangent line. Those are always perpendicular. It’s a rule. And since A and D are both on rectangle BCEF, and BC = 3, AD must
also equal 3.
So the radius of the circle is 3, so the area of the circle is 9π. That means the area of the semicircle is half that. A[whole] = 4.5π.
Point A is the center of the circle, so it’s obviously the center of diameter BF. When the question tells us that DG bisects BF, it’s telling us that DG is also a diameter because it also passes
through the center of the circle. That means ΔCEG has both a base and a height of 6.
Because the rectangle’s top and bottom are parallel, that means the top triangle (which is our A[unshaded]) is similar to the large triangle, because all the angles of the two triangles are the same
(the top one is shared, and the bottom ones are corresponding angles across parallel lines). We don’t have a way of knowing what the base is automatically, but we DO know that the height of it is 3,
because the height of the little triangle is a radius. So the small triangle has a base of 3 and a height of 3.
The area of a triangle is (1/2)bh, so A[unshaded] = 4.5.
To calculate A[shaded], we just subtract A[unshaded] from A[whole]:
A[whole] – A[unshaded] = A[shaded]
4.5π – 4.5 = A[shaded] | {"url":"http://pwnthesat.com/wp/2011/10/return-of-the-weekend-challenge/","timestamp":"2014-04-24T18:51:19Z","content_type":null,"content_length":"69472","record_id":"<urn:uuid:6ef1a8e1-59e6-41fb-af16-c290ca858d9c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
January 10th 2009, 11:15 PM #1
Jan 2009
A palindrome is a string which, when reversed it is identical to the original string, eg. 0110 is a palindrome and 0100 is not. How many bit strings of length n are palindromes?
Please help me with this question.
I would really appreciate any help.
n even:
Spilt the string in half. The first n/2 characters can be either 0 or 1 so there are $2^{n/2}$ arrangements. You require the second half to be the reverse of the first half and there's only 1 way
that can happen. Therefore ....
n odd:
The middle character can be 0 or 1 so there's 2 possibilities for the middle.
The characters to the left of the middle character can be either 0 or 1 so there's $2^{(n-1)/2}$ arrangements. You require the characters to the right of the middle character to be the reverse of
the characters to the left and there's only 1 way that can happen. Therefore ....
January 10th 2009, 11:52 PM #2 | {"url":"http://mathhelpforum.com/discrete-math/67646-palindrome.html","timestamp":"2014-04-17T12:49:02Z","content_type":null,"content_length":"31203","record_id":"<urn:uuid:232be710-3db7-4f08-855a-1dc2bd6a60e2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mission Viejo Calculus Tutor
Find a Mission Viejo Calculus Tutor
...I am patient, engaging and friendly. If a given approach does not result in success, then I will try another and yet another so that understanding is achieved. My job isn't done until the
client understands the material.
13 Subjects: including calculus, English, physics, statistics
...I also teach Pre-Calculus and Physics for Biola Youth Academics. In middle school, I participated in math competitions, primarily MathCounts, in which I was first place in the state of
California in 8th grade. A few years later, I scored a 5 on the AP Calculus BC exam in 10th grade.
6 Subjects: including calculus, physics, SAT math, trigonometry
...I have a Masters (University of Cincinnati) and PhD in Aerospace Engineering (Virginia Tech). Differential equations are used extensively in engineering subjects. I am approved to teach this
subject at the University of Phoenix and many other schools online and on ground. I have completed the CFA exams and I am currently a Charterholder of the CFA institute.
12 Subjects: including calculus, statistics, trigonometry, algebra 1
...Personally, I have always had what I consider a poor memory and consequently, I have relied on conceptual understanding and regularly re-derive relevant equations. I believe this approach has
helped me in my own studies and I have endeavored to instill this same focus on fundamental understandin...
6 Subjects: including calculus, physics, trigonometry, precalculus
Hello! My name is David, and I hope to be the tutor you are looking for. I have over 5 years of tutoring experience in all math subjects, including Algebra, Geometry, Trigonometry, Pre-Calculus,
Calculus, Probability and Statistics.
14 Subjects: including calculus, physics, geometry, statistics | {"url":"http://www.purplemath.com/Mission_Viejo_Calculus_tutors.php","timestamp":"2014-04-19T12:02:09Z","content_type":null,"content_length":"24124","record_id":"<urn:uuid:503f5a7e-62ae-4ab6-a720-c8b6e1c3e30f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00361-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jupiter Algebra 2 Tutor
I am from an engineering background and have scored throughout distinction in my field. I have knowledge in physics, maths and geometry and can teach English for students pursuing TOEFL. I had my
own teaching classes in India and have a method of teaching students by understanding their capability.
24 Subjects: including algebra 2, reading, English, writing
...I was a High School teacher and substitute right out of college but then switched careers to retail management. For the past year I have been a private in home tutor to 5 local students. All
students passed their math EOC in the Jupiter schools and the youngest student who was in 5th grade received the Silver Award for most improved during the school year.
16 Subjects: including algebra 2, English, reading, writing
Mathematics is one of my strengths. I am certfied to teach mathematics in the Palm Beach school district schools. I use plenty of visual examples to better understand mathematical concepts and
create positive learning experience. I am willing to answer any questions and concerns.
13 Subjects: including algebra 2, geometry, precalculus, algebra 1
...I have had the privilege to teach many different subjects during my career, which has led to a fun and exciting experience. My tutoring specialties include the areas of Social Studies (US and
World History, Government, & Economics), basic and advanced algebra, biology, and any ACT or SAT test pr...
20 Subjects: including algebra 2, English, reading, writing
...I have taught Mathematics and Physics in High School and College for over 10 years, I have also taught Spanish privately in recent years. I have a degree in Physics and post-degree studies in
Geophysics and Information Systems. My background as a physicist, my experience as a teacher in college...
10 Subjects: including algebra 2, Spanish, physics, geometry | {"url":"http://www.purplemath.com/Jupiter_Algebra_2_tutors.php","timestamp":"2014-04-20T16:01:11Z","content_type":null,"content_length":"24144","record_id":"<urn:uuid:7699de74-92e3-443e-8226-3b1c0a61643a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: matrix encoding IS adjacency list
From: VC <boston103_at_hotmail.com> Date: Mon, 19 Sep 2005 22:50:45 -0400 Message-ID: <2aydnbeuzNmY5LLeRVn-iA@comcast.com>
"Vadim Tropashko" <vadimtro_invalid_at_yahoo.com> wrote in message news:1127179828.730374.139640_at_g14g2000cwa.googlegroups.com...
> VC wrote:
>> OK, at first you talked about an adjacency list, but no matter. Let
>> a,b,c,d,e be some nodes. Then an adjacency list for some DAG/tree might
>> be
>> a set of pairs:
>> {(a, {b,c}), (b, {d,e}) or, equivalently, {(a,b), (a,c), (b,d), (b,e)}
>> In the RM, an adjacency list is usually emulated by a (child, parent,
>> data) relation with the 'child' attribute being the primary key. One can
>> enforce a referential integrity constraint if need be, but that's
>> although
>> desirable is not required in order to express a hypothetical tree
>> topology.
> This terminology is blamed on Celko:-) Seriously, we are talking about
> the relation analogous to adjacency matrix.
Please explain.
>> >Where do you see this constraint
>> > with materialized path encoding? In other words, given parent node
>> > materialized path encoding, how do I find all the (immediate) children?
>> > Trivial with adjacency list.
>> It's as trivial with a materialized path: in the above example in order
>> to
>> find all node a.b 's children, just say 'find tuples where
>> prefix(materialized_path) = a.b'.
> Nope. You query looks for descendants, not (immediate) children. You
> can say: "aha, just add a postfilter", or in other words find all the
> descendants, that are on the next level, but that is not guaranteed to
> be efficient.
Sorry, yes, that's what I meant.
>For the root node the descendants index scan would return
> the whole hierarchy, and then you'd have to filter out almost all of
> them.
>> > Referring to the problem of (a11,a21) being not unique, I actually
>> > figured it out. There are exactly 2 nodes with the same (a11,a21). This
>> > follows from the equation
>> >
>> > a11*a22-a21*a12 = 1 or -1
>> This actually follows from the fact that a rational number, as a
>> continued
>> fraction, has two encodings (if we are talking about the same thing).
> Yes
>> > It is possible to amend matrix encoding in such a way that determinant
>> > of [[a11,a12],[a21,a22]] matrix is always 1. Then, (a11,a21) is unique,
>> > and we can decare foreing key constraint. Matrix encoding IS adjacency
>> > list!
>> Of course it is, m.e being a materialized path encoding. You seem to
>> imply
>> that the constraint makes finding, say, all the immediate children
>> somehow
>> easier compared to the constraint-less search. Could you please
>> elaborate ?
> Once again, your query should be written as
> 'find tuples where prefix(materialized_path) = a.b
> and length(materialized_path)=3'
> which is not the same as
> 'find tuples where parent_id = 5'
Well, nothing is for free. For certain queries, an m.p. encoding may cause performance issues that can be resolved by maintaining a separate/redundant parent attribute, creating a function-based
index, or using the real adjacency list encoding. There is no universal solution for traversing hierarchies, unfortunately (unless you want to use Oracle's proprietary 'connect by' or the standard
recursive query (DB2/SQL Server 2005).
Still, you did not show how the referential constraint helps with finding immediate children in the matrix encoding.
I can see how you would use the parent pair of numbers, of course. Btw, the pair is redundant: it can be deduced from the child pair if disambiguated by a boolean value indicating which, out of two,
rational encoding you want to use. Received on Mon Sep 19 2005 - 21:50:45 CDT | {"url":"http://www.orafaq.com/usenet/comp.databases.theory/2005/09/19/0275.htm","timestamp":"2014-04-20T06:10:45Z","content_type":null,"content_length":"12518","record_id":"<urn:uuid:b315097b-905c-4eca-8031-c3423cc159bd>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
Public Surface
45 + 100 points
March 8th, 2008 2:01 PM
Welcome to Vert3x!
Vert3x is a simple yet amusing game of trickery, cunning, and aggression. You must be thinking steps ahead of your opponent, else you'll end up on the losing side swiftly!
Vert3x: preparing the paper
In Vert3x you play on a board of Triangles. The size of the board is up to you! The longer, and more challenging a game you want, the bigger board you play on. The shape of the board is also up to
you! Once you've drawn your board, you need to choose a start position, you could choose anywhere or start opposite sides, it's up to you! When that's done, you're ready to go!
Vert3x: Playing the game
There are many additions on Vert3x, and one of the attractions is that you can think up your own variations and rules, but here we've listed the basics.
Vert3x: Rules: Aim
The aim of Vert3x is to get as many points as possible by the end of the game. Simple.
Vert3x: Rules: Gameplay
In the game of Vert3x, players take turns in putting down lines. They must place lines that touch their existing lines at least once. Players may not cross other players lines, but they can touch
them. You can cross your own lines.
Vert3x: Rules: Scoring
The basic scoring in Vert3x comes from simple things. Every line you place down, earns you one point! So far so good.
The other way to earn score is to draw triangles with your lines. Once you complete a triangle you receive points based on how large it is. Ie, the smallest triangle you can create is 3 points. For
larger triangles, you receive points based on how many lines its outline is made of.
Vert3x: Variations
There are many variations on Vert3x, and half the fun is thinking up your own! Here's a few to start you off.
Multiplier bonuses for larger and larger triangles.
Multiple lines
Extra bonuses for other shapes
Mink: I've played a game where you get points for boxing in someone else's line, could incorporate this maybe.
Ways to 'Overrule' other players lines
Ways to steal points
Bonuses for creating triangles covering other triangles.
Mink/Fish Susy Derkins/Chalk Girl
Rules of our version here:
In Vert3x (Trianguliche version) you play on a board of triangles. The standard board (for a fast game with two players) is a 2x2 board made by four windows (each one made of four "windowpanes"),
divided in triangles by an X. Players take turns tracing down lines on the triangle grid: each player traces lines of a different color. The aim is to draw triangles of your own color: the more and
the bigger the better. And, of course, at the same time, to prevent the other player from drawing triangles of his/her own color, by blocking vertexes with your color. Only triangles with three sides
of the same color count for the score.
The scoring in Vert3x (Trianguliche version) comes from the number of triangles drawn. Individual triangles (also called 1x triangles) are worth 1 point. 2x-triangles (made by two individual
triangles) are worth 2 points, 4x-triangles are worth 3 points, 8x-triangles are worth 4 points and so on.
You can play Vert3x (Triaguliche version) on a larger board (i.e., substitute the 2x2 board by a 3x3, 4x4, 5x5, etc) and it can also be played with more players, as long as there are different colors
for each one. .
See captions in the "larger" pictures for the details of our game in a most public surface: the town square (the plaza).
Tom, Daryl, Adam and Askew met up on top of the mulit-storey to play a large scale game of Vert3x (GYØ Rules), Ben was supposed to be there but a communication breakdown meant he didn't arrive in
Vert3x: GYØ Rules In GYØ Rules Vert3x, you play on a large dotted square board. This can be any size, we went for 15x15. Players take turns placing their respective colour lines down on the board.
Lines may only go from dot to dot, and may go horizontally, vertically or diagonally. Lines may only span between 2 rows/coloumns. The aim is to draw triangles, thus scoring points. Players do not
get an extra turn after completing a triangle. The scoring in GYØ rules Vert3x is as follows. A triangle made up of 3 small lines, ie: the smallest triangle possible, is a triangluar unit. One
triangular unit is worth 1 point. Triangles made up of more than one triangular unit are scored as 1 point per unit contained within them. Units previously claimed by other players do not count
toward the score. Once a large scale triangle is 'scored' lines may not be placed inside it to split it up. For example: In this example, D will have already scored 1 point. T then scores 3 points
for completing the triangle. T does not receive a point for the unit already scored by D. Lines now cannot be placed inside this triangle to split it up even more. Our game
We wanted somewhere peacfull, big and awesome. Where better than the top of a multistorey carpark undergoing building work?
We got a few odd looks from builders, but we were mostly ignored. Some skaters came up and had a look, but left quickly.
After a slow start whilst everyone learnt, and settled on our version of the rules, Tom was ahead after getting two larger triangles. Then Askew and Adam pulled ahead with a string of larger
triangles. Daryl was losing badly.
About 20 minutes later, Askew was winning and Daryl was annoyed. Then Daryl scored a 35pt mega triangle, which we were not happy about.
Final scores:
Daryl: 71
Tom: 61
Askew: 66
Adam: 61
So there you have it, Vert3x. Give it a try next time you're bored and all you have to hand is a pen and some paper.
Note: Haberley, Cpt. Dorothy, Ben, Leigh, Emma - If you have any proof, send it to us!
20 vote(s)
Favorite of:
11 comment(s)
posted by
GYØ Ben
on March 9th, 2008 2:50 PM
Team Shplank? Really?
It certainly deserves more votes though.
posted by
on March 9th, 2008 3:09 PM
I'm with Ben. It's a good completion, but far from a shplank, IMHO.
posted by
Kyle Westwood
on March 11th, 2008 1:30 PM
This was an awsome completion and deserves more votes. Wish our attempt at this task hadn't failed so miserably.
posted by
susy derkins
on March 11th, 2008 1:36 PM
What happened? A case of "why does it always rain on me"?
posted by
on March 11th, 2008 1:45 PM
And why not give it another go?
This task is so much fun to complete.
posted by
on March 18th, 2008 12:38 PM
What happened? A case of "why does it always rain on me"?
Nah, more a case of "why do we get reprimanded by security guards and almost taken to the police?"
posted by
on March 18th, 2008 12:45 PM
Ah, a problem we face quite often. Infact I think it was when we were doing this that we got hastled by shopping centre security about why we needed to go to the roof. Or at least Ben did.
Probably saw us on CCTV too.
posted by
Kyle Westwood
on March 23rd, 2008 7:53 AM
Yeah we also tried escape from the camera too. That almost ended badly because we ran through a shopping centre and the guards shouted at us to come back, we didn't but found out later that they were
all on the lookout for us. It was a bad day really but it wont stop us from trying again and making it better.
posted by
on March 23rd, 2008 12:44 PM
"It was a bad day really"
I'm sorry, what?
A fairly productive day if you ask me!
But yeah, EFTCII and Public Surface completions upcoming (if we don't get arrested)!
posted by
Kyle Westwood
on March 23rd, 2008 4:54 PM
Ok, not a bad day due to clues but in the whole get in trouble with authorities it was a bad day. | {"url":"http://sf0.org/inYarmouth/Public-Surface","timestamp":"2014-04-18T15:43:25Z","content_type":null,"content_length":"52124","record_id":"<urn:uuid:555a59aa-5dae-4c0f-84d8-0a4d28e9ef59>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
Classical probability.
"Bus tickets in a certain city contain four numbers, U, V, W, X. Each of these four numbers is equally likely to be any of the ten digits 0, 1, 2,...,9 and the four numbers are chosen independently.
A bus rider is said to be lucky if U + V = W + X. What proportion of the bus riders are lucky?"
I tried calculating the numerator, but there are just so many possibilities, what with repetition of digits! Needless to add, I couldn't get to the answer. :| I was wondering if there's a simple,
more elegant way of doing this.
Thanks in advance! | {"url":"http://www.physicsforums.com/showthread.php?p=3845859","timestamp":"2014-04-20T23:45:04Z","content_type":null,"content_length":"19957","record_id":"<urn:uuid:01339f6f-674e-405e-977a-dbe99b267512>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00142-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integration by iteration - WyzAnt Answers
Calculate an expression for the int from 1 to e of (ln(x))^k dx in terms of the integral from 1 to e of (ln(x))^k-1 dx.
essentially, I need to integrate by parts to get this equality:
The integral from 1 to e of (ln(x))^k dx= (e-k) times(integral from 1 to e of (ln(x))^k-1 dx.
Then I will take the value of the original integral with k=1 to help me form a table of values for a series of k's.
I get close, but can't get there. Anyone able to help?
Tutors, please sign in to answer this question.
2 Answers
Remember the integration-by-parts formula:
∫u v' dx = u v - ∫ v u' dx.
Now let u=(ln x)^k and v'=1. Then u'=k (ln x)^k-1/x and v=x. Put this into the formula and you get:
∫(ln x)^k dx = x (ln x)^k - k ∫(ln x)^k-1 dx.
For the definite integral from 1 to e, evaluate the integrated term at those boundaries:
[x (ln x)^k][1]^e= e ln e - 1 ln 1 = e.
∫[1]^e (ln x)^k dx = e - k ∫[1]^e (ln x)^k-1 dx.
Note: the iteration holds for any real number k≠0; however, you will get a finite number of terms if and only if k is a positive integer.
Thanks a ton Andre. You confirmed that I was doing it correctly. I realize now that I lost sight of what I ultimately had to do and thought I needed to be able to evaluate the resulting integral.
When I saw that it wasn't simplified, I thought I was doing the mechanics wrong. You were right on. I used the equality to make a reduction formula and set up a small table of values. I appreciate
the course correction.
Glad I could help, Judith. Iterated integrals are fun and very important. You can use them to define the factorial (k!) for any real number, not just positive integers! You can even use them to
define a fractional derivative, e.g., the 1/2-th derivative of ln(x). Strange and beautiful.
I hope I can help you with this, Judith.
If k = 1
∫[x] [=1]^x=e ln(x)dx = x[ln(x)] - x = 1
If k = 2
∫[x =1]^x=e [ln(x)]^2dx =2x + x[ln(x)]^2 - 2xln(x) = e - 2
If k = 3
∫[x =1]^x=e [ln(x)]^3dx = 6 - 2e
∫[x =1]^x=e [ln(x)]^4dx = 9e - 24
∫[x =1]^x=e [ln(x)]^5dx = 120 - 44e
I'm not sure if this is exactly what you need. If not, please write back.
Thanks William for your speedy response. It wasn't the method I was required to use, but I have it covered now. | {"url":"http://www.wyzant.com/resources/answers/24121/integration_by_iteration","timestamp":"2014-04-21T07:43:01Z","content_type":null,"content_length":"44570","record_id":"<urn:uuid:1154aca8-7094-49c7-9904-fc2fc98c9bc0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
permutation - basic
July 23rd 2010, 10:19 PM #1
Mar 2008
permutation - basic
below the basic question of permutation:
given that 4 letters are chosen from the word PAYMENT,find the number of arrangement that start with a vowel
ans: 2P1 x 6P3 = 240
my ans: 120
any guide?
It seems like the question assumes the only vowel can be in the first postition.
Giving $^2P_1 \times ^6P_3 = 240$
As you may realise you start the 4 letter arrangement with a vowel and have another vowel (as there are 2) in positions 2-4.
tq for your post, but i hope u can commnet my ans attached as below:
Why can't N or T be in the last postition? I think you are calculating combinations not permutations as you should. Are you aware of the difference?
it mean what i am doing is combination?
Hello, nikk!
Why do they include $Y$ in these problems?
There is always that controversy: "Is $Y$ a vowel?"
Given that 4 letters are chosen from the word PAYMENT,
find the number of arrangement that start with a vowel.
If $Y$ is a vowel: . $\;\underbrace{\text{vowel}}_3 \text{ - } \underbrace{\text{any}}_6 \text{ - } \underbrace{\text{any}}_5 \text{ - }\underbrace{\text{any}}_4 \;=\;360$
If $Y$ is not a vowel: . $\;\underbrace{\text{vowel}}_2 \text{ - } \underbrace{\text{any}}_6 \text{ - } \underbrace{\text{any}}_5 \text{ - }\underbrace{\text{any}}_4 \;=\;240$
tq ror your idea..nice
July 23rd 2010, 10:31 PM #2
July 23rd 2010, 10:38 PM #3
Mar 2008
July 23rd 2010, 10:50 PM #4
July 23rd 2010, 10:54 PM #5
Mar 2008
July 24th 2010, 06:49 AM #6
Super Member
May 2006
Lexington, MA (USA)
July 25th 2010, 07:54 AM #7
Mar 2008 | {"url":"http://mathhelpforum.com/discrete-math/151837-permutation-basic.html","timestamp":"2014-04-18T14:05:11Z","content_type":null,"content_length":"48547","record_id":"<urn:uuid:ade73579-c1ac-4398-b557-5ef936f0e20b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brevet US7386779 - Systems and methods for correcting errors in a received frame
The present invention relates to the field of communications, and more particularly, systems and methods for correcting errors in a received frame.
As the world has become more reliant on computers and information exchange, the need for reliable data transmission has become increasingly important. One key element in the exchange of information
is the accurate and efficient transmission and reception of data across noisy transmission channels.
Signal processing methods implemented in practical communications systems are usually designed under the assumption that any underlying noise and interference is Gaussian. Although this assumption
finds strong theoretical justification in the central limit theorem, the noise and interference patterns commonly present in modern mobile communications systems are far from Gaussian. Noise and
interference generally exhibit “impulsive” behavior. In typical mobile communication systems, noise and interference sources often include: motor-vehicle ignition noise, switching noise from
electromechanical equipment, thunderstorms, and heavy bursts of interference. Current signal processing systems are not designed to handle these non-Gaussian noise sources. Accordingly, these systems
may perform poorly, and might even fail, in the presence of impulsive noise.
Channel noise and interference can be effectively modeled as the superposition of many small and independent effects. In practice, these effects do not always follow a Gaussian distribution. This
situation appears to contradict the central limit theorem. For many years, engineers have been unable to explain this apparent contradiction. Consequently, many of the techniques developed to cope
with impulsive noise were mainly ad hoc, largely based on signal clipping and filtering prior to application of a Gaussian-based technique.
Clipping the amplitude of an input signal is only effective if the amplitude of the input signal is above or below specific threshold values. These threshold values are typically determined by the
limits of hardware used in a receiver in a communication system. Accordingly, the threshold values are often chosen in order to take advantage of the full dynamic range of analog to digital (A/D)
converter(s) used in such a receiver. However, if impulsive noise added to the input signal does not cause the amplitude of a signal to exceed a specific threshold, clipping will not remove the
noise. Additionally, even when noise does cause the signal to exceed a threshold, clipping only removes noise to the extent that the magnitude of the signal plus the noise is above the threshold.
Accordingly, noise is not actually removed, though its effects are somewhat reduced.
When individual signals within a sequence are contaminated by noise, the sequence may not be properly decoded, thereby making communications difficult. In typical communication systems, decoding is
used to identify potential communication errors. Additionally, decoding may be able to correct some, or even most, errors. Errors may be corrected by one of many error detection and correct schemes
known to those skilled in the art. Typical coding and decoding schemes are able to correct errors by inserting controlled redundancy into a transmitted information stream. This is typically performed
by adding additional bits or using an expanded channel signal set. These schemes allow a receiver to detect, and possibly correct, errors.
In its most simple form, one problem with noisy transmission environments is that, a certain percentage of the time, a transmitted ‘1’ is received as a ‘0’ or vice versa. There are many methods of
encoding data that allow received errors to be detected or even corrected. These encoding and decoding schemes are typically optimized based on a set of underlying assumptions. Preferably, these
assumptions are designed to match the conditions of a real-world communications environment. Often, systems using these schemes are designed under the assumption that the underlying noise and
interference is Gaussian. When these assumptions do not match real-world conditions, the performance of these schemes may no longer be optimal. While systems which use these schemes work well a
majority of the time, their performance is severely affected when conditions degrade.
One way to accommodate increased noise in a transmission channel is to build a high level of redundancy into the encoding scheme. The problem with such solutions is that adding redundancy increases
the size of each transmission frame. Those skilled in the art are familiar with the tradeoffs between using highly redundant encoding schemes, which allow the detection and correction of a greater
number of reception errors, and using a scheme with lower redundancy, which has a smaller frame, and thus allows a greater quantity of data to be transmitted in a given time period at the expense of
being able to detect and correct few reception errors. While these solutions may be somewhat effective, the tradeoff between accuracy and speed limits optimal performance.
Another solution for reducing the effects of noise on a transmission channel is to use multiple transmission channels for each transmission. Such schemes, called diversity schemes, transmit the same
data frame on multiple channels. When the data is received, each channel is checked for accuracy and a logical decision engine, or a combiner, chooses a received signal from one of the channels that
is believed to be accurate. An example of a receiver system using a diversity scheme is shown in FIG. 1.
The classical goal of a system based on a diversity scheme is to provide the receiver with L versions of an information signal transmitted over independent channels. The parameter L is the diversity
order of the system. There are many ways to introduce diversity into a system. Well-known examples include frequency, time and space diversity. The RAKE receiver is a diversity technique commonly
employed to combat error bursts or “deep fades” over a multipath fading channel. The basic idea is that the provisioning of multiple, independent versions of a transmitted signal greatly reduces the
impact of fading. One weak signal can be compensated by other strong signals. Hence, diversity addresses the issue of robust error performance in a fading environment.
There are several well-known methods used to combine the L diversity versions of a signal that reach a receiver. The most fundamental combining techniques include selection combining, equal-gain
combining, and maximal-ratio combining.
These schemes may be successful in reducing the effects of noise because it is unlikely that all of the channels will be simultaneously corrupted by noise. However, the overhead (i.e., cost of
additional hardware) associated with such a scheme is large because the system utilizes multiple transmitters, receivers, and broadcast channels. The use of multiple broadcast channels is also
undesirable because it requires significantly more bandwidth than normal broadcast schemes.
Therefore, there is a need in the art for systems and methods for accurately and efficiently encoding and decoding transmission signals in varying transmission conditions.
The present invention overcomes the limitations of the existing technology by providing systems and methods for correcting errors in a received frame. The systems utilize a plurality of inner
decoders for decoding a received frame to form a plurality of inner decoded received frames, wherein each of the plurality of inner decoders uses a different decoding scheme. Additionally, the
systems use an outer decoder unit for decoding each inner decoded received frame to form outer decoded received frames and for selecting an outer decoded received frame to use as an output frame.
The present invention introduces diversity into an error detection and correction system at the receiver side by decoding a received frame using a plurality of decoding schemes. Each of these schemes
are optimized for a different set of underlying assumptions. The schemes may be optimized to account for various types of noise including, but not limited to, Gaussian noise and impulsive noise. By
including a plurality of decoders using a plurality of decoding schemes, the error detection and correction system may accurately detect and correct errors in a constantly changing environment having
constantly changing noise patterns.
Other objects, features, and advantages of the present invention will become apparent upon reading the following detailed description of the embodiments of the invention, when taken in conjunction
with the accompanying drawings and appended claims.
FIG. 1 is a block diagram illustrating a diversity transmission structure.
FIG. 2 is an illustration of an α-k plot.
FIG. 3 is a block diagram illustrating a system for correcting errors in a received frame in accordance with an exemplary embodiment of the present invention.
FIG. 4 is a block diagram illustrating a system for correcting errors in a received frame using a Viterbi algorithm in accordance with an exemplary embodiment of the present invention.
Referring now to the drawings, in which like numerals refer to like techniques throughout the several views, exemplary embodiments of the present invention are shown.
The techniques of the present invention were developed after realizing that the conditions needed to validate the central limit theorem are not satisfied if the variance of “small and independent
effects” is allowed to be unbounded (from a conceptual perspective, an infinite variance describes a highly dispersed or impulsive random variable). Without a finite variance constraint, a converging
sum of normalized random variables can be proven to belong to a wider class of random variables known as “α-stable”. Thus, similar to Gaussian processes, α-stable processes can appear in practice as
the result of physical principles. Furthermore, all non-Gaussian Ix-stable processes are heavy-tailed processes with infinite variance, explaining the often found impulsive nature of practical
“Symmetric” α-stable random variables possess a characteristic function of the form:
Φ(ω)=e^−γ|ω| ^ α (1)
where α is called the index or characteristic exponent, and γ is the dispersion. Analogous to the variance in a Gaussian process, γ is a measure of the signal strength. The shape of the distribution
is determined by α. From the above equation, it can be proven that α is restricted to values in the interval (0,2]. Qualitatively, smaller values of α correspond to more impulsive distributions. The
limiting case of α=2 corresponds to the Gaussian distribution. This is the least impulsive α-stable distribution, and the only one with finite variance. A value of α=1 results in a random variable
with a Cauchy distribution, which is a heavy-tailed distribution.
An estimation theory in α-stable environments can be derived from the tools of robust statistics. In general, let ρ(x) be a symmetric cost function or metric which is monotonically non-decreasing on
[0,∞). For a set of samples x[1], x[2], . . . ,x[N], the M-estimator of the location parameter, β, is defined as
$β = arg min β ∑ i - 1 N ρ ( x i - β ) . ( 2 )$
In the theory of M-estimators, the shape of the cost function, ρ, determines the characteristics of the estimate, β. For example, if ρ(x)=x^2 (i.e. the Euclidean metric), β becomes the least-squares
estimate (i.e. the sample mean). For ρ(x)=|x|, β is the sample median. It may be shown that the cost function
ρ(x)=log(k ^2 +x ^2),(3)
where k is a constant, possesses important properties for optimizing decoder performance along the whole range of α-stable distributions. The importance of the cost function described in equation (3)
is that the value of k may be tuned to give optimal estimation performance depending on the parameters of the underlying distribution. Given the parameters α and γ of an α-stable distribution
generating an independently and identically distributed (i.i.d.) sample, the optimal value of k is given by a function of the form:
k(α, γ)=k(α)γ^1/α(4)
Expression (4) indicates a “separability” property of the optimal value of k in terms of the parameters α and γ. This reduces the problem of finding the functional form of k(α, γ) to that of
determining the simpler form:
k(α)=k(α, 1), 0<α≦2.(5)
This function may be referred to as “the α-k plot” of α-stable distributions. Under the maximum likelihood optimality criterion, the α-k plot touches three fundamental points:
□ 1. For α=2 (i.e. the Gaussian distribution), the optimal value of k is k=∞, which, for the location estimation problem, makes β equal to the sample mean.
□ 2. With α=1 (i.e. the Cauchy distribution), the optimal value is k=1. This is a direct consequence of the definition of the cost function in Equation (3), and the fact that the resulting
M-estimator is equivalent to the maximum likelihood estimator for a Cauchy distribution.
□ 3. When α→0 (i.e. the most impulsive distribution), the optimal value of k converges to k=0.
The above points suggest the general shape of the α-k plot illustrated in FIG. 2.
One general goal of using encoding and decoding for the transmission of data is to minimize the probability of error. In the situation where coded sequences are equally likely, this is accomplished
using a “maximum likelihood” decoder.
For hard decision decoding, it is well known that a maximum likelihood decoder selects the codeword that is closest in Hamming distance to the received sequence.
It is also well known that soft decision decoding offers a performance advantage over hard decision decoding. Soft decision decoding preserves information contained in the received sequence and
passes that information on to a decoding scheme. The task is to choose a cost function appropriate for soft decision decoding. For a channel with underlying noise and interference that is Gaussian,
maximum likelihood decoding is achieved using a Euclidean distance cost function. However, for a channel that is not Gaussian, the choice of an appropriate cost function is not trivial and may have a
significant impact on decoder performance.
The present invention introduces diversity into baseband detection and/or decoding of received information frames. In an exemplary embodiment of the present invention, different decoding schemes are
used to decode transmitted signals on channels that exhibit some degree of impulsiveness.
There are many systems that employ an outer code, usually a cyclic redundancy check (CRC) code, for the purpose of frame error detection (e.g. IS-95, List Viterbi Algorithm). A received frame that
passes the CRC check is accepted as containing no errors. Typically, a frame that fails the CRC check is discarded. In some systems, a retransmission request is issued when the frame fails the CRC
check. In an exemplary embodiment of the present invention, the outer CRC code is used to validate different “candidate” frames. Each candidate frame is generated using a different baseband detection
and/or decoding method. Hence, diversity is introduced into the system through the use of various detection and/or decoding techniques. The CRC code is a form of selection combining since the CRC
determines which, if any, of the candidate frames is accepted as the final estimate of a transmitted frame. If all L candidates fail the CRC, all candidates may be discarded.
Under the assumption that the CRC code is perfect (i.e. there are no undetected errors), it is easy to see that baseband diversity with L>1 (i.e. L different methods of detection and/or decoding)
exhibits performance no worse than L=1 using any one particular method of detection and/or decoding.
FIG. 3 is a block diagram of a system for correcting errors in a received frame according to an exemplary embodiment of the present invention. An RF receiver 305 receives a signal over an RF channel
and distributes the received signal to a plurality of decoders 310, 315, 320. Each of the decoders uses a different baseband decoding technique. In an exemplary embodiment of the present invention,
one decoder is optimized for Gaussian noise, and one or more decoders are optimized for non-Gaussian noise. Typically, it is preferable for the decoders optimized for non-Gaussian noise to be
optimized for impulsive noise. Each decoder 310, 315, 320 outputs a decoded output signal to a CRC check and select unit 325. The CRC check and select unit 325 performs a CRC check on the outputs
from the decoders 310, 315, 320 and selects a decoded output signal that passes the CRC check. The selected decoded output signal is sent from the CRC check and select unit as an output decision 330.
FIG. 4 is a block diagram of a system for correcting errors in a received frame which uses a Viterbi algorithm according to an exemplary embodiment of the present invention. The system shown in FIG.
4 is designed for a communications channel with background noise that is potentially impulsive (e.g., mobile communications system). The channel coding system 410, 415 utilizes an outer CRC code and
an inner convolutional code. An input frame 405 is fed to a CRC encoder 410 for CRC encoding. Any suitable error detection or error detection/correction encoder may be used in place of the CRC
encoder 410. This first step of encoding may be referred to as outer error detection encoding. In reference to the various embodiments of the present invention, outer error detection encoding and
decoding may refer to CRC encoding, parity check encoding, or any other suitable error detection or error detection/correction scheme.
In an exemplary embodiment of the present invention, the CRC encoder 410 feeds the outer encoded input frame to a convolutional encoder 415. The present invention is operable using any decoding
scheme that can be used for decoding a frame, including, but not limited to, Viterbi codes, Turbo codes, block codes, LDPC codes, Reed-Solomon codes, etc. The convolutional encoder 415 performs a
second level of encoding to the input frame. This second level of encoding may be referred to as the inner error detection/correction scheme. It should be understood that while the embodiment
described above used a convolutional code, any inner code may be used.
Once the input frame is encoded by both the inner error detection/correction scheme and the outer error detection scheme, it is transmitted to a desired destination. The present invention is not
concerned with the actual transmission of data, but rather the detection and correction of errors incurred during transmission. In a typical data transmission system, the input frame is modulated by
a modulator 420, transmitted over a transmission channel 425, and demodulated by a demodulator 430 once it is received at a destination.
After receipt of the transmitted frame at the destination, the received frame is decoded. In accordance with the present invention, the received frame is first decoded using multiple decoding schemes
associated with the inner decoding scheme. In an exemplary embodiment of the present invention, a plurality of Viterbi decoders 435, 440 are used. Each Viterbi decoder 435, 440 uses a different cost
function. The use of various cost (i.e. metric) functions within the Viterbi decoding unit 435, 440 for the inner convolutional code, introduces diversity into the system. For example, L cost
functions may be described by:
ρ[1](x)=log(k [i] ^2 +x ^2), i=1,2 . . . L,(6)
where the constant k[i ]is optimized for a particular level of impulsivity (i.e., a particular value of α). We refer to this system as having “metric diversity.”
As a simple example, and without limitation, consider L=2. A designer may choose k[1 ]to be optimized for a channel with no impulsivity (i.e., Gaussian noise) and k[2 ]for a channel with extreme
impulsivity. These two extremes are represented by α=2 and α=0, respectively. Accordingly, the optimal values of k are k[1]=∞ and k[2]=0.
In an exemplary embodiment of the present invention, various decoding schemes are selected to accommodate the various noise profiles that may be encountered. As in the example above, it is generally
desirable, but not critical, to select at least one decoding scheme optimized for a channel with no impulsivity (i.e., Gaussian noise) and at least one decoding scheme optimized for a channel with
impulsivity. Additionally, depending on available resources and other considerations, it may be desirable to include a plurality of decoders optimized for channels having a variety of impulsiveness.
For example, the decoders may be optimized ranging across the spectrum of α=2 to α=0. Such a scheme using multiple decoders, greatly increases the odds of correcting errors incurred in a frame due to
noise in the transmission channel.
After the decoders 435, 440 decode the transmitted frame, the results are fed to an outer error check/frame select unit 445. In an exemplary embodiment of the present invention, the outer error check
/frame selection unit 445 performs a CRC check and selects the results of an inner decoder that passes the CRC check. The selected frame is then outputted as the output frame 450. Any selection
routine may be used. A simple selection routine includes sequentially checking the results of each inner decoder 435, 440 and selecting the first decoder which passes the CRC check. Alternatively,
all decoders 435, 440 may be checked and compared to assure that all frames passing the CRC check contain the same message. It is highly unlikely that multiple decoded frames will pass the CRC check
but contain different messages, however this alternative technique may be desirable in systems where a low level error detection scheme, such as parity check, is used for the outer error detection
While this invention has been described in detail with reference to embodiments thereof, it will be understood that variations and modifications can be effected without departing from the spirit or
scope of the present invention as defined by the claims that follow. | {"url":"http://www.google.fr/patents/US7386779","timestamp":"2014-04-21T04:45:49Z","content_type":null,"content_length":"94018","record_id":"<urn:uuid:7da25aaf-e489-4f8e-8a36-1227cadf6ad7>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sampling a discrete distribution
The following cute question came up at lunch today (all credit goes to Aaron Archer).
Informal challenge: You are given a discrete distribution with n possible outputs (and you know the probability p[i] of each output), and you want to sample from the distribution. You can use a
primitive that returns a random real number in [0,1].
This was the way the problem came to me, and the goal was to find a very fast practical solution. Think of n as around 10^6 and use a regular machine as your machine model (real number means a
"double" or something like that).
Formal challenge: To formalize this completely for a bounded-precision computer, we can say that we work on a w-bit machine and probabilities have w-bit precision. The distribution is given by n
integers x[i], and these values sum to exactly 2^w. Output "i" should be produced with probability exactly x[i] / 2^w. You are given a primitive that produces a uniformly distributed w-bit number.
The challenge is to get O(1) worst-case time. (NB: some fancy data structures required.)
11 comments:
That is a cute question.
No fancy data structures are required using the solution from slides 41--48 of this PDF: http://www.eurandom.nl/events/workshops/2009/quantative_models/BusicSIMParf.pdf
The solution there, known as the Walker method or alias method, has O(n log n) preprocessing and O(1) query time (both worst-case). The preprocessing can easily be improved to O(n), which was
done by Kronmal and Peterson or can be easily done in a couple minutes.
How much space am I allowed to use?
For the challenge, what pre-processing time are you allowing us?
The space is O(n) and the preprocessing time=sorting time.
Anon Rex, I don't quite see how this solves the problem. (But it's likely that I'm misunderstanding the algorithm).
If many columns have height 1/N + eps, and some are much below 1/N, you need to move mass from many high columns to a low one. Then each column has many breaks inside it, and you need something
like predecessor search to select among the breaks. Can you clarify what the algorithm really does?
Mihai, I think the idea is that in step 1, if the tallest column has height 1/n + eps and the shortest one has height delt, you don't just move eps, but 1/n - delt. Then each column has only one
break. (That's what I got from the pictures in the PDF, at least)
I would love to hear the fancy data structure solution!
Here's a 2-page note by Warren Smith, who appears to have independently (but subsequently) discovered the linear-time version of Walker's method. It works a lot like quicksort partitioning.
Jim is right. It is explained a little better here: http://www.scribd.com/doc/55772825/62/Alias-Method
Basically you want to cut up the probabilities p_i into 2n pieces y_jk for k=1,2, j=1,...,n, in such a way that y_j1+y_j2=1/n, and the sum of y_jk that are associated with p_i sum up to p_i.
(Then sampling in constant time is easy.) This turns out to be easy: Just take the smallest p_i (say p_min), define y_11 = p_min and y_12 = 1/n-p_min. Associate y_11 with p_min and y_12 with
p_max (the largest p_i). Set p_max = p_max - 1/n + p_min (which is larger than 0 since p_max >= 1/n). Now recurse on the probabilities except p_min.
Apologies for the ugly writeup of the very beautiful idea. And thanks Rex! I learned something today!
Isn't the obvious O(log n)-time algorithm practical enough?
The Alias method does solve this problem, at least as it is described in Non-Uniform Random Variate Generation by Luc Devroye http://cg.scs.carleton.ca/~luc/rnbookindex.html
The key insight is that any discrete distribution over n possibilities can be rewritten as a mixture with uniform mixing proportions over 2-possibility discrete distributions. Then the cost for a
sample (after setup) is 1 or 2 table lookups and 1 or 2 calls to the primitive random number source. The setup procedure can be done in linear time and only requires saving 2 tables of O(n) space
since WLOG we can order the first components of each mixture component by index.
Not quite answering your question: See page 3 of this paper, section "Refinement", for a solution with expected constant time and no fancy data structure (only a trie with precomputed values).
My solution went as follows. In principle, each possible output gets an interval inside [0,1] and the generated sample is the interval stabbed by the random number. Trivially, this translates to
predecessor search, which is not constant time. But we are allowed to shuffle the intervals around, and this will make predecessor search O(1)-time. (It turns out that this magical shuffling is
just sorting by length.)
Assume all probabilities are between p and 2p. Then we can easily solve the problem in O(1) time.
Observation 1: Given a point in [0,1], we can locate it inside intervals of roughly equal size (up to a factor of 2) in constant time.
Proof: We impose a grid of precision p and store the interval stabbed by every grid point. Then round the query point to the grid, and do a linear search among the intervals starting from where
the nearest grid point lands. This cannot inspect more than O(1) intervals.
Observation 2: Bucket probabilities between 2^k/2^w and 2^(k+1)/2^w for every k. Apply the solution above inside a bucket. The query first needs to figure out which bucket the random number
landed in. This is predecessor search among w start points (the beginning of the first interval of each bucket). But this only needs constant time with fusion trees.
While fusion trees are a non-elementary data structure and probably not practical, one can say they "unite theory and practice". In practice, binary search among 32 values is not O(1) time, it's
more like zero time because everything is in cache.
As an anonymous commenter points out, I overcomplicate things, and there is a very nice elementary solution already known in the literature. I'll summarize this in a later comment. | {"url":"http://infoweekly.blogspot.com/2011/09/sampling-discrete-distribution.html?showComment=1316435382493","timestamp":"2014-04-19T22:27:47Z","content_type":null,"content_length":"63000","record_id":"<urn:uuid:abb6fdc8-7db9-4576-9799-794e5178884a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating gear inch for 406 wheel
02-15-09, 08:11 PM #1
Calculating gear inch for 406 wheel
When I enter the following to the Sheldon gear calculator:
Tire: "20 X 1.75 / 44-406 / BMX tire"
Chainring: 52T
Cassete: 11T
I get 88.3 gear inches.
I presume that the formula is: gear_inch = (52 / 11) x 18.68"
Where does the 18.68" constant comes from? Is it part of the specification of the rim and/or tire? 406mm is about 16" so is the extra 2.68" the contribution of the tire to the diameter?
(in my case, I plan to use Schwalbe Marathon Plus 20 x 1.35" (406)).
I have found an accurate enough value is to add twice the tyre width to the wheel bead seat diameter, so 406mm+2*34mm=474.6mm=18.68".
Usually I am a practical fellow not entering philosophical questions, but don't you guys think that is about time to start using gain ratios instead of gear inches. A bicycle moves forward thanks
to a set of levers and the crank is the most important lever. A few millimeters change in the crank length changes the final amount of effort a ride makes for the same gear inches. Gain ratio
takes into consideration effort and gear development no.
About your question, I doubt Sheldon Brown's calculator would not be right.
Seeing as I can't feel a 2.5 mm difference in crank length, I'm not too interested in gain ratio; though it should make a noticeable difference according to Mr Brown. 'Development' as per the
Continental method, that's different.
Comparisons between one combination of chainwheel/cog ratios and another, they can be in inches, metres or anything else as long as I can understand the relative sizes.
I think that if your legs are either really long or really short or if you make a big jump in crank lengths you should feel the difference. I am not that tall 5"3' and 170mm cranks hurt my knees.
And no, I am not a masher, I am a spinner that keeps a really high RPM.
One of my bikes has 165 mm cranks and another one has 145 mm cranks.
THERE IS A HUGE DIFFERENCE IN PEDALING DYNAMICS
There is significantly less torque since the moment arm is 20 mm shorter in the 145 mm cranks, but it is infinitely easier to spin since the pedaling circumference is
2 * pi * 145 = 911.06187 mm
and the 165 mm crank pedaling circle is
2 * pi * 165 = 1 036.72558 mm,
1036.7255 mm - 911.06187 mm = 125.66363 mm = 4.94738701 in
Almost a 5 inch difference in pedaling circumference!
In short, spinning is a lot easier for shorter cranks, but climbing is easier for longer cranks.
Anyways, to sort of reply to the original post, Sheldon's calculator doesn't have more tire sizes available for input, but it works well enough!
I hope we didn't confuse you with really technical jargon
Just a bit of engineering. Since we are talking about engineering an since you started making your calculations in the metric system, I do not see the point on converting it to the imperial
system. After all, the Americans in recognition for the help by the French in their struggle for independence, should at least adopt the metric system!
Oops, it seems that I have hijacked the thread again
Usually I am a practical fellow not entering philosophical questions, but don't you guys think that is about time to start using gain ratios instead of gear inches. A bicycle moves forward thanks
to a set of levers and the crank is the most important lever. A few millimeters change in the crank length changes the final amount of effort a ride makes for the same gear inches. Gain ratio
takes into consideration effort and gear development no.
The problem I see with using gain ratios is that, unlike cog/wheel ratios, crank length also has biomechanical ramifications due to differences in bone size:
So although gear inches don't tell the whole story, at least they can be compared among different individuals without taking body measurements.
Jur has it right, you take the 406 mm diameter of the rim and add twice the diameter of the tire to get the overall diameter. I generally just use the spec diameter of the tire. That's worked
pretty well for me. Schwalb has a page on tire dimensions. At the top of their table they have some 406 mm tire examples. Take the 40-406 as an example. I would calculate the diameter as 2*40+406
=486 mm. The tire circumference is given in the table as ~1530 mm. That tire circumference corresponds to a diameter of 487 mm. So, for the kind of accuracy that I require (not much) just using
the specified tire diameter works pretty well.
To get back to inches divide by 25.4 in/mm
I go back and forth between bikes with 170 mm and 175 mm crank arms. The bikes are different, so I notice a difference between bikes, but I really can't feel anything different in the crank
Part of every ride I'm spinning and part of every ride I'm climbing!
I get 94.5 in?
Amen to the metric system. But, in the end, it's just a number and, with experience, you get to understand what that number means to you. So, for example, I know that I really don't need a high
gear any higher than about 103 inches. I know that with an unloaded bike I can climb the steepest steep hills comfortably with a 22 inch low, and that lower doesn't help. I could express these
numbers in mm, m, or furlongs, or I could use development or gain, as long as I understand what they mean to me.
I kind of like the gear inches. Sort of archaic and quirky.
Just a bit of engineering. Since we are talking about engineering an since you started making your calculations in the metric system, I do not see the point on converting it to the imperial
system. After all, the Americans in recognition for the help by the French in their struggle for independence, should at least adopt the metric system!
Oops, it seems that I have hijacked the thread again
I'm all for metric/SI, but some things in cycling are still in inches: 1" and 1-1/8" steerer diameters, BB shell threads in TPI, ... And I still think in psi for tire pressures rather than bar,
atmospheres or kpascal; and chain wear limits in fractions of inches.
Gain ratios are fine and dandy, but i've been used to thinking in terms of gear inches that I'd rather keep to what I know. Besides, all my cranks are the same length, so it wouldn't matter to
I can't tell the difference between 170's and 175's either, because they both fit me the same: too big!
Seriously, you're talking about a 3% difference in crank size. That's almost nothing. Try alternating between 180's, 170's, 155's and 140's for a while; not only will you feel a difference, but
you will quickly learn which is the best size for you.
I rather like the gear inch system; the reason being is that you can get a fair idea of how a bike will perform regardless of the wheel size. I currently have a Dahon MU SL and should be getting
my new Brompton in a couple of weeks, but I already know how the gears will compare between the two bikes.
I can't tell the difference between 170's and 175's either, because they both fit me the same: too big!
Seriously, you're talking about a 3% difference in crank size. That's almost nothing. Try alternating between 180's, 170's, 155's and 140's for a while; not only will you feel a difference, but
you will quickly learn which is the best size for you.
I have, somewhere between 167.5 mm and 175 mm works for me. 180 and 177.5 cranks hurt my joints (don't know why) and shorter cranks slow me down. YMMV
02-15-09, 08:28 PM #2
Senior Member
Join Date
May 2005
1 Post(s)
0 Thread(s)
02-15-09, 09:22 PM #3
Senior Member
Join Date
Mar 2005
0 Post(s)
0 Thread(s)
02-15-09, 10:39 PM #4
Senior Member
Join Date
Dec 2005
Auld Blighty
My Bikes
Early Cannondale tandem, '99 S&S Frezoni Audax, '65 Moulton Stowaway, '52 Claud Butler, TSR30, Brompton
0 Post(s)
0 Thread(s)
02-15-09, 11:02 PM #5
02-15-09, 11:39 PM #6
02-16-09, 06:45 AM #7
Senior Member
Join Date
Mar 2005
0 Post(s)
0 Thread(s)
02-16-09, 07:28 AM #8
Join Date
Oct 2006
0 Post(s)
0 Thread(s)
02-16-09, 07:58 AM #9
Senior Member
Join Date
Jun 2006
Boston Area
My Bikes
Univega Gran Turismo, Guerciotti, Bridgestone MB2, Bike Friday New World Tourist, Serotta Ti
0 Post(s)
0 Thread(s)
02-16-09, 08:03 AM #10
Senior Member
Join Date
Jun 2006
Boston Area
My Bikes
Univega Gran Turismo, Guerciotti, Bridgestone MB2, Bike Friday New World Tourist, Serotta Ti
0 Post(s)
0 Thread(s)
02-16-09, 08:05 AM #11
Biker looking for a ride!
Join Date
Nov 2006
Edmond Oklahoma
My Bikes
Kuota Kreedo...looking for something different.
0 Post(s)
0 Thread(s)
02-16-09, 08:11 AM #12
Senior Member
Join Date
Jun 2006
Boston Area
My Bikes
Univega Gran Turismo, Guerciotti, Bridgestone MB2, Bike Friday New World Tourist, Serotta Ti
0 Post(s)
0 Thread(s)
02-16-09, 09:28 AM #13
02-17-09, 10:53 PM #14
Senior Member
Join Date
May 2008
0 Post(s)
0 Thread(s)
02-18-09, 06:14 AM #15
Senior Member
Join Date
Jan 2008
0 Post(s)
0 Thread(s)
02-18-09, 07:49 AM #16
multimodal commuter
Join Date
Nov 2006
NJ, NYC, LI
My Bikes
1945? Fothergill, 1948 Raleigh Record Ace, 1954 Drysdale, 1963? Claud Butler Olympic Sprint, Lambert 'Clubman', 1972 Fuji Finest, 1983 Trek 720, 1984 Counterpoint Opus II, 1993 Basso Gap, 2010
Downtube 8h, and...
8 Post(s)
1 Thread(s)
02-18-09, 04:13 PM #17
Senior Member
Join Date
Jan 2008
Cheshire, North West England, UK
My Bikes
Brompton S2L-X, Bridgestone Moulton, 1963 & 1966 Moultons, Scott Mountain bike
0 Post(s)
0 Thread(s)
02-25-09, 03:10 PM #18
Senior Member
Join Date
Dec 2005
Auld Blighty
My Bikes
Early Cannondale tandem, '99 S&S Frezoni Audax, '65 Moulton Stowaway, '52 Claud Butler, TSR30, Brompton
0 Post(s)
0 Thread(s) | {"url":"http://www.bikeforums.net/folding-bikes/511632-calculating-gear-inch-406-wheel.html","timestamp":"2014-04-16T19:45:42Z","content_type":null,"content_length":"99388","record_id":"<urn:uuid:2b2edcca-ea0b-48be-88b0-ce8e7df6160e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
Latest posts of: JCollie - Java-Gaming.org
I have some issues with mapping converted coordinates to a sphere. I'm using libnoiseforjava and the jMonkeyEngine.
First my conversion methode:
1 public Vector2f cartesianToGeo(Vector3f position, float sphereRadius)
2 {
3 sphereRadius = FastMath.sqrt((position.x*position.x)+(position.y*position.y)+(position.z*position.z));
5 float lat = (float)Math.asin(position.z / sphereRadius) * FastMath.RAD_TO_DEG; //theta
6 float lon = (float)Math.atan2(position.y, position.x) * FastMath.RAD_TO_DEG; //phi
8 return new Vector2f(lat, lon);
9 }
This is the mapping method from libnoiseforjava:
1 public double getValue (double lat, double lon)
2 {
3 assert (module != null);
5 double x, y, z;
6 double r = Math.cos(Math.toRadians(lat));
7 x = r * Math.cos (Math.toRadians(lon));
8 y = Math.sin (Math.toRadians(lat));
9 z = r * Math.sin (Math.toRadians(lon));
10 return module.getValue (x, y, z);
11 }
And here is what I get (using sphere approximation from octahedron):
It looks like two spheres into one another and on the right is the transition between them with a high count of points. My assumption is, that the getValue method expects latitude and longitude in an
area from -90 to 90 degrees. My conversion method only returns values from -180 to 180 degrees. That's probably the reason why I get two spheres. First: does anybody agree? Or is there another
mistake? Second: Is there an easy way to also convert the new geo coordinates to an area from -90 to 90 degrees. Or a better method that gives values in this area from the beginning? My math skills
kinda suck | {"url":"http://www.java-gaming.org/index.php?action=profile;u=38564;sa=showPosts","timestamp":"2014-04-18T21:04:03Z","content_type":null,"content_length":"81677","record_id":"<urn:uuid:a381f70b-76f0-44ac-bcc7-06308ae252e2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cournot-Walras equilibrium as a subgame perfect equilibrium
The Library
Cournot-Walras equilibrium as a subgame perfect equilibrium
Busetto, Francesca, Codognato, Giulio and Ghosal, Sayantan . (2008) Cournot-Walras equilibrium as a subgame perfect equilibrium. International Journal of Game Theory, Volume 37 (Number 3). pp.
371-386. ISSN 0020-7276
Full text not available from this repository.
In this paper, we investigate the problem of the strategic foundation of the Cournot-Walras equilibrium approach. To this end, we respecify a la Cournot-Walras the mixed version of a model of
simultaneous, noncooperative exchange, originally proposed by Lloyd S. Shapley. We show, through an example, that the set of the Cournot-Walras equilibrium allocations of this respecification does
not coincide with the set of the Cournot-Nash equilibrium allocations of the mixed version of the original Shapley's model. As the nonequivalence, in a one-stage setting, can be explained by the
intrinsic two-stage nature of the Cournot-Walras equilibrium concept, we are led to consider a further reformulation of the Shapley's model as a two-stage game, where the atoms move in the first
stage and the atomless sector moves in the second stage. Our main result shows that the set of the Cournot-Walras equilibrium allocations coincides with a specific set of subgame perfect equilibrium
allocations of this two-stage game, which we call the set of the Pseudo-Markov perfect equilibrium allocations.
Item Type: Journal Article
Subjects: H Social Sciences > HB Economic Theory
Q Science > QA Mathematics
Divisions: Faculty of Social Sciences > Economics
Library of Game theory, Noncooperative games (Mathematics), Equilibrium (Economics)
Journal or International Journal of Game Theory
Publisher: Springer
ISSN: 0020-7276
Date: November 2008
Volume: Volume 37
Number: Number 3
Number of 16
Page Range: pp. 371-386
Identification 10.1007/s00182-008-0123-8
Status: Peer Reviewed
Publication Published
Access rights Restricted or Subscription Access
to Published
Version or Busettoy, F., Codognatoz, G., and Ghosal, S. (2008). Cournot-Walras equilibrium as a subgame perfect equilibrium. [Coventry]: University of Warwick, Department of Economics. (Warwick
Related economic research papers, no.837). http://wrap.warwick.ac.uk/id/eprint/219
Related URLs: • Related item in WRAP
[error in
script] [error
in script]
Aliprantis CD, Border KC (1999) Infinite dimensional analysis. Springer, New York Amir R, Sahi S, Shubik M, Yao S (1990) A strategic market game with complete markets. J Econ Theory
51:126–143 Aumann RJ (1965) Integrals of set valued functions. J Math Anal Appl 12:1–12 Aumann RJ (1966) Existence of competitive equilibria inmarkets with a continuum of traders.
Econometrica 24:1–17 Bonnisseau J-M, Florig M (2003) Existence and optimality of oligopoly equilibria in linear exchange economies. Econ Theory 22:727–741 CodognatoG (1995)
Cournot–Walras and Cournot equilibria inmixed markets: a comparison. Econ Theory 5:361–370 Codognato G, Gabszewicz JJ (1991) Equilibres de Cournot–Walras dans une économie d’échange.
Revue Econ 42:1013–1026, 1–17 Codognato G, Gabszewicz JJ (1993) Cournot–Walras equilibria in markets with a continuum of traders. Econ Theory 3:453–464 Codognato G, Ghosal S (2000a)
Cournot–Nash equilibria in limit exchange economies with complete markets and consistent prices. J Math Econ 34:39–53 Codognato G, Ghosal S (2000b) Oligopoly à la Cournot–Nash in
markets with a continuum of traders. Discussion Paper No 2000-5, CEPET (Central European Program in Economic Theory) Institute of Public Economics, Graz University d’Aspremont C,
DosSantos FerreiraR,Gérard-VaretL-A (1997) General equilibrium concepts under imperfect competition: a Cournotian approach. J Econ Theory 73:199–230 Dierker H, Grodal B (1986)
Nonexistence of Cournot–Walras equilibrium in a general equilibrium model with two oligopolists. In: HildenbrandW,Mas-Colell A (eds) Contributions to mathematical economics in honor
of Gérard Debreu. North-Holland, Amsterdam Dubey P, Shapley LS (1994) Noncooperative general exchange with a continuum of traders: two models. J Math Econ 23:253–293 Dubey P, Shubik M
References: (1978) The noncooperative equilibria of a closed trading economy with market supply and bidding strategies. J Econ Theory 17:1–20 Fudenberg D, Tirole J (1991) Game theory. MIT Press,
Cambridge Gabszewicz JJ, Michel P (1997) Oligopoly equilibrium in exchange economies. In: Eaton BC, Harris RG (eds) Trade, technology and economics. Essays in honour of Richard G
Lipsey. Edward Elgar, Cheltenham Gabszewicz JJ, Vial J-P (1972) Oligopoly ‘à la Cournot–Walras’ in a general equilibrium analysis. J Econ Theory 4:381–400 Gale D (1960) The theory of
linear economic models. Academic Press, New York Lahmandi-Ayed R (2001) Oligopoly equilibria in exchange economies: a limit theorem. Econ Theory 17:665–674 Mas-Colell A (1982) The
Cournotian foundations ofWalrasian equilibrium theory. In: HildenbrandW(ed) Advances in economic theory. Cambridge University Press, Cambridge Maskin E, Tirole J (2001) Markov perfect
equilibrium. J Econ Theory 100:191–219 Okuno M, Postlewaite A, Roberts J (1980) Oligopoly and competition in large markets. Am Econ Rev 70:22–31 Peck J, Shell K, Spear SE (1992) The
market game: existence and structure of equilibrium. J Math Econ 21:271–299 Postlewaite A, Schmeidler D (1978) Approximate efficiency of non-Walrasian Nash equilibria. Econometrica
46:127–137 Roberts K (1980) The limit points of monopolistic competition. J Econ Theory 22:256–278 Roberts DJ, Sonnenschein H (1977) On the foundations of the theory of monopolistic
competition. Econometrica 45:101–114 Sahi S, Yao S (1989) The noncooperative equilibria of a trading economy with complete markets and consistent prices. J Math Econ 18:325–346
Shapley LS, ShubikM (1977) Trade using one commodity as ameans of payment. J Polit Econ 85:937–968 Shitovitz B (1973) Oligopoly in markets with a continuum of traders. Econometrica
41:467–501 Shitovitz B (1997) A comparison between the core and the monopoly solutions in a mixed exchange economy. Econ Theory 10:559–563
URI: http://wrap.warwick.ac.uk/id/eprint/29123
Data sourced from Thomson Reuters' Web of Knowledge
Actions (login required) | {"url":"http://wrap.warwick.ac.uk/29123/","timestamp":"2014-04-16T10:25:17Z","content_type":null,"content_length":"45240","record_id":"<urn:uuid:3bf8c297-36b1-436d-aa76-716e7a600929>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
Our ability to estimate simple statistics: a neglected research area - Big Data, Plainly Spoken (aka Numbers Rule Your World)
I recently came across a series of papers by Irwin Levin (link), about how well people estimate statistical averages from a given set of numbers. In contrast to the findings of Tversky and Kahneman,
Gigerenzer, etc. on probability, it seems like we are able to guess average values pretty well, even in the presence of outliers.
It must be said the sample size used in Levin's experiments was tiny (12 students in one case but working with something like 75 sets of numbers). That said, the experimental setup was remarkable.
Take this paper as an example. The numbers were either shown in sequence or at the same time. Levin created three types of tasks: a descriptive task in which the goal was to get the average of the
numbers presented, including the outliers; an inference task in which the goal was to guess the average of the population of numbers from which the sample was drawn, in which case we expect the
subjects to discount the outliers; and a discounting task, in which subjects were presented with data including outliers, but were asked to ignore them.
The reason for this post is that Levin's work was done in the 1970s (Levin himself retired this year according to his webpage). There doesn't appear to be much interest in this subject since then.
It seems like researchers may find the estimation of summary statistics like means, medians, etc. not interesting enough. All the new research that I know of concerns judging probability
distributions, margins of error, variability, etc.
However, I'm more interested in point estimates, and I feel that the early research left the question still unsettled. I haven't found any research on how good we are at guessing the median of a set
of numbers, or the mode, or trimmed means, or moving averages. If we see repeated numbers, are we likely to use the average, the median or the mode, or some other statistic to summarize that
information? Given what we now know about irrationality and biases in judging probabilities, are we able to replicate Levin's finding? Or will we find that his experiment would not hold with better
You can follow this conversation by subscribing to the comment feed for this post.
It would be interesting to explore this topic graphically rather than numerically. A simple trial would be to present a scatterplot and ask people to draw a best-fit line. Simpler would be a number
line inviting the subject to plot the mean. The subject could be asked to match distributions of points they believed arose from the most similar probability functions... Sounds like a kind of fun
study to be a participant in.
When I took cognitive psych back in college I did something sort of similar: I asked people to add an additional "random" point to a cloud of previously generated pseudorandom points. The results
suggested that the process the subject went through to select a point was complex - they intersected an overal tendency toward certain spaces that was unique to each person with a tendency to certain
areas (e.g. those far from other points) that everyone preferred with a given arrangement of seed points.
Overal I find it very interesting, but I'm not sure what the paths are from results of studies like these to applications. It may be that it could help make choices about where to lead the reader's
eye to statistically significant patterns, and where to rely on them to see those patterns without help? Or perhaps it could help analysts inoculate themselves against human weaknesses in the
interpretation of data?
GTT: Here's the application that I was thinking about when I wrote this post. Imagine you are one of those people who receive regular reports with data in them or are in a position to observe data as
they arise. Say the manager of a call center, or a shop. You are not the analyst or accountant. But because you're there, you develop an intuition of what the week's sales number is, or the average
person who calls the call center. I'm interested in how that representative statistic is generated in the absence of doing the numbers, and how accurate that estimate is compared to the real numbers.
Following that line of reasoning along, a very simple utility would be in identifying cases where more explicit statistics are needed to pre-empt poor intuition on the part of a data observer. If
people are great at picking out the mean of un-analyzed data, then there's not much need to calculate that mean on the fly. But (for example) people aren't necessarily that great at identifying
significant clumpiness in data, so if that's a relevant question it might be good to report data along with some quantification of that clumpiness so that the observer doesn't jump to the conclusion
that there's something going on when there isn't.
Specific example: A business that sold high-value items might see very few sales per week, and in this situation clumps due to random chance will be common. However real clumps, related to factors
the business operators weren't yet aware of, might well happen. So if they had a little helper utility to watch the data stream and estimate the chances that a given clump resulted from random
variability, that might be valuable.
That's one step beyond the research you're talking about here, but I can see how the research might suggest approaches like this.
There's some nice work on the representation of statistics by the visual system that might be of interest:
(and the papers that follow).
This work is more perceptual in nature, but it suggests that our visual system computes a variety of summary statistics over the displays we see.
Here's another example of an application. I just finished marking a stack of midterm exams. If you ask me to what the mean score was in the class, how accurate would my guesstimate be? What are the
heuristics I would be using to come up with that guess? Am I affected by the last few papers I marked? Am I affected by the frequency of repeated scores? Am I affected by the sections of the exam
that count for more points? Am I affected by the max and min scores I've seen? How does memory play into this?
I understand that there is work done on the visual side and I am of course interested in visualizations. But I don't believe the two areas overlap. What I'm looking for is different.
Here's something from another vision lab -- not necessarily about summary statistics, but much more on the visualization side of things.
This is only a preview. Your comment has not yet been posted.
Your comment could not be posted. Error type:
Your comment has been posted. Post another comment
As a final step before posting your comment, enter the letters and numbers you see in the image below. This prevents automated programs from posting comments.
Having trouble reading this image? View an alternate.
Post a comment
Recent Comments | {"url":"http://junkcharts.typepad.com/numbersruleyourworld/2012/11/our-ability-to-estimate-simple-statistics-a-neglected-research-area.html","timestamp":"2014-04-20T06:27:14Z","content_type":null,"content_length":"73669","record_id":"<urn:uuid:4f682d9f-6a1d-4d08-b73a-5b312126f0a0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
The n-Category Café
Entropies vs. Means
Posted by Tom Leinster
If you’ve been watching this blog, you can’t help but have noticed the current entropy-fest. It started on John’s blog Azimuth, generated a lengthy new page on John’s patch of the nLab, and led to
first this entry at the Café, then this one.
Things have got pretty unruly. It’s good unruliness, in the same way that brainstorming is good, but in this post I want to do something to help those of us who are confused by the sheer mass of
concepts, questions and results—which I suspect is all of us.
I want to describe a particular aspect of the geography of this landscape of ideas. Specifically, I’ll describe some connections between the concepts of entropy and mean.
This can be thought of as background to the project of finding categorical characterizations of entropy. There will be almost no category theory in this post.
I’ll begin by describing the most vague and the most superficial connections between entropy and means. Then I’ll build up to a more substantial connection that appeared in the comments on the first
Café post, finishing with a connection that we haven’t seen here before.
Something vague I’m interested in measures of size. This has occupied a large part of my mathematical life for the last few years. Means aren’t exactly a measure of size, but they almost are: the
mean number of cameras owned by a citizen of Cameroon is the size of the set of Cameroonian cameras, divided by the size of the population. So I naturally got interested in means: see these two
posts, for instance. On the other hand, entropy is also a kind of size measure, as I argued in these two posts. So the two concepts were already somewhat connected in my mind.
Something superficial All I want to say here is: look at the definitions! Just look at them!
So, I’d better give you these definitions.
Basic definitions I’ll write
$\mathbf{P}_n = \{ (p_1, \ldots, p_n) \in [0, \infty)^n | \sum p_i = 1 \}$ (which previously I’ve written as $\Delta_n$). For each $t \in [-\infty, \infty]$, the power mean of order $t$ is the
$M_t: \mathbf{P}_n \times [0, \infty)^n \to [0, \infty)$
defined for $t eq -\infty, 0, \infty$ by
$M_t(\mathbf{p}, \mathbf{x}) = \Bigl( \sum_{i: p_i \gt 0} p_i x_i^t \Bigr)^{1/t}.$
Think of this as an average of $x_1, \ldots, x_n$, weighted by $p_1, \ldots, p_n$. The three exceptional values of $t$ are handled by taking limits: $M_t(\mathbf{p}, \mathbf{x}) = \begin{cases} min
x_i &if t = -\infty\\ \prod x_i^{p_i} &if t = 0\\ max x_i &if t = \infty. \end{cases}$
The minimum, product and maximum are, like the sum, taken over all $i$ such that $p_i \gt 0$. I’ll generally assume that $t eq -\infty, 0, \infty$; these cases never cause trouble. So: the only
definition you need to pay attention to is the one for generic $t$.
Now for a definition of entropy… almost. Actually, I’m going to work with the closely related notion of diversity. For $q \in [-\infty, \infty]$, the diversity of order $q$ is the map
$D_q: \mathbf{P}_n \to [0, \infty)$
defined by
$D_q(\mathbf{p}) = \Bigl( \sum_{i: p_i \gt 0} p_i^q \Bigr)^{1/(1 - q)}$
for $q eq -\infty, 1, \infty$, and again by taking limits in the exceptional cases:
$D_q(\mathbf{p}) = \begin{cases} min (1/p_i) &if q = -\infty \\ \prod p_i^{-p_i} &if q = 1\\ max (1/p_i) &if q = \infty \end{cases}$
where again the min, product and max are over all $i$ such that $p_i \gt 0$.
The name ‘diversity’ originates from an ecological application. We think of $\mathbf{p} = (p_1, \ldots, p_n)$ as representing a community of $n$ species in proportions $p_1, \ldots, p_n$, and $D_q(\
mathbf{p})$ as a measure of that community’s biodiversity. Different values of the parameter $q$ represent different opinions on how much importance should be assigned to rare or common species.
(Newspaper stories on biodiversity typically focus on threats to rare species, but the balance of common species is also important for the healthy functioning of an ecosystem as a whole.) Theoretical
ecologists often call $D_q$ the Hill number of order $q$.
Now, many of you know $D_q$ not as ‘diversity’, but as Rényi extropy. I’d like to advocate the name ‘diversity’.
First, diversity is a fundamental concept and deserves a simple name. It’s much more general than just something from ecology: it applies whenever you have a collection of things divided into
Second, ‘Rényi extropy’ is a terribly off-putting name. It assumes you already understand entropy (itself a significant task), then that you understand Rényi entropy (whose meaning you couldn’t
possibly guess since it’s named after a person), and then that you’re familiar with the half-jokey usage of ‘extropy’ to mean the exponential of entropy. In contrast, diversity is something that can
be understood directly, without knowing about entropy of any kind.
An enormously important property of diversity is that it is an effective number. This means that the value it assigns to the uniform distribution on a set is the cardinality of that set:
$D_q(1/n, \ldots, 1/n) = n.$
This is what distinguishes diversity from the various other functions of $\sum p_i^q$ that get used (e.g. Rényi entropy and the entropy variously named after Havrda, Charvát, Daróczy, Patil, Taillie
and Tsallis). I recently gave a little explanation of why effective numbers are so important, and I gave a different explanation (using terminology differently) in this post on entropy, diversity and
Something superficial, continued Let me now go back to my superficial reason for thinking that means will be useful in the study of entropy and diversity. I declared: just look at the formulas!
There’s an obvious resemblance. And in particular, look what happens in the definition of power mean when you put $\mathbf{x} = \mathbf{p}$ and $t = q - 1$:
$M_{q - 1}(\mathbf{p}, \mathbf{p}) = \Bigl( \sum p_i^q \Bigr)^{1/(q - 1)} = 1/D_q(\mathbf{p}).$
This reminds me of some other things. To study quadratic forms $x \mapsto x^* A x$, it’s really helpful to study the associated bilinear forms $(x, y) \mapsto x^* A y$. Or, similarly, you’ll often be
able to prove more about a Banach space if you know it’s a Hilbert space.
Moreover, there are reasons for thinking that something quite significant is going on in the step ‘put $\mathbf{x} = \mathbf{p}$’. I suspect that fundamentally, $\mathbf{x}$ is a function on $\{1, \
ldots, n\}$, but $\mathbf{p}$ is a measure. By equating them we’re really taking advantage of the finiteness of our sets. For more general sets or spaces, we might need to keep $\mathbf{p}$ and $\
mathbf{x}$ separate.
Something substantial To explain this more substantial connection between diversity and means, I first need to explain how the simplices $\mathbf{P}_n$ form an operad.
If you know what an operad is, it’s enough for me to tell you that any convex subset $X$ of $\mathbb{R}^n$ is naturally a $\mathbf{P}$-algebra via the action
$\mathbf{p}(x_1, \ldots, x_n) = \sum p_i x_i$
($\mathbf{p} \in \mathbf{P}_n, x_i \in X$). That should enable you to work out what the composition in $\mathbf{P}$ must be.
If you don’t know what an operad is, all you need to know is the following. An operad structure on the sequence of sets $(\mathbf{P}_n)_{n \in \mathbb{N}}$ consists of a choice of map
$\mathbf{P}_n \times \mathbf{P}_{k_1} \times \cdots \times \mathbf{P}_{k_n} \to \mathbf{P}_{k_1 + \cdots + k_n}$
for each $n, k_1, \ldots, k_n \in \mathbb{N}$, satisfying some axioms. The map is written
$(\mathbf{p}, \mathbf{r}_1, \ldots, \mathbf{r}_n) \mapsto \mathbf{p} \circ (\mathbf{r}_1, \ldots, \mathbf{r}_n)$
and called composition. The particular operad structure that I have in mind has its composition defined by
$\mathbf{p} \circ (\mathbf{r}_1, \ldots, \mathbf{r}_n) = \bigl( p_1 r_{1 1}, \ldots, p_1 r_{1 k_1}, \ldots, p_n r_{n 1}, \ldots, p_n r_{n k_n} \bigr).$
So the composite is obtained by putting the probability distributions $\mathbf{r}_1, \ldots, \mathbf{r}_n$ side by side, weighting them by $p_1, \ldots, p_n$ respectively.
Here’s the formula for the diversity of a composite:
$D_q(\mathbf{p} \circ (\mathbf{r}_1, \ldots, \mathbf{r}_n)) = \Bigl( \sum_{i: p_i \gt 0} p_i^q D_q(\mathbf{r}_i)^{1 - q} \Bigr)^{1/(1 - q)}.$
Notice that the diversity of a composite depends only on $\mathbf{p}$ and the diversities $D_q(\mathbf{r}_i)$, not on the distributions $\mathbf{r}_i$ themselves. Pushing that thought, you might hope
that it wouldn’t depend on $\mathbf{p}$ itself, only its diversity; but it’s not to be.
(Here I’m assuming that $q eq -\infty, 1, \infty$. I’ll let you work out those cases, or you can find them here. And you should take what I say about the case $q \lt 0$ with a pinch of salt; I
haven’t paid much attention to it.)
Digressing briefly, this expression can be written as a mean:
$D_q(\mathbf{p} \circ (\mathbf{r}_1, \ldots, \mathbf{r}_n)) = M_{1 - q}(\mathbf{p}, D_q(\mathbf{r}_\bullet)/\mathbf{p})$
where $D_q(\mathbf{r}_\bullet)/\mathbf{p}$ is the vector with $i$th component $D_q(\mathbf{r}_i)/p_i$. I call this a digression because I don’t know whether this is a useful observation. It’s a
different connection between the diversity of a composite and means that I want to point out here.
To explain that connection, I need a couple more bits of terminology. The partition function of a probability distribution $\mathbf{p}$ is the function
$Z(\mathbf{p}): \mathbb{R} \to (0, \infty)$
defined by
$Z(\mathbf{p})(q) = \sum_{i: p_i \gt 0} p_i^q.$
Any probability distribution $\mathbf{p}$ belongs to a one-parameter family $\bigl(\mathbf{p}^{(q)} \bigr)_{q \in \mathbb{R}}$ of probability distributions, defined by
$p^{(q)}_i = p_i^q/Z(\mathbf{p})(q)$
where $\mathbf{p}^{(q)} = (p^{(q)}_1, \ldots, p^{(q)}_n)$. These are sometimes called the escort distributions of $\mathbf{p}$. (In particular, $\mathbf{p}^{(1)} = \mathbf{p}$, so there’s something
especially convenient about the case $q = 1$.)
A small amount of elementary algebra tells us that the diversity of a composite can be re-expressed as follows:
$D_q(\mathbf{p} \circ (\mathbf{r}_1, \ldots, \mathbf{r}_n)) = D_q(\mathbf{p}) \cdot M_{1 - q}\bigl(\mathbf{p}^{(q)}, D_q(\mathbf{r}_\bullet)\bigr).$
This is the connection I’ve been building up to: the diversity of a composite expressed in terms of a power mean.
To understand this further, think of a large ecological community spread over several islands, with the special feature that no species can be found on more than one island. The distribution $\mathbf
{p}$ gives the relative sizes of the total populations on the different islands, and the distribution $\mathbf{r}_i$ gives the relative abundances of the various species on the $i$th island.
Now, the formula tells us the diversity of the composite community in terms of the diversities of the islands and their relative sizes. More exactly, it expresses it as a product of two factors: the
diversity between the islands ($D_q(\mathbf{p})$), and the average diversity within the islands ($M_{1 - q}(\ldots)$).
Something new …where ‘new’ is in the sense of ‘new to this conversation’, not ‘new to the world’.
We’ve just seen how, for each real number $q$, the diversity $D_q$ of a composite $\mathbf{p} \circ (\mathbf{r}_1, \ldots, \mathbf{r}_n)$ decomposes as a product of two factors. The first factor is
the diversity of $\mathbf{p}$. The second is some kind of mean of the diversities of the $\mathbf{r}_i$s, weighted by a distribution depending on $\mathbf{p}$.
We know this because we have a formula for $D_q$. But what if we take the description in the previous paragraph as axiomatic? In other words, suppose that we have for each $n \in \mathbb{N}$
$D: \mathbf{P}_n \to (0, \infty), \quad \hat{ }: \mathbf{P}_n \to \mathbf{P}_n,$
and some kind of ‘mean operation’ $M$, satisfying
$D(\mathbf{p} \circ (\mathbf{r}_1, \ldots, \mathbf{r}_n)) = D(\mathbf{p}) \cdot M(\hat{\mathbf{p}}, D(\mathbf{r}_\bullet)).$
What does this tell us about $D$, $M$ and $\hat{ }$? Could it even be that it forces $D = D_q$, $M = M_{1 - q}$ and $\hat{ } = ( )^{(q)}$ for some $q$?
Well, it depends what you mean by ‘mean’. But that’s a subject that’s been well raked over, and there are several axiomatic characterizations of the power means out there. So let me skip that part of
the question and assume immediately that $M = M_{1 - q}$ for some $q \in (0, \infty)$.
So now we’ve decided what our mean operation is, but we still have an undetermined thing called ‘diversity’ and an undetermined operation $\hat{ }$ for turning one probability distribution into
another. All we have by way of constraints is the equation above for the diversity of a composite, and perhaps we’ll also allow ourselves some further basic assumptions on diversity, such as
The theorem is that these meagre assumptions are enough to determine diversity uniquely.
Theorem (Routledge) Let $q \in (0, \infty)$. Let
$\bigl( D: \mathbf{P}_n \to (0, \infty) \bigr)_{n \in \mathbb{N}}, \quad \bigl( \hat{ }: \mathbf{P}_n \to \mathbf{P}_n \bigr)_{n \in \mathbb{N}}$
be families of functions such that
□ $D$ is an effective number
□ $D$ is symmetric
□ $D$ is continuous
□ $D(\mathbf{p}\circ(\mathbf{r}_1, \ldots, \mathbf{r}_n)) = D(\mathbf{p}) \cdot M_{1 - q}(\hat{\mathbf{p}}, D(\mathbf{r}_\bullet))$ for all $\mathbf{p}, \mathbf{r}_1, \ldots, \mathbf{r}_n$.
Then $D = D_q$ and $\hat{ } = ( )^{(q)}$.
This result appeared in
R. D. Routledge, Diversity indices: which ones are admissible? Journal of Theoretical Biology 76 (1979), 503–515.
And the moral is: diversity, hence entropy, can be uniquely characterized using means.
Postscript This theorem is closer to the basic concerns of ecology than you might imagine. When a geographical area is divided into several zones, you can ask how much of the biological diversity of
the area should be attributed to variation between the zones, and how much to variation within the zones. This is very like our island scenario above, but more complicated, since the same species may
be present in multiple zones.
Ecologists talk about $\alpha$-diversity (the average within-zone diversity), $\beta$-diversity (the diversity between the zones), and $\gamma$-diversity (the global diversity, i.e. that of the whole
community). The concept of $\beta$-diversity can play a part in conservation decisions. For example, if the $\beta$-diversity of our area is perceived or measured to be low, that means that some of
the zones are quite similar to each other. In that case, it might not be important to conserve all of them: resources can be concentrated on just a few.
The theorem tells us something about how $\alpha$-, $\beta$- and $\gamma$-diversity must be defined if simple and desirable properties are to hold. This story reached a definitive end in a quite
recent paper:
Lou Jost, Partitioning diversity into independent alpha and beta components, Ecology 88 (2007), 2427–2439.
But Jost’s paper takes us beyond what we’re currently doing, so I’ll leave it there for now.
Posted at May 10, 2011 7:10 AM UTC
Re: Entropies vs. Means
Thank you, Tom. I have been reading the Renyi Entropy posts feeling more and more lost. But ‘Diversity’ has helped me regain orientation.
Posted by: Roger Witte on May 10, 2011 8:57 AM | Permalink | Reply to this
Re: Entropies vs. Means
You’re very welcome!
Posted by: Tom Leinster on May 10, 2011 9:22 AM | Permalink | Reply to this
Re: Entropies vs. Means
This is good stuff!
So do you think that the formula for $D_q(p\circ (r_1,\ldots,r_n))$ can also be formulated categorically in terms of lax points for general $q$, like you already did in the $q=1$ case?
(I was about to link to our notes, but somehow currently these only contain a section on the partition function…?)
Posted by: Tobias Fritz on May 10, 2011 12:32 PM | Permalink | Reply to this
Re: Entropies vs. Means
Tobias wrote:
(I was about to link to our notes, but somehow currently these only contain a section on the partition function…?)
I was busy editing our notes, and there are certain typos that can make everything after the typo not show up at all. I fixed that. Now someone is editing our notes and I can’t continue working on
them. I guess that’s you!
I am starting to organize the material up to including ‘Shannon entropy’ in a more systematic way. I will move the section ‘Rényi entropy’ further down, because I think a self-contained story about
convex algebras, partition functions and Shannon entropy is starting to emerge which does not need to involve Rényi entropy. This self-contained story may also include some of your ideas on ‘the
basic inequalities of information theory’.
I hope and believe that Rényi entropy (and/or related concepts) will show up naturally in an expanded version of this story. However, about 100 times more people understand Shannon entropy than Rényi
entropy. It’s about 100 times as important, too. And even without Rényi entropy, our paper will be fairly intimidating to most people thanks to the use of monads, operads, ideas from statistical
mechanics, and so on. So, I’m leaning towards a first paper that leaves out Rényi ideas. We could then try to ‘Rényify’ things in a second paper, if we want.
Posted by: John Baez on May 10, 2011 1:19 PM | Permalink | Reply to this
Re: Entropies vs. Means
Thanks, Tobias. To answer your question, I don’t know whether the lax point result can be generalized to arbitrary $q$. I’ve thought about it a little bit, without seeing a way forward, but I haven’t
tried to push it hard.
Posted by: Tom Leinster on May 11, 2011 12:25 AM | Permalink | Reply to this
Re: Entropies vs. Means
Tom wrote:
Well, it depends what you mean by ‘mean’.
I think you’re providing Bill Clinton with some serious competition here!
Posted by: John Baez on May 10, 2011 1:28 PM | Permalink | Reply to this
Re: Entropies vs. Means
Now you’re just being mean.
Posted by: Tom Leinster on May 11, 2011 12:20 AM | Permalink | Reply to this
Re: Entropies vs. Means
Nice explanation, thank you! One question: if $\sum p_i = 1$ and each $p_i\ge 0$, then it seems unlikely to me that any of the $p_i$s could be greater than $1$; so did you really mean to write $[0,\
infty)^n$ in the definition of $\mathbf{P}$?
Posted by: Mike Shulman on May 10, 2011 11:52 PM | Permalink | Reply to this
Re: Entropies vs. Means
I did! Presumably you’re just saying it looks strange, in the same way that it would look strange to have used $[0, 7]$ instead (although it would make no difference). I suppose what I first had in
mind was the expression
$\{ \mathbf{p} \in \mathbb{R}^n | p_i \geq 0, \sum p_i = 1\}.$
Then I realized that it was wasteful to say “$\mathbf{p} \in \mathbb{R}^n$” and “$p_i \geq 0$” separately, so I merged them into one: $\mathbf{p} \in [0, \infty)^n$.
Posted by: Tom Leinster on May 11, 2011 12:19 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2011/05/entropies_vs_means.html","timestamp":"2014-04-18T20:44:26Z","content_type":null,"content_length":"86497","record_id":"<urn:uuid:4dbef49e-0ec3-47e7-86c1-8b75ef9affb2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cournot-Walras equilibrium as a subgame perfect equilibrium
The Library
Cournot-Walras equilibrium as a subgame perfect equilibrium
Busetto, Francesca, Codognato, Giulio and Ghosal, Sayantan . (2008) Cournot-Walras equilibrium as a subgame perfect equilibrium. International Journal of Game Theory, Volume 37 (Number 3). pp.
371-386. ISSN 0020-7276
Full text not available from this repository.
In this paper, we investigate the problem of the strategic foundation of the Cournot-Walras equilibrium approach. To this end, we respecify a la Cournot-Walras the mixed version of a model of
simultaneous, noncooperative exchange, originally proposed by Lloyd S. Shapley. We show, through an example, that the set of the Cournot-Walras equilibrium allocations of this respecification does
not coincide with the set of the Cournot-Nash equilibrium allocations of the mixed version of the original Shapley's model. As the nonequivalence, in a one-stage setting, can be explained by the
intrinsic two-stage nature of the Cournot-Walras equilibrium concept, we are led to consider a further reformulation of the Shapley's model as a two-stage game, where the atoms move in the first
stage and the atomless sector moves in the second stage. Our main result shows that the set of the Cournot-Walras equilibrium allocations coincides with a specific set of subgame perfect equilibrium
allocations of this two-stage game, which we call the set of the Pseudo-Markov perfect equilibrium allocations.
Item Type: Journal Article
Subjects: H Social Sciences > HB Economic Theory
Q Science > QA Mathematics
Divisions: Faculty of Social Sciences > Economics
Library of Game theory, Noncooperative games (Mathematics), Equilibrium (Economics)
Journal or International Journal of Game Theory
Publisher: Springer
ISSN: 0020-7276
Date: November 2008
Volume: Volume 37
Number: Number 3
Number of 16
Page Range: pp. 371-386
Identification 10.1007/s00182-008-0123-8
Status: Peer Reviewed
Publication Published
Access rights Restricted or Subscription Access
to Published
Version or Busettoy, F., Codognatoz, G., and Ghosal, S. (2008). Cournot-Walras equilibrium as a subgame perfect equilibrium. [Coventry]: University of Warwick, Department of Economics. (Warwick
Related economic research papers, no.837). http://wrap.warwick.ac.uk/id/eprint/219
Related URLs: • Related item in WRAP
[error in
script] [error
in script]
Aliprantis CD, Border KC (1999) Infinite dimensional analysis. Springer, New York Amir R, Sahi S, Shubik M, Yao S (1990) A strategic market game with complete markets. J Econ Theory
51:126–143 Aumann RJ (1965) Integrals of set valued functions. J Math Anal Appl 12:1–12 Aumann RJ (1966) Existence of competitive equilibria inmarkets with a continuum of traders.
Econometrica 24:1–17 Bonnisseau J-M, Florig M (2003) Existence and optimality of oligopoly equilibria in linear exchange economies. Econ Theory 22:727–741 CodognatoG (1995)
Cournot–Walras and Cournot equilibria inmixed markets: a comparison. Econ Theory 5:361–370 Codognato G, Gabszewicz JJ (1991) Equilibres de Cournot–Walras dans une économie d’échange.
Revue Econ 42:1013–1026, 1–17 Codognato G, Gabszewicz JJ (1993) Cournot–Walras equilibria in markets with a continuum of traders. Econ Theory 3:453–464 Codognato G, Ghosal S (2000a)
Cournot–Nash equilibria in limit exchange economies with complete markets and consistent prices. J Math Econ 34:39–53 Codognato G, Ghosal S (2000b) Oligopoly à la Cournot–Nash in
markets with a continuum of traders. Discussion Paper No 2000-5, CEPET (Central European Program in Economic Theory) Institute of Public Economics, Graz University d’Aspremont C,
DosSantos FerreiraR,Gérard-VaretL-A (1997) General equilibrium concepts under imperfect competition: a Cournotian approach. J Econ Theory 73:199–230 Dierker H, Grodal B (1986)
Nonexistence of Cournot–Walras equilibrium in a general equilibrium model with two oligopolists. In: HildenbrandW,Mas-Colell A (eds) Contributions to mathematical economics in honor
of Gérard Debreu. North-Holland, Amsterdam Dubey P, Shapley LS (1994) Noncooperative general exchange with a continuum of traders: two models. J Math Econ 23:253–293 Dubey P, Shubik M
References: (1978) The noncooperative equilibria of a closed trading economy with market supply and bidding strategies. J Econ Theory 17:1–20 Fudenberg D, Tirole J (1991) Game theory. MIT Press,
Cambridge Gabszewicz JJ, Michel P (1997) Oligopoly equilibrium in exchange economies. In: Eaton BC, Harris RG (eds) Trade, technology and economics. Essays in honour of Richard G
Lipsey. Edward Elgar, Cheltenham Gabszewicz JJ, Vial J-P (1972) Oligopoly ‘à la Cournot–Walras’ in a general equilibrium analysis. J Econ Theory 4:381–400 Gale D (1960) The theory of
linear economic models. Academic Press, New York Lahmandi-Ayed R (2001) Oligopoly equilibria in exchange economies: a limit theorem. Econ Theory 17:665–674 Mas-Colell A (1982) The
Cournotian foundations ofWalrasian equilibrium theory. In: HildenbrandW(ed) Advances in economic theory. Cambridge University Press, Cambridge Maskin E, Tirole J (2001) Markov perfect
equilibrium. J Econ Theory 100:191–219 Okuno M, Postlewaite A, Roberts J (1980) Oligopoly and competition in large markets. Am Econ Rev 70:22–31 Peck J, Shell K, Spear SE (1992) The
market game: existence and structure of equilibrium. J Math Econ 21:271–299 Postlewaite A, Schmeidler D (1978) Approximate efficiency of non-Walrasian Nash equilibria. Econometrica
46:127–137 Roberts K (1980) The limit points of monopolistic competition. J Econ Theory 22:256–278 Roberts DJ, Sonnenschein H (1977) On the foundations of the theory of monopolistic
competition. Econometrica 45:101–114 Sahi S, Yao S (1989) The noncooperative equilibria of a trading economy with complete markets and consistent prices. J Math Econ 18:325–346
Shapley LS, ShubikM (1977) Trade using one commodity as ameans of payment. J Polit Econ 85:937–968 Shitovitz B (1973) Oligopoly in markets with a continuum of traders. Econometrica
41:467–501 Shitovitz B (1997) A comparison between the core and the monopoly solutions in a mixed exchange economy. Econ Theory 10:559–563
URI: http://wrap.warwick.ac.uk/id/eprint/29123
Data sourced from Thomson Reuters' Web of Knowledge
Actions (login required) | {"url":"http://wrap.warwick.ac.uk/29123/","timestamp":"2014-04-16T10:25:17Z","content_type":null,"content_length":"45240","record_id":"<urn:uuid:3bf8c297-36b1-436d-aa76-716e7a600929>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
Negative powers and study habits
January 12th 2010, 12:35 PM #15
January 12th 2010, 11:27 AM #14
January 12th 2010, 11:18 AM #13
January 12th 2010, 11:08 AM #12
January 12th 2010, 10:53 AM #11
January 12th 2010, 10:50 AM #10
January 12th 2010, 10:49 AM #9
January 12th 2010, 10:47 AM #8
January 12th 2010, 10:46 AM #7
January 12th 2010, 10:42 AM #6
January 12th 2010, 10:41 AM #5
January 12th 2010, 10:41 AM #4
January 12th 2010, 10:38 AM #3
January 12th 2010, 10:37 AM #2
January 12th 2010, 10:36 AM #1
Do you recommend any maths GCSE book I should study from? I got the CGP workbook...
i'm sure you didn't use ONLY the past papers. you had to consult some other resources at some point. problems don't tell you how to solve them. moreover, just asking people how to do specific
problems is not a good way to study. if the exam has any surprises, which it will, you won't know how to deal with them. because you never learned a process, you never learned the method, so you
would not be able to pick out what's relevant from what's not or how to tweek the solution you got from someone else in order to apply it to this new problem. furthermore, under the pressure of
an exam, if you go in KNOWING that you don't know what these problems are about but only memorized a bunch of solutions, you are going to forget some of what you memorized. forgetting a solution
is easy, forgetting a method is hard.
i never said don't use past papers. in fact, i recommend using past papers. i used them when i was doing O and A-levels, and i use them when i was in college. what this user is doing is not what
you or i did.
A much better way to study is to go through the workbook. if you need to, go over the topics covered on the exam from another source. the way you are studying now will not be very effective. at
the very least, use the past exam as a guide for what you should be focusing on. but you need to go through the topics and learn the rules before attempting to do the problems.
Ok thanks for the feedback
A much better way to study is to go through the workbook. if you need to, go over the topics covered on the exam from another source. the way you are studying now will not be very effective. at
the very least, use the past exam as a guide for what you should be focusing on. but you need to go through the topics and learn the rules before attempting to do the problems.
I did my Chemistry A-Level revision with past papers so it can work for some people.
Personally, I found past papers very effective but only where there was a mark scheme nearby detailing the answers. It just goes to show that maths is about trying everything and seeing what
works for you
Thank you for that rule.
I have a workbook but I have not used it yet. I am using past papers and any problems I come across I post on this forum. Since the past papers are relatively generic, they tend to have the same
questions. I'm hoping the exam follows this. It seems to get easier by the year anyway. In the 2003 paper I was dealing with frustums which have incredibly annoying formulas while in 2008 it was
only rectangles with cuboids in 2009.
A much better way to study is to go through the workbook. if you need to, go over the topics covered on the exam from another source. the way you are studying now will not be very effective. at
the very least, use the past exam as a guide for what you should be focusing on. but you need to go through the topics and learn the rules before attempting to do the problems.
I will give you the rule, see what you can do with it: for a real number $x e 0$ and a constant $a > 0$, $x^{-a} = \frac 1{x^a}$
Well, that's a problem! doing math is largely about following rules.
If you are self studying, what are you studying from? Any math book that deals with this topic will have the rules in it. As an alternative, it is pretty easy to type "laws of exponents" into
google or any search engine of your choice.
Thank you for that rule.
I have a workbook but I have not used it yet. I am using past papers and any problems I come across I post on this forum. Since the past papers are relatively generic, they tend to have the same
questions. I'm hoping the exam follows this. It seems to get easier by the year anyway. In the 2003 paper I was dealing with frustums which have incredibly annoying formulas while in 2008 it was
only rectangles with cuboids in 2009.
b^-2/b^3 = b^-5?
I will give you the rule, see what you can do with it: for a real number $x e 0$ and a constant $a > 0$, $x^{-a} = \frac 1{x^a}$
Well, that's a problem! doing math is largely about following rules.
If you are self studying, what are you studying from? Any math book that deals with this topic will have the rules in it. As an alternative, it is pretty easy to type "laws of exponents" into
google or any search engine of your choice.
Note I do not know the law of exponents. I am self studying and self teaching myself for an exam. I do not know where to find resources so I just do questions.
By self teaching myself for an exam I mean that I have been given no basic background at all. No teachers or school or anything :/
What about (2^-2)^-2?
to supplement what Plato said, I just want to clear up your misunderstanding here. In particular, negative powers do not affect the sign of the base number
$-3 \cdot -3 = (-3)^2 = 9$
$3^{-2} = \frac 1{3^2} = \frac 19$
As you can see, the sign of the base number and the sign of the power do different things. Please review the laws of exponents!
oh and 2^-1/2^3 = 2^4???
Negative powers and study habits
3^{-2}=9 yes? Because -3*-3 cancel out
i'm sure you didn't use ONLY the past papers. you had to consult some other resources at some point. problems don't tell you how to solve them. moreover, just asking people how to do specific
problems is not a good way to study. if the exam has any surprises, which it will, you won't know how to deal with them. because you never learned a process, you never learned the method, so you
would not be able to pick out what's relevant from what's not or how to tweek the solution you got from someone else in order to apply it to this new problem. furthermore, under the pressure of an
exam, if you go in KNOWING that you don't know what these problems are about but only memorized a bunch of solutions, you are going to forget some of what you memorized. forgetting a solution is
easy, forgetting a method is hard.
i never said don't use past papers. in fact, i recommend using past papers. i used them when i was doing O and A-levels, and i use them when i was in college. what this user is doing is not what you
or i did.
Do you recommend any maths GCSE book I should study from? I got the CGP workbook... | {"url":"http://mathhelpforum.com/algebra/123435-negative-powers-study-habits.html","timestamp":"2014-04-16T07:20:23Z","content_type":null,"content_length":"88163","record_id":"<urn:uuid:eaf6b299-edf4-4caf-8a2c-78afda847e93>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
find current at high/low frequency
1. The problem statement, all variables and given/known data
find the rms current delievered by the 45v (rms) power supply when
a) the frequency is very large
and b) the frequency is very small.
answer: a) 225mA, b) 450mA
Uploaded with
ImageShack.us 2. Relevant equations
w= angular velocity (supposed to be omega)
L= inductor
C= capacitor
j= complex coefficent = sqrt(-1)
Zl= resistance of inductor
Zc= resistance of capacitor
Ztot= total resistance
R= resistance of resistor
P= Power
Ztot= R + Zl + Zc = R + j(Xl-Xc)
3. The attempt at a solution
with frequency just being high, how am I supposed to get these exact numbers without letters/symbol for I? o.o I tried the calculation and omega did not cancel out either. | {"url":"http://www.physicsforums.com/showthread.php?t=434326","timestamp":"2014-04-19T15:08:50Z","content_type":null,"content_length":"26714","record_id":"<urn:uuid:ff49548b-09c2-4e5c-a3e6-bae416a39f93>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00150-ip-10-147-4-33.ec2.internal.warc.gz"} |
RF Technologies for Low-Power Wireless Communications
More About This Textbook
A survey of microwave technology tailored for professionals in wireless communicationsRF Technologies for Low Power Wireless Communications updates recent developments in wireless communications from
a hardware design standpoint and offers specialized coverage of microwave technology with a focus on the low power wireless units required in modern wireless systems. It explores results of recent
research that focused on a holistic, integrated approach to the topics of materials, devices, circuits, modulation, and architectures rather than the more traditional approach of research into
isolated topical areas.Twelve chapters deal with various fundamental research aspects of low power wireless electronics written by world-class experts in each field. The first chapter offers an
overview of wireless architecture and performance, followed by detailed coverage of:*Advanced GaAs-based HBT designs*InP-based devices and circuits*Si/SiGe HBT technology*Noise in GaN devices*Power
amplifier architectures and nonlinearities*Planar-oriented components*MEMS and micromachined components*Resonators, filters, and low-noise oscillators*Antennas*Transceiver front-end architecturesWith
a clear focus and expert contributors, RF Technologies for Low Power Wireless Communications will be of interest to a wide range of electrical engineering disciplines working in wireless
Read More Show Less
Editorial Reviews
From the Publisher
"...presents the latest findings from military-sponsored academic research on wireless communications from a hardware design perspective..." (SciTech Book News, Vol. 25, No. 4, December 2001)
Read More Show Less
Product Details
• ISBN-13: 9780471382676
• Publisher: Wiley, John & Sons, Incorporated
• Publication date: 9/7/2001
• Edition number: 1
• Pages: 480
• Product dimensions: 6.34 (w) x 9.51 (h) x 1.06 (d)
Read an Excerpt
RF Technologies for Low-Power Wireless Communications
John Wiley & Sons
Copyright © 2001 John Wiley & Sons, Inc.
All right reserved.
ISBN: 0-471-38267-1
Chapter One
Wayne Stark
Department of Electrical Engineering and Computer Science The University of Michigan, Ann Arbor
Larry Milstein
Department of Electrical and Computer Engineering University of California-San Diego
1.1 INTRODUCTION
Low power consumption has recently become an important consideration in the design of commercial and military communications systems. In a commercial cellular system, low power consumption means long
talk time or standby time. In a military communications system, low power is necessary to maximize a mission time or equivalently reduce the weight due to batteries that a soldier must carry. This
book focuses attention on critical devices and system design for low power communications systems. Most of the remaining parts of this book consider particular devices for achieving low power design
of a wireless communications system. This includes mixers, oscillators, filters, and other circuitry. In this chapter, however, we focus on some of the higher level system architecture issues for low
power design of a wireless communications system. To begin we discuss some of the goals in a wireless communications system along with some of the challenges posed by a wireless medium used for
A system level (functional) block diagram of a wireless communications system is shown in Figure 1.1. In this Figure the source of information could be a voice signal, a video signal, situation
awareness information (e.g., position information of a soldier), an image, a data file, or command and control data. The source encoder processes the information and formats the information into a
sequence of information bits [member of] {[+ or -]1}. The goal of the source encoder is to remove the unstructured redundancy from the source so that the rate of information bits at the output of the
source encoder is as small as possible within a constraint on complexity. The channel encoder adds structured redundancy to the information bits for the purpose of protecting the data from distortion
and noise in the channel. The modulator maps the sequence of coded bits into waveforms that are suitable for transmission over the channel. In some systems the modulated waveform is also spread over
a bandwidth much larger than the data rate. These systems, called spread-spectrum systems, achieve a certain robustness to fading and interference not possible with narrowband systems. The channel
distorts the signal in several ways. First, the signal amplitude decreases due to the distance between the transmitter and receiver. This is generally referred to as propagation loss. Second, due to
obstacles the signal amplitude is attenuated. This is called shadowing. Finally, because of multiple propagation paths between the transmitter antenna and the receiver antenna, the signal waveform is
distorted. Multipath fading can be either constructive, if the phases of different paths are the same, or destructive, if the phases of the different paths cause cancellation. The destructive or
constructive nature of the fading depends on the carrier frequency of the signal and is thus called frequency selective fading. For a narrowband signal (signal bandwidth small relative to the inverse
delay spread of the channel), multipath fading acts like a random attenuation of the signal. When the fading is constructive the bit error probability can be very small. When the fading is
destructive the bit error probability becomes quite large. The average overall received amplitude value causes a significant loss in performance (on the order of 30-40 dB loss). However, with proper
error control coding or diversity this loss in performance can essentially be eliminated.
In addition to propagation effects, typically there is noise at the receiver that is uncorrelated with the transmitted signal. Thermal (shot) noise due to motion of the electrons in the receiver is
one form of this noise. Other users occupying the same frequency band or in adjacent bands with interfering sidelobes is another source of this noise. In commercial as well as military communications
systems interference from other users using the same frequency band (perhaps geographically separated) can be a dominant source of noise. In a military communications system hostile jamming is also a
possibility that must be considered. Hostile jamming can easily thwart conventional communications system design and must be considered in a military communications scenario.
The receiver's goal is to reproduce at the output of the source decoder the information-bearing signal, be it a voice signal or a data file, as accurately as possible with minimal delay and minimal
power consumed by the transmitter and receiver. The structure of the receiver is that of a demodulator, channel decoder, and source decoder. The demodulator maps a received waveform into a sequence
of decision variables for the coded data. The channel decoder attempts to determine the information bits using the knowledge of the codebook (set of possible encoded sequences) of the encoder. The
source decoder then attempts to reproduce the information.
In this chapter we limit discussion to an information source that is random data with equal probability of being 0 or 1 with no memory; that is, the bit sequence is a sequence of independent,
identically distributed binary random variables. For this source there is no redundancy in the source, so no redundancy can be removed by a source encoder.
There are important parameters when designing a communications system. These include data rate [R.sub.b] (bits/s, or bps), at the input to the channel encoder, the bandwidth W (Hz), received signal
power P (watts), noise power density [N.sub.0]/2 (W/Hz), and bit error rate [P.sub.e,b]. There are fundamental trade-offs between the amount of power or equivalently the signal-to-noise ratio used
and the data rate possible for a given bit error probability, [P.sub.e,b]. For ideal additive white Gaussian noise channels with no multipath fading and infinite delay and complexity, the relation
between data rate, received power, noise power, and bandwidth for [P.sub.e,b] approaching zero was determined by Shannon as
(1:1) [R.sub.b] < W [log.sub.2](1 + P/[N.sub.0]W):
If we let [E.sub.b] = P/[R.sub.b] represent the energy used per data bit (joules per bit), then an equivalent condition for reliable communication is
[E.sub.b]/[N.sub.0] > [2.sup.[R.sub.b]/W] - 1/[R.sub.b]/W
This relation determines the minimum received signal energy for reliable communications as a function of the spectral efficiency [R.sub.b]/W (bps/Hz). The interpretation of this condition is that for
lower spectral efficiency, lower signal energy is required for reliable communications. The trade-off between bandwidth efficiency and energy efficiency is illustrated in Figure 1.2. Besides the
trade-off for an optimal modulation scheme, the trade-off is also shown for three modulation techniques: binary phase shift keying (BPSK), quaternary phase shift keying (QPSK), and 8-ary phase shift
keying (8PSK).
In this figure the only channel impairment is additive white Gaussian noise. Other factors in a realistic environment are multipath fading, interference from other users, and adjacent channel
interference. In addition, the energy is the received signal energy and does not take into account the energy consumed by the processing circuitry. For example, power consumption of signal processing
algorithms (demodulation, decoding) are not included. Inefficiencies of power amplifiers and low noise amplifiers are not included. These will be discussed in subsequent sections and chapters. These
fundamental trade-offs between energy consumed for transmission and data rate were discovered more than 50 years ago by Shannon (see Cover and Thomas). It has been the goal of communications
engineers to come close to achieving the upper bound on data rate (called the channel capacity) or equivalently the lower bound on the signal-to-noise ratio.
To come close to achieving the goals of minimum energy consumption, channel coding and modulation techniques as well as demodulation and decoding techniques must be carefully designed. These
techniques are discussed in the next two sections.
In this section we describe several different modulation schemes. We begin with narrowband techniques whereby the signal bandwidth and the data rate are roughly equal. In wideband techniques, or
spread-spectrum techniques, the signal bandwidth is much larger than the data rate. These techniques are able to exploit the frequency-selective fading of the channel. For more details see Proakis.
1.3.1 Narrowband Techniques
A very simple narrowband modulation scheme is binary phase shift keying (BPSK). The transmitter and receiver for BPSK are shown in Figures 1.3 and 1.4, respectively. A sequence of data bits [b.sub.l]
[member of] [+ or -] 1 is mapped into a data stream and filtered. The filtered data stream is modulated onto a carrier and is amplified before being radiated by the antenna. The purpose of the filter
is to confine the spectrum of the signal to the bandwidth mask for the allocated frequency. The signal is converted from baseband by the mixer to the desired center or carrier frequency
(upconversion). The signal is then amplified before transmission. With ideal devices (mixers, filters, amplifiers) this is all that is needed for transmission. However, the mixers and amplifiers
typically introduce some additional problems. The amplifier, for example, may not be completely linear. The nonlinearity can cause the bandwidth of the signal to increase (spectral regrowth), as will
be discussed later.
For now, assume that the filter, mixer, and amplifier are ideal devices. In this case the transmitted (radiated) signal can be written as
(1.2) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
where P is the transmitted power, T is the duration of a data bit or the inverse of the data rate [R.sub.b], [f.sub.c] is the carrier frequency, and h(t) is the impulse response of the pulse-shaping
filter. There are various choices for the pulse-shaping filter. A filter with impulse response being a rectangular pulse of duration T seconds results in a constant envelope signal (peak-to-mean
envelope ratio of 1) but has large spectral splatter, whereas a Nyquist-type pulse has high envelope variation and no spectral splatter. The disadvantage of high envelope variation is that it will be
distorted by an amplifier operating in a power efficient mode because of the amplifier's nonlinear characteristics. Thus there is a trade-off between power efficiency and bandwidth efficiency in the
design of the modulation.
The simplest channel model is called the additive white Gaussian noise (AWGN) channel. In this model the received signal is the transmitted signal (appropriately attenuated) plus additive white
Gaussian noise:
(1.3) r(t) = [alpha]s (t) + n(t). The noise is assumed to be white with two-sided power spectral density [N.sub.0]/2 W/Hz.
The receiver for BPSK is shown in Figure 1.4. The front end low noise amplifier sets the internal noise figure for the receiver. The mixer converts the radio frequency (RF) signal to baseband. The
filter rejects out-of-band noise while passing the desired signal. The optimal filter in the presence of additive white Gaussian noise alone is the matched filter (a filter matched to the transmitter
filter). This very simplified diagram ignores many problems associated with nonideal devices. For the case of ideal amplifiers and a transmit filter and receiver filter satisfying the Nyquist
Criteria for no intersymbol interference, the receiver filter output can be expressed as
[X.sub.l] = [square root of ([bar.E][b.sub.l-1] + [[eta].sub.1]),
where bar.E is the received energy (bar.E = [[alpha].sup.2]PT) and [[eta].sub.l] is a Gaussian distributed random variable with mean zero and variance [N.sub.0]/2. The decision rule is to decide
[b.sub.l-1] = +1 if [X.sub.l] > 0 and to decide [b.sub.l-1] = -1 otherwise. For the simple case of an additive white Gaussian noise channel, the error probability is
[P.sub.e,b] = Q ([square of root (2[bar.E])/[N.sub.0]
where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. This is shown in Figure 1.5.
From Figure 1.5 it can be seen that in order to provide error probabilities around [10.sup.-5] it is necessary for the received signal-to-noise ratio to [bar.E]/[N.sub.0] = 9:6 dB. The capacity curve
for BPSK in Figure 1.2, however, indicates that if we are willing to lower the rate of transmission we can significantly save on energy. For example, it is possible to have a nearly 0 dB
signal-to-noise ratio if we are willing to reduce the rate of transmission by 50%. Thus about 9.6 dB decrease in signal power is possible with a 50% reduction in transmission rate.
The above analysis is for the case of additive white Gaussian noise channels. Unfortunately, wireless channels are not accurately modeled by just additive white Gaussian noise. A reasonable model for
a wireless channel with relatively small bandwidth is that of a flat Rayleigh fading channel. While there are more complex models, the Rayleigh fading channel model is a model that provides the
essential effect. In the Rayleigh fading model the received signal is still given by Eq. (1.3). However, [alpha] is a Rayleigh distributed random variable that is sometimes large (constructive
addition of multiple propagation paths) and sometimes small (destructive addition of multiple propagation paths). However, the small values of a cause the signal-to-noise ratio to drop and thus the
error probability to increase significantly. The large values of a corresponding to constructive addition of the multiple propagation paths result in the error probability being very small. However,
when the average error probability is determined there is significant loss in performance. The average error probability with Rayleigh fading and BPSK is
(1.4) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],
where f(r) is the Rayleigh density and [bar.E] is the average received energy. The average error probability as a function of the average received energy is shown in Figure 1.6. Included in this
figure is the performance with just white Gaussian noise. As can be seen from the figure, there is a significant loss in performance with Rayleigh fading. At a bit error rate of [10.sup.-5] the loss
in performance is about 35 dB.
Excerpted from RF Technologies for Low-Power Wireless Communications Copyright © 2001 by John Wiley & Sons, Inc.. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
Read More Show Less
Table of Contents
Introduction 1
1 Wireless Communications System Architecture and Performance 9
2 Advanced GaAs-Based HBT Designs for Wireless Communications Systems 39
3 InP-Based Devices and Circuits 79
4 Si/SiGe HBT Technology for Low-Power Mobile Communications System Applications 125
5 Flicker Noise Reduction in GaN Field-Effect Transistors 159
6 Power Amplifier Approaches for High Efficiency and Linearity 189
7 Characterization of Amplifier Nonlinearities and Their Effects in Communications Systems 229
8 Planar-Oriented Passive Components 265
9 Active and High-Performance Antennas 305
10 Microelectromechanical Switches for RF Applications 349
11 Micromachined K-Band High-Q Resonators, Filters, and Low Phase Noise Oscillators 383
12 Transceiver Front-End Architectures Using Vibrating Micromechanical Signal Processors 411
Index 463
Read More Show Less | {"url":"http://www.barnesandnoble.com/w/rf-technologies-for-low-power-wireless-communications-tatsuo-itoh/1102286228?ean=9780471382676&itm=13","timestamp":"2014-04-18T09:27:48Z","content_type":null,"content_length":"132249","record_id":"<urn:uuid:7d899563-5e14-4299-bcda-f942efc33b09>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
Garnet Valley, PA Algebra 2 Tutor
Find a Garnet Valley, PA Algebra 2 Tutor
...I have tutored privately most of that time as well. I know that everyone learns in a different way and I try to use real world objects, models and examples to help students understand abstract
concepts with which they may be struggling. I also try to explain concepts in a variety of ways becaus...
28 Subjects: including algebra 2, calculus, geometry, ASVAB
...I have taught students in grades 2-12 in a variety of settings - urban classrooms, after-school programs, summer enrichment, and summer schools. I work with students to develop strong
conceptual understanding and high math fluency through creative math games. Having worked with a diverse popula...
9 Subjects: including algebra 2, geometry, ESL/ESOL, algebra 1
...My background is in engineering and business, so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain
information. After all, math IS fun!In the past 5 years, I have taught differential equations at a local university.
13 Subjects: including algebra 2, calculus, geometry, statistics
...My approach has been very successful in the past--I bond with students easily and work hard to not only help them improve their grades and do their homework in an organized and effective
manner, but I also make tireless efforts to inspire and motivate them to enjoy the learning process and find t...
23 Subjects: including algebra 2, English, reading, writing
...My personal challenge for each lesson is making sure to close the students’ “concept gap”: often, students do not have trouble with the process of problem solving, but understanding what the
problem is. Every session is a conversation about subject fundamentals, sample problems, and translating ...
9 Subjects: including algebra 2, calculus, physics, geometry
Related Garnet Valley, PA Tutors
Garnet Valley, PA Accounting Tutors
Garnet Valley, PA ACT Tutors
Garnet Valley, PA Algebra Tutors
Garnet Valley, PA Algebra 2 Tutors
Garnet Valley, PA Calculus Tutors
Garnet Valley, PA Geometry Tutors
Garnet Valley, PA Math Tutors
Garnet Valley, PA Prealgebra Tutors
Garnet Valley, PA Precalculus Tutors
Garnet Valley, PA SAT Tutors
Garnet Valley, PA SAT Math Tutors
Garnet Valley, PA Science Tutors
Garnet Valley, PA Statistics Tutors
Garnet Valley, PA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Arden, DE algebra 2 Tutors
Aston algebra 2 Tutors
Brookhaven, PA algebra 2 Tutors
Chester Township, PA algebra 2 Tutors
Claymont algebra 2 Tutors
Hockessin algebra 2 Tutors
Lenni algebra 2 Tutors
Logan Township, NJ algebra 2 Tutors
Marcus Hook algebra 2 Tutors
Marple Township, PA algebra 2 Tutors
Media, PA algebra 2 Tutors
Springfield, PA algebra 2 Tutors
Thornton, PA algebra 2 Tutors
Upland, PA algebra 2 Tutors
West Chester, PA algebra 2 Tutors | {"url":"http://www.purplemath.com/garnet_valley_pa_algebra_2_tutors.php","timestamp":"2014-04-21T02:17:09Z","content_type":null,"content_length":"24447","record_id":"<urn:uuid:3b54e8d5-87a1-4e96-9e5b-2aca95a6c8e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Catenoid Fence
Rotate Me
About the Catenoid Fence
H. Karcher
These singly periodic surfaces are parametrized (aa) by
rectangular tori; our lines extend polar coordinates around
the two punctures to the whole Torus. The surfaces look
like a fence of catenoids, joined by handles; they were made by
Karcher and Hoffman, responding to the suggestive skew 4-noids.
The morphing parameter aa is the modulus (a function of the
length ratio) of the rectangular Torus. Formulas are from [K2]
[K2] H. Karcher, Construction of minimal surfaces, in "Surveys in
Geometry", Univ. of Tokyo, 1989, and Lecture Notes No. 12,
SFB 256, Bonn, 1989, pp. 1--96.
For a discussion of techniques for creating minimal surfaces with
various qualitative features by appropriate choices of Weierstrass
data, see either [KWH], or pages 192--217 of [DHKW].
[KWH] H. Karcher, F. Wei, and D. Hoffman, The genus one helicoid, and
the minimal surfaces that led to its discovery, in "Global Analysis
in Modern Mathematics, A Symposium in Honor of Richard Palais'
Sixtieth Birthday", K. Uhlenbeck Editor, Publish or Perish Press, 1993
[DHKW] U. Dierkes, S. Hildebrand, A. Kuster, and O. Wohlrab,
Minimal Surfaces I, Grundlehren der math. Wiss. v. 295
Springer-Verlag, 1991
blog comments powered by | {"url":"http://xahlee.info/surface/catenoid_fence/catenoid_fence.html","timestamp":"2014-04-18T10:34:46Z","content_type":null,"content_length":"8063","record_id":"<urn:uuid:4c2613a0-ba07-43da-a05a-1186a0cc22d7>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |