content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Image of the cuspidal subgroup of J_0(N) in J_1(N)
up vote 8 down vote favorite
Let N be a prime integer. We know that the element $c=(0)-(\infty)$ generates the torsion subgroup of $J_0(N)$ and it has order Num( (N-1)/12). Now, there is a natural map $\pi^*:J_0(N) \rightarrow
J_1(N)$, coming from the covering map $\pi:X_1(N) \rightarrow X_0(N)$. My question is what is the image of c under this map? Specifically, is it possible for $\pi^*(c)=0$?
nt.number-theory modular-forms
This can be made "explicit" in the special case $N=11$, for which both curves have genus 1. By moduli interpretation of cusps, $X_1(p)$ has $p-1$ geometric cusps: $(p-1)/2$ are rational points
6 (the $p$-gon cusps) and $(p-1)/2$ are a single Galois orbit (the $1$-gon cusps, residue field $\mathbf{Q}(\zeta_p)^+$). Making $X_1(11)$ an elliptic curve using a rational cusp and $X_0(11)$ an
elliptic curve using its image (the cusp $0$, not $\infty$!!), the deg-5 map $\pi$ has kernel given by the rat'l cusps: $\ker \pi = \mathbf{Z}/(5)$. Hence, dual map has kernel $\mu_5$, and $\mu_5
(\mathbf{Q})=1$. – BCnrd Sep 27 '10 at 5:53
Thanks! This example was actually confusing me, and that clarifies the situation. – Soroosh Sep 27 '10 at 17:41
add comment
2 Answers
active oldest votes
The fact you mention about $(0) - (\infty)$ generating the torsion subgroup of $J_0(N)$ is Theorem 1 (Ogg's conjecture) on the first page of Mazur's paper "The Eisenstein Ideal". I
recommend you actually read this paper. If you get as far as page 2, you will find a "Theorem 2 (twisted Ogg's conjecture)" which concerns the Shimura subgroup $\Sigma$. The
up vote 4 construction of this subgroup essentially identifies it with the kernel of $J_0(N) \rightarrow J_1(N)$, and Proposition 11.6 of ibid. shows that $\Sigma$ is of multiplicative type, so
down vote BCnrd's remarks apply to the general case.
add comment
I don't think pi^*(c) can be 0. Suppose pi^*(c) = div(f); then f would be a map from X_1(N) to P^1 of degree about N. But in fact any such map is of degree at least ~N^2, i.e. the gonality
of X_1(N) is bounded below by a constant multiple of N^2. This was proved independently by Zograf ("Small eigenvalues of automorphic Laplacians in spaces of cusp forms") and Abramovich ("A
linear lower bound on the gonality of modular curves.")
up vote 2
down vote Update: As Kevin points out in comments I should say "for N large enough." But "large enough" is effective here since the constants in the gonality bounds are effective (albeit small.)
It's zero when $N=2$, right? So maybe you mean "...can be zero for $N$ suff large" or something. – Kevin Buzzard Sep 27 '10 at 12:32
add comment
Not the answer you're looking for? Browse other questions tagged nt.number-theory modular-forms or ask your own question. | {"url":"http://mathoverflow.net/questions/40071/image-of-the-cuspidal-subgroup-of-j-0n-in-j-1n/40135","timestamp":"2014-04-18T19:00:44Z","content_type":null,"content_length":"57926","record_id":"<urn:uuid:1276694b-ce48-4bc6-ab64-0b17faa8cdc2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Perfect information
October 26th 2009, 05:11 AM
Perfect information
Not sure if this is a basic pre uni question but since I'm covering it at uni I will assume that it fits under the advanced section.
I'm trying to solve a question that asks me to suggest the best investment choice given a payoff table. In the table I am given 4 investments, one of which returns the same amount in all cases,
and the other three vary. The question then goes on to say that assuming that the return on one of the remaining three investment classes will be known... which of the four would be the best
The problem that I'm having is figuring out how to calculate the expected profit under perfect information. The definitions that I've seen regarding this are very clear... i.e. it is the sum of
the expected profit under uncertainty (easy) and the amount a decision maker is willing to pay to gain perfect information.
To me, the expected profit of perfect information is just the outcome that you predict. i.e. out of lets say 3 possible outcomes, loss, break even and gain (theres 5 in the question but doesn't
matter) lets say we know that a gain will take place. Therefore this becomes the expected profit. Question is... if I'm not told which outcome will occur, how do I compare the 4 asset classes and
decide the best one?
I found this
as an example showing how to calculate perfect information, but I didn't understand it as it was rather poorly annotated.
Is this even a mathematical question? I was thinking of answering it along the lines of risk profiles (risk taking, risk neutrality and risk aversion) but I can't help but feel that this is
October 26th 2009, 05:20 AM
Not sure if this is a basic pre uni question but since I'm covering it at uni I will assume that it fits under the advanced section.
I'm trying to solve a question that asks me to suggest the best investment choice given a payoff table. In the table I am given 4 investments, one of which returns the same amount in all cases,
and the other three vary. The question then goes on to say that assuming that the return on one of the remaining three investment classes will be known... which of the four would be the best
The problem that I'm having is figuring out how to calculate the expected profit under perfect information. The definitions that I've seen regarding this are very clear... i.e. it is the sum of
the expected profit under uncertainty (easy) and the amount a decision maker is willing to pay to gain perfect information.
To me, the expected profit of perfect information is just the outcome that you predict. i.e. out of lets say 3 possible outcomes, loss, break even and gain (theres 5 in the question but doesn't
matter) lets say we know that a gain will take place. Therefore this becomes the expected profit. Question is... if I'm not told which outcome will occur, how do I compare the 4 asset classes and
decide the best one?
I found this
Decision theory: Definition from Answers.com
as an example showing how to calculate perfect information, but I didn't understand it as it was rather poorly annotated.
Is this even a mathematical question? I was thinking of answering it along the lines of risk profiles (risk taking, risk neutrality and risk aversion) but I can't help but feel that this is
If you know the return, then the expected return is that return (since it occurs with probability 1, and all other alternatives (and combinations of alternatives) occur with probability 0)
October 27th 2009, 03:34 AM
Yep, I understand that, but as I said, I'm not told which of the 3 outcomes will occur. Lets say I was told that my outcome for that asset class (for which I have perfect knowledge) will be
profit with the one that generates the same returns in all states giving me losses of the same size and the remaining 2 giving me break even. In that case it's a pretty simple decision. The
problem is that not only am I not given the probabilities of the outcomes to the other two (not perfect knowledge) asset classes which makes the calculation of their respective expected profits
impossible, but I'm not given what outcome will take place for the one perfect knowledge asset class. Therefore it is impossible to asses which of the 4 is the best option.
My greatest concern is the fact that I'm given the payoff table, so I can't imagine it not needing to be used to answer the question. There are other parts to the question that state the
probabilities but they are in a completely different part and I'm almost 100% certain that those probabilities can't used to answer this part that I'm talking about. The more I read the question,
the more I believe it's a very theoretical type question but I'm just not sure.
EDIT: I now understand how to answer the question. I believe that has to be done is a discussion has to be carried out, analysing the outcomes one by one under the assumption that that is the
outcome that has taken place. So simple :). | {"url":"http://mathhelpforum.com/advanced-statistics/110581-perfect-information-print.html","timestamp":"2014-04-18T16:00:13Z","content_type":null,"content_length":"9266","record_id":"<urn:uuid:3cdaa412-b479-4f46-9c29-5c4ebbc276d6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: February 2006 [00130]
[Date Index] [Thread Index] [Author Index]
Re: determinant of sparse matrix
• To: mathgroup at smc.vnet.net
• Subject: [mg64212] Re: determinant of sparse matrix
• From: Paul Abbott <paul at physics.uwa.edu.au>
• Date: Mon, 6 Feb 2006 02:49:12 -0500 (EST)
• Organization: The University of Western Australia
• References: <ds1s8j$fse$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
In article <ds1s8j$fse$1 at smc.vnet.net>,
Mark Fisher <mark at markfisher.net> wrote:
> Are there methods for computing the determinant of (large) sparse
> matrices? Mathematica shuts down the kernel when I ask for
> Det[iden]
> where
> iden = SparseArray[{i_, i_} -> 1. {10^4, 10^4}, 0.]
> Of course I know the answer in this case, but more generally I'm
> interested in the determinant of tridiagonal positive definite matrices.
A general n x n tridiagonal matrix with diagonal entries a[i],
super-diagonal entries b[i], and sub-diagonal entries c[i], is
tri[n_] := SparseArray[{
{i_, i_} -> a[i],
{i_, j_} /; j - i == 1 -> b[i],
{i_, j_} /; i - j == 1 -> c[j]
}, {n, n}]
The determinant of such a matrix can be computed recursively using
standard properties of the determinant:
det[0] = 1;
det[1] = a[1];
det[n_] := det[n] = a[n] det[n-1]-b[n-1]c[n-1]det[n-2]
with no need to explicitly construct the matrix. As a check
Simplify[ det[7] == Det[tri[7]] ]
For a large matrix you will have to increase the $RecursionLimit, say
$RecursionLimit = 10^4
and, if the expressions for a[i], b[i], c[i], are exact, then the
resulting expression can be huge. On the other hand, for numeric
entries, the result is likely to be quite reasonable.
Paul Abbott Phone: 61 8 6488 2734
School of Physics, M013 Fax: +61 8 6488 1014
The University of Western Australia (CRICOS Provider No 00126G)
AUSTRALIA http://physics.uwa.edu.au/~paul | {"url":"http://forums.wolfram.com/mathgroup/archive/2006/Feb/msg00130.html","timestamp":"2014-04-17T04:12:27Z","content_type":null,"content_length":"35977","record_id":"<urn:uuid:2dc437b7-f894-4a32-8137-79637ef6707d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/amistre64/medals/1","timestamp":"2014-04-17T12:36:09Z","content_type":null,"content_length":"148786","record_id":"<urn:uuid:a1cfb7b9-f00c-4d17-94ed-233b2dd4eda8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
eccentric circular cam
i started by taking the 110 rad/s and dividing by 2pi to get 17.5 revolutions per sec. Then 1 stroke length per revolution which in turn gets 17.5 strokes lengths per sec to get shaft speed.
since it required a flow rate of .65, I divided that by 0.75 efficiency to get 0.867 m^3/s
then i divided that by 17.5 and set the whole thing equal to stroke length times the volume of the two cylinders.
Found the diameter and stroke length to equal 12.6 cm.
Is this correct or did i make a mistake? | {"url":"http://www.physicsforums.com/showthread.php?t=477472","timestamp":"2014-04-20T21:33:49Z","content_type":null,"content_length":"24476","record_id":"<urn:uuid:bb7e420a-6bab-4ed9-9d02-4ecf87f388c8>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00583-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] 2d binning and linear regression
josef.pktd@gmai... josef.pktd@gmai...
Tue Jun 22 14:58:29 CDT 2010
On Tue, Jun 22, 2010 at 10:09 AM, Tom Durrant <thdurrant@gmail.com> wrote:
>> the basic idea is in "polyfit on multiple data points" on
>> numpy-disscusion mailing list April 2009
>> In this case, calculations have to be done by groups
>> subtract mean (this needs to be replaced by group demeaning)
>> modeldm = model - model.mean()
>> obsdm = obs - obs.mean()
>> xx = np.histogram2d(
>> xx, xedges, yedges = np.histogram2d(lat, lon, weights=modeldm*modeldm,
>> bins=(latedges,lonedges))
>> xy, xedges, yedges = np.histogram2d(lat, lon, weights=obsdm*obsdm,
>> bins=(latedges,lonedges))
>> slopes = xy/xx # slopes by group
>> expand slopes to length of original array
>> predicted = model - obs * slopes_expanded
>> ...
>> the main point is to get the group functions, for demeaning, ... for
>> the 2d labels (and get the labels out of histogramdd)
>> I'm out of time (off to the airport soon), but I can look into it next
>> weekend.
>> Josef
> Thanks Josef, I will chase up the April list...
> If I understand what you have done above, this returns the slope of best fit
> lines forced through the origin, is that right?
Not if both variables, model and obs, are demeaned first, demeaning
removes any effect of a constant and only the slope is left over,
which can be done with the ration xx/xy.
But to get independent intercept per group, the demeaning has to be by group.
What's the size of your problem, how many groups or how many separate
regressions ?
demeaning by group has setup cost in this case, so the main speed
benefit would come if you calculate the digitize and label generation,
that histogram2d does, only ones and reuse it in later calculations.
Using dummy variables as Bruce proposes works very well if there are
not a very large number of groups, otherwise I think the memory
requirements and size of array would be very costly in terms of
> Have a great trip!
> Tom
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-June/051124.html","timestamp":"2014-04-18T16:34:58Z","content_type":null,"content_length":"5585","record_id":"<urn:uuid:9cca170c-395a-4109-b41d-0166c7f8809d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Which properties of ultrafilters on countable sets hold for filters in general?
up vote 4 down vote favorite
Background/motivation: I'm investigating the construction of models for a first-order modal system (S5) as products of classical models. Since ultraproducts are all classical models and I need
non-classical ones as well, I need to look at reduced products where the filter is not an ultrafilter. This leads me to ask about filters in general:
J.L. Bell & A.B. Slomson, in Models and Ultraproducts (p. 116), state and prove:
Lemma 1.17. Let I be a countable set. Then the collections of non-principal, $\omega$-incomplete, uniform, and regular ultrafilters on I all coincide.
Suppose I alter their definitions slightly so the above properties are all defined for filters in general, then modify the lemma to assert that it holds for filters in general. Would that be true?
Can anyone supply a reference to a proof or disproof? Thanks.
model-theory reference-request lo.logic ultrafilters
1 In the statement of Lemma 1.17, "complete" should be "incomplete. Also, when you "alter the definitions slightly" do you just mean to change "ultrafilter" to "filter" in Bell and Slomson's
definitions, or do you envision other slight changes? For example, many people define "non-principal" to mean "contains the complement of every finite set." That's equivalent to the definition in
Bell and Slomson for ultrafilters but not for filters in general. – Andreas Blass Nov 21 '10 at 22:47
Andreas - I changed "complete" to "incomplete"; thanks. -- [B&S] already define a "principal" filter as one generated by a fixed non-empty subset of I; let's say a filter is "non-principal" if
it's not principal. Let's say a filter F is "uniform" if all its members are of the same cardinality as I. It's "$\omega$-complete" if it contains the intersection of any countable collection of
its members, and "$\omega$-incomplete" otherwise. "Regular" is a bit complicated to define in a comment. – MikeC Nov 21 '10 at 23:45
add comment
1 Answer
active oldest votes
If you use literally the definitions in Bell and Slomson, only changing "ultrafilter" to "filter," and if, as in the lemma you cited, you're interested only in flters on a countable set,
then I believe non-principal is equivalent to $\omega$-incomplete, while "regular" is strictly stronger and "uniform" is strictly weaker. Unfortunately, I don't have time right now to
check this carefully, so I hope someone will object loudly if I've messed it up.
up vote 3 Now that I have a bit more time, let me add the counterexamples that justify "strictly". Partition $\omega$ into two infinite pieces $A$ and $B$. The principal filter $F_0$ generated by
down vote $A$ is uniform; that establishes the second "strictly" above. For the first, let $U$ be a nonprincipal ultrafilter that contains $B$, and let $F_1=U\cap F_0$. Then $F_1$ is nonprincipal
accepted (the intersection of all the sets in it is $A$, which isn't in it), but it is not regular. (A function $f$ as in Bell and Slomson's definition of "regular" on page 114 would have to send
each $a\in A$ to a finite set $f(a)$ that contains all elements $j$ of $\omega$, a contradiction.)
Thanks, that's very interesting. I expected the answer to be simpler. – MikeC Nov 21 '10 at 23:43
add comment
Not the answer you're looking for? Browse other questions tagged model-theory reference-request lo.logic ultrafilters or ask your own question. | {"url":"http://mathoverflow.net/questions/46871/which-properties-of-ultrafilters-on-countable-sets-hold-for-filters-in-general","timestamp":"2014-04-18T13:54:26Z","content_type":null,"content_length":"57189","record_id":"<urn:uuid:d51b62f0-cb87-4697-96ed-11837b86130a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
Batavia, IL Calculus Tutor
Find a Batavia, IL Calculus Tutor
...I have coached volleyball for 5th-7th graders. I also have a range of younger cousins that I enjoy spending quality time with. My friends ask me for help in math when they don't understand
19 Subjects: including calculus, reading, geometry, algebra 1
Hello! I have always had a passion for teaching and find it extremely fulfilling to help others succeed. I graduated from the University of California, San Diego with a degree in Biochemistry in
25 Subjects: including calculus, chemistry, biology, physics
...I passed the WyzAnt subject test with a 10/10. I have a Bachelor's degree (2010) in mathematics from the University of Illinois at Urbana-Champaign. I am certified to teach secondary (6-12)
12 Subjects: including calculus, statistics, algebra 2, algebra 1
...For the past 6 years I am teaching all levels of mathematics courses. I am also doing one-on-one tutoring for high school students for the past three years. After my tutoring all of my
students see the hike in their grades.
12 Subjects: including calculus, geometry, statistics, algebra 1
...A good foundation in understanding and solving word problems not only creates a basis in Mathematics, but also prepares the student for real-life situations. Topics include: working with
fractions, decimals, percents, positive/negative integers, rational numbers, ratios and proportions, and algebraic equations. Elementary Math, from grades 1-8, are covered.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
Related Batavia, IL Tutors
Batavia, IL Accounting Tutors
Batavia, IL ACT Tutors
Batavia, IL Algebra Tutors
Batavia, IL Algebra 2 Tutors
Batavia, IL Calculus Tutors
Batavia, IL Geometry Tutors
Batavia, IL Math Tutors
Batavia, IL Prealgebra Tutors
Batavia, IL Precalculus Tutors
Batavia, IL SAT Tutors
Batavia, IL SAT Math Tutors
Batavia, IL Science Tutors
Batavia, IL Statistics Tutors
Batavia, IL Trigonometry Tutors | {"url":"http://www.purplemath.com/Batavia_IL_calculus_tutors.php","timestamp":"2014-04-16T05:05:42Z","content_type":null,"content_length":"23815","record_id":"<urn:uuid:0e5583ec-6210-4f3d-b4a1-bb2c1cb12cad>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum - Ask Dr. Math Archives: Middle School Higher-Dimensional Geometry
This page:
3D and higher
Dr. Math
See also the
Dr. Math FAQ:
geometric formulas
naming polygons
and polyhedra
Internet Library:
About Math
factoring expressions
graphing equations
factoring numbers
conic sections/
3D and higher
Number Sense
factoring numbers
negative numbers
prime numbers
square roots
Word Problems
Browse Middle School 3D and Higher Geometry
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Do cones or cylinders have edges?
If there are 4 cubic yards of mulch in a yard, and the mulch is to be laid 3 inches thick, how many square feet can you cover?
I need to know how to compute square feet for some cabinets that I have to move in a room.
What is the definition of surface area and volume? What are the differences and similarities between surface area and volume?
How do you find the surface area and volume of a cylinder?
How do you find the surface area of a box?
Is there any difference between the formula for an unrolled cylinder and the regular surface area formula for a cylinder?
How do I find the surface area of an egg?
Can you find the surface area of a cube or other 3D rectangular object by calculating the area of the sides you can see and multiplying by 2?
How do I calculate the surface area of a sphere?
Three cubes whose edges are 2, 6, and 8 centimeters long are glued together at their faces. Compute the minimum surface area possible for the resulting figure.
If you build a frame shaped like a tetrahedron and dip it in bubble solution, why do all of the faces of the bubble collapse to a point in the middle of the tetrahedron?
Can you give me any good sources of information that a high school geometry student would understand?
Three-dimensional counterparts for lines, polygons, perpendicular lines, and collinear lines.
A piece of plywood has three holes it it: a circular hole with a diameter of 2 cm, a square hole with 2 cm sides, and a triangular hole with a base and height of 2 cm. What object could
completely plug AND pass completely through each hole?
Why is a three-legged stool steady, while a four-legged stool can be wobbly?
What is topology? What is knot theory?
What is topology?
How do you convert from true north to magnetic north?
Find the volume and surface area of a cylindical storage tank with a radius of 15 feet and a height of 30 feet.
How do you find the volume of a cylinder that is 7.5mm high and has a diameter of 4mm?
How much coffee can a tapered coffee pot hold?
What is the volume of the storage tank with a diameter 6m, height 5m?
How do you calculate the lateral area, total area, and volume of a rectangular solid with the following dimensions...
Where does the (4/3) come from in the formula for the volume of a sphere?
I have a length of round pipe that is 4" in diameter and 10 feet long. How many gallons of water will it hold if filled and sealed at both ends?
What is the volume of a pyramid with a a height of 20.6 meters and a square base with sides of 35 meters?
Can you explain the difference between the terms volume and capacity? Also mass and weight.
How do you find the weight of a whale?
Is there a good way to describe a torus to a child?
What is geometry really for?
What is the volume of a tank 10 feet in diameter, 8 feet deep, with a 2 foot conical bottom?
A description of skew lines including diagrams and definitions.
Is a piece of paper a 3D object when held up in space? Or is it a 2D object in 3D space?
Why do we use "cubed" for the volume of a figure?
Page: [<prev] 1 2 3 | {"url":"http://mathforum.org/library/drmath/sets/mid_3d.html?start_at=81&num_to_see=40&s_keyid=38893116&f_keyid=38893120","timestamp":"2014-04-19T17:54:46Z","content_type":null,"content_length":"19674","record_id":"<urn:uuid:b4534f33-55d6-414f-b1cc-37dd2c971b9b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equality of two balls in a metric space
September 28th 2013, 07:23 PM
Equality of two balls in a metric space
Is it posssible for b[x:r) and b[y;s) to be equal with x not equal to y and r not equal to s ?
I know it is possible,for instance if we consider a non empty set X with the discrete metric, then for each x in X the balls b[x;r) for r in (0,1] are equal to the singleton set {x}. Also the
balls b[x;r) for r in (1,infinity) are equal to X for all x in X.
What is the idea behind two balls with different radii and centre's being equal ?
What i don't understand is, that even in the above example, in what sense are the two balls equal ?
What is the meaning of equality of two balls in a metric space ?
In this example one ball has only singleton element {x} and the other one is the whole metric space X then how are they equal ?
I am a little confused !
September 28th 2013, 07:55 PM
Re: Equality of two balls in a metric space
Is it posssible for b[x:r) and b[y;s) to be equal with x not equal to y and r not equal to s ? I know it is possible,for instance if we consider a non empty set X with the discrete metric, then
for each x in X the balls b[x;r) for r in (0,1] are equal to the singleton set {x}. Also the balls b[x;r) for r in (1,infinity) are equal to X for all x in X.
What is the idea behind two balls with different radii and centre's being equal ?
Those of trained in the tradition of R L Moore are distrustful of empty point sets.
One of the most basic properties of metric is: if $xe y$ then $d(x,y)>0$.
Now if $xe y$ then let $r=\frac{d(x,y)}{2}>0$.
Then it should be very clear that $\mathfrak{B}\left( {x;r} \right) \cap \mathfrak{B}\left( {y;r} \right) = \emptyset$.
How could they be equal ?
September 28th 2013, 08:10 PM
Re: Equality of two balls in a metric space
The interval in which r lies is changing so isn't "r" changing ? How can we take it to be the same for both the cases ?
September 29th 2013, 03:49 AM
Re: Equality of two balls in a metric space
Please read this page on Hausdorff spaces.
Every metric space is a Hausdorff space. So points are separated.
Balls are determined by first a point and a positive real number.
This if balls are equal the the centers are the same. | {"url":"http://mathhelpforum.com/advanced-math-topics/222372-equality-two-balls-metric-space-print.html","timestamp":"2014-04-24T23:58:40Z","content_type":null,"content_length":"8215","record_id":"<urn:uuid:49ca34fc-2857-493d-bbaa-44f03d6029f6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Basic Mathematics of Bridges
Bridges serve one basic purpose that of connecting two points that are otherwise disconnected and difficult to access. Generally it provides the shortest distance between the two points. The art and
science of constructing these structures rely heavily on the mathematics and consequently the physics of stress and load. Some of the prevailing factors that influence the designs and types of
structures include the intended use and users, the available constructing materials, environmental conditions, cost, adequate manpower, the length of the span, the type of river banks, present and
projected traffic load, the free height under the bridge, aesthetic considerations and the available technology. (Salvadori, 144)
The history of mankind suggests that his survival is somewhat dependent on the ability to travel from place to place whether for war, protection or in search of the resources vital to his existence.
To facilitate this mobility, man has used several techniques to bridge gaps providing easy and short access routes. Over time, as the demand on human existence intensifies, coupled with the growth of
technology, be it small, man has devised more efficient and elaborate means of transportation by land, water and even air. Consequently better, safer and more durable bridge structures emerged. The
technology that emerged provided the world with four basic forms of bridge structures that are replicas of natural forms found in nature. "But there is still more to the bridge than just usefulness
and looks. However shallow the water, the smallest stream can seem a Rubicon. Our other aim must to consider the bridge as symbolic marker, dominating a stage that the traveler has reached on a
journey that may not, perhaps, be reversed."( Sweetman, 1)
I teach mathematics at the eighth grade level in a middle school in New Haven. The population is predominantly Black- about twelve percent Latino, five percent Asian and twenty percent white. There
is a pressing need to improve the academic performance of our students in the area of mathematics given the recent results of our standardized testing. There is a noticeable deficiency in the area of
mathematical applications - an unacceptable number of our students lack the mastery and proficiency in this area. Consequently, the development and use of a curriculum that focuses on the practical
application of the basic mathematical concepts should be helpful in addressing the student's deficiencies.
Given the level of mathematics that is taught at the eighth grade level, the mathematical focus on the topic of bridges will be on proportions.
The Basic Features of Bridges
Ancient societies are known to have the experience of bridge building. However, the Romans have left a lasting legacy of durable usable structures that have lasted for more than 2000 years. Some of
the remarkable features are the technology and the utilitarian diversity. Some of the designs were not only for human traffic but great aqueducts that supplied towns and plantations with great
quantities of water vital to the maintenance of the city. One of the most impressive aqueducts built by the Romans is the Pont du Gard, completed in18 B. C. It carried water to a ancient city,
Nemausus, a distance of more than thirty miles. The brilliance in the technology and its durability is still a testimony to the genius of the Roman engineering skill. (Dupre, 15)
There are some cardinal principles that every functioning bridge must adhere to regardless of size or utility. Every bridge must be able to tolerate and overcome the stress or forces it encounters.
For it to function, it should carry its own weight, a weight which is called dead load; it should be able to carry the weight of the traffic for which it was intended, a weight which is called live
or dynamic load. It must have the capability of resisting natural forces like wind, earthquake or other natural environmental stress or load.
The magnitude of a bridge is generally characterized by the length of its span, which is the distance from one end of the main support to the other. A plank across a stream is a span - the length of
the plank. Longer bridges might require support from below; these columns are called piers. The major supports at the two extreme ends of the bridge are called adutments.
There are four types of forces that act on a bridge singly or in combination: tension, compression, shear, and torsion. The force that pulls or stretches apart is called tension. This is the exact
opposite of compression, a force that pushes together. Shear is a sliding force while torsion produces a twisting effect. (Super Bridges)
The designs of bridges take four basic forms derived from nature: the Beam, a log across a stream, an arch that patterns rock formation, the suspension and the cable from a twig or branch hanging
from a vine. Changing technology and the availability of different kinds of materials have greatly influenced the types of designs and structures over time. (Dupre, 12)
Types of Bridges
Beam Bridge
The simplest form and the most inexpensive form of a bridge is the beam or "girder'. It takes on the natural basic form of a log fallen across a stream (Dupre, 12). It consists of a horizontal beam
which has at its end two support called piers or abutments. The beam must be able to bear its own weight plus the weight of the traffic without bending. The beam experiences the forces of the dead
and dynamic loads which produce a compression force which forces it together, while the bottom edge is pushed apart- the tension force.
A good example is that of taking a thick piece of sponge which is about two inches thick and cutting a strip about three inches in width and a length of about six to eight inches. Get two cans of the
same size to provide the support and place them about four inches apart. Use the sponge as a plank across the two supports. Cut a notch in the middle on either side. Slightly press on the middle
section, then observe the notches. You will notice that the notch closes on the topside which is the evidence of compression force while on the under side, the notch tends to widen-- evidence of the
tension force.
An ideal material used in beam bridge construction is pre-stressed concrete. It withstands very well the forces of compression and the steel rods imbedded in it resist the forces of tension--
pre-stressed concrete is one of the least expensive materials used in construction. However efficient the material, there is no compensation for beam bridges greatest limitation, its length. The
further apart the supports are placed, it's the weaker the beam bridge. Considering this factor, beam bridges seldom span more than 250 feet, which does not imply that it cannot be designed to span
great distances. In situation where it spans a great distance, the spans are daisy-chained together a connection known in the bridge world as "continuous span'. The world's longest bridge, which is
24 miles long, is a continuous span beam bridge, Lake Ponchartrain Causeway. It consists of two, two- way lane sections running parallel to each other. The Southbound Lane, which was completed in
1956, consists of 2243 separate spans, while the Northbound Lane, completed in 1969, consists of 1500 longer spans.
One big drawback of continuous spans: they are not suitable for places where there is need for the unobstructed clearance below the span given the fact that a great number of piers (support) are
Arch Bridge
Arch bridges pattern the natural rock formation. This type of bridge is considered one of the oldest and does posses great natural strength. In this design, the load of the bridge instead of pushing
straight down, it distributes the load outward along the curve of the arch to the support at the ends of the bridge, the abutments. They support the load and prevent the ends of the bridge from
spreading out.
How do the abutments support an arc bridge?
Get a 2- inch by 12- inch strip of cardboard. Bend the strip gently forming an arch. Place both ends on the table in the fashion of an inverted U. Press down gently on the top of the arch and notice
the effect on the ends. The likely outcome is the spreading outward of the ends of the arch.
Place two stacks of books about six inches apart ( the stacks must be lower than the height of the arch). Place the arched strip between the stacks of book in the inverted U fashion. Apply a gentle
force to the top of the arch and notice how the stack of books act as abutments, preventing the ends of the arch from spreading apart. Whenever the arch bridge is supporting its dead load, its own
weight, and the load of the traffic crossing the bridge, very part of the arch is under compression. Given this situation, arch bridges must be constructed from materials whish can withstand much
compression force.
The Romans, who are known for their great engineering genius at building arch bridges, used stones as their primary construction material. The upper portion of the bridge is held together by mortar
while the rest of the structure is held together by its own weight.
In present time, the availability of steel and pre-stressed concrete makes it possible to construct longer and more elegant arches. Generally, modern arches span a distance of 200-800 feet. In New
River Gorge, West Virginia, there is a spectacular arch bridge that has a span of 1700feet.
Constructing arch bridges can be somewhat tricky due to the fact that the structure is unstable until the both spans meet in the middle. To overcome this problem, sufficient scaffolding is used to
provide support for the spans until they meet in the middle, a technique known as centering. Another method of support is the use of cables that are attached to the spans at one end while the other
ends anchored to the ground on the other side of the bridge. This method allows for the use of the waterway or road below the structure while it is still under construction.
The Natchez Trace Bridge in Franklin, Tennessee, which was opened in 1994, is the first American arch bridge to be constructed from pre-cast concrete. Two arches are used to support the roadway
above. Conventionally arch bridges of comparable size require vertical supports called 'spandrels" to distribute the load of the roadway to the arches. This bridge is designed without the use of
spandrels for aesthetic reasons. Instead, most of the live load is resting on the crown of the two arches that are slightly flattened to provide better support.(Super Bridges)
Suspension Bridge
In Asia, as early as the first century B.C., suspension bridges were constructed to provide access that averted the need for piers and other forms of centering. In the earlier designs, the structures
were suspended with bamboo cables. By the sixth century A.D., the introduction of iron chains displaced the use of bamboo cables in parts of China. (Kranakis, 29)
Suspension bridges are light, strong, aesthetic and span distances from 2,000 to 7,000 feet-- much longer distances than the other types of bridges. It is the most expensive type of bridges to
construct. This type of bridge as the names suggest, suspends the roadway from huge bundles of cables, which extend from one end of the bridge to the other. The cables are supported on high sturdy
towers and are secured firmly at each end on the ground to firm anchors of solid rocks or massive concrete blocks.
The tower allows the cable to be draped over them for long distances. Most of the weight of the bridge is transferred by way of the cable to the anchorages. At the anchorage, the cables are spread
out so that the load is evenly distributed. This provides greater security. (Super Bridges)
What are the uses of the anchorages?
Get two pieces of board an inch in thickness, 4 inches by 6 inches. Place a small nail half way down at the top of the wood. Get a length of string about 36 inches in length and to the middle of the
string place a loop around each nail about 10 inches apart. Place the two pieces of wood uprightly facing each other. Allow the string to hang arching between them. Apply a weight about that of a
wallet to the loop and notice the result.
Replace the pieces of wood to their original position. This time place the free ends of the string over their corresponding sides and secure each end to a separate secure anchor allowing the length
of string between the anchors and the piece of wood to have just a slight arch. Then add the same weight to the loop between the pieces of wood and note the result. Notice that the anchorages on the
outside of the pieces of wood help to stabilize the "bridge".
In earlier times some of the cables used were made from twisted grass. During the early part of the he nineteenth century, the cables used on suspension bridges were iron chains. Presently the cables
used are made of thousands of individual steel wires bounded tightly together. Steel shows the capacity to withstand great tension making it an ideal choice for cable material.
The Humber Bridge in England, once had the world's longest center span - measuring 4,624 feet. Currently in Japan, the Akashi Kaikou Bridge, linking the islands of Honshu and Shikoku, has a center
span of 6,527 feet.
Because of the length, flexibility, and lightweight feature of suspension bridges, wind is always a serious concern. The Tacoma Narrows Bridge opened in 1940, was the third longest suspension bridge
in the world -- a span of 2,800 feet. It was noticeably quite unstable even in moderate wind conditions. Attempts were made to address the instability but with little success. On November 7, 1940
barely four months after being opened, it collapsed in a wind speed of 42 mph. It was designed to withstand wind speeds of up to 120 mph. Scientist believe that the wind matched the resonance
frequency of the bridge and it produced a destructive displacement. (Super Bridges)
The engineer who designed the structure did anticipate some form of wind displacement, but according to others, he neglected to calculate the effects of multiple pushes of the wind force on the
cables of the structure. (Kranakis,155)
Cable-Stayed Bridge
Cable-stayed bridges do look much like the suspension bridges-they all have towers and roadways hanging from cables. However, the difference lies in the manner in which the load of the roadway is
supported. In suspension bridges, the cables are draped over the towers and the ends are secured to the anchorages where the load is distributed. In cable-stayed bridges, the cables are secured to
the towers that are responsible for bearing the load of the bridge. There are several ways by which the cables can be attached: radial pattern -- the cables extend from several points on the road to
a single point at the top of the towers. The parallel pattern attaches the cables from different heights along the tower at an angle parallel to each other.
Demonstrating a cable-stayed bridge
If you are standing up with your arms stretched horizontally, your head now becomes the tower and your outstretched arms the span or roadway of the bridge. Get a partner to tie a piece of rope to
each wrist to support your arms in the horizontal position with the middle of the rope resting slightly taut on your head. Get a second piece of rope and repeat the process except this time the ends
of the rope are tied to your elbows. Now you have two cable-stayed. Try bringing your arms down to your sides. Where do you feel most of the load?
The concept of cable-stayed bridges might appear to be a relatively new concept, however as early as 1595 there have been published sketches of this type of bridge. It was not until the 20th century
that this design became acceptable. In Europe just after World War 11 when steel was scarce, the cable-stayed bridge designs were perfect for reconstructing bombed out bridges which still had
standing foundations.
In the United States it is considered a fairly new approach to bridge construction. However, the response has been positive given the aesthetic nature of the feature and the cost effectiveness. For
medium length spans between 5000 and 2,800feet, cable-stayed is fast becoming the bridge of choice. Compared to suspensions, they require less cable, can be constructed out of pre-cast concrete
sections, and are faster to construct.
In 1988, the Sunshine Skyway Bridge in Tampa, Florida won the prestigious Presidential Design Award from the National Endowment for the Arts.(Super Bridges)
1. Students will construct models to demonstrate the following:
a. Compression
b. Tension
c. Torsion
2. Students will focus on the construction of a cable-stayed bridge with the objectives of getting practice in measuring angles and linear measurements, and applying the principles of similar
3. Students will use a model of the beam bridge to discover the effect of the load on a beam at varying distances.
4. Students could choose to investigate the strength of a model cable (a thin thread) to determine the structure of a suspension bridge of a required load capacity based on the results of their
Activity 1
Aim: To demonstrate Compression, Tension and Torsion
Material: Piece of foam material 3" x 3" x 10"
Procedure to demonstrate compression:
1. A weight or load is applied to the piece of foam and students are asked to describe the change in the form of the material.
2. A small portion of about a half of an inch is removed from the center on the top and bottom surface, no deeper than about one inch. Each end of the strip of foam is supported by a thick textbook.
The books are placed about five inches apart. A slight pressure is applied to the top middle portion of the foam. Students should observe:
a) The change in the surface of the material in particular in the area of the slit.
What do you notice about the slit on top?
What does the force of compression do to a material?
What kind of force does your body exert on the floor?
Procedure to demonstrate Tension
a) Observe the change at the underside of the foam while a force is applied to the top.
What do you notice about the slit at the bottom while the force was applied?
Try to gently break a thin strip of pine wood, about a foot in length.
Where does the splintering start?
b) Pin two three inch pieces of thin strips of wood at one end forming an inverted V. Place it on the table in an inverted position on the desk. Put a rubber band around the opened end (slightly
taut) then press down on the point (vertex) of the V.
What change or changes do you observe?
Procedure to demonstrate Torsion
Material: Strip of rubber or sponge about a foot in length
Hold one end of the strip with the right hand and the other end in the left hand. Then twist the strip in a ringing motion in opposite directions.
What effect does this turning force have on the strip?
Activity 2
Aim: To show the effect of a moving load on a bridge
Material: Two spring-balances, a meter rule
Procedure: Place the balances on a firm table. Place one end of the meter rule on one of the balances and the other end on the other balance. Get a small object that has a weight of about fifty grams
starting at the end the rule.
The two balances will act as the abutments of a typical beam bridge while the weight will behave like a moving vehicular load over the bridge. The meter rule represents the beams of the bridge. To
begin: places the weight at the one end of the rule, then observe and record the separate weights measured on each balance. Move the load in increments of ten centimeters and record the weights.
a) The initial weights of the beam on balance one and on balance two
b) The changes in the weighs as the moving load changes position
Inference: If the bridge were to collapse, at what point along the beam is it most likely to occur? Why?
Activity 3
Aim: To apply the mathematics of proportion
Students will construct a beam bridge proportional to a reasonable dimension of a real beam bridge. Students could visit a local bridge and acquire the basic measurements of the width and length of
the span. They can choose a their own materials to construct the bridge. Each project should have a drawn plan that matches the proposed scales. Prior to this exercise, students should be exposed to
the principles of finding equivalent proportions. This bridge building exercise should serve as practice in practical applications of the mathematics of proportionality.
Activity 4
Aim: To construct a cable-stayed suspension bridge
Procedure: Students will identify a particular cable-stayed bridge and ascertain the dimension of the structure. They should draw a scaled rendition of the structure that they will use to construct
their model. This activity should provide useful exercises in drawing and identifying the properties of similar right triangles and finding equivalent proportions.
The tower of the bridge forms the vertical side of the right triangle. The design could have five attached cables on each side of the tower. The distance between the points of attachment of preceding
cables on the tower should be equal. Likewise, the points of attachment of the cables on the beam of the span should be equidistant.
Students should be able to calculate the length of the remaining cables after the first cable has been installed by applying the proportionality concept. For the more advanced students, the
Pythagorean theorem could be utilized.
Activity 5
Aim: To evaluate the economic effect of a particular bridge.
Procedure: Students should select a particular bridge and observe for about an hour the number of vehicles that use the bridge traveling in both directions. Later they should then assume that the
bridge is destroyed. They should now show the economic impact on one of the bridge user applying reasonable estimates. For example, find an alternate route to get to his/her original destination.
Calculate the additional distance and the increase in the time required to travel, and the cost of the additional volume of gasoline.
Inference: In what ways will this affect the rest of the other drivers who will now share road space with the additional drivers?
Activity 6
Aim: To measure the arch of a bridge in degrees
Procedure: Students should obtain a picture of an arch bridge. They should trace the arch of the bridge on sheet of paper. They should use the arch, which is now considered an arc of a circle, to
complete the missing portion of the circle. With the knowledge that a circle has 360°, they should measure the length of the arc that forms the arch of the bridge, and compare it with the length of
the total circumference of the circle. They can then write a ratio of the length of the arch of the bridge to the circumference of the circle. This ratio should be considered equivalent to the ratio
of the degrees in the arch of the bridge which is unknown but could be called "d ", to the total number of degrees in the full circle, 360°. The two ratios are used as equivalent fractions to
calculate the degrees "d " of the arch of the bridge.
Teacher and Student Resources
Calatrava, Santiago. Dymanic Equilibrum: Recent Projects, Zurich, Switzerland: Artemis Verlags, 1992. This text presents some of the most aesthetic and innovative structural designs in bridge.
Dupre, Judith. Bridges. New York, NY: Black Dog & Leventhal Publishers, Inc., 1997 This is a text which presents a great variety of outstanding bridges spanning a long time period.
K'nex Education Division. Bridge - Educator Guide. Hatfield, PA: K'nex Industries, Inc., 1996. It has basic bridge building projects.
Polladr, Jean. Math Project - Building Bridges. Paloalto, CA: Dayl Symoure Publications, 1985. This book provides bridge building projects
Salvadori, Mario. Why Buildings Stand Up. New York, NY: W.W. Norton & Co.,1980.
Super Bridges. http//www.pbs.org./nova/bgidg/meetcable.html, 2002 This is a site that provides information on the basic features of bridges in a format that is student friendly.
Contents of 2001 Volume V | Directory of Volumes | Index | Yale-New Haven Teachers Institute | {"url":"http://www.yale.edu/ynhti/curriculum/units/2001/5/01.05.06.x.html","timestamp":"2014-04-16T13:23:57Z","content_type":null,"content_length":"31842","record_id":"<urn:uuid:8e06faa2-0d90-4887-b51b-123455376f05>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of improper-integral
In calculus, an improper integral is the limit of a definite integral as an endpoint of the interval of integration approaches either a specified real number or ∞ or −∞ or, in some cases, as both
endpoints approach limits.
Specifically, an improper integral is a limit of the form
$lim_\left\{btoinfty\right\} int_a^bf\left(x\right), dx, qquad lim_\left\{ato -infty\right\} int_a^bf\left(x\right), dx,$
or of the form
$lim_\left\{cto b^-\right\} int_a^cf\left(x\right), dx,quad$
lim_{cto a^+} int_c^bf(x), dx, in which one takes a limit in one or the other (or sometimes both) endpoints . Improper integrals may also occur at an
interior point
of the domain of integration, or at multiple such points.
It is often necessary to use indefinite integrals in order to compute a value for integrals which may not exist in the conventional sense (as a Riemann integral, for instance) because of a
singularity in the function, or an infinite endpoint of the domain of integration.
The following integral does not exist as a
Riemann integral
$int_1^infty frac\left\{1\right\}\left\{x^2\right\},dx$
because the domain of integration is unbounded. (The Riemann integral is only well-defined over a bounded domain.) However, it may be assigned a value as an improper integral by interpreting it
instead as a limit
right] = 1$
The following integral also fails to exist as a Riemann integral:
$int_0^1 frac\left\{1\right\}\left\{sqrt\left\{x\right\}\right\},dx.$
Here the function is unbounded, and the Riemann integral is not well-defined for unbounded functions. However, if the integral is instead understood as the limit:
$lim_\left\{ato 0^+\right\}int_a^1frac\left\{1\right\}\left\{sqrt\left\{x\right\}\right\}, dx = lim_\left\{ato 0^+\right\}left\left[2sqrt\left\{1\right\}-2sqrt\left\{a\right\}right\right]=2,$
then the limit converges.
$int_\left\{-infty\right\}^\left\{infty\right\} f\left(x\right) , dx = int_\left\{-infty\right\}^\left\{infty\right\} f\left(x - 1/x\right) , dx$
Convergence of the integral
An improper integral converges if the limit defining it exists. Thus for example one says that the improper integral
$lim_\left\{ttoinfty\right\} int_a^t f\left(x\right), dx$
exists and is equal to
if the integrals under the limit exist for all sufficiently large
, and the value of the limit is equal to
It is also possible for an improper integral to diverge to infinity. In that case, one may assign the value of ∞ (or −∞) to the integral. For instance
$lim_\left\{btoinfty\right\}int_1^b frac\left\{1\right\}\left\{x\right\},dx = infty.$
However, other improper integrals may simply diverge in no particular direction, such as
$lim_\left\{btoinfty\right\}int_1^b xsin x, dx,$
which does not exist, even as an
extended real number
A limitation of the technique of improper integration is that the limit must be taken with respect to one endpoint at a time. Thus, for instance, an improper integral of the form
$int_\left\{-infty\right\}^infty f\left(x\right), dx$
is defined by taking two separate limits; to wit
$int_\left\{-infty\right\}^infty f\left(x\right), dx = lim_\left\{ato -infty\right\}lim_\left\{bto infty\right\} int_a^bf\left(x\right),dx,$
provided the double limit is finite. By the properties of the integral, this can also be written as a pair of distinct improper integrals of the first kind:
$lim_\left\{ato -infty\right\}int_a^cf\left(x\right), dx + lim_\left\{bto infty\right\} int_c^bf\left(x\right),dx$
is any convenient point at which to start the integration.
It is sometimes possible to define improper integrals where both endpoints are infinite, such as the Gaussian integral $int_\left\{-infty\right\}^infty e^\left\{-x^2\right\},dx = sqrt\left\{pi\right
\}$. But one cannot even define other integrals of this kind unambiguously, such as $int_\left\{-infty\right\}^infty x,dx$, since the double limit diverges:
$lim_\left\{ato -infty\right\}int_a^cx,dx+lim_\left\{btoinfty\right\}int_c^bx,dx$
In this case, one can however define an improper integral in the sense of
Cauchy principal value
$p.v.int_\left\{-infty\right\}^infty x,dx=lim_\left\{btoinfty\right\}int_\left\{-b\right\}^bx,dx = 0.$
The questions one must address in determining an improper integral are:
• Does the limit exist?
• Can the limit be computed?
The first question is an issue of mathematical analysis. The second one can be addressed by calculus techniques, but also in some cases by contour integration, Fourier transforms and other more
advanced methods.
Types of integrals
There is more than one theory of
mathematical integration
. From the point of view of calculus, the
Riemann integral
theory is usually assumed as the default theory. In using improper integrals, it can matter which integration theory is in play.
• For the Darboux integral, improper integration is necessary both for unbounded intervals (since one cannot divide the interval into finitely many subintervals of finite length) and for unbounded
functions with finite integral (since, supposing it is unbounded above, then the upper integral will be infinite, but the lower integral will be finite).
• The Riemann integral, improper integration is also necessary for unbounded intervals and for unbounded functions, as with the Darboux integral.
• The Lebesgue integral deals differently with unbounded domains and unbounded functions, so that often an integral which only exists as an improper Riemann integral will exist as a (proper)
Lebesgue integral, such as $int_1^infty frac\left\{1\right\}\left\{x^2\right\},dx$. On the other hand, there are also integrals that have an improper Riemann integral do not have a (proper)
Lebesgue integral, such as $int_0^infty frac\left\{sin x\right\}\left\{x\right\},dx$. The Lebesgue theory does not see this as a deficiency: from the point of view of measure theory, $int_0^infty
frac\left\{sin x\right\}\left\{x\right\},dx = infty - infty$ and cannot be defined satisfactorily. In some situations, however, it may be convenient to employ improper Lebesgue integrals as is
the case, for instance, when defining the Cauchy principal value.
• For the Henstock-Kurzweil integral, improper integration is not necessary, and this is seen as a strength of the theory: it encompasses all Lebesgue integrable and improper Riemann integrable
Improper Riemann integrals and Lebesgue integrals
In some cases, the integral
$int_a^c f\left(x\right),dx,$
can be defined as an integral (a Lebesgue integral, for instance) without reference to the limit
$lim_\left\{bto c^-\right\}int_a^b f\left(x\right),dx,$
but cannot otherwise be conveniently computed. This often happens when the function f being integrated from a to c has a vertical asymptote at c, or if c = ∞ (see Figures 1 and 2). In such cases, the
improper Riemann integral allows one to calculate the Lebesgue integral of the function. Specifically, the following theorem holds :
• If a function f is Riemann integrable on [a,b] for every b ≥ a, and the partial integrals
are bounded as b → ∞, then the improper Riemann integrals
$int_a^infty f\left(x\right), dx,quadmbox\left\{and\right\} int_a^infty |f\left(x\right)|, dx$
both exist. Furthermore, f is Lebesgue integrable on [a, ∞), and its Lebesgue integral is equal to its improper Riemann integral.
For example, the integral
can be interpreted alternatively as the improper integral
or it may be interpreted instead as a
Lebesgue integral
over the set (0, ∞). Since both of these kinds of integral agree, one is free to choose the first method to calculate the value of the integral, even if one ultimately wishes to regard it as a
Lebesgue integral. Thus improper integrals are clearly useful tools for obtaining the actual values of integrals.
In other cases, however, the integral from a to c is not even defined, because the integrals of the positive and negative parts of f(x) dx from a to c are both infinite, but nonetheless the limit may
exist. Such cases are "properly improper" integrals, i.e. their values cannot be defined except as such limits. For example,
cannot be interpreted as a Lebesgue integral, since
This is therefore a "properly" improper integral, whose value is given by
One can speak of the singularities of an improper integral, meaning those points of the extended real number line at which limits are used.
Such an integral is often written symbolically just like a standard definite integral, perhaps with infinity as a limit of integration. But that conceals the limiting process. By using the more
advanced Lebesgue integral, rather than the Riemann integral, one can in some cases bypass this requirement, but if one simply wants to evaluate the limit to a definite answer, that technical fix may
not necessarily help. It is more or less essential in the theoretical treatment for the Fourier transform, with pervasive use of integrals over the whole real line.
Cauchy principal value
Consider the difference in values of two limits:
$lim_\left\{arightarrow 0+\right\}left\left(int_\left\{-1\right\}^\left\{-a\right\}frac\left\{dx\right\}\left\{x\right\}+int_a^1frac\left\{dx\right\}\left\{x\right\}right\right)=0,$
$lim_\left\{arightarrow 0+\right\}left\left(int_\left\{-1\right\}^\left\{-a\right\}frac\left\{dx\right\}\left\{x\right\}+int_\left\{2a\right\}^1frac\left\{dx\right\}\left\{x\right\}right\right)=
-ln 2.$
The former is the Cauchy principal value of the otherwise ill-defined expression
$int_\left\{-1\right\}^1frac\left\{dx\right\}\left\{x\right\}\left\{ \right\}$
left(mbox{which} mbox{gives} -infty+inftyright).
Similarly, we have
$lim_\left\{arightarrowinfty\right\}int_\left\{-2a\right\}^afrac\left\{2x,dx\right\}\left\{x^2+1\right\}=-ln 4.$
The former is the principal value of the otherwise ill-defined expression
$int_\left\{-infty\right\}^inftyfrac\left\{2x,dx\right\}\left\{x^2+1\right\}\left\{ \right\}$
left(mbox{which} mbox{gives} -infty+inftyright).
All of the above limits are cases of the indeterminate form ∞ − ∞.
These pathologies do not affect "Lebesgue-integrable" functions, that is, functions the integrals of whose absolute values are finite.
An indefinite integral may diverge in the sense that the limit defining it may not exist. In this case, there are more sophisticated definitions of the limit which can produce a convergent value for
the improper integral. These are called
One summability method, popular in Fourier analysis, is that of Cesàro summation. The integral
$int_0^infty f\left(x\right),dx$
is Cesàro summable (C, α) if
$lim_\left\{lambdatoinfty\right\}int_0^lambdaleft\left(1-frac\left\{x\right\}\left\{lambda\right\}right\right)^alpha f\left(x\right), dx$
exists and is finite . The value of this limit, should it exist, is the (C, α) sum of the integral.
An integral is (C, 0) summability precisely when it exists as an improper integral. However, there are integrals which are (C, α) summable for α > 0 which fail to converge as improper integrals (in
the sense of Riemann or Lebesgue). One example is the integral
$int_0^inftysin x, dx$
which fails to exist as an improper integral, but is (C,α) summable for every α > 0, with value 1. This is an integral version of Grandi's series.
External links | {"url":"http://www.reference.com/browse/improper-integral","timestamp":"2014-04-16T16:52:30Z","content_type":null,"content_length":"86644","record_id":"<urn:uuid:1b41191d-875c-46e3-9242-812fa25764e0>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Dirac(m,n).... what is it?
Replies: 2 Last Post: Nov 17, 1996 10:03 AM
Messages: [ Previous | Next ]
Dirac(m,n).... what is it?
Posted: Nov 12, 1996 1:50 PM
I am doing Fourier transforms, and every once in awhile I get a Dirac delta
function out of it. In the case of Dirac(w), I know that the value is 0
unless w=0, in which case it is infinite. What does the notation Dirac(m,n)
stand for? It could be like the Kronecker delta, where it is zero unless m=n,
but I am not sure...
Date Subject Author
11/12/96 Dirac(m,n).... what is it? michael kennan
11/13/96 Re: Dirac(m,n).... what is it? Cleve Moler
11/17/96 Re: Dirac(m,n).... what is it? Johan Carlson | {"url":"http://mathforum.org/kb/thread.jspa?threadID=241248","timestamp":"2014-04-19T23:33:27Z","content_type":null,"content_length":"18406","record_id":"<urn:uuid:6c851508-76d7-49f0-b578-91ce60aaa1e5>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: Re: hours:minutes:seconds
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
RE: st: Re: hours:minutes:seconds
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Re: hours:minutes:seconds
Date Tue, 17 Dec 2002 10:29:38 -0000
Gary Longton replied to Christa Scholtz and Radu Ban
> > I have a data field where the duration of an event is
> recorded in this
> > format:
> >
> > hour:minute:second
> >
> > eg: an event that lasts 5 hours, 27 minutes and 13
> seconds is 5:27:13.
> >
> > How do I get Stata to convert this into total number of seconds?
> and Radu Ban replied:
> > you can try sth like:
> > if your_time is the variable you have for time
> >
> > gen str8 stime = your_time
> > gen shours = substr(stime, 1, 2) *takes the first two digits
> > gen hours = real(shour) *reads first two digits as number
> > gen sminutes = substr(stime, 4, 2) *takes digits 4 and 5
> > gen minutes = real(sminutes)
> > gen sseconds = substr(stime, 7, 2)
> > gen seconds = real(sseconds)
> >
> > *now add up
> > gen totsecs = 3600*hours + 60*minutes + seconds
> Radu's approach assumes that the original time string will
> always have
> 2-digit hours, which will often be too restrictive, and
> won't work for
> Christa's example.
> An easier one-step approach for parsing the time string into the 3
> component numeric variables would be to use Nick Cox's
> -split- program
> (available on SSC), which could be followed with Radu's
> expression for
> total seconds.
Anyone in this territory might want to know of
various -egen- functions in the -egenmore- package
on SSC.
dhms(d h m s) [ , format(format) ] creates a date
variable from Stata date variable or date d with a fractional part
reflecting the number of hours, minutes and seconds past midnight.
h can be a variable containing integers between 0 and 23
inclusive or a single integer in that range. m and s can be
containing integers between 0 and 59 or single integer(s) in that
range. Optionally a format, usually but not necessarily a date
can be specified. The resulting variable, which is by default
as a double, may be used in date and time arithmetic in which the
time of day is taken into account.
elap(time) [ , format(format) ] creates a string variable
which contains the number of days, hours, minutes and seconds
associated with an integer variable containing a number of
elapsed seconds. Such a variable might be the result of date/time
arithmetic, where a time interval between two timestamps has been
expressed in terms of elapsed seconds. Leading zeroes are included
in the hours, minutes, and seconds fields. Optionally, a format
can be specified.
elap2(time1 time2) [ , format(format) ] creates a string variable
which contains the number of days, hours, minutes and seconds
associated with a pair of time values, expressed as fractional
where time1 is no greater than time2. Such time values may be
by function dhms(). elap2() expresses the interval between these
time values in readable form. Leading zeroes are included in the
minutes, and seconds fields. Optionally, a format can be
hmm(timevar) generates a string variable showing timevar, interpreted
as indicating time in minutes, represented as hours and minutes in
the form "[...h]h:mm". For example, times of 9, 90, 900 and
9000 minutes would be represented as "0:09","1:30", "15:00"
and "150:00". The option round(#) rounds the result: round(1)
rounds the time to the nearest minute. The option trim trims the
result of leading zeros and colons, except that an isolated 0 is
not trimmed. With trim "0:09" is trimmed to "9" and "0:00"
is trimmed to "0".
hmm() serves equally well for representing times in seconds in
minutes and seconds in the form "[...m]m:ss".
hmmss(timevar) generates a string variable showing timevar,
as indicating time in seconds, represented as hours, minutes and
in the form "[...h:]mm:ss". For example, times of 9, 90, 900 and
9000 seconds would be represented as "00:09","01:30", "15:00"
and "2:30:00". The option round(#) rounds the result: round(1)
rounds the time to the nearest second. The option trim trims the
result of leading zeros and colons, except that an isolated 0 is
not trimmed. With trim "00:09" is trimmed to "9" and "00:00"
is trimmed to "0".
hms(h m s) [ , format(format) ] creates an elapsed
time variable containing the number of seconds past midnight.
h can be a variable containing integers between 0 and 23
inclusive or a single integer in that range. m and s can be
containing integers between 0 and 59 or single integer(s) in that
range. Optionally a format can be specified.
tod(time) [ , format(format) ] creates a string
variable which contains the number of hours, minutes and seconds
associated with an integer in the range 0 to 86399, one less than
the number of seconds in a day. Such a variable is produced by
hms(), which see above. Leading zeroes are included in the hours,
minutes, and seconds fields. Colons are used as separators.
Optionally a format can be specified.
Kit Baum (baum@bc.edu) is the author of dhms(), elap(), elap2(),
hms() and tod().
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2002-12/msg00355.html","timestamp":"2014-04-20T13:53:05Z","content_type":null,"content_length":"10374","record_id":"<urn:uuid:c2689503-f6b0-435c-8f86-3bff0a8ac35c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-dev] subversion commit policy for rename files question
Robert Kern robert.kern@gmail....
Fri Nov 21 23:25:57 CST 2008
On Fri, Nov 21, 2008 at 23:10, <josef.pktd@gmail.com> wrote:
> On Fri, Nov 21, 2008 at 11:30 PM, Robert Kern <robert.kern@gmail.com> wrote:
>> On Fri, Nov 21, 2008 at 21:29, <josef.pktd@gmail.com> wrote:
>>> I want to do the renaming and importing in __all__ discussed here:
>>> http://projects.scipy.org/pipermail/scipy-dev/2008-November/010241.html
>>> For this I had to resolve some circular imports and add some missing
>>> functions to __all__.
>>> Is there a policy whether renames should be committed separately or
>>> can it be together with changes in the file,
>>> or it doesn't matter?
>> I take my "doesn't matter" back. Yes, please do file renames and
>> internal modifications separately.
>> --
>> Robert Kern
> Thanks, I will do it in several steps.
> All tests pass (after making sure that no old stuff is lying around),
> but not every function is tested.
> Also np.lookfor picks it up
> Robert,
> given our previous discussion, and the wikipedia definition of
> percentileofscore, I don't see any reason not to do a very simple
> implementation.
> Initially, I thought the proposed implementation can be vectorized,
> but I don't see how. Without vectorization, this version looks much
> simpler and, I guess, should be about as fast:
> import numpy as np
> def percentileofscore(a, score, kind = 'mean' ):
> a=np.array(a)
> n = len(a)
> if kind == 'strict':
> return sum(a<score) / float(n) * 100
> elif kind == 'weak':
> return sum(a<=score) / float(n) * 100
> elif kind == 'mean':
> return (sum(a<score) + sum(a<=score)) * 50 / float(n)
> else:
> raise NotImplementedError
> If you think this is ok, I put it in svn, I'm not sure whether to call
> the type, "kind", doctest pass the same as previous version
I'd raise a ValueError with a message stating that 'strong', 'weak',
and 'mean' are the only correct values, but otherwise, that looks
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
More information about the Scipy-dev mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-dev/2008-November/010275.html","timestamp":"2014-04-16T22:15:41Z","content_type":null,"content_length":"5724","record_id":"<urn:uuid:d06126ac-b3e7-49bf-83a2-3d35c62e8949>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |
F19AB2 Applied mathematics B
(Third year module given in the second semester.)
Lecturer. Simon Malham, Room CM T.21, Mathematics Department.
Contact. email: simonm@ma.hw.ac.uk and tel: 0131 451 3254.
Lectures. Monday 2:15pm in SR320, Tuesday 10:15am in SR320 and Friday 9:15am in SR320.
Tutorials. One a week: Friday at 10:15am in SR320.
Webpages. This course homepage where you can download lecture notes, further handouts, solutions to the exercises, past papers is:
More information about the module can be found at:
which can also be reached from the Mathematics Department's Homepage --> Teaching --> Information for students already on course --> Modules mainly taken by Mathematics and AMS students.
Vision. You can also find all the lecture notes, handouts, solutions to exercises and past papers and so forth on the module's VISION page.
Course aims and objectives. The objective of the module is to introduce some fundamental ideas and techniques in Applied Mathematics.
• Calculus of variations: Calculus of variations: variational derivative; Euler-Lagrange equations; examples including the Brachistochrone, isoperimetrical, and soap bubble problems; extensions to
higher derivatives, several dependent variables; constraints and Lagrange multipliers. (6 lectures)
• Lagrangian mechanics: Action; Hamilton's Principle; Lagrange's equations; examples including the Kepler and simple pendulum problems; derive incompressible Euler equations; Hamilton's equations;
Poisson brackets and the Hamilton-Jacobi partial differential equation. (5 lectures)
• Fluid equations: Continuum Hypothesis; Lagrangian and Eulerian formulations; material derivative; continuity equation; balance of momentum; Transport Theorem; Equation of State;
incompressibility. (4 lectures)
• Flows and applications: isentropic fluids and Bernoulli's Theorem; streamlines; vorticity; Couette flow. (3 lectures)
• Fourier Analysis: Full and half range Fourier series (3 lectures)
• An introduction to PDEs: Simple PDEs; Separation of Variables; Solution of Heat equation, Laplace's equation and the wave equation making use of Fourier series. (8 lectures)
Assessment. The continuous assessment consists of a one hour midterm exam counting for 10 percent of the final mark, homework counting for 5 percent of the final mark (details below) and a two-hour
final exam at the end of term which counts for 85 percent of the course mark. The midterm will be held on
Tuesday February 18th
The homework assessment consists of the specified exercises in the table below, to be handed in before or on the dates indicated. You can score up to 20 marks per homework. Your best 5 homeworks out
of the 7 will be added together to generate your overall continuous assessment score (for maximum credit you need to score a total of 100 marks).
There is a resit in August for the ordinary course. The resit assessment is purely on the basis of a two-hour exam.
Calculators. In the final exam you will only be allowed to use either the Casio fx-85WA or fx-85MS. This is a University regulation. Personally, I do not think you will need a calculator.
Contract. Students are expected to read the notes in this booklet before, during and after the lectures and tutorials. Lectures will act as a more formal forum for the lecturer to explain the ideas
of the course and give alternative examples, whilst tutorials will take a less formal and more personal form. There are exercises at the end of each chapter and students must attempt these.
Mathematics is best learned through grappling with the underlying ideas presented in lectures and then tackling problems given in the exercises.
You cannot learn to swim by reading a book about it!
Hence try the exercises, and if you get stuck, ask the lecturer either after a lecture, during the tutorials. It is vital that you can solve problems proficiently. If you need help, then
Ask, ask, ask!
Attendance sheets. Students will be required to sign an attendance sheet with their initials in every lecture and tutorial. If any one student misses three consecutive such contact events, or more
than one-third of them overall up until that date, then their personal mentor will be contacted.
Evaluations. At the end the course students will have an opportunity to fill out formal university evaluations on the course.
Books. The two main recommended books are V.I. Arnold and Chorin and Marsden (see the bibliographies of the lecture notes for details).
• The midterm exam will be in week 6 on Tuesday February 18th at 10:15am in SR320. It lasts for one hour. It will cover the lectures An introduction to Lagrangian and Hamiltonian mechanics.
Electronic resources
Syllabus (from the official department module pages)
Lecture notes.
Exercises and solutions. The solutions will be made available as the course progresses.
┃ Topic/Exercise sheet │Date out│Solutions ┃
┃An introduction to Lagrangian and Hamiltonian mechanics │Beg Feb │Solutions ┃
┃ │ │ ┃
┃ Introductory ideal fluid mechanics │Beg Mar │Solutions ┃
┃ │ │ ┃
┃ PDEs/Separation of variables │Mid Mar │Solutions ┃
┃ │ │ ┃
Movies. These are the movies shown during the course. Download them and use them freely.
Exam papers. Hardcopies of solutions for the speciman exam paper can be obtained from me later in the semester. We will discuss this in week 10 during the usual lecture times.
Homework timetable
This may change slightly as we progress so keep checking this webpage.
There are 7 homeworks here, each is worth 20 marks. Your best 5 will be used to make your final score (which is worth 5% of your overall mark for the module).
┃ Exercises │Date due┃
┃ Euler-Lagrange alternative form + Soap film │Jan 28th┃
┃ Hanging rope │Feb 4th ┃
┃ Central force field + Spherical pendulum │Feb 11th┃
┃ None---midterm this day. │Feb 18th┃
┃Channel shear flow + steady oscillating channel flow │Feb 25th┃
┃ Hurricane + Clepsydra │Mar 11th┃
┃ Fourier series: questions 1 + 3 │Mar 18th┃
┃ Heat equation: questions 1 + 2 │Mar 25th┃
This webpage and its content was started on 26/1/2009.
Please feel free to download and use any of the material accessible from this page---provided that it is not used for commercial gain.
Last updated: 27/3/2014.
simonm [at] ma.hw.ac.uk | {"url":"http://www.macs.hw.ac.uk/~simonm/F19AB2/index.html","timestamp":"2014-04-19T19:33:34Z","content_type":null,"content_length":"16848","record_id":"<urn:uuid:f0525c11-8a48-44e9-826f-a1abb63f24a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Design of Optical XOR, XNOR, NAND, and OR Logic Gates Based on Multi-Mode Interference Waveguides for Binary-Phase-Shift-Keyed Signal
We present a novel design method and potential application for optical XOR, XNOR, NAND, and OR logic gates for binary-phase-shift-keyed signal processing devices. The devices are composed of
multi-mode interference waveguides and convert the phase information of the input signal to amplitude at the output. We apply the finite element method for numerical simulations, and the evaluated
least ON to OFF logic-level contrast ratios for the XOR, XNOR, NAND, and OR logic gates are 21.5 dB, 21.5 dB, 22.3 dB, and 22.3 dB, respectively. The proposed logic gates are extremely promising
signal processing devices for binary-phase-shift-keyed signals in packet switching systems.
© 2011 IEEE
Yuhei Ishizaka, Yuki Kawaguchi, Kunimasa Saitoh, and Masanori Koshiba, "Design of Optical XOR, XNOR, NAND, and OR Logic Gates Based on Multi-Mode Interference Waveguides for Binary-Phase-Shift-Keyed
Signal," J. Lightwave Technol. 29, 2836-2846 (2011)
Sort: Year | Journal | Reset
1. V. Van, T. A. Ibrahim, P. P. Absil, F. G. Johnson, R. Grover, P.-T. Ho, "Optical signal processing using nonlinear semiconductor microring resonators," IEEE J. Sel. Top. Quantum Electron. 8,
705-713 (2002).
2. Z. Li, Z. Chen, B. Li, "Optical pulse controlled all-optical logic gates in SiGe/Si multimode interference," Opt. Exp. 13, 1033-1038 (2005).
3. Z. Zhu, W. Ye, J. Ji, X. Yuan, C. Zen, "High-contrast light-by-light switching and AND gate based on nonlinear photonic crystals," Opt. Exp. 14, 1783-1788 (2006).
4. T. Fujisawa, M. Koshiba, "All-optical logic gates based on nonlinear slot-waveguide couplers," J. Opt. Soc. Amer. B 23, 684-691 (2006).
5. Z. Li, G. Li, "Ultrahigh-speed reconfigurable logic gates based on four wave mixing in a semiconductor optical amplifier," IEEE Photon. Technol. Lett. 18, 1341-1343 (2006).
6. Y. Zhang, Y. Zhang, B. Li, "Optical switches and logic gates based on self-collimated beams in two-dimensional photonic crystals," Opt. Exp. 15, 9287-9292 (2007).
7. Y. Wu, T. Shih, "New all-optical logic gates based on the local nonlinear Mach–Zehnder interferometer," Opt. Exp. 16, 248-257 (2008).
8. Q. Liu, Z. Quyang, C. J. Wu, C. P. Liu, J. C. Wang, "All-optical half adder based on cross structures in two-dimensional photonic crystals," Opt. Exp. 16, 18992-19000 (2008).
9. P. Andalib, N. Granpayeh, "All-optical ultracompact photonic crystal AND gate based on nonlinear ring resonators," J. Opt. Soc. Amer. B 26, 10-16 (2009).
10. B. M. Isfahani, T. A. Tameh, N. Granpayeh, A. R. M. Javan, "All-optical NOR gate based on nonlinear photonic crystal microring resonators," J. Opt. Soc. Amer. B 26, 1097-1102 (2009).
11. J. Bai, J. Wang, J. Jiang, X. Chen, H. Li, Y. Qiu, Z. Qiang, "Photonic NOT and NOR gates based on a single compact photonic crystal ring resonator," Appl. Opt. 48, 6923-6927 (2009).
12. S. Zeng, Y. Zhang, B. Li, E. Y. Pun, "Ultrasmall optical logic gates based on silicon periodic dielectric waveguides," Photon. Nanostruct. 8, 32-37 (2010).
13. M. Zhang, L. Wang, P. Ye, "All-optical XOR logic gates: Technologies and experiment demonstrations," IEEE Commun. Mag. 43, S19-S24 (2005).
14. Z. Zalevsky, A. Rudnitsky, "Nano photonic and ultra fast all-optical processing modules," Opt. Exp. 13, 10272-10284 (2005).
15. D. K. Hunter, I. Andonovic, "Approaches to optical internet packet switching," IEEE Commun. Mag. 38, 116-122 (2000).
16. D. J. Blumenthal, B. Olsson, G. Rossi, T. E. Dimmick, L. Rau, M. Masanovic, O. Lavrova, R. Doshi, O. Jerphagnon, J. E. Bowers, V. Kaman, L. A. Coldren, J. Barton, "All-optical label swapping
networks and technologies," J. Lightw. Technol. 18, 2058-2075 (2000).
17. R. Clavero, J. M. Martínez, F. Ramos, J. Martí, "All-optical packet routing scheme for optical label-swapping networks," Opt. Exp. 12, 4326-4332 (2004).
18. R. Vilar, J. M. Martínez, F. Ramos, J. Martí, "All-optical dedcrementing of a packet's time-to-live (TTL) field using logic XOR gates," Opt. Exp. 16, 19734-19740 (2008).
19. K. Mishina, A. Marta, S. Mitani, T. Miyahara, K. Ishida, K. Shimizu, T. Hatta, K. Motoshima, K. Kitayama, "NRZ-OOK-to-RZ-BPSK modulation-format conversion using SOA-MZI wavelength converter," J.
Lightw. Technol. 24, 3751-3758 (2006).
20. Z. Liu, S. Xiao, L. Cai, Z. Liang, "Multi-format receiver for non-return-to-zero binary-phase-shift-keyed and non-return-to-zero amplitude-shift-keyed signals," Opt. Exp. 17, 2918-2925 (2009).
21. A. H. Gnauck, S. Chandrasekhar, J. Leuthold, L. Stulz, "Demonstration of 42.7-Gb/s DPSK receiver with 45 photons/bit sensitivity," IEEE Photon. Technol. Lett. 15, 99-101 (2003).
22. A. H. Gnauck, G. Raybon, S. Chandrasekhar, J. Leuthold, C. Doerr, L. Stulz, E. Burrows, "25$\,\times\,$40-Gb/s copolarized DPSK transmission over 12$\,\times\,$100-km NZDF with 50-GHz channel
spacing," IEEE Photon. Technol. Lett. 15, 467-469 (2003).
23. T. Mizuochi, K. Ishida, T. Kobayashi, J. Abe, K. Kinjo, K. Motoshima, K. Kasahara, "A comparative study of DPSK and OOK WDM transmission over transoceanic distances and their performance
degradations due to nonlinear phase noise," J. Lightw. Technol. 21, 1933-1943 (2003).
24. M. L. Gilmore, F. R. Steel, J. A. Tempka, Digital Detection System for Differential Phase Shift Keyed Signal U.S. Patent 3 993 956 (1976).
25. C. Xu, X. Liu, X. Wei, "Differential phase-shift keying for high spectral efficiency optical transmissions," IEEE J. Sel. Top. Quantum Electron. 10, 281-293 (2001).
26. A. H. Gnauck, P. J. Winzer, "Optical phase-shift-keyed transmission," J. Lightw. Technol. 23, 115-130 (2005).
27. N. Deng, K. Chan, C.-K. Chan, L.-K. Chen, "An all-optical XOR logic gate for high-speed RZ-DPSK signals by FWM in semiconductor optical amplifier," IEEE J. Sel. Top. Quantum Electron. 12, 702-707
28. J. Xu, X. Zhang, Y. Zhang, J. Dong, D. Liu, D. Huang, "Reconfigurable all-optical logic gates for multi-input differential phase-shift keying signals: Design and experiments," J. Lightw. Technol.
27, 5268-5275 (2009).
29. J. Wang, J. Sun, X. Zhang, D. Huang, M. M. Fejer, "Ultrafast all-optical three-input Boolean XOR operation for differential phase-shift keying signals using periodically poled lithium niobate,"
Opt. Lett. 33, 1419-1421 (2008).
30. E. Ip, A. P. T. Lau, D. J. F. Barros, J. M. Kahn, "Coherent detection in optical fiber system," Opt. Exp. 16, 753-791 (2008).
31. V. Ferrero, S. Camatel, "Optical phase locking techniques: An overview and a novel method based on single side sub-carrier modulation," Opt. Exp. 16, 818-828 (2008).
32. M. J. Fice, A. Chiuchiarelli, E. Ciaramella, A. J. Seeds, "Homodyne Cherent optical receiver using an optical injection phase-lock loop," J. Lightw. Technol. 29, 1152-1164 (2011).
33. J. M. Kahn, K.-P. Ho, "Spectral efficiency limits and modulation/detection techniques for DWDM system," IEEE J. Sel. Top. Quantum Electron. 10, 259-272 (2004).
34. P. A. Besse, M. Bachmann, H. Melchior, L. B. Soldano, M. K. Smit, "Optical bandwidth and fabrication tolerances of multimode interference couplers," J. Lightw. Technol. 12, 1001-1009 (1994).
35. D. X. Dai, S. L. He, "Optimization of ultracompact polarization-insensitive multimode interference couplers based on ridge waveguides," IEEE Photon. Technol. Lett. 18, 2017-2019 (2006).
36. J. Xiao, X. Liu, X. Sum, "Design of an ultracompact MMI wavelength demultiplexer in slot waveguide structures," Opt. Exp. 15, 8300-8308 (2007).
37. Y. Tsuji, M. Koshiba, "Finite element method using port truncation by perfectly matched layer boundary conditions for optical waveguide discontinuity problems," J. Lightw. Technol. 20, 463-468
38. L. B. Soldano, C. M. Pennings, "Optical multi-mode interference devices based on self-imaging: Principles and applications," J. Lightw. Technol. 13, 615-627 (1995).
39. E. Bonek, W. R. Leeb, A. L. Scholtz, H. K. Phillipp, "Optical PLLs see the light," Microwaves & RF 22, 65-70 (1983).
40. K.-P. Ho, Phase-Modulated Optical Communication Systems (Springer Science Business Media Inc, 2005).
41. F. L. Texeira, W. C. Chew, "General closed-form PML constitutive tensors to match arbitrary bianisotropic and dispersive linear media," IEEE Microw. Guided Wave Lett. 8, 223-225 (1998).
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article » | {"url":"http://www.opticsinfobase.org/jlt/abstract.cfm?uri=jlt-29-18-2836","timestamp":"2014-04-17T18:49:20Z","content_type":null,"content_length":"108436","record_id":"<urn:uuid:a449cb2d-4cd1-4f00-ac7a-c45a662a48fb>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Factor completely, then place the answer in the proper location on the grid. 6a2 - 8a - 30
• one year ago
• one year ago
Best Response
You've already chosen the best response.
First factor out 2 from all the terms: 2(3a^2 -4a - 15) Factor (3a^2 -4a - 15). : 2(3a - 5)(a - 3)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50ab3e8ae4b06b5e49335453","timestamp":"2014-04-19T01:57:29Z","content_type":null,"content_length":"27717","record_id":"<urn:uuid:b1155aa6-6e3b-456c-83c2-d70134c42125>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: February 2004 [00006]
[Date Index] [Thread Index] [Author Index]
Re: Defining a function in module problem?
• To: mathgroup at smc.vnet.net
• Subject: [mg45999] Re: Defining a function in module problem?
• From: bobhanlon at aol.com (Bob Hanlon)
• Date: Mon, 2 Feb 2004 05:20:41 -0500 (EST)
• References: <bvhpvs$8dh$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
The z in the definition of g is in a global context and the z in the definition
of f is in a local context. They are different. Using only the local context
Which would actually be written
a[x_] := x/3;
Bob Hanlon
In article <bvhpvs$8dh$1 at smc.vnet.net>, jflanigan at netzero.net (jose flanigan)
<< why does this
a[x_] := Module[{f, g}, g = z/3; f = Function[z, Evaluate[g]]; f[x]]
a[1] = z/3
instead of
I don't understand the philosophy here. | {"url":"http://forums.wolfram.com/mathgroup/archive/2004/Feb/msg00006.html","timestamp":"2014-04-19T04:45:16Z","content_type":null,"content_length":"34810","record_id":"<urn:uuid:d48505ea-3c74-46b1-8410-cc5eeb853917>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What is the solution to the rational equation 4/3+3/x+5=21/3x+15 ?
Best Response
You've already chosen the best response.
4/3+3/x+5 = 21/3x + 15 4/3+5 + 3/x = 7/x + 15 4/3+5-15 = 7/x+3/x -8 - 2/3 = 10/x x = 10/(-8-2/3)
Best Response
You've already chosen the best response.
\[4/3+3/x+5=21/3x+15\] First you simplify the equation by combining like terms and simplifying fractions: \[6 \frac{1}{3} +\frac{3}{x}=\frac{7}{x}+15\] Then you use the subtraction property of
equality and subtract \[(\frac{3}{x}+15)\]from both sides and get: \[-8\frac{1}{3}=\frac{4}{x}\] When you multiply both sides by x you get: \[-\frac{25}{3}x=4\] Divide both sides by the
coefficient of x to get that \[x=\frac{12}{25}\] :)
Best Response
You've already chosen the best response.
4/3 + 3/(x+5) =21/(3x+15) 4(x+5)+9 21 --------- = --------- 3(x+5) 3(x+5) 4x+20+9=21 4x+29=21 4x=21-29 4x=-8 x=-8/4 x=-2
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4ef4f37de4b01ad20b50745e","timestamp":"2014-04-18T14:00:36Z","content_type":null,"content_length":"32897","record_id":"<urn:uuid:63f483b2-79da-4a0b-be0f-127f067b0a50>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Medford Algebra Tutor
Find a West Medford Algebra Tutor
...I teach a variety of levels of students from advanced to students with special needs. I show students a variety of ways to answer a problem because my view is that as long as student can answer
a question, understand how they got the answer and can explain how they do so, it doesn't matter the m...
5 Subjects: including algebra 1, algebra 2, precalculus, study skills
...I currently work as software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or review sheets that they have been assigned. I prefer to focus on
examples from each section, rather than each specific problem, to make sure they understand all of the concepts.
17 Subjects: including algebra 2, geometry, algebra 1, economics
I am currently employed as a Development Engineer for a medical device company after having graduated from Boston University with a degree in Biomedical Engineering. As a student athlete
throughout college, I learned the value of time management and how to best utilize my strong work ethic, and suc...
15 Subjects: including algebra 2, algebra 1, chemistry, English
...The lessons we teach ourselves are the ones we remember best. Once I understand what concept a student needs to be taught or clarified, I devise a series of problems or logic steps that the
student can solve in succession. Ultimately this will allow the student to start from a place of confiden...
12 Subjects: including algebra 1, algebra 2, physics, chemistry
...I teach high school through college students and can teach in person or, if convenient, via Skype. I don't want to take your tests or quizzes, so I may need to verify in some way that I'm not
doing that! If you happen to be Mandarin Chinese I know a little of your language: yi, ar, san, si ...I've taught Discrete Mathematics for undergraduates at SUNY Cortland.
14 Subjects: including algebra 2, algebra 1, calculus, trigonometry | {"url":"http://www.purplemath.com/west_medford_algebra_tutors.php","timestamp":"2014-04-17T19:37:54Z","content_type":null,"content_length":"24162","record_id":"<urn:uuid:c1403b8c-a219-4a76-881a-20f6a740356f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00174-ip-10-147-4-33.ec2.internal.warc.gz"} |
Building up to a Point via Adjunctions
{-# LANGUAGE RankNTypes #-}
For some time, there's been a strange creature, Pointed, that was folks have been proposing. Originating in Edward Kmett's category extras, it looks like this:
class Pointed f where
point :: a -> f a
The notion behind it was that one could decompose, e.g., Applicative into an instance of the Pointed typeclass and an instance of the Apply typeclass (giving apply :: f (a -> b) -> f a -> f b) and an
instance of Pointed, such that the two interact properly.
The basic problem is that point is too general and lawless. The only "law" one can give is that if f is both Functor and Pointed, then fmap f . point === point . f -- but, by parametricity, this
comes for free! It provides no guidance on what point does or how. Rather, it follows for all functions a -> f a as a consequence of the functor laws.
Another way to think of this in universal algebra we think relationally, in terms of operations. We define first a semigroup with an associative operation (*), and then we can 'complete' it by
providing it with an identity. The identity object is defined uniquely up to isomorphism by the properties it possess -- we don't define operations by what objects provide them identities. The way to
see "point" as really yielding a unit object is to define
class Inhabited f where
unit :: f ()
Now, for f a functor we have unit = point () and point x = fmap (const x) unit.
Dan Doel has provided some truly weird examples of legitimate pointeds.
It so happens that there is another typeclass, already in base, which also has no laws, although there's a good deal of "intuition" behind it. This typeclass is Foldable.
We typically think of foldable as such:
class Foldable f where
foldr :: (a -> b -> b) -> b -> f a -> b
However, its equally powerful (and I find more convenient) to think of Foldable simply giving a function toList :: f a -> [a].
Just like Pointed, Foldable has no laws other than, for f a Functor, the free theorem fmap f . toList === toList . fmap f. This in fact flows directly from the fact that for f a Functor, toList has
the type of a natural transformation between endofunctors on Hask: type NatTrans f g = (Functor f, Functor g) => forall a. f a -> g a.
Every instance of Traversable is also an instance of Folded. Traversable can be given as
class Traversable f where
traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
Traversable has two laws:
1. (Identity) : traverse Identity === Identity
2. (Composition) : traverse (Compose . fmap g . f) === Compose . fmap (traverse g) . traverse f
Incidentally, the first law ensures that getId . traverse (Id . f) === fmap f The second in turn specializes to give the composition law for functors. This leads me to suspect that we can define (in
a categorical sense) Traversable as a special type of functor and capture the fashion in which it is "genuinely" a subclass of Functor -- i.e. a functor preserving certain special morphisms intrinsic
to Hask, just as Applicative is a functor preserving the monoidalness (and closedness) of Hask.
Given traverse, we can define toList = getConst . traverse (const . (:[]))
So for any given f that is both Foldable and Traversable, we can give the law that its definition of toList must coincide with that given by Traversable. And this gives at least some sort of law on
Foldable. Intuitively, the composition law gives us that "every a in f a that is traversed must be traversed at most once" (linearity). Hence a lawful foldable gives us no duplication or results in
our toList.
However, there are many things that are Foldable but not Traversable! Set is not traversable, because it is not a Functor on all of Hask, only that subcategory subject to the Ord constraint. As
another example, the unwrapped reader Functor/Monad/Applicative ((->) r) is not traversable. Hence data Store r a = Store (r -> a) r has an obvious toList but is also not traversable.
Are the Foldable instances for such things doomed to be lawless? Obviously, since Foldable only has one method, we can't define laws by the interaction of methods. Similarly, since the one method of
Foldable eliminates the f, we can't define a law by the composition of its methods.
What is necessary is a further subclass of Foldable, such that it harbors a useful relationship to Foldable without simply specializing it. Fortunately, such a subclass exists, and gives us the
'point' of Pointed along the way! Foldable can be thought of as yielding a natural transformation to list. And since list is the free monoid generated by a type, this gives in fact, by composition, a
natural transformation to any monoid generated by a Haskell type (such as Set, Sum, First, etc.) A slight generalization of this is what is captured by the fact that foldMap :: Monoid m -> (a -> m)
-> t a -> m, foldr, and toList are equivalent in power (nb: this is a lie if you give foldMap an instance of Monoid that is unlawful). Thus the fact that list is such a prevalent data structure in
Haskell is really a first-order reflection of the fact that fold is such a fundamental operation in functional programming in general.
In any case, what I suggest is that we provide the following typeclass, dual to Foldable:
class Buildable f where
fromList :: [a] -> f a
fromList xs = build (\cons unit -> foldr cons unit xs)
build :: (forall b. (a -> b -> b) -> b -> b) -> f a
build g = fromList . g (:) []
singleton :: a -> f a
singleton = fromList . (:[])
The singleton function is now our desired "point" operation. Foldable and Buildable, each individually lawless, together can be given a strong set of laws, though not strong enough to always
determine their implementations up to unique isomorphism.
The key is that foldr and build on typical lists are subject to a very strong set of laws, which we typically refer to as short cut fusion:
foldr c n (build g) === g c n
We can understand short cut fusion as follows:
type FoldList a = forall b. (a -> b -> b) -> b -> b
toFoldList :: [a] -> FoldList a
toFoldList xs = \cons unit -> foldr cons unit xs
fromFoldList :: FoldList a -> [a]
fromFoldList fl = fl (:) []
Note that fromFoldList is a synonym for build, and by equational reasoning, foldr c n === ($ c) . ($ n) . toFoldList. This we can rewrite the fusion rule as:
($ c) . ($ n) . toFoldList . fromFoldList === ($ c) . ($ n)
Which derives directly from toFoldList . fromFoldList === id.
This works because lists are basically isomorphic to the catamorphism over them (ignoring some hairiness with seq) -- this is at the heart of church encoding and much else. For Foldable/Buildable we
aren't necessarily dealing with isomorphisms, but ideally we can formulate rules along the same lines, if not quite as strong. What should such rules be? Let's look at some examples. For Set, we have
fromList . toList === id
Similarly for Last, First, Sequence, etc.
We can view this property as given by the fact that [a] is the free monoid generated by a type. Since all these other type constructors also give monoids generated by a type, there is an adjunction,
with fromList universally the right (forgetful) adjoint.
However, this law is too strong for all cases. Consider:
data Tree a = Nil | Leaf a | Branch (Tree a) (Tree a)
Now give it an "append" as such:
append = Branch
This "append" is not associative. It contains more information than a list, not less. But our forgetful functor -- fromList -- doesn't have to generate every value in our target category. In fact,
"opening up" isomorphisms into a broader class of almost-equivalences is in one sense the point of adjunctions.
What really matters is that we get
toList . fromList . toList === toList
fromList . toList . fromList === fromList
where there is a transformation from toList . fromList to id that is natural (i.e. anything we do on the a to . fro of a list via some structure can be mapped to something we do on lists directly),
and a transformation from id to fromList . toList that is natural (i.e. anything that we do on a structure directly can be mapped to something we do on that structure as munged through a list).
In any case these laws capture a rather intuitive notion -- any information that we "throw away" will only be "discarded" once.
However, such laws don't determine Foldable or Buildable uniquely. A simple example is the following:
newtype Goofy a = Goofy [a]
instance Foldable Goofy where
toList (Goofy g) = go
where go (_:x:xs) = x : toList xs
go _ = []
instance Buildable Goofy where
fromList = Goofy . go
where go (x:xs) = x:x:go xs
go _ = []
Here, we arbitrarily double every element going "into" Goofy, and strip half of Goofy going back out. Our laws are obeyed, but not in the expected way. Other examples should be easy to generate.
Now, anything can be made Foldable, since in the worst case we can just send all objects to the empty list. However, not anything can be made Buildable. Here's a simple example:
instance Foldable (,) a where
toList (_,x) = [x]
instance Buildable (,) a where
fromList = ???
(We can of course fix this by adding a monoid constraint to a in the Buildable instance)
In any case, our two laws have nice consequences as far as interaction between themselves and with other typeclasses as well. We get properties such as the following:
Foldable law:
fold f === fold f . toList
Buildable extension:
fold f === fold f . fromList . toList
Foldable law:
fmap toList . traverse f === traverse f . toList
Buildable extension:
fmap toList . traverse f === fmap toList . traverse f . fromList . toList
Monoid/Buildable law:
(Buildable f, Monoid f a) => fromList xs `mappend` fromList ys === fromList (xs ++ ys)
I'm sure there are some other nice properties that I haven't arrived at either.
Also note that Buildable is not only a nice construction to work with, but also can allow a few efficiencies that we don't currently have. In particular, if I have a Sequence a and wish to turn it to
a Set a, I have two "obvious" avenues. A) I map Set.singleton over it and then call fold. B) I simply call Set.fromList . Sequence.toList.
The "right" thing is in fact to foldr a set-builder function such as Set.insert, beginning with the empty set. This is exactly what the build for Set does!
So we can write a general function:
fromFoldable :: (Foldable f, Buildable g) => f a -> g a
fromFoldable x = build (\cons nil -> foldr cons nil x)
and assuming our Foldable and Buildable instances are defined well, this will be quite efficient, avoiding the creation of unnecessary intermediate structures!
So what's the punchline here?
• Foldable is handy, but lawless.
• Pointed is considered handy, but lawless.
• Buildable generalized Pointed strictly -- everything Buildable is Pointed, and everything Pointed is Buildable (to get the latter, just always build from a one element list).
• Buildable is even more handy than pointed (it lets us generalize a range of, though not all, existing fromList functions).
• Buildable and Foldable together form a nice dual pair that has very nice laws.
• These nice laws get even nicer in conjunction with other lawful typeclasses (and I suspect there's more I haven't worked out yet here too).
• Perhaps we should consider creating a nice Buildable package for potential inclusion in the platform.
Given that we've established a family of adjunctions, the obvious question to ask is what monads we get out of them, and what we can say about such monads in general. I have some suspicions, but
haven't bothered to work it through.
There is one class of things that are Pointed that Buildable does not generalize -- those things such that they require at least one a -- e.g. Identity, NonEmptyList, Pair, or infinite streams. There
are a few solutions here -- none great.
The most principled thing is to define Buildable1 as a dual to Foldable1 (in the semigroups package).
class Foldable1 f where
toNEL :: f a -> NEL a
class Buildable1 f where
fromNEL :: NEL a -> f a
singleton :: a -> f a
Due to contravariance, while fewer things are Foldable1 than Foldable, more things are Buildable1 than Buildable. For those which are both Foldable1 and Buildable1 we can provide the same adunction
laws regarding toNEL . fromNEL . toNEL === toNEL, etc. This group of things includes Identity and NonEmptyList, but excludes many other common types like Set, First, and even List.
The most practical solution, despite its ickiness, seems to be to provide both fromList and fromNonEmptyList in Buildable, noting that the former is partial in some instances. On the one hand,
partial functions are terrible. On the other, this mirrors most closely how we've already structured many other functions in Haskell, and it allows those who so choose to only work in the total | {"url":"https://www.fpcomplete.com/user/gbaz/building-up-to-a-point-via-adjunctions","timestamp":"2014-04-20T10:59:33Z","content_type":null,"content_length":"44413","record_id":"<urn:uuid:e677da36-a827-421b-80c5-717b9ac7b72c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
4th grade Algebra
Posted by jared on Wednesday, March 6, 2013 at 4:48pm.
What is another way to write the expression 44n?
What is another way to write the expression 44 divided by n?
• 4th grade Algebra - Ms. Sue, Wednesday, March 6, 2013 at 4:55pm
44 * n
44 x n
• 4th grade Algebra - jared, Wednesday, March 6, 2013 at 5:03pm
thank you
• 4th grade Algebra - Ms. Sue, Wednesday, March 6, 2013 at 5:14pm
You're welcome.
Related Questions
4th grade math - What is another way to write sixty-five hundreds?
4th grade - What is the correct way to write each of these Titles correctly? 1....
4th grade english - Sometimes she copy something over to get it right. What ...
Biology - I'm in 4th grade to i need to write a hypothesis on "why do some ...
4th grade English - Which sentence is the correct way to write. A tick has no ...
4th grade - what is another way of naming 541,000
4th grade - what is another way of naming 541,000
4th grade math?? help!! - 2 to the 2 power??? i need help !!!! 2^2= 2*2= 4 ...
4th grade math - write the digit 5's value
4th grade math - So far John has run 1/4 of the way to school and walked 3/8 of ... | {"url":"http://www.jiskha.com/display.cgi?id=1362606481","timestamp":"2014-04-18T07:43:25Z","content_type":null,"content_length":"8508","record_id":"<urn:uuid:bdf329b5-40e5-4c44-9ab2-2c0ad94c090f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00343-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Universe of Discourse : Mental astronomical calculations
Mental astronomical calculations
As you can see from the following graph, the daylight length starts increasing after the winter solstice (last week) but it does so quite slowly at first, picking up speed, and reaching a maximum
rate of increase at the vernal equinox.
The other day I was musing on this, and it is a nice mental calculation to compute the rate of increase.
The day length is given by a sinusoid with amplitude that depends on your latitude (and also on the axial tilt of the Earth, which is a constant that we can disregard for this problem.) That is, it
is a function of the form a + k sin 2πt/p, where a is the average day length (12 hours), k is the amplitude, p is the period, which is exactly one year, and t is amount of time since the vernal
equinox. For Philadelphia, where I live, k is pretty close to 3 hours because the shortest day is about 3 hours shorter than average, and the longest day is about 3 hours longer than average. So we
day length = 12 hours + 3 hours · sin(2πt / 1 year)
Now let's compute the rate of change on the equinox. The derivative of the day length function is:
rate of change = 3h · (2π / 1y) · cos(2πt / 1y)
At the vernal equinox, t=0, and cos(…) = 1, so we have simply:
rate of change = 6πh / 1 year = 18.9 h / 365.25 days
The numerator and the denominator match pretty well. If you're in a hurry, you might say "Well, 360 = 18·20, so 365.25 / 18.9 is probably about 20," and you would be right. If you're in slightly less
of a hurry, you might say "Well, 361 = 19^2, so 365.25 / 18.9 is pretty close to 19, maybe around 19.2." Then you'd be even righter.
So the change in day length around the equinox (in Philadelphia) is around 1/20 or 1/19 of an hour per day—three minutes, in other words.
The exact answer, which I just looked up, is 2m38s. Not too bad. Most of the error came from my estimation of k as 3h. I guessed that the sun had been going down around 4:30, as indeed it had—it had
been going down around 4:40, so the correct value is not 3h but only 2h40m. Had I used the correct k, my final result would have been within a couple of seconds of the right answer.
Exercise: The full moon appears about the same size as a U.S. quarter (1 inch diameter circle) held nine feet away (!) and also the same size as the sun, as demonstrated by solar eclipses. The moon
is a quarter million miles away and the sun is 93 million miles away. What is the actual diameter of the sun?
[ Addendum 20120104: An earlier version of this article falsely claimed that the full moon appears the same size as a quarter held at arm's length. This was a momentary brain fart, not a
calculational error. Thanks to Eric Roode for pointing out this mistake. ]
[Other articles in category /calendar] permanent link | {"url":"http://blog.plover.com/calendar/calculus.html","timestamp":"2014-04-16T16:04:26Z","content_type":null,"content_length":"15473","record_id":"<urn:uuid:21be4412-d21d-4d5a-a379-c17cd3992dc7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
Related Rates #2
November 7th 2009, 01:16 PM #1
Junior Member
Nov 2009
Related Rates #2
The illumination at a point is inversely proportional to the square of the distance of the point from the light source and directly proportional to the intensity of the light source. If two light
sources are 20 feet apart and their intensities are 40 and 30 respectively, at what point between them will the sum of their illuminations be a minimum?
Let x be the distance from the brighter source at which the sum of the illuminations is a minimum. Then x = how many feet?
If we define $I$ to be the total illumination, then
\begin{aligned}<br /> i_1&=\frac{40k}{x^2}\\<br /> i_2&=\frac{30k}{(20-x)^2}.<br /> \end{aligned}
Because $I$ approaches $\infty$ at $x=0,20$ and is differentiable everywhere else, the minimum will occur at a point at which $I'=0$.
So, I tried to take the derivative...
But I got stuck...
it looks like:
The Derivative of: 40k/x^2 = (0 * x^2 - 2x * 40) / x^4
The Derivative of: 30k/(20-x)^2 = (0 - 30*2(20-x)*-1/(20-x)^4
But these are giving the wrong answer. Where am I going wrong?
Simplify your derivatives, add them together, and set them equal to 0. (-80/x^3)+(60/(20-x)^3)=0.
Then invert the equation
Then solve for x.
November 7th 2009, 03:28 PM #2
Senior Member
Dec 2008
November 7th 2009, 04:44 PM #3
Junior Member
Nov 2009
November 7th 2009, 05:34 PM #4
Nov 2009 | {"url":"http://mathhelpforum.com/calculus/112994-related-rates-2-a.html","timestamp":"2014-04-20T00:08:54Z","content_type":null,"content_length":"37209","record_id":"<urn:uuid:f68499c5-6ea5-416f-9676-435a2a7759b3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
New Castle, DE Statistics Tutor
Find a New Castle, DE Statistics Tutor
...I truly enjoy helping students achieve their goals. Thanks for visiting my page, and best of luck!Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test. Routinely score
800/800 on practice tests.
19 Subjects: including statistics, calculus, geometry, algebra 1
...My background in academia and industry allows me to teach calculus from either a theoretical or an applied approach, depending on student needs and interests. I studied statistics as part of
the actuarial exam process. Two of the early exams focused on statistical methods, including regression, parameter fitting, Bayesian, and non-Bayesian techniques.
18 Subjects: including statistics, calculus, geometry, GRE
...I hold a B.S. in Mathematics from Rensselear Polytechnic Institute (RPI), and I offer tutoring in all math levels as well as chemistry and physics. My credentials include over 10 years tutoring
experience and over 4 years professional teaching experience. I received 800/800 on the GRE math sect...
58 Subjects: including statistics, reading, geometry, biology
...I also take Multivariable Calculus, AP Biology, AP Physics B and C, AP Literature, and AP Statistics. I have managed to earn straight A's the past two years. I also took AP Psychology my junior
year and managed to earn a 5.
21 Subjects: including statistics, chemistry, English, biology
I have scored 750 on the GMAT test with a 98 percentile score. I have also scored a perfect 800 score in GRE Math with the 99 percentile. I have been teaching SAT, GMAT, ACT, MCAT and GRE prep for
over 7 years.
18 Subjects: including statistics, physics, calculus, finance | {"url":"http://www.purplemath.com/New_Castle_DE_Statistics_tutors.php","timestamp":"2014-04-17T13:52:40Z","content_type":null,"content_length":"24115","record_id":"<urn:uuid:9f676d97-6b72-45f7-b705-538fbc294cce>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
Extending Einstein's theory beyond light speed
University of Adelaide applied mathematicians have extended Einstein's theory of special relativity to work beyond the speed of light.
Einstein's theory holds that nothing could move faster than the speed of light, but Professor Jim Hill and Dr Barry Cox in the University's School of Mathematical Sciences have developed new formulas
that allow for travel beyond this limit.
Einstein's Theory of Special Relativity was published in 1905 and explains how motion and speed is always relative to the observer's frame of reference. The theory connects measurements of the same
physical incident viewed from these different points in a way that depends on the relative velocity of the two observers.
"Since the introduction of special relativity there has been much speculation as to whether or not it might be possible to travel faster than the speed of light, noting that there is no substantial
evidence to suggest that this is presently feasible with any existing transportation mechanisms," said Professor Hill.
"About this time last year, experiments at CERN, the European centre for particle physics in Switzerland, suggested that perhaps neutrinos could be accelerated just a very small amount faster than
the speed of light; at this point we started to think about how to deal with the issues from both a mathematical and physical perspective.
"Questions have since been raised over the experimental results but we were already well on our way to successfully formulating a theory of special relativity, applicable to relative velocities in
excess of the speed of light.
"Our approach is a natural and logical extension of the Einstein Theory of Special Relativity, and produces anticipated formulae without the need for imaginary numbers or complicated physics."
The research has been published in the prestigious Proceedings of the Royal Society A in a paper, 'Einstein's special relativity beyond the speed of light'. Their formulas extend special relativity
to a situation where the relative velocity can be infinite, and can be used to describe motion at speeds faster than light.
"We are mathematicians, not physicists, so we've approached this problem from a theoretical mathematical perspective," said Dr Cox. "Should it, however, be proven that motion faster than light is
possible, then that would be game changing.
"Our paper doesn't try and explain how this could be achieved, just how equations of motion might operate in such regimes."
You know that a home mathematic genius, not connected to any university or government funded program is going to find this solution. In fact I will look into it myself :)
Not clear at all how the Lorenz Transformation could allow anything to travel faster than the speed of light.
It appears from this summary the author is just ignoring an experimentally proven phenomena.
The Lorentz transformation specifically refers to space and time being a function of speed of the observer.
That is to do with length contraction, and time dilation.
Secondarily, there is a third effect, and that is mass dilation, meaning that as speed increases, mass increases, until at the speed of light, mass becomes infinite.
And THAT is why, conventionally, no standard spacecraft can reach the speed of light, for just before reaching that speed, one needs infinite horsepower to accelerate near to and past infinite mass.
There have been hypothetical proposals that one can surpass this problem of infinite obstacles, by taking a spacetime shortcut sideways.
And example analogy is that to get a pencil to travel from top to bottom of the page takes a certain finite time. But, so say these theorists, one could instead fold the paper over, top to bottom,
and push the pencil tip THROUGH the paper, from top to bottom (or vice versa), and thus achieve the 'travel distance & time' much more easily.
Alas, this is a furphy, for even if this movement were possible, nowhere has superluminal speed been reached, nor surpassed. Further, to "punch through" the "space time membranes" of these
metaphorical paper-pages, there would also be massive distortions of gravitational force, just as if a spaceship had travelled within the Schwarzchikd radius around a blackhole. No human body could
survive this.
So, it is quite legitimate to write equations for superluminal geometry, and that has been known for some time.
But it is not legitimate to allege that any human could ever get to that superluminal regime.
The first portion of that is right. It was summed up beautifully by the character Prot (Kevin Spacey) in K-PAX; his psychologist stated that Einstein says nothing can go faster than the speed of
light, and he replies "Then I'd say you've misread Einstein... What Einstein actually said was that nothing can accelerate to the speed of light because its mass would become infinite. Einstein said
nothing about entities already traveling at the speed of light or faster."
But the flaw later in your argument is that, if superluminal motion is possible, we really don't know how it might be achieved. It's reasonable so suppose that very strong and "twisted" gravitational
forces may be involved, and that these would almost certainly be harmful to say the least, but we really don't know. Until there is a viable working theory about how it's done, there's no way of
saying whether it is survivable or not.
But personally I don't think it's possible at all.
Post new comment | {"url":"http://www.sciencecodex.com/extending_einsteins_theory_beyond_light_speed-99867","timestamp":"2014-04-19T02:27:42Z","content_type":null,"content_length":"30181","record_id":"<urn:uuid:bf219065-7c86-406d-a522-1ef5e5aa6c4d>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
Seek steering
One problem that jumps out immediately:
1 vx=rotateX(vx, vy, STEER); // <- changing vx here
2 vy=rotateY(vx, vy, STEER); // <- using NEW vx value here. Not intentional?
This might cause wonkiness. Also if your angles are outside of [0,360) these calculations simply won't work. So unless you are wrapping to make sure you're always within those bounds (I don't see
where you are in this routine) this approach will just fail.
For example... an angle of -90 degrees is the same as 270, so you would need to decrease the angle.... but since -90 is less than 180 this code would be increasing it.
The good news is there is a much simpler approach to this problem in Linear Algebra. It's the dot product (and perpendicular dot product).
Get used to these functions. Once you understand them, you'll find 1001 uses for them. They really are extremely versatile:
1 inline float dot(const Vector& a, const Vector& b) { return (a.x*b.x) + (a.y*b.y); }
2 inline float perpDot(const Vector& a, const Vector& b) { return (a.y*b.x) - (a.x*b.y); }
These functions are magic. They have the below properties:
1 dot(A,B) == cos( theta ) * length( A ) * length( B )
2 perpDot(A,B) == sin( theta ) * length( A ) * length( B )
4 // where 'theta' is the angle between vectors A,B
Since I don't know whether or not you really care, I'll spare you the details of what this means, and I'll just tell you that this can be applied as follows:
- let vector A be the movement vector of the enemy (the direction he's currently facing)
- let vector B be the vector between the player and the enemy
- dot(A,B) will be positive if the enemy is moving closer to the enemy, and negative if moving away.
- perpDot(A,B) will be positive if the player is to the right of the enemy, and negative if the player is to the left (** note I might have this backwards... positive might mean to the left.... I can
never remember which is which... try it and see! **)
- perpDot(A,B) will be zero if A and B are parallel (ie: enemy is moving directly towards the player, or directly away from them).
For your purposes, you can use perpDot to determine if you need to steer left or right. If perpDot is zero (parallel), you can use dot() to determine whether or not you are facing the right way:
1 Vector shipDirection = /*the direction the ship is facing*/;
2 Vector targetDirection = targetPosition - shipPosition;
4 float pd = perpDot( shipDirection, targetDirection );
5 if( pd < 0 )
6 turnRight(); // or left if I have it backwards
7 else if( pd > 0 )
8 turnLeft(); // or right if I have it backwards
9 else if( dot(shipDirection, targetDirection) < 0 )
10 turnRight(); // need to pull a 180, pick a direction and go for it
11 //else we have no need to turn because we're heading right for them
EDIT: If you are interested in understanding more of why/how this works, I can get into more detail. Just let me know. I just didn't want to waste the time on it if you weren't interested.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/general/97154/","timestamp":"2014-04-19T04:25:34Z","content_type":null,"content_length":"17368","record_id":"<urn:uuid:698efc49-35e9-4cc8-a454-bcbbc407b7ab>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Somerville, MA Trigonometry Tutor
Find an East Somerville, MA Trigonometry Tutor
...I can teach many physical techniques for yoga, including meditation and yoga poses ("asanas"). I'm especially good at guiding students in modifying asanas so they may be able to do them at the
beginning, or to accommodate a challenge or disability. I can even help you stand on your head if you want! I have a strong background in SAS.
18 Subjects: including trigonometry, English, writing, statistics
...That program was created by a computer programmer, writing in a programming language like C++, Java, Pascal, Javascript, Perl, etc. I can teach the basics of computer programming. I have
experience in C++ as a programmer for 2 years and in my undergraduate major.
19 Subjects: including trigonometry, calculus, physics, algebra 2
...If you are intending on majoring in computer science, are interested in deep questions of optimization, system management, or advanced theory, I would suggest finding a person who is a
professional computer scientist. I can offer a practical and basic introduction, but I'll leave the advanced an...
63 Subjects: including trigonometry, English, reading, chemistry
I am a recent graduate of MIT with 6 years of experience working with a wide range of students in grades 8-12 to improve SAT scores in the Reading, Math, and Writing sections. I have successfully
tutored students at all skill levels and have achieved measurable success in all cases. I am a patient...
18 Subjects: including trigonometry, chemistry, calculus, physics
...I have tutored COOP, HSPT, ISEE, SSAT, PSAT (Math and Verbal) ACT (Math and Verbal) SAT (Math and Verbal). I feel that I am definitely qualified to tutor for COOP/HSPT prep. I am a former
teacher with 25+ years tutoring experience. I have tutored test prep including ISEE, SSAT, SAT, SAT II, ACT.
19 Subjects: including trigonometry, geometry, GRE, algebra 1 | {"url":"http://www.purplemath.com/East_Somerville_MA_Trigonometry_tutors.php","timestamp":"2014-04-19T10:10:18Z","content_type":null,"content_length":"24747","record_id":"<urn:uuid:097964b1-12d8-4a92-9b52-0a00c7ab7597>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Different Look at Power
David C. Howell
We normally think of power in terms of its precise definition, which is the probability of rejecting a false null hypothesis. We have a traditional system that includes a null hypothesis and an
alternative hypothesis, where the alternative hypothesis is really just the negative of the null. We either reject the null hypothesis of equality, or we don't.
Suppose that we are testing two drugs for the treatment of clots in heart attack patients. By our traditional system, we have a null hypothesis that says that the two drugs are equally effective in
the treatment of clots, and as alternative hypothesis that says that one drug is better than the other. Power, then, is the probability of rejecting the null hypothesis, in favor of the alternative,
when in fact one drug is (no matter how trivially) more effective than the other.
But, the world really isn't as simple as that choice would suggest. The following argument is based on an article in Discover magazine for May, 1996, which builds on a paper by Brophy and Joseph in
the Journal of the American Medical Association (May, 1995). That paper looks at a Bayseian view of hypothesis testing, but the argument that I make here really doesn't depend on your opinion of the
good Reverend Bayes, who didn't even make a mark until after he was dead.
If you go stumbling into the hospital showing symptoms of a serious heart attack, you could be prescribed one of two clot-dissolving drugs--streptokinase or t-PA (tissue plasminogen activator). The
question is, which one should you get? Someone with a good standard statistical training might assume, as we usually do, that the issue is easily resolved. Just give a whole bunch of patients
streptokinase, and another bunch of patients t-PA, and then wait and see which group shows the higher survival rate. If we assume that the null hypothesis is false, then one drug is superior to the
other, and power is simply the probability of finding that difference that is really there. But I left out an interesting fact. T-PA sells for $1,530 a pop, whereas you can get streptokinase for a
mere $220. Oh! Well, if the drugs are equally effective, you would probably go for the cheaper one, unless your insurance company is paying the bill. But, then, we have to define what "equally
effective" means. If streptokinase will save 90 percent of those who receive it, and t-PA will save 90.3 percent of those who receive it, and if we're talking about a 6 - 7 fold difference in price,
you might be tempted, if you were paying the bill, to go with streptokinase. But what if the difference in survival rate is 5%, or 3%, or even 1%? Then which would you prefer?
I assume that we could agree that there is some point at which we would decide that the difference in survival rate is so clearly on the side of t-PA that we would vote for it regardless of cost. But
I also suppose that there is a point at which we would decide that the difference is so small that we would go for the cheaper streptokinase. (Remember, if we spend huge amounts of our medical
dollars on one treatment, we can't spend it on others.) The only question is where is that cutoff? Well, in traditional treatments of power there is no particular cutoff. We speak about the null
hypotheisis being "false;" we don't speak about it being false by a certain amount--though certainly our calculations take into account how false it is.
There have been several studies of the relative effectiveness of the two drugs. One study with 20,000 patients found in favor of one of the two drugs, while another, with 30,000 patients, found in
favor of the other. (As a psychologist, I can only marvel at the huge sample sizes. Isn't it wonderful what money will buy?) But the conflicting results left people in doubt, so the manufacturers of
t-PA teamed up with some other foks and funded a really huge study of 40,000 patients. They decided that a difference of 1% in survival percentages would be a meaningful difference, and were truly
excited when they found that 93.7 percent of patients who received t-PA survived, while only 92.7% of those who received streptokinase survived. They argued that this was convincing evidence that
cardiologists should forget about the burden on the poor patient, worry about the burdens on their own liability insurance, and prescribe the (much) more expensive drug t-PA. But, argued Brady and
Joseph, what does this study really tell us? It's true that the odds of getting that particular result, if the drugs are equally effect, are 1000:1. But who's talking about "equally effective?" What
the originators of the study were talking about as "clinically superior" was a 1% difference. And that is exactly what they found. Now suppose that there really is a 1% difference between the two
drugs. And suppose that you ran a study and concluded that you would support t-PA only if 1% or more of the patients who received it actually survived. Then, in fact, you actually have a 50:50 chance
of getting a result that you would call significant.(If the true difference is 1%, then half the time you will find differences like 1.1% and 1.4%, and half the time you will find differences like
0.7% and 0.9%.) If we ignore everything else, and we assume that the sampling distribution of our result is symmetrically distributed, the probability of a result greater than 1% is only 50:50. Or,
put another way, the probability that the result of an actual 1% difference means that t-PA is "clinically superior," is only 50:50. In other words, Brady and Josph argue, the data don't really
resolve anything.
It is not my intention to argue the case for one pharmaceutical company over another, especially given how little I know about pharmacology. It is my point that we have to stop and think about what
we mean by power. The paper by Brody and Joseph raises the interesting point that we may be asking the wrong question when we ask about the power of rejecting the null hypothesis that µ[1] = µ[2].
What we really need to think about is the probability of finding a difference that we would call "meaningful." That is quite a different thing, and that's not what we are usually talking about.
This page isn't intended to come up with a definitive statement of what we mean by power in this situation. It is intended to raise questions--some of which I can't really answer. Put yourself in the
position of an intelligent and compassionate HMO. (I know that most people think that is an oxymoron, but let that pass.) You don't want people to die, but neither do you want to spend your very
scarce resources needlessly. Furthermore, you agree that a true 1% difference in survival is worth paying for, but a (true) 0.9% difference is not.How are you going to design the definitive study,
assuming that large subject populations are readily available?
Gee, that would make a great exam question.
Return to Dave Howell's Statistical Home Page
University of Vermont Home Page
Last revised: 7/11/98 | {"url":"http://www.uvm.edu/~dhowell/StatPages/More_Stuff/PowerDrugs.html","timestamp":"2014-04-19T17:19:58Z","content_type":null,"content_length":"9078","record_id":"<urn:uuid:85596ec8-bd9b-484d-a970-085673335de2>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
Riegelsville Algebra Tutor
Find a Riegelsville Algebra Tutor
...I have completed numerous IEPs on kids with ADD/ADHD. I have a lot of experience connecting with them including behavioral techniques to help achievement. Social Studies was one of the academic
areas that was required of all teachers in Philadelphia.
19 Subjects: including algebra 2, English, algebra 1, reading
...Proper writing is not simply "opening the word processor and hoping something will happen." I will show you how to organize your thoughts efficiently by brainstorming, by drawing concept maps,
and by using "the corkboard method." Students pursuing a career in advanced physics and engineering ...
13 Subjects: including algebra 1, algebra 2, reading, physics
...Although my primary love is science, I also tutor algebra 1. I have worked with students in all levels both in the classroom and during private tutoring sessions. Whether it is a general
science class or an AP chemistry class, I am comfortable with the material.
6 Subjects: including algebra 1, algebra 2, chemistry, prealgebra
I am a tutor with over five years of experience teaching math, science, and humanities at the secondary level. I have worked with students from all backgrounds; I have also worked extensively with
children with disabilities. I hold a BA in Anthropology from Florida Atlantic University and am currently a graduate student in the Anthropology Department.
55 Subjects: including algebra 1, algebra 2, Spanish, reading
...In NJ public schools I worked with children with special needs from ages PK-12. I have extensive experience with autism and ABA trials in a pre-school setting. I taught a 1st/2nd grade class
and a 3rd/4th grade class of children with multiple disabilities including dyslexia, autism, fetal alcohol syndrome, cognitive impairment, and developmental and language disorders.
20 Subjects: including algebra 1, reading, geometry, dyslexia | {"url":"http://www.purplemath.com/Riegelsville_Algebra_tutors.php","timestamp":"2014-04-19T10:10:56Z","content_type":null,"content_length":"24135","record_id":"<urn:uuid:e89748c1-331c-4929-8379-ecb11d8767f7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
Generate prime numbers within fiboncacci series?
Join Date
Aug 2010
Rep Power
i am new to java programming,and i am working a simple program that generate prime numbers whithin the generated fibonacci series,i tryed a lot to track the error out,but its just not
displaying me the prime number(though fibonnaci numbers are generated),hope this forum can help to track my error which is bugging me from the past 5-6 hours.(platform used is netbeans 6.9)
/* program to generate prime numbers within the fibonnaci numbers */
* To change this template, choose Tools | Templates
* and open the template in the editor.
Java Code:
package javaapplication5;
import java.lang.*;
import java.util.*;
public class Main
public static void main(String[] args)
int fib1,fib2,fib3,n=0,flag, int i;
int a[]=new int[20];
Scanner sc=new Scanner(System.in);
System.out.println(fib1+ " " +fib2);
System.out.println("the prime numbers are");
Last edited by Eranga; 08-07-2010 at 02:28 PM. Reason: code tags added
Join Date
Jul 2007
Colombo, Sri Lanka
Blog Entries
Rep Power
Do you want to fine a prime number within the Fibonacci series or wise verse?
Join Date
Jun 2008
Blog Entries
Rep Power
Moving thread from Forum Lobby to the New to Java section.
Join Date
Sep 2008
Voorschoten, the Netherlands
Blog Entries
Rep Power
i am new to java programming,and i am working a simple program that generate prime numbers whithin the generated fibonacci series,i tryed a lot to track the error out,but its just not
displaying me the prime number(though fibonnaci numbers are generated),hope this forum can help to track my error which is bugging me from the past 5-6 hours.(platform used is netbeans 6.9)
IMHO it already helps quite a bit if you decompose your problem into smaller problems; as a first step a bird-eye view would look like this:
Java Code:
n= <maximum fibonacci number>
m= 0
while (m < n) {
if (isPrime(m))
print m
m= <next fibonacci number>
It seems that you can generate fibonacci numbers alright but the prime finding issue warrants its own little method to keep the bird-eye view clean. Don't stick everything in a single loop in
a single method, that makes my dizzy and blurs what you are actually trying to do.
Checking whether or not a number is a prime number is easy: check all numbers up to the square root of the number and see if they evenly divide your number; if one of them does your number is
not a prime number, otherwise it is. There are more sophisticated methods for prime number checking in all sorts and sizes ...
kind regards,
Join Date
Aug 2010
Rep Power
thanks for the reply everyone. the program does generate the prime numbers but not the correct ones.
Eranga: i would like to generate only the prime numbers from the generated fibonnaci series(upto n)
josah:yes we can use a method for this,but is it possible to generate prime without using any method?
can you guys tell me where exactly the logical error has occured since i am working on a notepad(cant debug :( )
Kind Regards,
Join Date
Sep 2008
Voorschoten, the Netherlands
Blog Entries
Rep Power
Your program logic in the primality test logic is incorrect but you probably can't see it because your code is very 'Basic' style and its indentation sucks; that why I suggested to turn it in
a separate method. But if you don't want that (due to some silly restriction?) you have to struggle through your own code ...
kind regards,
thanks for the reply everyone. the program does generate the prime numbers but not the correct ones.
Eranga: i would like to generate only the prime numbers from the generated fibonnaci series(upto n)
josah:yes we can use a method for this,but is it possible to generate prime without using any method?
can you guys tell me where exactly the logical error has occured since i am working on a notepad(cant debug :( )
Kind Regards,
i had a quick google search an i didnt find any built in methods in the math class for finding prime numbers, so you will have to write your own, however theres many different examples on
Like josah said, its better to write the code for calculating a prime number in a method (code block) and then calling that method within the loop because it looks more presentable.
method would look something similiar to this
PHP Code:
private static boolean isPrime(int checkNo){
boolean isPrime = false;
//some code here to check if its prime or not, if it is prime then do
//isPrime = true; else just leave it as false
return isPrime; //then u return the results here
Teaching myself java so that i can eventually join the industry! Started in June 2010
Join Date
Jun 2008
Blog Entries
Rep Power
Yikes. Original poster this suggests that you don't care if volunteers duplicate work that's been done and posted elsewhere, essentially wasting someone's time. Please don't do this but
instead make it a habit to notify all threads of cross-posts. Many here refuse to help cross-posters if they don't provide this notification as they value their time.
Join Date
Jul 2007
Colombo, Sri Lanka
Blog Entries
Rep Power
Yikes. Original poster this suggests that you don't care if volunteers duplicate work that's been done and posted elsewhere, essentially wasting someone's time. Please don't do this but
instead make it a habit to notify all threads of cross-posts. Many here refuse to help cross-posters if they don't provide this notification as they value their time.
Well said Fubarable. I'm really hate this too. | {"url":"http://www.java-forums.org/new-java/31454-generate-prime-numbers-within-fiboncacci-series.html","timestamp":"2014-04-19T20:59:19Z","content_type":null,"content_length":"110399","record_id":"<urn:uuid:51db27bf-873c-464e-b218-383c56fc0206>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00583-ip-10-147-4-33.ec2.internal.warc.gz"} |
Machine Learning Meetup Notes: 2010-04-28
From Noisebridge
• Mike S presented a mathematical overview of SVMs
□ Started with introduction to linear classification [1]
□ Discussed the kernel trick [2]
□ Loosely derived the loss function and dual loss function for support vector machines [3]
□ Emphasized two important aspects of SVMs:
☆ Dual problem is a quadratic programming problem that is easier to solve than the primal problem
☆ After dual problem is optimized, only the support vectors (the data points whose langrangian multipliers are > 0) are needed to make predictions for new data (and their associated
• Thomas talked about the KDD conference and their data competition [4]
• Sai skyped in and talked a bit about his use of libSVM for classification of user history on his website cssfingerprint.com
• We talked a little bit about libSVM [5] | {"url":"https://www.noisebridge.net/index.php?title=Machine_Learning_Meetup_Notes:_2010-04-28&direction=prev&oldid=10977","timestamp":"2014-04-18T08:07:07Z","content_type":null,"content_length":"14902","record_id":"<urn:uuid:c555acd6-c993-47fa-af8e-bda921fbe171>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mind on Statistics (with CD-ROM and Internet Companion for Statistics)
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/mind-statistics-2nd-uttsjessica-m/bk/9780534393052","timestamp":"2014-04-21T02:12:16Z","content_type":null,"content_length":"32184","record_id":"<urn:uuid:d1b9b7cd-9919-430c-9201-dd71ee8a8638>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trinoid Viewer
Trinoid Viewer
This applet demonstrates a subset of the family of CMC trinoids, constant mean curvature genus-zero surfaces with three asymptotically Delaunay ends.
The three sliders set the asymptotic necksizes of the ends, and the neck/bulge button toggles between a neck and bulge at the center of the surface. Some examples have a static center between a
neck and a bulge.
The necksizes range from 1/2 to -1/2. Positive necksizes are embeeded unduloid ends, while negative necksizes correspond to immersed nodoid ends.
The absolute values of the three necksizes must satisfy the spherical triangle inequalities, and in the case of nodoid ends, the weights (which depend quadratically on the necksizes) must satisfy
weight balancing inequalities.
Java Archives:
Java NoidViewer archive: NoidViewer.jar
TrinoidViewer image library: TrinoidImage100.jar
Java NoidViewer source code: NoidViewerSrc.jar
NoidViewer uses the following JavaView archives: javaview.jar jvx.jar | {"url":"http://www.gang.umass.edu/reu/2002/TrinoidViewer1.html","timestamp":"2014-04-21T12:11:54Z","content_type":null,"content_length":"2555","record_id":"<urn:uuid:d400556f-e89f-43ec-b8e6-1ce886f8da43>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics Where Students Learn by Doing
Upper School students learn to solve new math problems by applying what they already know
From the Winter 2011-12 Caller
By Jim Wysocki
In a progressive school, the methods by which courses are taught will often differ greatly from what we teachers experienced as students. One such
“What do you mean, we have to do the problems before you teach us the material?” asks a student at the beginning of a course taught in a problem-based format. This is then followed by, “Wait, we have
to present the solutions? Aren’t you going to teach us?!” the next day. Students initially struggle with the method because they have come to expect certain practices in a math classroom. Although
this is an overgeneralization, many students have come to expect, rightly or wrongly, that a math classroom is about taking notes, writing down procedures, and then practicing those procedures. Even
when they have not been successful with such an approach, they cling to it because it is familiar.
However, in problem-based learning, students learn content and skills through their application—rather than apart from it. Whereas students already do this often in English, history, or modern
languages it is less common in mathematics, where the assumption is often that you must learn skills before applying them. Imagine English classes that teach students about language decoding, grammar
and syntax, and the writing process maybe years before they begin to actually read and write. The approach to problem-based learning being used at Catlin Gabel right now is to present students with
an ongoing series of problems that alternately introduce, provide practice for, and ultimately apply mathematical concepts to new and different problems.
No matter what method is used, two primary components of the problem-based method are the importance of asking questions and the development of the skill of transfer. While getting students to ask
questions in the beginning is difficult, they come to recognize their value. One student recently wrote, “It is always better to ask a question than not know its answer.” While questions are an
essential part of the method, the ability to apply knowledge to new and different problems, on a regular basis, is fundamental. This is the nature of problem solving, and although challenging in the
beginning, the students adapt. One student commented that problem solving “comes very naturally now, and I think that in many cases it seems like after working through it for a bit I understand it
well enough to have learned it from a teacher.”
Problem-based learning is used right now in Upper School in courses that include Year Two of the integrated program, Accelerated Precalculus, and Calculus 2. Each of these classes approaches the
method in similar, yet different ways. The Calculus 2 curriculum is a set of over 400 problems, organized in a logical progression of skills and concepts. Although they are not arranged into units,
certain themes come and go throughout the course. In the Year Two and Accelerated Precalculus courses, the problem sets are much more explicitly unit-based. Because of the nature of Catlin Gabel’s
own curriculum we create the problems ourselves, using our experience in teaching many of the topics as well as considerable resources gathered over the years. In addition, other techniques help
students adjust to the method, including returning to traditional lecture format periodically to “wrap” things up and allow for specific review of topics before assessments, and the use of material
they developed as part of previous courses.
It is becoming more commonly accepted and realized that students need to have an opportunity to work through ideas with feedback from others in order to master concepts. This does not merely need to
be feedback from the teacher, although their role is critical to the success of the method, but from the students as well. In fact, as the year has progressed our students are beginning to recognize
the value of their peers’ feedback, and their ability to provide it. As one student said, “I like how in class we share our work on the board, because I like to see how other people decide to do
different problems. It gives me insight on other possible ways to do something, and I learn a lot.”
Problem-based learning recognizes this, and thrives on it. Not all the problems are “real-world” ones, but students are given a carefully designed set of problems they have the tools to solve,
without necessarily having learned an algorithm for them. One student’s comment was reflective of her efforts when she said, “I think over the course of these months I have become a more creative
thinker.” And, in recognizing that the teacher’s goal is to develop independent learners, one student realized what was behind the teacher’s willingness to give students room to think and work by
acknowledging that “it means that we almost control our education.”
Jim Wysocki, chair of Catlin Gabel’s Upper School math department, has been at the school since 2010. He previously taught in California at Chadwick School and the Irvine Unified School District, and
was a Math-Science Fellow with the Coalition of Essential Schools. | {"url":"http://www.catlin.edu/news/upper-school/mathematics-where-students-learn-by-doing","timestamp":"2014-04-18T08:03:24Z","content_type":null,"content_length":"31549","record_id":"<urn:uuid:268f3e33-92ea-42ad-bd20-e8d24058d3a9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00146-ip-10-147-4-33.ec2.internal.warc.gz"} |
NCERT Class 7 Science Motion and Time
Primary tabs
NCERT Class 7 Science Motion and Time. Download NCERT Chapters and Books in pdf format. Easy to print and read. Copies of these textbooks may be downloaded and used as textbooks or for reference.
Refer to other chapters and books at other links (NCERT now providing you soft copies of all textbooks of all subjects from class first to twelfth online).
13 Motion and Time
In Class VI, you learnt about different types of motions. You learnt that a motion could be along a straight line, it could be circular or periodic. Can you recall these three types of motions? Table
13.1 gives some common examples of motions. Identify the type of motion in each case. It is common experience that the motion of some objects is slow while that of some others is fast
13.1 SLOW OR FAST
We know that some vehicles move faster than others. Even the same vehicle may move faster or slower at different times. Make a list of ten objects moving along a straight path. Group the motion of
these objects as slow and fast. How did you decide which object is moving slow and which one is moving fast. If vehicles are moving on a road in the same direction, we can easily tell which one of
them is moving faster than the other.
13.2 SPEED
You are probably familiar with the word speed. In the examples given above, a higher speed seems to indicate that a given distance has been covered in a shorter time, or a larger distance covered in
a given time.
The most convenient way to find out which of the two or more objects is moving faster is to compare the distances moved by them in a unit time. Thus, if we know the distance covered by two buses in
one hour, we can tell which one is slower. We call the distance covered by an object in a unit time as the speed of the object. When we say that a car is moving with a speed of 50 kilometres per
hour, it implies that it will cover a distance of 50 kilometres in one hour. However, a car seldom moves with a constant speed for one hour. In fact, it starts moving slowly and then picks up speed.
So, when we say that the car has a speed of 50 kilometres per hour, we usually consider only the total distance covered by it in one hour. We do not bother whether the car has been moving with a
constant speed or not during thathour. The speed calculated here is actually the average speed of the car. In this book we shall use the term speed for average speed. So, for us the speed is the
total distance covered divided by the total time taken.
We can determine the speed of a given object once we can measure the time taken by it to cover a certain distance. In Class VI you learnt how to measure distances. But, how do we measure time? Let us
find out.
1. Classify the following as motion along a straight line, circular or oscillatory motion:
(i) Motion of your hands while running.
(ii) Motion of a horse pulling a cart on a straight road.
(iii) Motion of a child in a merry-go-round.
(iv) Motion of a child on a see-saw.
(v) Motion of the hammer of an electric bell.
(vi) Motion of a train on a straight bridge.
2. Which of the following are not correct?
(i) The basic unit of time is second.
(ii) Every object moves with a constant speed.
(iii) Distances between two cities are measured in kilometres.
(iv) The time period of a given pendulum is not constant.
(v) The speed of a train is expressed in m/h.
3. A simple pendulum takes 32 s to complete 20 oscillations. What is the time period of the pendulum?
4. The distance between two stations is 240 km. A train takes 4 hours to cover this distance. Calculate the speed of the train.
5. The odometer of a car reads 57321.0 km when the clock shows the time 08:30 AM. What is the distance moved by the car, if at 08:50 AM, the odometer reading has changed to 57336.0 km? Calculate the
speed of the car in km/min during this time. Express the speed in km/h also.
6. Salma takes 15 minutes from her house to reach her school on a bicycle. If the bicycle has a speed of 2 m/s, calculate the distance between her house and the school.
7. Show the shape of the distance-time graph for the motion in the following cases:
(i) A car moving with a constant speed.
(ii) A car parked on a side road.
Please refer to attached file for NCERT Class 7 Science Motion and Time | {"url":"http://www.cbseacademics.in/download-book/ncert-class-7-science-motion-and-time-174946.html","timestamp":"2014-04-16T13:47:41Z","content_type":null,"content_length":"46803","record_id":"<urn:uuid:3f23f313-d133-4dce-b76f-497ef8c90a92>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can you write this decimal 0.8 as a mixed number or fraction in simplest form?
An irreducible fraction (or fraction in lowest terms or reduced fraction) is a fraction in which the numerator and denominator are integers that have no other common divisors than 1 (and -1, when
negative numbers are considered). In other words, a fraction a⁄[b] is irreducible if and only if a and b are coprime, that is, if a and b have a greatest common divisor of 1. In higher mathematics, "
irreducible fraction" may also refer to irreducible rational fractions.
An equivalent definition is sometimes useful: if a, b are integers, then the fraction a⁄[b] is irreducible if and only if there is no other equal fraction c⁄[d] such that |c| < |a| or |d| < |b|,
where |a| means the absolute value of a. (Let us recall that to fractions a⁄[b] and c⁄[d] are equal or equivalent if and only if ad = bc.)
In Swami Bharati Krishna Tirtha's Vedic mathematics, the auxiliary fraction method is used to convert a fraction to its equivalent decimal representation. The "auxiliary fraction" is not a true
fraction, but is simply a mnemonic aid used in the calculation. The method is essentially the long division algorithm adapted for mental calculation. It is simplest when the fraction's denominator is
one less than a multiple of 10, when it uses the identity
Variants of the method used when the denominator is not one less than a multiple of 10 become progressively more complex but still in the realm of mental math or with one line of notation.
Geographic coordinates consist of latitude and longitude.
All of the following are valid and acceptable ways to write geographic coordinates:
Elementary arithmetic is the simplified portion of arithmetic which includes the operations of addition, subtraction, multiplication, and division.
Elementary arithmetic starts with the natural numbers and the written symbols (digits) which represent them. The process for combining a pair of these numbers with the four basic operations
traditionally relies on memorized results for small values of numbers, including the contents of a multiplication table to assist with multiplication and division.
Related Websites: | {"url":"http://answerparty.com/question/answer/can-you-write-this-decimal-0-8-as-a-mixed-number-or-fraction-in-simplest-form","timestamp":"2014-04-19T06:02:57Z","content_type":null,"content_length":"26980","record_id":"<urn:uuid:45d7d148-ba84-4b03-8266-ca35a19424af>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-dev] Linalg2 benchmarks
pearu at scipy.org pearu at scipy.org
Fri Apr 5 09:01:21 CST 2002
On 4 Apr 2002, Jochen Küpper wrote:
> Travis showed really impressive numbers for linal2. Here is what I
> get -- less impressive, still ok?
Yes, it is still ok.
> I don't know what the problem is, but it looks as scipy scales worse
> than Numeric?
I don't understand how can you conclude that but if scipy and Numeric
use the same ATLAS (the Jochen case) then:
1) for n->oo, where n is the size of the problem, there would be no
difference in speeds as the hard computation is done by the same ATLAS
2) for n fixed but repeating the computation for c times, then for c->oo
you would find that scipy is 2-3 times faster that Numeric. This speed up
is gained only because of the f2py generated interface between Python and
the ATLAS routines that scipy uses.
BUT, if Numeric is linked with its lapack_lite (the Travis case), then
you will have huge speedups (approx. 10 times) mainly because scipy
uses highly optimized ATLAS routines.
So, I don't find these testing results strange as you commented.
What I find surprising is that there is very small difference in
the results for contiguous and non-contiguous input data. This shows that
memory copy is a really cheap operation and one should not worry too much
if the input data is non-contiguous, at least, if you have plenty of
memory in your computer.
More surprising is that sometimes with non-contiguous input data the
calculation is actually faster(!) and not slower as I would expect. I have
no explanation for this one.
More information about the Scipy-dev mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-dev/2002-April/000795.html","timestamp":"2014-04-20T01:39:00Z","content_type":null,"content_length":"3909","record_id":"<urn:uuid:a8e271df-3c55-4fcf-8008-bb692d5e4d32>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: Complex survey design
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: Complex survey design
From sergio salis <ssalis22@yahoo.it>
To statalist@hsphsun2.harvard.edu
Subject st: Complex survey design
Date Thu, 19 Oct 2006 19:52:19 +0200 (CEST)
Dear all,
I have never used the complex survey design in stata
and I would really appreciate if you could give me an
advice. I have a firm-level dataset which includes 2
subsets. The first subset includes data that I don't
want to analyse, while the second is the one of
My aim is to draw statistics for each region in the
subset 2.
I suppose that I should proceed as follows:
Create one identifier for each reg in subset 2
gen reg1=1 if subsample==2 & region==1
replace reg1=0 if reg1!=1
(do the same for all the regions...)
Now I would calculate the mean as follows:
svy: mean productivity, sub(reg1)
svy: mean productivity, sub(reg2)
svy: mean productivity, sub(regN)
I would do so because I realised that the new commands
(stata 9) for svy do not include the option 'by'
previously available. Is the way of proceeding
described above correct to obtain the regional means?
(I guess that first dropping the observations for
which subset==2 and then creating the reg dummies is
completely wrong.)
I read on this webpage
that using 'if' to analyse subsets of the dataset
(instead of the subpop option) is wrong since for the
variance, standard error, and confidence intervals to
be calculated properly svy must use all the available
observations. Hence the only way to proceed seems to
be as I described above...please let me know whether I
am following the right procedure.
Thanks in advance for your help
--- Maarten Buis <M.Buis@fsw.vu.nl> ha scritto:
> Jenny:
> Have a look at the thread started with:
> and continued at
> HTH,
> Maarten
> -----------------------------------------
> Maarten L. Buis
> Department of Social Research Methodology
> Vrije Universiteit Amsterdam
> Boelelaan 1081
> 1081 HV Amsterdam
> The Netherlands
> visiting adress:
> Buitenveldertselaan 3 (Metropolitan), room Z434
> +31 20 5986715
> http://home.fsw.vu.nl/m.buis/
> -----------------------------------------
> --- Jenny Säve-Söderbergh wrote:
> I have data on individuals either selecting funds
> for their portfolio or not
> selecting funds(1 /0 variable). For those who did
> select funds, I have an
> ordinal variable representing the choice of a high,
> medium or low risk fund
> (risk=1,2,3). I am interested only in the choice of
> portfolio risk, but I want
> to control for the possible selection arising from
> not everyone selecting
> funds. Therefore I want to simultaneously estimate
> an ordinal logit model and a
> Heckman selection model.
> Would anyone know how I can do this? Would it be
> possible just to do a probit
> equation on the first choice, calculate the Mills
> ratio, and then include these
> in the ordinal probit?
> *
> * For searches and help try:
> *
> http://www.stata.com/support/faqs/res/findit.html
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
Do You Yahoo!?
Poco spazio e tanto spam? Yahoo! Mail ti protegge dallo spam e ti da tanto spazio gratuito per i tuoi file e i messaggi
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2006-10/msg00811.html","timestamp":"2014-04-19T10:18:53Z","content_type":null,"content_length":"9629","record_id":"<urn:uuid:96f2aafa-25a6-4d8f-bf8b-990cbaa1e4a7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebraic Chess Notation Pgn
A selection of articles related to algebraic chess notation pgn.
Original articles from our library related to the Algebraic Chess Notation Pgn. See Table of Contents for further available material (downloadable resources) on Algebraic Chess Notation Pgn.
Algebraic Chess Notation Pgn is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Algebraic Chess Notation Pgn books and related
Suggested Pdf Resources
are denoted using the Standard Algebraic Chess Notation [11]. A PGN parser was written in Python to extract information from the PGN database files.
chess games and positions are recorded using the algebraic chess notation ( FIDE, 2008), which is portable game notation (PGN) file (Edwards, 1994).
Importing Yahoo Chess games, Saving PGN files Browser uses a standard format known as PGN (Portable Game Notation). Most . algebraic is your best bet.
The player may record moves either by accepted chess notation, or by a displayed diagram on the scoresheet. (Symbolic algebraic, character algebraic , or descriptive). .
Files. In order to access Endgame Study Database III you'll need software that is able to open. Portable Game Notation (PGN) – files.
Suggested Web Resources
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site. | {"url":"http://www.realmagick.com/algebraic-chess-notation-pgn/","timestamp":"2014-04-20T11:30:58Z","content_type":null,"content_length":"29412","record_id":"<urn:uuid:791eb365-396a-4c36-9eda-2ddeba5e0f97>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Positions Available:
Assistant and/or Associate Professor Positions
The Department of Mathematics at the University of Georgia invites applications for two tenure-track positions beginning in August 2014. One position is at the Assistant or Associate Professor level
with a research emphasis in the area of Analysis and the other is at the Assistant Professor level and is open to all areas of mathematical research but with a strong preference towards Algebraic
Geometry. Duties will consist of research and teaching in mathematics, as well as the usual ancillary departmental responsibilities of a faculty member. A Ph.D., or equivalent foreign degree, in
Mathematics or a closely related field is required. Successful applicants will carry outstanding credentials in mathematical research and demonstrate a commitment to excellence in teaching and
mentoring undergraduate and graduate students. Review of applications will begin November 1, 2013, and will continue until the positions are filled.
To apply, please submit a vita with a list of publications, a teaching statement, a statement about your research, and at least four letters of recommendation, through mathjobs.org. One of the
letters should address the candidate's teaching skills. If possible the letters should be submitted online at mathjobs; however, if necessary they may also be sent by ordinary mail to:
Search Committee
Department of Mathematics
University of Georgia
Athens, GA 30602
The Mathematics Department, the Franklin College of Arts and Sciences, and the University of Georgia are committed to increasing the diversity of its faculty and students, and sustaining a work and
learning environment that is inclusive. Women, minorities, and people with disabilities are encouraged to apply. The University of Georgia is an EEO/AA institution. Georgia is well known for its
quality of life in regard to both outdoor and urban activities (www.georgia.org). UGA is a land and sea grant institution located in Athens, 70 miles northeast of Atlanta, the state capital
(www.visitathensga.com; www.uga.edu).
Postdoctoral Teaching and Research Associate positions
The Department of Mathematics at the University of Georgia seeks to fill two Postdoctoral Teaching and Research Associate positions beginning in August 2014. We will entertain applications from all
fields of mathematics but some preference will be given to applicants whose research is broadly defined to be in the areas of analysis and geometry. Each position is of two years duration, with the
possibility of renewal for a third year. Teaching duties are three courses per year. In addition, postdocs are expected to participate in the research life of the Department, including the pursuit of
an original research program. Applicants must exhibit potential for significant research and high quality teaching. Applicants are encouraged to identify a member of the current faculty with whom
they would like to work. Complete applications (including letters of recommendation) must be received by December 15, 2013 to ensure full consideration.
To apply, please submit a vita with a list of publications, a teaching statement, a statement about your research, and four letters of recommendation, through mathjobs.org. One of the letters should
address the candidate's teaching skills. If possible the letters should be submitted online at mathjobs; however, if necessary they may also be sent by ordinary mail to:
Postdoctoral Search Committee
Department of Mathematics
University of Georgia
Athens, GA 30602
The Mathematics Department, the Franklin College of Arts and Sciences, and the University of Georgia are committed to increasing the diversity of its faculty and students, and sustaining a work and
learning environment that is inclusive. Women, minorities, and people with disabilities are encouraged to apply. The University of Georgia is an EEO/AA institution.
The University of Georgia is an Equal Opportunity/Affirmative Action Employer. | {"url":"http://www.math.uga.edu/about_us/positions.html","timestamp":"2014-04-20T16:01:15Z","content_type":null,"content_length":"20633","record_id":"<urn:uuid:a5573e5d-ffc8-4a96-bf7c-a8a11094c00f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
[TowerTalk] tower raising winch
Let's do numbers.
If the winch and ginpole are at the pivot point of the tower, the
analysis of forces is pretty simple. It's simple mechanics (think back
to high school physics class). It's more complicated if the tower is
pivoting at a point not exactly below the top pulling force. But you can
get a ballpark estimate by looking at how high your pulling wire is at
the point it crosses the base, and how high up the attach is on the tower.
Higher is obviously better, for both.
It's VERY easy to underestimate how high you want, which is what
everyone is trying to say.
I had done some analysis for raising my HG-70HD a while back. I was
trying to answer the question "how big a ginpole, assuming I'm just
using a Fulton 1500 winch and single wire pull?" I use a guyed ginpole
with a winch at the base, pulley at top, single cable pull. (also a good
reminder on how good a cable you need!)
The key question for me was how big a ginpole, and what forces were
created where. (both compressive on the ginpole and on the back guys).
Also how much pulling force for the winch. Just using HG-70HD tower
weight here: 1200 lbs
Assuming the attach distance to the tower (from the base) is at the same
distance as the height of the ginpole, I got the following, for pulling
force at the winch (add a safety factor to this for your design). Do
your own math for your situation. (The key in the match is converting
the distributed tower load to point loads at the appropriate places).
6' - 3394 lbs
9' - 2263 lbs
12' - 1697 lbs
18' - 1131 lbs
24' -849 lbs
Remember, 45 degree angles in this analysis. If shallower angles get
created, the forces are different.
You can see why the relatively short commercial raising fixtures have to
be so beefy.
(Notice for short ginpoles, the pulling force is > the weight of the tower)
The greatest pull is required immediately off the ground, as everyone
knows. This can be reduced by using stuff like a shop crane to raise the
tower the initial 8' or so. But it's good to just use the 0' pull for
worst case. Remember that the same force exists if you ever lower. It's
easy to forget to use the shop crane on the lower! So for safety, assume
the 0' pull for analysis! ALSO: if you let the tower bounce/oscillate on
the pull, the dynamic forces can be higher! Don't do that! (sometimes
happens if the hand winch is hard to crank)
This assumes a even distribution of weight in the tower. The base will
be slightly heavier, but ignore that.
If the tower has antenna/mast/rotor while raising, then it's worse, and
all the extra weight is concentrated at the end. This analysis doesn't
include stuff like that.
Matching intuition, the higher the ginpole, the better. (the compressive
force on the ginpole is higher than this, roughly 1.5x in my setup). My
setup is about 12' ginpole (not falling derrick. guyed ginpole).
If you think of a triangle created by attach to the tower, highest
position of the pulley or pulling force, and the winch...the bigger it
is, obviously the better (for lowering forces in the overall system).
Note that as this triangle (assuming 45 degree angles at tower and top
pulley) becomes > 18', the pulling force is less than the weight of the
tower, which we like.
Let's assume any homebrew situation doesn't want forces in cables that
exceed 1500 lbs or so (for safety). I use 1/4" 7x19 wire rope, with a
WLL of 1500 lbs. Shackles, pulleys, etc in the system see more than this
(analysis not included)
Breaking it down into ballpark estimates (you should do the exact
analysis for the situation though), you can see that you want the
raising cable to be more than 12' above the tower base, and the attach
to the tower to be more than halfway up. (assuming 24' tower cranked down).
For some crazy setups, the height of the cable above the pivot point is
going to vary during the raise, like the proposed roof pull. But if the
wire is always 12' above the base for the entire pull, you can see how
it might be doable.
I know the commercial raising fixtures are smaller and use a double rope
setup, which changes the analysis.
TowerTalk mailing list | {"url":"http://lists.contesting.com/_towertalk/2009-06/msg00528.html?contestingsid=6es24h41be4mbueql9ovnf0733","timestamp":"2014-04-20T06:18:33Z","content_type":null,"content_length":"11237","record_id":"<urn:uuid:fdf86bcb-6f42-4606-8831-9f0ddff01d17>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistics Definitions
The market capitalization of a stock exchange is the total number of issued shares of domestic companies, including their several classes, multiplied by their respective prices at a given time. This
figure reflects the comprehensive value of the market at that time.
The market capitalization figures include:
- shares of domestic companies ;
- shares of foreign companies which are exclusively listed on an exchange, i.e. the foreign company is not quoted on any other exchange
- common and preferred shares of domestic companies
- shares without voting rights
The market capitalization figures exclude:
- collective investment funds ;
- rights, warrants, ETFs, convertible instruments ;
- options, futures ;
- foreign listed shares other than exclusively listed ones ;
- companies whose only business goal is to hold shares of other listed companies
- companies admitted to trading (companies admitted to trading are companies whose shares are traded at the exchange but not listed at the exchange) | {"url":"http://www.world-exchanges.org/statistics/statistics-definitions","timestamp":"2014-04-21T12:54:29Z","content_type":null,"content_length":"51853","record_id":"<urn:uuid:09eb073f-123d-4263-bdeb-744ce8ca412a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
Featured here are textbooks published by the MAA. Many of these may be used as your primary text (P) or as a supplement (S) for another course you are teaching. Listed below each topic are book
Abstract Algebra
Field Theory and Its Classical Problems (S)
Learning Modern Algebra: From early Attempts to Prove Fermat's Last Theorem (P)
Visual Group Theory (S)
Actuarial Science
Mathematical Interest Theory (P)
Calculus: An Active Approach with Projects (P)
The Calculus Collection: A Resource for AP* and Beyond (S)
Counterexamples in Calculus (S)
Mathematical Modeling in the Environment (S)
Real Infinite Series (S)
Field Theory and Its Classical Problems (P)
College Algebra
Functions, Data, and Models: An Applied Approach to College Algebra (P)
Combinatorics: A Guided Tour (P)
Combinatorics: A Problem Oriented Approach (S)
Mathematics of Choice: How to Count without Counting (P)
Proofs that Really Count: The Art of Combinatorial Proof (P)
Complex Analysis
Invitation to Complex Analysis (P)
Complex Variables
Complex Numbers & Geometry (S)
Cryptological Mathematics (P)
Elementary Cryptanalysis: A Mathematical Approach (P)
Differential Geometry
Differential Geometry and Its Applications (P)
Fourier Analysis
Game Theory
Game Theory and Strategy (P)
The Mathematics of Games and Gambling (P)
General Education Mathematics
Understanding our Quantitative World (P)
Complex Numbers & Geometry (S)
Field Theory and Its Classical Problems (S)
Geometry Revisited (P)
Graph Theory
Graph Theory: A Problem Oriented Approach (P)
Group Theory
History of Mathematics
An Episodic History of Mathematics: Mathematical Culture Through Problem Solving (P)
Field Theory and Its Classical Problems (S)
History of Mathematics: Highways and Byways (P)
Math through the Ages: A Gentle History for Teachers and Others (P)
A Radical Approach to Lebesgue’s Theory of Integration (S)
A Radical Approach to Real Analysis (P, S)
Honors Calculus
Calculus Deconstructed: A Second Course in First-Year Calculus (P, S)
Introduction to Mathematical Modeling
A Course in Mathematical Modeling (P)
Mathematical Modeling in the Environment (P)
Introduction to Topology
First Concepts of Topology: The Geometry of Mappings of Segments, Curves, Circles, and Disks (P)
Topology Now! (P)
Knot Theory
Liberal Arts Mathematics
Combinatorics: A Problem Oriented Approach (P)
Cryptological Mathematics (P)
Game Theory and Strategy (P)
Graph Theory: A Problem Oriented Approach (P)
Mathematical Connections: A Companion for Teachers and Others (P)
Mathematics of Choice: How to Count without Counting (P)
The Mathematics of Games and Gambling (P)
Number Theory Through Inquiry (P)
Proofs that Really Count: The Art of Combinatorial Proof (P)
Lie Groups
Lie Groups: A Problem-Oriented Introduction via Matrix Groups (P)
Linear Algebra
Lie Groups: A Problem-Oriented Introduction via Matrix Groups (S)
Mathematical Modeling in the Environment (S)
Mathematics for Business Decisions
Mathematics for Business Decisions (with Interdisciplinary Multimedia Projects) (P)
Most Undergraduate Curriculum
Calculus Gems: Brief Lives and Memorable Moments (S)
Number Theory
Cryptological Mathematics (S)
Learning Modern Algebra: From early Attempts to Prove Fermat's Last Theorem (S)
Number Theory Through Inquiry (P)
Ordinary Differential Equations
Ordinary Differential Equations: from Calculus to Dynamical Systems (P)
Partial Differential Equations
Mathematical Interest Theory (S)
Mathematical Modeling in the Environment (S)
The Mathematics of Games and Gambling (S)
Problem Solving
Combinatorics: A Problem Oriented Approach (P)
Proofs that Really Count: The Art of Combinatorial Proof (P)
Real Infinite Series (S)
Teaching Secondary Mathematics
Mathematical Connections: A Companion for Teachers and Others (P)
Mathematics for Secondary School Teachers (P)
Transition to Proof
Bridge to Abstract Mathematics (P)
Calculus Deconstructed: A Second Course in First-Year Calculus (P, S)
Distilling Ideas: An Introduction to Mathematical Thinking (P)
Lie Groups: A Problem-Oriented Introduction via Matrix Groups (P)
Number Theory Through Inquiry (P)
Real Analysis
Calculus Deconstructed: A Second Course in First-Year Calculus (P, S)
Counterexamples in Calculus (S)
Invitation to Complex Analysis (S)
Mathematical Interest Theory (S)
A Primer of Real Functions (P, S)
A Radical Approach to Lebesgues’ Theory of Integration (S)
A Radical Approach to Real Analysis (P)
Real Infinite Series (S)
2nd Real Analysis Course
A Radical Approach to Lebesgue’s Theory of Integration (P)
Special Topics | {"url":"http://www.maa.org/publications/textbooks?device=mobile","timestamp":"2014-04-16T08:31:49Z","content_type":null,"content_length":"35941","record_id":"<urn:uuid:25418b0c-19f3-4fa9-8f89-9e18f8af6d10>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00651-ip-10-147-4-33.ec2.internal.warc.gz"} |
Berwyn Heights, MD Geometry Tutor
Find a Berwyn Heights, MD Geometry Tutor
...As a former math teacher with a Bachelor's degree in Mathematics from Stanford and a Master's degree in Teaching from American, I know that there are infinite ways to approach math problems. I
also know that exploring many methods is the best way to build conceptual understanding of math. In ma...
16 Subjects: including geometry, English, calculus, GRE
...My interest in tutoring begins with a deep love for the subject matter, which means that for me there's no substitute for actually understanding it: getting the right answer isn't nearly as
important as being able to explain why it's right. As a tutor, my main job isn't to talk, but to listen: I...
18 Subjects: including geometry, calculus, writing, algebra 1
...My broad background in math, science, and engineering combined with my extensive research experience provides me with the unique tools to teach effectively to students at all levels.As an
undergraduate student in physics and electrical engineering and as a doctoral student in physics, I took adva...
16 Subjects: including geometry, calculus, physics, statistics
...I have composed and edited formal correspondence for politicians and small and large organizations. I enjoy editing writing to ensure it is smooth, vibrant, and concise. I love to help others
learn and discover, and there's no greater satisfaction than seeing a student learn and succeed.
38 Subjects: including geometry, English, reading, chemistry
...I have a bachelor and masters degree in engineering, and scored 740 (out of 800) on the GRE quantitative. I took AP calculus in high school and got a score of 5 (maximum) on the AP exam. I
took differential equations during my undergraduate program and passed with grade of A.
34 Subjects: including geometry, calculus, physics, accounting
Related Berwyn Heights, MD Tutors
Berwyn Heights, MD Accounting Tutors
Berwyn Heights, MD ACT Tutors
Berwyn Heights, MD Algebra Tutors
Berwyn Heights, MD Algebra 2 Tutors
Berwyn Heights, MD Calculus Tutors
Berwyn Heights, MD Geometry Tutors
Berwyn Heights, MD Math Tutors
Berwyn Heights, MD Prealgebra Tutors
Berwyn Heights, MD Precalculus Tutors
Berwyn Heights, MD SAT Tutors
Berwyn Heights, MD SAT Math Tutors
Berwyn Heights, MD Science Tutors
Berwyn Heights, MD Statistics Tutors
Berwyn Heights, MD Trigonometry Tutors
Nearby Cities With geometry Tutor
Berwyn, MD geometry Tutors
Brentwood, MD geometry Tutors
College Park geometry Tutors
Colmar Manor, MD geometry Tutors
Cottage City, MD geometry Tutors
Edmonston, MD geometry Tutors
Greenbelt geometry Tutors
Landover Hills, MD geometry Tutors
Mount Rainier geometry Tutors
North Brentwood, MD geometry Tutors
North College Park, MD geometry Tutors
Riverdale Park, MD geometry Tutors
Riverdale Pk, MD geometry Tutors
Riverdale, MD geometry Tutors
University Park, MD geometry Tutors | {"url":"http://www.purplemath.com/Berwyn_Heights_MD_Geometry_tutors.php","timestamp":"2014-04-20T04:11:47Z","content_type":null,"content_length":"24535","record_id":"<urn:uuid:65d0d06f-ca66-404c-a243-840d5c0c8a4e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00225-ip-10-147-4-33.ec2.internal.warc.gz"} |
Commonly useful utilites for manipulating the Core language
Constructing expressions
bindNonRec :: Id -> CoreExpr -> CoreExpr -> CoreExprSource
bindNonRec x r b produces either:
let x = r in b
case r of x { _DEFAULT_ -> b }
depending on whether we have to use a case or let binding for the expression (see needsCaseBinding). It's used by the desugarer to avoid building bindings that give Core Lint a heart attack, although
actually the simplifier deals with them perfectly well. See also mkCoreLet
:: AltCon Case alternative constructor
-> [CoreBndr] Things bound by the pattern match
-> [Type] The type arguments to the case alternative
-> CoreExpr
This guy constructs the value that the scrutinee must have given that you are in one particular branch of a case
Taking expressions apart
mergeAlts :: [(AltCon, a, b)] -> [(AltCon, a, b)] -> [(AltCon, a, b)]Source
Merge alternatives preserving order; alternatives in the first argument shadow ones in the second
trimConArgs :: AltCon -> [CoreArg] -> [CoreArg]Source
case (C a b x y) of
C b x y -> ...
We want to drop the leading type argument of the scrutinee leaving the arguments to match agains the pattern
:: [Unique] Supply of uniques used in case we have to manufacture a new AltCon
-> Type Type of scrutinee (used to prune possibilities)
-> [AltCon] imposs_cons: constructors known to be impossible due to the form of the scrutinee
-> [(AltCon, [Var], a)] Alternatives
-> ([AltCon], Bool, [(AltCon, [Var], a)])
Properties of expressions
exprType :: CoreExpr -> TypeSource
Recover the type of a well-typed Core expression. Fails when applied to the actual Type expression as it cannot really be said to have a type
exprIsHNF :: CoreExpr -> BoolSource
exprIsHNF returns true for expressions that are certainly already evaluated to head normal form. This is used to decide whether it's ok to change:
case x of _ -> e
and to decide whether it's safe to discard a seq.
So, it does not treat variables as evaluated, unless they say they are. However, it does treat partial applications and constructor applications as values, even if their arguments are non-trivial,
provided the argument type is lifted. For example, both of these are values:
(:) (f x) (map f xs)
map (...redex...)
because seq on such things completes immediately.
For unlifted argument types, we have to be careful:
C (f x :: Int#)
Suppose f x diverges; then C (f x) is not a value. However this can't happen: see CoreSyn. This invariant states that arguments of unboxed type must be ok-for-speculation (or trivial).
exprOkForSpeculation :: Expr b -> BoolSource
exprOkForSpeculation returns True of an expression that is:
• Safe to evaluate even if normal order eval might not evaluate the expression at all, or
• Safe not to evaluate even if normal order would do so
It is usually called on arguments of unlifted type, but not always In particular, Simplify.rebuildCase calls it on lifted types when a 'case' is a plain seq. See the example in Note
[exprOkForSpeculation: case expressions] below
Precisely, it returns True iff:
• The expression guarantees to terminate, * soon, * without raising an exception, * without causing a side effect (e.g. writing a mutable variable)
Note that if exprIsHNF e, then exprOkForSpecuation e. As an example of the considerations in this test, consider:
let x = case y# +# 1# of { r# -> I# r# }
in E
being translated to:
case y# +# 1# of { r# ->
let x = I# r#
in E
We can only do this if the y + 1 is ok for speculation: it has no side effects, and can't diverge or raise an exception.
exprOkForSideEffects :: Expr b -> BoolSource
exprOkForSpeculation returns True of an expression that is:
• Safe to evaluate even if normal order eval might not evaluate the expression at all, or
• Safe not to evaluate even if normal order would do so
It is usually called on arguments of unlifted type, but not always In particular, Simplify.rebuildCase calls it on lifted types when a 'case' is a plain seq. See the example in Note
[exprOkForSpeculation: case expressions] below
Precisely, it returns True iff:
• The expression guarantees to terminate, * soon, * without raising an exception, * without causing a side effect (e.g. writing a mutable variable)
Note that if exprIsHNF e, then exprOkForSpecuation e. As an example of the considerations in this test, consider:
let x = case y# +# 1# of { r# -> I# r# }
in E
being translated to:
case y# +# 1# of { r# ->
let x = I# r#
in E
We can only do this if the y + 1 is ok for speculation: it has no side effects, and can't diverge or raise an exception.
rhsIsStatic :: (Name -> Bool) -> CoreExpr -> BoolSource
This function is called only on *top-level* right-hand sides. Returns True if the RHS can be allocated statically in the output, with no thunks involved at all.
Expression and bindings size
exprSize :: CoreExpr -> IntSource
A measure of the size of the expressions, strictly greater than 0 It also forces the expression pretty drastically as a side effect Counts *leaves*, not internal nodes. Types and coercions are not
hashExpr :: CoreExpr -> IntSource
Two expressions that hash to the same Int may be equal (but may not be) Two expressions that hash to the different Ints are definitely unequal.
The emphasis is on a crude, fast hash, rather than on high precision.
But unequal here means "not identical"; two alpha-equivalent expressions may hash to the different Ints.
We must be careful that \x.x and \y.y map to the same hash code, (at least if we want the above invariant to be true).
cheapEqExpr :: Expr b -> Expr b -> BoolSource
A cheap equality test which bales out fast! If it returns True the arguments are definitely equal, otherwise, they may or may not be equal.
See also exprIsBig
eqExprX :: IdUnfoldingFun -> RnEnv2 -> CoreExpr -> CoreExpr -> BoolSource
Compares expressions for equality, modulo alpha. Does not look through newtypes or predicate types Used in rule matching, and also CSE
Eta reduction
Manipulating data constructors and types
applyTypeToArgs :: CoreExpr -> Type -> [CoreExpr] -> TypeSource
A more efficient version of applyTypeToArg when we have several arguments. The first argument is just for debugging, and gives some context | {"url":"http://www.haskell.org/ghc/dist/stable/docs/html/libraries/ghc-7.6.1.20121125/CoreUtils.html","timestamp":"2014-04-17T19:04:17Z","content_type":null,"content_length":"37329","record_id":"<urn:uuid:a24ed133-e97c-4981-a27a-8182d58c10f7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
Compound Faces
Next: Playing Cards Up: Flexagon Previous: B. The Sign Sequence   Contents
Just as we had met heterocyclic flexagons before they were discussed as such, so among the heterocyclic flexagons will find flexagons with compound faces.
A compound face is one in which the two pats are of different face degrees. Now, the face degree of a pat is the sum of the signs for the leaves in that pat, except that alternate pats will have the
sign of the face degree reversed. The reason for this latter fact is that the sum of the entire sign sequence is the angle between the perpendiculars on the hinges entering and leaving the
folded-together unit (see fig. 13.1). When one pat is unfolded from the unit, as shown, it is turned upside down, so that its face degree becomes negative in respect to that of the other pat. If the
face degrees of the two pats ^13.1
Here, then, is merely a restatement of the rule that the sum of the sign sequence must be congruent to zero,
In our discussion of heterocyclic flexagons, it was said in effect that compound faces would be allowed only so long as
Why do we trouble ourselves with compound flexagons at all, if they do not flex in a simple, respectable manner? It is because there are so many compound flexagons. There are far more compound
flexagons than noncompound, of course, but there are also far more compound flexagons than there are heterocyclic flexagons. To find the compound flexagons made up of one type of cycle only, we
return to our proof that
Therefore, why can we not use arbitrary signs in the sign sequence corresponding to a given number sequence, so long as hinges are not allowed to intersect? The reasons that we have created rules to
prevent doing this are twofold: first, zero-faces might result. This would happen whenever the leaf polygons would be made to coincide and complete cycles would be used. Second,
The main difference between compound and non-compound flexagons, as far as general appearance is concerned, is that compound flexagons do not lie about the center of the flexagon in an even ring of
pats. Alternate pats lie closer to the center than the others. The reason for this will be seen by the reader as we progress. Compound flexagons usually require more units than other flexagons. To
determine how many units will be used in a specific case, we must find the angle between the hinges entering and leaving each unit. If the face degrees of the two pats are
The number of units used must then be at least
In order to give an indication of the variety of compound flexagons, we will arbitrarily limit ourselves in this discussion to flexagons made up of coinciding regular leaves. These flexagons are
perhaps the most picturesque of the compound flexagons. To begin, we examine the possible faces for
These lie in a hexagon, with flaps between the units (see fig. 13.4). Clearly, whenever one pat has face degree
When this is the case, we merely give the flexagon two units and do not permit it to lie flat. This face then would look like an octahedron with two opposite faces removed.
The flexagon
The face degree
One very important question that remains unanswered is how we can tell the face degree of a compound flexagon without actually experimenting. This problem is easily dispelled when we remember that
Three examples are shown in fig. 13.11: one for a flexagon with re-gular coinciding leaves worked in degrees, one for a flexagon with regular coinciding leaves worked on in terms of a definite
This method at last gives us a strong tool for working effectively with heterocyclics in which
Next: Playing Cards Up: Flexagon Previous: B. The Sign Sequence   Contents Pedro 2001-08-22 | {"url":"http://delta.cs.cinvestav.mx/~mcintosh/comun/flexagon/node27.html","timestamp":"2014-04-21T09:43:55Z","content_type":null,"content_length":"29551","record_id":"<urn:uuid:665917ee-833c-4780-b306-824bb0754f1b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] [Announce] Numpy 1.3.0 rc1
David Cournapeau david@ar.media.kyoto-u.ac...
Sat Mar 28 08:26:31 CDT 2009
I am pleased to announce the release of the rc1 for numpy
1.3.0. You can find source tarballs and installers for both Mac OS X
and Windows on the sourceforge page:
The release note for the 1.3.0 release are below,
The Numpy developers
NumPy 1.3.0 Release Notes
This minor includes numerous bug fixes, official python 2.6 support, and
several new features such as generalized ufuncs.
Python 2.6 support
Python 2.6 is now supported on all previously supported platforms, including
Generalized ufuncs
There is a general need for looping over not only functions on scalars
but also
over functions on vectors (or arrays), as explained on
http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose to
realize this concept by generalizing the universal functions (ufuncs), and
provide a C implementation that adds ~500 lines to the numpy code base. In
current (specialized) ufuncs, the elementary function is limited to
element-by-element operations, whereas the generalized version supports
"sub-array" by "sub-array" operations. The Perl vector library PDL
provides a
similar functionality and its terms are re-used in the following.
Each generalized ufunc has information associated with it that states
what the
"core" dimensionality of the inputs is, as well as the corresponding
dimensionality of the outputs (the element-wise ufuncs have zero core
dimensions). The list of the core dimensions for all arguments is called the
"signature" of a ufunc. For example, the ufunc numpy.add has signature
"(),()->()" defining two scalar inputs and one scalar output.
Another example is (see the GeneralLoopingFunctions page) the function
inner1d(a,b) with a signature of "(i),(i)->()". This applies the inner
along the last axis of each input, but keeps the remaining indices
intact. For
example, where a is of shape (3,5,N) and b is of shape (5,N), this will
an output of shape (3,5). The underlying elementary function is called 3*5
times. In the signature, we specify one core dimension "(i)" for each
input and
zero core dimensions "()" for the output, since it takes two 1-d arrays and
returns a scalar. By using the same name "i", we specify that the two
corresponding dimensions should be of the same size (or one of them is
of size
1 and will be broadcasted).
The dimensions beyond the core dimensions are called "loop" dimensions.
In the
above example, this corresponds to (3,5).
The usual numpy "broadcasting" rules apply, where the signature
determines how
the dimensions of each input/output object are split into core and loop
While an input array has a smaller dimensionality than the corresponding
of core dimensions, 1's are pre-pended to its shape. The core
dimensions are
removed from all inputs and the remaining dimensions are broadcasted;
the loop dimensions. The output is given by the loop dimensions plus the
output core dimensions.
Experimental Windows 64 bits support
Numpy can now be built on windows 64 bits (amd64 only, not IA64), with
both MS
compilers and mingw-w64 compilers:
This is *highly experimental*: DO NOT USE FOR PRODUCTION USE. See
Windows 64 bits section for more information on limitations and how to
build it
by yourself.
New features
Formatting issues
Float formatting is now handled by numpy instead of the C runtime: this
locale independent formatting, more robust fromstring and related methods.
Special values (inf and nan) are also more consistent across platforms
(nan vs
IND/NaN, etc...), and more consistent with recent python formatting work (in
2.6 and later).
Nan handling in max/min
The maximum/minimum ufuncs now reliably propagate nans. If one of the
arguments is a nan, then nan is retured. This affects np.min/np.max,
and the array methods max/min. New ufuncs fmax and fmin have been added
to deal
with non-propagating nans.
Nan handling in sign
The ufunc sign now returns nan for the sign of anan.
New ufuncs
#. fmax - same as maximum for integer types and non-nan floats. Returns the
non-nan argument if one argument is nan and returns nan if both arguments
are nan.
#. fmin - same as minimum for integer types and non-nan floats. Returns the
non-nan argument if one argument is nan and returns nan if both arguments
are nan.
#. deg2rad - converts degrees to radians, same as the radians ufunc.
#. rad2deg - converts radians to degrees, same as the degrees ufunc.
#. log2 - base 2 logarithm.
#. exp2 - base 2 exponential.
#. trunc - truncate floats to nearest integer towards zero.
#. logaddexp - add numbers stored as logarithms and return the logarithm
of the result.
#. logaddexp2 - add numbers stored as base 2 logarithms and return the
base 2
logarithm of the result result.
Masked arrays
Several new features and bug fixes, including:
* structured arrays should now be fully supported by MaskedArray
(r6463, r6324, r6305, r6300, r6294...)
* Minor bug fixes (r6356, r6352, r6335, r6299, r6298)
* Improved support for __iter__ (r6326)
* made baseclass, sharedmask and hardmask accesible to the user (but
* doc update
gfortran support on windows
Gfortran can now be used as a fortran compiler for numpy on windows,
even when
the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT work).
Gfortran + Visual studio does not work on windows 64 bits (but gcc +
does). It is unclear whether it will be possible to use gfortran and visual
studio at all on x64.
Arch option for windows binary
Automatic arch detection can now be bypassed from the command line for
the superpack installed:
numpy-1.3.0-superpack-win32.exe /arch=nosse
will install a numpy which works on any x86, even if the running computer
supports SSE set.
Deprecated features
The semantics of histogram has been modified to fix long-standing issues
with outliers handling. The main changes concern
#. the definition of the bin edges, now including the rightmost edge, and
#. the handling of upper outliers, now ignored rather than tallied in the
rightmost bin.
The previous behavior is still accessible using `new=False`, but this is
deprecated, and will be removed entirely in 1.4.0.
Documentation changes
A lot of documentation has been added. Both user guide and references can be
built from sphinx.
New C API
Multiarray API
The following functions have been added to the multiarray C API:
* PyArray_GetEndianness: to get runtime endianness
Ufunc API
The following functions have been added to the ufunc API:
* PyUFunc_FromFuncAndDataAndSignature: to declare a more general ufunc
(generalized ufunc).
New defines
New public C defines are available for ARCH specific code through
* NPY_CPU_X86: x86 arch (32 bits)
* NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium)
* NPY_CPU_PPC: 32 bits ppc
* NPY_CPU_PPC64: 64 bits ppc
* NPY_CPU_SPARC: 32 bits sparc
* NPY_CPU_SPARC64: 64 bits sparc
* NPY_CPU_S390: S390
* NPY_CPU_IA64: ia64
* NPY_CPU_PARISC: PARISC
New macros for CPU endianness has been added as well (see internal changes
below for details):
* NPY_BYTE_ORDER: integer
* NPY_LITTLE_ENDIAN/NPY_BIG_ENDIAN defines
Those provide portable alternatives to glibc endian.h macros for platforms
without it.
Portable NAN, INFINITY, etc...
npy_math.h now makes available several portable macro to get NAN, INFINITY:
* NPY_NAN: equivalent to NAN, which is a GNU extension
* NPY_INFINITY: equivalent to C99 INFINITY
* NPY_PZERO, NPY_NZERO: positive and negative zero respectively
Corresponding single and extended precision macros are available as
well. All
references to NAN, or home-grown computation of NAN on the fly have been
removed for consistency.
Internal changes
numpy.core math configuration revamp
This should make the porting to new platforms easier, and more robust. In
particular, the configuration stage does not need to execute any code on the
target platform, which is a first step toward cross-compilation.
umath refactor
A lot of code cleanup for umath/ufunc code (charris).
Improvements to build warnings
Numpy can now build with -W -Wall without warnings
Separate core math library
The core math functions (sin, cos, etc... for basic C types) have been
put into
a separate library; it acts as a compatibility layer, to support most
C99 maths
functions (real only for now). The library includes platform-specific
fixes for
various maths functions, such as using those versions should be more robust
than using your platform functions directly. The API for existing
functions is
exactly the same as the C99 math functions API; the only difference is
the npy
prefix (npy_cos vs cos).
The core library will be made available to any extension in 1.4.0.
CPU arch detection
npy_cpu.h defines numpy specific CPU defines, such as NPY_CPU_X86, etc...
Those are portable across OS and toolchains, and set up when the header is
parsed, so that they can be safely used even in the case of
(the values is not set when numpy is built), or for multi-arch binaries
fat binaries on Max OS X).
npy_endian.h defines numpy specific endianness defines, modeled on the glibc
endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of
NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined. As for CPU archs, those
are set
when the header is parsed by the compiler, and as such can be used for
cross-compilation and multi-arch binaries.
5c6b2f02d0846317c6e7bffa39f6f828 release/installers/numpy-1.3.0rc1.zip
20cdddd69594420b0f8556bbc4a27a5a release/installers/numpy-1.3.0rc1.tar.gz
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2009-March/020532.html","timestamp":"2014-04-18T18:18:01Z","content_type":null,"content_length":"13772","record_id":"<urn:uuid:0b8bf517-da43-44cd-8047-beac6edee1d4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
William Timothy Gowers
Born: 20 November 1963 in Marlborough, Wiltshire, England
Timothy Gowers is known as Tim. His parents are Caroline Maurice and William Patrick Gowers, and he has two sisters Rebecca Gowers and Katharine Gowers. Patrick Gowers is a composer, famous for
composing the music for many films and also known for his works for the guitar. He has a doctorate from the University of Cambridge: Eric Satie: his studies, notebooks and critics (1965). Rebecca
Gowers is a freelance journalist and author. Her first book, The Swamp of Death, was shortlisted in 2004 for a CWA Golden Dagger Award for Non-Fiction and, more recently, her book When to Walk was
longlisted for the Orange Broadband Prize for Fiction. Katharine Gowers is a violinist with:-
... exceptional gifts of musicianship and technical command. Even more important is her innate sense of style: a priceless gift which lifts her playing to a level of exceptional maturity.
In fact the Gowers family has other exceptionally distinguished members such as Tim Gowers' great-grandfather Sir Ernest Arthur Gowers GCB GBE (1880-1966) who was a civil servant, but best known for
work on style guides for writing the English language. He edited Fowler's Modern English Usage, and wrote a book titled Plain Words which is still in print.
Tim Gowers was sent to King's College School, Cambridge where he was a boarder as his parents were living in London at the time. Given what we have already mentioned about his sister Katharine, it
will come as no surprise to learn that Gowers is extremely musical and was a chorister at the School. He had some excellent mathematics teaching at the School from Mary Briggs, who had studied under
Mary Cartwright at Girton College. He won a King's scholarship to Eton College
where [10]:-
I had another inspirational teacher, Norman Routledge, who had been a fellow of King's. He did not allow himself to be limited to the syllabus but ranged far more widely. In my last two years at
Eton, the mathematics specialists were given a weekly sheet of challenging problems which were only loosely based on the syllabus, if at all. Of course, boys being boys, we tended to do nothing
for five days and then rush at them for two days, but even so it was a very valuable experience.
After completing his school education at Eton, Gowers matriculated at Trinity College, Cambridge. It was while he was an undergraduate that Gowers decided that he wanted to become a professional
mathematician [10]:-
I became certain that I would like to be a professional mathematician some time when I was an undergraduate - though, even then, I had little idea of what this meant.
However, it was not until he took Béla Bollobás' course on the geometry of Banach spaces while studying for Part III of the Mathematical Tripos that Gowers found an area of mathematics which he felt
was the right one for him to begin research [10]:-
Looking back it is amusing to remember how little idea I had of what research in different areas would be like when I made such an important choice. But I was lucky and found myself in an area
that suited me very well, and with an excellent supervisor.
Gowers married Emily Joanna Thomas, daughter of Valerie Little and Sir Keith Thomas (historian and President of Corpus Christi College Oxford) in 1988; they had two sons and a daughter. In 1990
Gowers was awarded his doctorate for his thesis Symmetric Structures in Banach Spaces written with Béla Bollobás as his thesis advisor. His first paper Symmetric block bases in finite-dimensional
normed spaces was published in 1989 and in the same year he gave a survey lecture Symmetric sequences in finite-dimensional normed spaces to the conference 'Geometry of Banach spaces' held in Strobl,
Austria. He was appointed as a Research Fellow at Trinity College in 1989, holding this position until 1993. He was appointed as a Lecturer at University College, London, in 1991 and spent four years
there. However, in some sense he never left Cambridge [10]:-
I used to commute from Cambridge, and found the train a congenial place to work, making at least one genuine breakthrough on it.
He was promoted to Reader in 1994 and in the same year was an invited speaker at the International Congress of Mathematicians held in Zurich where he gave the address Recent results in the theory of
infinite-dimensional Banach spaces. During the four years he spent at University College he continued to work on Banach spaces and he was awarded the 1995 Junior Whitehead Prize by the London
Mathematical Society for this work. The citation for the prize reads [12]:-
Dr W T Gowers of University College, London, is awarded a Junior Whitehead Prize for his work in applying infinite combinatorics to resolve a series of longstanding questions in Banach space
theory, some originating with Banach himself. Dr Gowers' achievements include the following: a solution to the notorious Banach hyperplane problem (to find a Banach space which is not isomorphic
to any hyperplane), a counterexample to the Banach space Schröder-Bernstein theorem, a proof that if all closed infinite-dimensional subspaces of a Banach space are isomorphic then it is a
Hilbert space, and an example of a Banach space such that every bounded operator is a Fredholm operator. Over the past five years, Gowers has made the geometry of Banach spaces look completely
different. The techniques he uses are highly individual; in particular, he makes use of a Ramsey theory for linear spaces, stating a dichotomy for subspaces rather than subsequences. In this
area, where there is initially little structure, imagination and technical strength of a high calibre are needed. The work demonstrates both characteristics, and the techniques seem likely to
find further application in different fields in the future.
In 1995 Gowers was appointed as a lecturer at the University of Cambridge. In the following year he was awarded a European Mathematical Society Prize at the 2^nd European Congress of Mathematics held
in Budapest, Hungary. The citation for the Prize reads:-
William Timothy Gowers' work has made the geometry of Banach spaces look completely different. To mention some of his spectacular results: he solved the notorious Banach hyperplane problem, to
find a Banach space which is not isomorphic to any of its hyperplanes. He gave a counterexample to the Schröder-Bernstein theorem for Banach spaces. He proved a deep dichotomy principle for
Banach spaces which, if combined with a result of Komorowski and Tomczak-Jaegermann, shows that if all closed infinite-dimensional subspaces of a Banach space are isomorphic to the space, then it
is a Hilbert space. He gave (jointly with Maurey) an example of a Banach space such that every bounded operator from the space to itself is a Fredholm operator. His mathematics is both very
original and technically very strong. The techniques he uses are highly individual; in particular, he makes very clever use of infinite Ramsey theory.
At the European Congress, Gowers lectured on Banach spaces with few operators. Two years later, he received a Fields Medal at the International Congress of Mathematicians held in Berlin in 1998. The
citation begins [4]:-
William Timothy Gowers has provided important contributions to functional analysis, making extensive use of methods from combination theory. These two fields apparently have little to do with
each other, and a significant achievement of Gowers has been to combine these fruitfully.
The citation ends:-
A year ago, Gowers attracted attention in the field of combination analysis when he delivered a new proof for a theorem of the mathematician Emre Szemerédi which is shorter and more elegant than
the original line of argument. Such a feat requires extremely deep mathematical understanding.
In 1998 Gowers was named Rouse Ball Professor of Mathematics at Cambridge. In 1999 he became a fellow of the Royal Society of London. He continues to produce papers of great significance such as
Hypergraph regularity and the multidimensional Szemerédi theorem (2007); Gabor Sarkozy's review begins:-
In this breakthrough paper the author proves his version of the Hypergraph Regularity Lemma and the associated Counting Lemma. As an application, he gives the first combinatorial proof of the
multidimensional Szemerédi theorem of Furstenberg and Katznelson, and the first proof that provides an explicit bound.
Another highly significant recent paper by Gowers is Quasirandom groups (2008) but we will mention several important works by Gowers which are major contributions to mathematics in addition to his
amazing research contributions. First let us mention his book Mathematics. A very short introduction (2002). The book contains eight chapters: What does it mean to use mathematics to model the real
world?; What are numbers, and in what sense do they exist (especially "imaginary" numbers)?; What is a mathematical proof?; What do infinite decimals mean, and why is this subtle?; What does it mean
to discuss high-dimensional (e.g. 26-dimensional) space?; What's the deal with non-Euclidean geometry?; How can mathematics address questions that cannot be answered exactly, but only approximately?;
Is it true that mathematicians burn out at the age of 25? and other sociological questions about the mathematical community.
Let us end this biography with giving some details of two further innovative projects in which Gowers has been involved. The first of these is the book The Princeton Companion to Mathematics (2008)
with Gowers as editor, and June Barrow-Green and Imre Leader as associate editors. Terence Tao begins a review writing:-
The Princeton companion to mathematics is a unique text, which does not fall neatly into any of the usual categories of mathematical writing. It is not quite a mathematical encyclopedia, it is
not quite a collection of mathematical surveys, it is not quite a popular introduction to mathematics, and it is certainly not a mathematics textbook; and yet it is still an immensely rich and
valuable reference work that covers almost all aspects of modern mathematics today (although there is certainly an emphasis on pure mathematics at the research level). An encyclopedia might focus
primarily on definitions, a survey article might focus on history or on the latest research, and a popular introduction might focus on analogies, personalities or entertaining narrative; in
contrast, this book is intended to answer (or at least address) basic questions about mathematics, such as "What is arithmetic geometry?", "Why do we care about function spaces?", "How is
mathematics used today in biology?", "What is the significance of the Riemann hypothesis?", "Why are there so many number systems?'", or "Is mathematical research all about proving theorems
Tao ends his review by writing:-
In summary, this unique book is an extraordinarily broad and surprisingly accessible reference work for a remarkably large fraction of modern (and historical) mathematics. While it does not
substitute by any means for more traditional textbooks in mathematics, it complements these more detailed, precise, and technical texts nicely, and is one of the rare places where one can
actually see all of mathematics as a unified subject, with coherent themes and goals.
The final project we mention is the "Polymath project." Gowers suggested:-
... if a large group of mathematicians could connect their brains efficiently, they could perhaps solve problems very efficiently as well.
Michael Nielsen explains:-
Using principles similar to those employed in open source programming projects, [Gowers] used blogs and a wiki to organize an open mathematical collaboration attempting to find a new proof of an
important mathematical theorem known as the density Hales-Jewett theorem.
The project has been very successful although perhaps fewer mathematicians took part than Gowers had hoped.
Gowers' first marriage was dissolved in 2007 and in 2008 he married Julie Barrau; they have one son.
Gowers was Knighted in the Queen's Birthday Honours List 2012. On 16 June 2012 it was announced that the Knighthood was to Professor William Timothy Gowers, FRS, Royal Society Research Professor,
Department of Pure Mathematics and Mathematical Statistics, University of Cambridge: For services to Mathematics. He received a further honour in 2013 when, on Friday 13 September, in the Younger
Hall, St Andrews, he was given an honorary degree during a special graduation ceremony which formed part of the University of St Andrews' 600th Anniversary celebrations. He was one of seventeen
"international scholars and thinkers", "some of the best minds of our generation", who were honoured in this way.
The laureation address by Kenneth Falconer, School of Mathematics and Statistics, University of St Andrews, is at THIS LINK
Finally let us quote some words by Gowers about mathematics:-
In very general terms I suppose if you divide mathematics into that which uses 'elementary' methods and mathematics that uses a lot of sophisticated theory and well-established techniques, then
I'm drawn towards the former rather than the latter.
I just try and find certain problems that I might be able to think about profitably, and quite often I think about problems and get absolutely nowhere. ... with most problems I think about I get
absolutely nowhere, but with Banach spaces and combinatorics it was just the case with both of them, there were problems that seemed reasonable to tackle, and I found them interesting.
I like to talk about completing the square, when you have result A that generalises in one direction to result B and in another direction to result C: then you want to find the generalisation of
C that corresponds to how B generalises A.
I can work at home and in my office, and those are the two places I work most. Just anywhere where I've got a pad of paper and a biro ... But if you're sitting waiting in an airport lounge, which
for many people would be a very boring experience, for a mathematician it isn't. Get out some paper and have a think about things.
We encourage the reader to look further at Gowers' ideas about mathematics set out in [6], [7] and [8].
Article by: J J O'Connor and E F Robertson
July 2009
MacTutor History of Mathematics | {"url":"http://www-groups.dcs.st-and.ac.uk/~history/Printonly/Gowers.html","timestamp":"2014-04-19T19:36:25Z","content_type":null,"content_length":"16434","record_id":"<urn:uuid:34d14b29-b359-4cfe-8d9f-823b773abf16>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
Partial Type Assignment
Left Linear Applicative Term Rewriting Systems?
Theory, Applications and Implementation.
Steffen van Bakel1y Sjaak Smetsers1 Simon Brock2
1) Department of Informatics, Faculty of Mathematics and Informatics, University of Nijmegen, Toernooiveld 1, 6525 ED Nijmegen, The Netherlands.
2) School of Information Systems, University of East Anglia, Norwich NR4 7TJ, United Kingdom.
This paper introduces a notion of partial type assignment on left linear applicative term rewriting systems that is based on the extension defined by Mycroft of Curry's type assignment system. The
left linear applicative TRS we consider are extensions to those suggested by most functional programming languages in that they do not discriminate against the varieties of function symbols that can
be used in patterns. As such there is no distinction between function symbols (such as append and plus) and constructor symbols (such as cons and succ). Terms and rewrite rules will be written as
trees, and type assignment will consist of assigning types to function symbols, nodes and edges between nodes. The only constraints on this system are imposed by the relation between the type
assigned to a node and those assigned to its incoming and out-going edges. We will show that every typeable term has a principal type, and formulate a needed and sufficient condition typeable rewrite
rules should satisfy in order to gain preservance of types under rewriting. As an example we will show that the optimisation function performed after bracket abstraction is typeable. Finally we will
present a type check algorithm that checks if rewrite rules are correctly typed, and finds the principal pair for typeable terms.
In the recent years several paradigms have been investigated for the implementation of functional programming languages. Not only the lambda calculus [Barendregt '84], but also term rewriting systems
[Klop '92] and graph rewriting systems [Barendregt et al. '87] are topics of research. Lambda calculus (or rather combinator systems) forms the underlying model for the functional programming
language Miranda [Turner '85], term rewriting systems are used in the underlying model for the language OBJ [Futatsugi et al. '85], and graph rewriting systems is the model for the language Clean
[Brus et al. '87, N?ocker et al. '91].
There exists a well understood and well defined notion of type assignment on lambda terms, known as the Curry type assignment system [Curry & Feys '58]. This type assignment system is the basis for
many type checkers and inferers used in functional programming languages. For example the type assignment system for the language ML [Milner '78], as defined by R. Milner forms in fact an extension
of Curry's system. The type inference algorithm for the functional programming language Miranda works in roughly the same way as the one for ML. A real difference between ?Supported by the Esprit
Basic Research Action 3074 ?Semagraph?.
yPartially supported by the Netherlands Organisation for the advancement of pure research (N.W.O.). | {"url":"http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cstr--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-preferences---00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&cl=CL1.270&d=HASH0142a05c36eab0249573b16d.1&gg=text","timestamp":"2014-04-16T08:58:13Z","content_type":null,"content_length":"10986","record_id":"<urn:uuid:3a577138-aee0-42d6-ba7c-ea44edf6563e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find all positive integers for an equation
April 8th 2011, 06:40 AM #1
Apr 2011
Find all positive integers for an equation
Find all possible integers n for which the equation X^3 + Y^3 = n!+4 has solutions in integers.
I suppose the following:
X+Y = (n!+4)^(1/3)
Can you help finish this problem? Thanks in advance
These two are not at all same.
All you can do is to get the values of $Y$ from the equation: $Y = ( n!+4 - X^3)^{1/3}$ by taking different values of $X$.
As an addendum to Sambit's response, please note that $\sqrt[3]{X^3 + Y^3}$ is not X + Y any more than $\sqrt{X^2 + Y^2}$ is equal to X + Y.
April 8th 2011, 07:23 AM #2
April 8th 2011, 07:42 AM #3 | {"url":"http://mathhelpforum.com/algebra/177248-find-all-positive-integers-equation.html","timestamp":"2014-04-20T07:29:51Z","content_type":null,"content_length":"38078","record_id":"<urn:uuid:d09820fc-1665-41da-8480-c02e55bffa3f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex Numbers - Pre University level I
1. What are the Real and Imaginary parts of the number 4 -i√3.
2. What is the conjugate of
3. If z=2 - i√3, then what is the value of
4. Express the following complex number in the forma + ib.
5. Find the real and imaginary parts of
6. If
find the conjugate of .
Character is who you are when no one is looking. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=100861","timestamp":"2014-04-20T13:41:38Z","content_type":null,"content_length":"14867","record_id":"<urn:uuid:400a4475-7f40-45a3-a2c7-4e3fbf64faec>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
work done
My entries
Defined browse
Then Select
Then Select
work done
Definition/Summary Breakdown
Work is a measure of change of energy. Physics
> Classical
The net work done equals the change in Kinetic Energy (this is the Work-Energy Theorem). Mechanics
>> Newtonian
Work is the integral of the scalar product (dot-product) of two vectors: Force and Displacement. "Displacement" means the change in position of the point at which the force is Dynamics
So Work is a scalar (an ordinary number), with dimensions of mass times distance-squared over time-squared.
The SI unit is the amount of Work done by a Force of one newton acting over a displacement of one metre, and is called the joule (J), or newton-metre (N-m).
The SI unit of Power, which is the rate of Work done, is one joule per second, and is called the watt (W).
Work is the integral of the dot product of force and displacement.
[tex]W\,=\,\int_{\mathbf{a}}^{\mathbf{a}+\mathbf{d}} \mathbf{F} \cdot d\mathbf{r}[/tex]
For a constant force, work is the dot product of the force with the total displacement.
The above work equals the magnitude of the force times the magnitude of the displacement times the cosine of the angle between the force and displacement:
In a uniform gravitational field, Work done by gravity on a body moving along any path C starting at a height [itex]h_1[/itex] and ending at a height [itex]h_2[/itex] is:
[tex]W\,=\,\int_{(x,y,h_1)}^{(x',y',h_2)} (-mg\hat z)\cdot d\mathbf{r}\,=\,mg(h_1 - h_2)[/tex]
In an inverse-square gravitational field, Work done by gravity on a body moving along any path C starting at a height [itex]r_1[/itex] and ending at a height [itex]r_2[/itex]
[tex]W\,=\,\int_{r_1}^{r_2} -GmM\frac{\mathbf{r}}{r^3}\cdot d\mathbf{r}\,=\,GmM\left(\frac{1}{r_2} - \frac{1}{r_1}\right)[/tex]
which, if [itex]r_1[/itex] is very close to [itex]r_2[/itex], is approximately the same as the previous formula, with [tex]g\,=\,\frac{GM}{r_1^2}[/tex]
Extended explanation
Conservative Force:
If the work done after a total displacement of zero is zero (so the change in Kinetic Energy is zero), then the force is said to be conservative (for example, friction is not conservative, because a
body moving in a full circle under friction loses Kinetic Energy, but gravity is conservative).
For a completely non-conservative force, work equals loss of mechanical energy (the energy lost generally becomes radiation or heat).
Potential energy:
For a conservative force, work done depends only on position and not on the path taken.
Potential energy is another name for work done by a conservative force.
Potential energy depends only on position, and is the work done relative to some arbitrarily-chosen position (the position of zero potential energy, chosen so as to make calculations easy).
For example, in a uniform gravitational field of strength g, when a mass m is moved by any path through a height h, the work done is mgh.
A machine has gearing G if the force out is G times the force in: [itex]F_1\,=\,G F_0[/itex]
If no energy is lost, then the work out equals the work in.
Since work equals force times displacement (strictly, the inner product of force and displacement), that means that the displacement of the point of application of the force out is 1/G times the
displacement of the point of application of the force in: [tex]d_1\,=\,\frac{d_0}{G}[/tex]
Conversely, if a system has [itex]d_1\,\neq\,d_0[/itex], then [tex]F_1\,=\,\frac{d_0}{d_1} F_0[/tex]
For example, the gearing of a lever is the ratio of the lengths of its two "lever arms".
So a lever, or a pulley system, in which the displacement out is less than the displacement in, can lift a heavy object with a force less than its weight.
Derivation of Work-Energy Theorem:
[tex]\Delta\,W\ =\ \int d\mathbf{x} \cdot \mathbf{F}_{\rm net}\ =\ \int (\mathbf{v}\,dt) \cdot \left( \frac{d(m\mathbf{v})}{dt} \right)\ =\ \int d \left( \frac{1}{2} m\mathbf{v}^2 \right) \ =\ \Delta
Relativistic version:
[tex]\Delta\,W\ =\ \int d\mathbf{x} \cdot \mathbf{F}_{\rm net}\ =\ \int (\mathbf{v}\,dt)\cdot\left(\frac{d(m\mathbf{v}/\sqrt{1
[tex]=\ \int d\left(mc^2\sqrt{1
\mathbf{v}^2/c^2}\right)\ \ +\ \int d\left(\frac{m\mathbf{v}^2}{\sqrt{1
\mathbf{v}^2/c^2}}\right)\ =\ \int d\left(\frac{mc^2}{\sqrt{1
\mathbf{v}^2/c^2}}\right)\ =\ \Delta\,E[/tex]
Aterioluwa @ 08:05 AM Nov17-13
I'm so confused guys. the work i know of is just force * distance
Govind.A.S @ 06:54 AM Sep5-12
Why do you say "Potential energy depends only on position,"?. What about "configuration"? or elastic potential energy of a spring? Microscopically, it depends on positin of atoms, but when we say
"position of a spring" we mean entire spring. A compressed spring in gravitational field has more potential energy than a relaxed on at the same "position".
christpinus @ 07:16 AM Aug4-12
~edit(tiny-tim): put r[1] = ∞, so 1/r[1] = 0
@ 12:44 PM Aug12-11
force comes with change in inertia????????
@ 09:54 AM Mar22-11
by KARAMAWORLD
explanation suppost be with examples
MechaMZ @ 02:34 AM Sep30-09
the "conservative force" link under "see also" at the side bar is broken
~EDIT(tiny-tim): fixed: thanks, MechaMZ!
tiny-tim @ 05:44 PM Mar20-09
Added potential energy is another name for work done by a conservative force.
tiny-tim @ 02:31 AM Dec2-08
Sorry, but I don't see how "net force" can do work, since in the general case the displacements (of the points of application) are different, so there's no such thing as "net displacement" to
multiply the "net force" by. Reference to "net force" removed.
robphy @ 10:37 PM Dec1-08
Added "net work" and "Work done by the net force" to the sentence for the Work-Energy theorem in definition/summary section.
tiny-tim @ 08:23 AM Oct17-08
Added relativistic version and its derivation.
tiny-tim @ 12:18 PM Oct13-08
Changed to "Work-Energy Theorem", since that's what everyone on PF seems to call it.
olgranpappy @ 08:36 PM Oct12-08
changed "derivation" in extexpl to "derivation of work kinetic energy theorem".
olgranpappy @ 08:32 PM Oct12-08
corrected inverse square-law force in Equation to actually be inverse *square* law. corrected misstatement regarding size of "heights" at end of Equation section (don't both have to be "small" rather
one must be near the other). Corrected definition of Force and following Equations.
tiny-tim @ 03:44 AM Oct12-08
Added derivation, based on suggestion by olgranpappy
tiny-tim @ 05:34 AM Jul24-08
Ping! Had the bright idea of changing the title from "work (mechanics)" to "work done".
"work (mechanics)" was getting no autolinking at all, but "work done" will get plenty, with virtually no false positives. | {"url":"http://www.physicsforums.com/library.php?do=view_item&itemid=75","timestamp":"2014-04-21T04:50:39Z","content_type":null,"content_length":"31560","record_id":"<urn:uuid:abaa9572-62c7-4e09-af0f-1a3088e6fc7e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00015-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Dynamical System
Next: Representation Up: The Proposed Nonlinear Recursive Previous: The Proposed Nonlinear Recursive
We briefly discuss the dynamics of the Structure from Motion problem. As shown earlier, it is often the case (i.e. in cinematographic post-production, robotics, etc.) that cameras do not teleport
around the scene and objects do not move about too suddenly. These bodies are governed by physical dynamics and it thus makes sense to constrain the possible configurations of the camera to have some
smooth temporal changes over a causal time sequence. For instance, we consider the typical dynamic system: ^4
Here, the observations are the 2D features (in u,v coordinates) which are concatenated into an observation vector R[t]. The matrix R[t] probabilistically encodes the accuracy of the measured 2D
feature coordinates and can represent features that are missing in certain frames when large variances are imputed into R[t] appropriately.
In addition, the dynamics of the internal state are constrained. The 3D structure, 3D motion and camera geometry do not vary wildly but are linearly dependent (via Q. For generality, we assume that
the motion of the camera through the scene is not known a priori and thus,
This dynamic system encodes the causal and dynamic nature of the SfM problem and allows an elegant integration of multiple frames from image sequences. It is also a probabilistic framework for
representing uncertainty. These dynamical systems have been extensively studied are routinely solved via reliable Kalman Filtering (KF) techniques. In our nonlinear case, an Extended Kalman Filter
(EKF) is utilized which linearizes
The representation of the measurement vector
Next: Representation Up: The Proposed Nonlinear Recursive Previous: The Proposed Nonlinear Recursive | {"url":"http://www.cs.columbia.edu/~jebara/htmlpapers/SFM/node20.html","timestamp":"2014-04-20T08:18:00Z","content_type":null,"content_length":"8373","record_id":"<urn:uuid:646c0385-dfa6-41f5-be61-10df27c72563>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mengetahui dan memahami maksud dari momen gaya, momen kopel, dan cara memindah gaya ... 1. In statics, a couple is defined as _ separated by a perpendicular distance. ... – PowerPoint PPT
Number of Views:2042
Avg rating:3.0/5.0
Slides: 29
Added by: Anonymous
more less
Transcript and Presenter's Notes | {"url":"http://www.powershow.com/view/11f7d1-Zjk3N/KULIAH_III_MEKANIKA_TEKNIK_TI_MOMEN_GAYA_powerpoint_ppt_presentation","timestamp":"2014-04-16T19:54:52Z","content_type":null,"content_length":"98992","record_id":"<urn:uuid:0a5f9c71-f05d-4430-bdff-f636907d5bce>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
A stochastic version of the Price equation reveals the interplay of deterministic and stochastic processes in evolution
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
BMC Evol Biol. 2008; 8: 262.
A stochastic version of the Price equation reveals the interplay of deterministic and stochastic processes in evolution
Evolution involves both deterministic and random processes, both of which are known to contribute to directional evolutionary change. A number of studies have shown that when fitness is treated as a
random variable, meaning that each individual has a distribution of possible fitness values, then both the mean and variance of individual fitness distributions contribute to directional evolution.
Unfortunately the most general mathematical description of evolution that we have, the Price equation, is derived under the assumption that both fitness and offspring phenotype are fixed values that
are known exactly. The Price equation is thus poorly equipped to study an important class of evolutionary processes.
I present a general equation for directional evolutionary change that incorporates both deterministic and stochastic processes and applies to any evolving system. This is essentially a stochastic
version of the Price equation, but it is derived independently and contains terms with no analog in Price's formulation. This equation shows that the effects of selection are actually amplified by
random variation in fitness. It also generalizes the known tendency of populations to be pulled towards phenotypes with minimum variance in fitness, and shows that this is matched by a tendency to be
pulled towards phenotypes with maximum positive asymmetry in fitness. This equation also contains a term, having no analog in the Price equation, that captures cases in which the fitness of parents
has a direct effect on the phenotype of their offspring.
Directional evolution is influenced by the entire distribution of individual fitness, not just the mean and variance. Though all moments of individuals' fitness distributions contribute to
evolutionary change, the ways that they do so follow some general rules. These rules are invisible to the Price equation because it describes evolution retrospectively. An equally general prospective
evolution equation compliments the Price equation and shows that the influence of stochastic processes on directional evolution is more diverse than has generally been recognized.
Evolution involves both deterministic processes, such as selection, and random processes such as drift. When deterministic and stochastic processes are combined in the same model it is common to use
the "diffusion approximation" – essentially assuming that populations are large (so that evolution can be approximated as a continuous process), that population size is relatively stable, and that
selection is weak [1-4]. The diffusion approximation is nearly always used when analytical (rather than numerical) solutions are sought.
The diffusion approximation has yielded many important results concerning the interaction of deterministic and stochastic evolutionary processes. In particular, a number of different models have
shown that the direction of evolution is influenced not only by the relative mean fitnesses of different strategies (or alleles) but also by the variances in possible fitness values associated with
each strategy [5-10]. If the variance in each individual's fitness distribution influences directional evolution, then it seems likely that other aspects of the fitness distribution (i.e. other
moments) should do so as well. However, most of the models that have been studied have used methods (such as the Itô calculus [11]) which make it difficult to see the effects of higher moments of the
fitness distribution of an individual.
The most general (in the sense of making the fewest simplifying assumptions) mathematical description of evolution that we currently have, the Price equation [12], does not easily accommodate
stochastic evolutionary processes. The Price equation is an exact description of the relation that must hold between the phenotype of parents, the fitness of parents, the difference between parents
and offspring, and evolutionary change [13]. Unfortunately, all of these parameters must be specified exactly. The Price equation is thus exact only in hindsight, after reproduction has taken place
and we know the precise value of each individual's fitness and the mean phenotype of its offspring.
Despite this apparent limitation, the Price equation has been used extensively to study social evolution [14-16], the foundations of quantitative genetics [13,17], and the analysis of multilevel
selection [13,18-20] as well as in other fields such as ecology [21,22]. Since all of these fields also involve stochastic processes, it would be of value to have a theory with the generality of the
Price equation that does not require that all parameters are known exactly to begin with.
Below, I present a general equation for directional evolutionary change that treats fitness and offspring phenotype as random variables, rather than numbers, but imposes no restrictions on the
distributions associated with these random variables. This is essentially a stochastic version of the Price equation, though it is derived independently and contains a term not found in Price's
formulation. This theory accommodates all processes that influence directional evolution, both deterministic and stochastic. Using this result, I show that deterministic and stochastic processes
interact in complex ways. One result is that stochastic variation in fitness amplifies the effects of selection in small or fluctuating populations. Furthermore, the role of fitness variation within
an individual is more complex than has generally been recognized. The well known tendency for populations to be pulled towards phenotypes with minimum variance in fitness turns out to be one instance
of a more general rule that, all else held equal, populations are pulled towards phenotypes with minimum symmetrical variation in fitness, as measured by all of the even moments of an individual's
fitness distribution. This process can actually cause the variance in fitness to increase (so long as higher even moments decrease). There is also a tendency for populations to be pulled towards
phenotypes with maximum positive asymmetry in fitness (as measured by the odd moments). Finally, this equation contains a term, capturing the direct effects of reproduction on offspring phenotype,
that has no analog in the Price equation.
In the following analysis, the fitness of an individual (w) measures the number of descendants that the individual has at some future time, potentially including the individual itself [13]. We
consider a population of individuals that have not yet reproduced, and therefore treat fitness and offspring phenotype not as fixed values, but as random variables, each having a distribution of
possible fitness values. The mean of an individual's fitness distribution, $w^$, is the number of descendants that the individual is expected to leave.
Because each individual has a distribution of possible fitness values, the mean fitness in the population ($w¯$), which determines population growth rate, is also a random variable. If $w¯$ = 0 then
the population goes extinct, and the change in mean phenotype is undefined. We thus define $Ω=(ww¯|w¯≠0)$ as the ratio of individual fitness to mean population fitness, conditional on the population
not going extinct. Throughout this discussion, $x¯$ refers to the average value of x in a population, and $x^$ refers to the expected value of random variable x.
The general equation
Using the notation given in Table Table1,1, the expected change in mean phenotype over some interval (denoted $Δϕ¯^$) is given by (see Methods for derivation):
This is essentially a stochastic version of Price's theorem. Note, though, that it contains a term that has no analog in Price's formulation. This new term, $covi(ϕo,Ω)¯$, describes the population
average of the covariance, within an individual, between the average phenotype of that individual's offspring (ϕ^o) and the individual's contribution to population growth (Ω). This term does not
appear in Price's theorem because that equation treats offspring phenotype and fitness as parameters, rather than random variables (i.e. each individual has a specific value of ϕ^o and of fitness,
rather than a distribution of possible values for each of these). This is why Price's theorem is exact only after reproduction has taken place.
We can write Equation 1 in more familiar form by defining δ as the difference between the mean phenotype of an individual's offspring and that individual's own phenotype, then substituting ϕ + δ for
ϕ^o, to yield:
Note that ϕ, the current phenotype of an individual, is not treated as a random variable. This is because, at whatever time we look at the system, ϕ already has a value for each individual. By
contrast, w and δ are random variables because they concern future events and thus could have a range of possible values. Terms in Equation 2 containing δ concern processes, such as mutation and
recombination, that cause offspring to, on average, differ from their parents. If we set δ = 0, we are left with only cov(ϕ,$Ω^$), which is the change due only to differential survival and
reproduction. This term corresponds to the "selection differential" term in the Price equation [13]. However, we will see that because both individual fitness (w) and mean population fitness ($w¯$)
are now random variables, the term cov($ϕ^,Ω^$) now contains more than just selection.
Because it is the expected value of the ratio of two correlated random variables, $Ω^k$ (the expected value of the ratio of individual k's fitness to mean population fitness conditional on $w¯$ ≠ 0)
can behave in unexpected ways. In order to tease these apart, we can expand $Ω^k$ to yield (see Methods):
Here, H($w¯$) is the harmonic mean of the distribution of possible values of $w¯$, and μ[i+1 ](w[k ]$w¯i$) is the (i + 1)^st mixed central moment of w[k ]and $w¯i$. (The first of these terms, μ[2](w[
k ]$w¯$), is the covariance between individual fitness and mean population fitness). The value of the μ[i+1](w[k ]$w¯i$) terms is determined by the source of random variation in fitness. We will
consider two special cases: pure demographic stochasticity and random environmental change. For this discussion, we will set δ = 0, which is equivalent to looking only at the "selection
differential", S, which ignores mutation, recombination, and other processes that could cause offspring to not resemble their parents.
Demographic stochasticity in a constant environment
Even in an environment that seems constant to an outside observer, there will be variation in individual fitness values, even among individuals with the same phenotype. This variation corresponds to
what is generally called demographic stochasticity, and it will be present in all populations [23]. Pure demographic stochasticity is roughly equivalent to the "within-generation" component of
variation discussed by Gillespie [7].
If the fitness values of different individuals are independent (meaning that the number of descendants of individual j is independent of whether individual i leaves more or fewer descendants than
expected), and the environment does not change from generation to generation, then we can find the selection differential (S) by expanding cov(ϕ, $Ω^$). Considering only the first three terms in the
expansion, this yields:
Here, N designates actual, rather than effective, population size. The three terms on the right-hand side of Equation 4 each correspond to different directional evolutionary forces acting on the
population. These are: 1) selection (here a function of N because of the H($w¯$) term), 2) a force pulling the population towards phenotypes with minimum variance in fitness, and 3) a force pulling
the population towards phenotypes with maximum positive skewness in fitness. The terms corresponding to higher moments of the distribution of fitness follow the same pattern; those containing even
central moments are negative and those containing odd central moments are positive.
Random environmental change
In addition to pure demographic stochasticity, the environment may change over time in ways that differentially affect different phenotypes (the "between-generation" component of Gillespie [7]). In
this case, the expected fitness of individuals with a particular phenotype will itself vary over time, so the total fitness distribution of an individual will be a function of both the distribution
of expected fitness values, given its phenotype, and the distribution of variation around this expected value due to demographic stochasticity. In such a case, we can write the fitness of individual
i as w[i ]= $w˜i$ + s[i], where $w˜i$ is the expected fitness in the current environment of individuals with the same phenotype as i, and s[i ]is the deviation of individual i from this expectation
due to pure demographic stochasticity.
If we denote the frequency of phenotype ϕ in the population as f[ϕ], then in a very large population, the expected change in mean phenotype is approximated (to the first three terms) by:
Equations 4 and 5 have the same form. The difference is that in Equation 4 we are assuming that the fitness of each individual is independent of the fitness of every other individual, whereas in
Equation 5 we assume that the fitness values of all individuals with the same phenotype are correlated, since they are all influenced in the same way by the environment. For intermediate sized
populations experiencing a varying environment, both var(s) and var($w˜$) will enter the calculations (see Methods).
Equations 1 and 2 apply to any evolving system. These equations are based only on the assumption of a population of things that leave descendants and have measurable phenotypes, and they encompasses
all factors, both deterministic and stochastic, that contribute to directional evolutionary change in a closed population. If we specify the exact population size in the next generation (fixing the
value of $w¯$), and fix the value of δ for each individual, then Equation 2 becomes equivalent to the Price equation with fitness simply replaced by expected fitness, $w^$[24]. For simplicity, I will
often refer to ancestors as "parents" and descendants as "offspring", with the understanding that the same equation applies regardless of the time interval over which we look. Furthermore, the
ancestors and descendants need not be the same type of biological unit. For sexually reproducing organisms, we can treat a mated pair as the ancestor and an individual offspring as a descendant, or
an individual as the ancestor and a successful gamete as the descendant. Descendants may also include the ancestors at a later time, allowing for overlapping generations.
The phenotype, ϕ, may be any measurable trait. This fact allows us to derive much of classical evolutionary theory from Equation 2 simply by choosing the appropriate phenotype. For example, we can
derive standard population genetic models for change in frequency of an allele, A, by defining the phenotype (ϕ) of an individual as the frequency of A within that individual's genotype (ϕ is
therefore 0, 0.5, or 1). Defining ϕ in this way, $ϕ¯$ is equal to the frequency of the A allele in the population [13,25], so Equation 2 gives the change in allele frequency.
Many (though not all) of the evolutionary processes that I discuss in the following sections appear because $w¯$ is a random variable. This is a biologically interesting case because demographic
stochasticity – stochastic fluctuations in natural populations due to variation in individual reproduction – is ubiquitous in nature [23]. In most of the following discussion, I will focus on the
special case in which the fitness values of different individuals within the same generation are independent. It is important to note that this does not preclude density dependent population
regulation. For example, if all individuals in a population happen to produce more offspring than needed for replacement, then the population size will increase. In the next generation, though, the
resulting increased competition may reduce the fitness of all descendants, preventing (or reducing the probability of) further population increase. If this reduced fitness of descendants is manifest
as reduced viability, then this is equivalent to the different "culling" processes discussed by Gillespie [26]. In the case of "exact culling" [9], the mean phenotype is unchanged by the culling
process, so the changes in mean phenotype discussed here will occur even though the population does not increase over multiple generations.
It is of course possible for density dependence to involve a direct influence of one individual's reproduction on that of another. One example is the case of cavity nesting birds where the number of
suitable cavities is fixed. In this case, the act of one pair locating a cavity directly reduces the probability that another pair will do so. In such cases, there will be a negative covariance
between the fitness values of different individuals (or pairs). This negative covariance will appear in the values of the μ[i+1](w $w¯i$) terms in Equation 3. Specifically, making no assumptions
about independence of fitness values, $μ2(wiw¯)=1Nvar(wi)+N−1Ncov(wi,wj≠i)¯$. The term $cov(wi,wj≠i)¯$ is the average covariance between individual i's fitness and that of other members of the
The first term on the righthand side of Equation 2, cov(ϕ, $Ω^$), captures the contribution of differential survival and reproduction to directional evolutionary change. Though this is traditionally
called the "selection differential" [13], the expansion of this term (Equations 4 and 5) shows that stochastic processes can contribute substantially to directional evolution, both in small
populations and in populations subjected to random environmental variation. In this discussion, I will define "selection" as differential expected production of descendants that is causally
determined by differences in phenotype. Under this definition, some of the processes that contribute to cov(ϕ, $Ω^$) are not kinds of selection. I will nonetheless continue to use "selection
differential", designated S, because it is the standard term.
Equations 4 and 5 show the expected selection differential for cases corresponding to different sources of fitness variation. The difference between these equations makes sense when we note that, in
Equation 5, all individuals with the same phenotypic value have the same fitness in any particular generation. What matters is thus the frequencies of the different phenotypic values (f[ϕ]). This is
also true in Equation 4. Here, however, each individual's fitness is independent of that of all other individuals, so each individual is effectively its own "type", with frequency $1N$. It thus makes
sense that the powers of $1N$ in Equation 4 are replaced, in Equation 5 by powers of f[ϕ].
The expected selection differential is amplified by random variation in fitness
The first term on the righthand side of Equation 4, cov(ϕ, $w^$)/H($w¯$), shows that the magnitude of the expected selection differential increases with increasing variation in $w¯$. This follows
from the fact that the term cov(ϕ, $w^$) which captures the effects of selection, is divided by the harmonic mean of $w¯$, H($w¯$). Since the harmonic mean is disproportionately influenced by small
values, H($w¯$) will tend to decrease as the variation in $w¯$ increases, as is expected in small populations or in a variable environment. Equation 17 in the Methods section shows how 1/H($w¯$)
depends on variation in $w¯$.
To understand the biology behind this phenomenon, note that the selection differential is inversely proportional to mean population fitness ($w¯$); it is thus disproportionately influenced by small
values of $w¯$ (Fig. (Fig.1A).1A). For a population of size N, $w¯$ is essentially the mean of a sample of N points drawn from the overall fitness distribution. In a very large population (i.e. a
very large sample), the value of $w¯$ will nearly always be very close to the expected value, $w¯^$. By contrast, in a small population, there is a significant chance that $w¯$ will be much larger or
much smaller than $w¯^$. Since the small values have a disproportionate effect on the selection differential, the expected selection differential increases as population size decreases. The same
thing occurs even in large populations if $w¯$ is uncertain due to random environmental fluctuations. In order to test this conclusion, I performed monte-carlo simulations, following a population
over one generation, using the fitness distributions in Fig. Fig.1C.1C. The mean change in $ϕ¯$, averaged over 100,000 runs, is shown in Fig. Fig.1D.1D. Note that in this case, the expected change
due to selection in a very small population can be substantially larger than would be expected from classical theory. In this example, the environment is held constant, so the amplification of the
selection differential decays with increasing population size. If the variation in $w¯$ is a consequence of environmental variation that differentially affects different phenotypes, then we will see
the same amplification in large populations as well. The "Worked example" section in Methods explains how to calculate 1/H($w¯$) from the individual fitness distributions.
Amplification of expected selection differentials in small populations. Fitness distributions for two phenotypes; the size of the dot indicates the probability of that fitness value. In (A) and (B),
Individuals with phenotypic value 0 leave either 0 or ...
Though this phenomenon is not generally recognized in the literature, a special case can actually be derived from equations in Gillespie's 1977 paper [27] and in Proulx [9]. (In Equation 7 in [27],
set var(X) = var(Y) and $X¯≠Y¯$, using the notation of that paper. I am indebted to Steve Proulx for pointing this out). In this special case, the expected change increases with the variance in
individual fitness values. In general, Equations 3 and 17 show that all of the moments of the individual fitness distributions contribute to 1/H($w¯$), and Figure Figure55 shows that considering
only the variance can easily underestimate the degree to which the effects of selection are amplified.
Approximating the curve of $Δϕ¯^$using moments. A fitness distribution (A) and the corresponding expected change in mean phenotype for different population sizes (B). The solid black curve in (B) is
the curve resulting from 100,000 ...
This phenomenon at first seems at odds with the theoretical [28] and experimental [29] studies that have suggested that the average long term response to selection increases with increasing N,
resulting from the increased availability of genetic variation in larger populations [29]. The reason that this effect has been missed is that theoretical studies have treated $w¯$ as a fixed
parameter (or, equivalently, they hold population size fixed, as in Robertson's theory of selection limits [28]). Holding $w¯$ fixed means that $Ω^=w^w¯$, which is independent of population size
(compare with Equation 14, in which $w¯$ is not fixed). Experimental studies have effectively done the same thing, by choosing the same number of individuals in each round of selection and by using
truncation selection [29], minimizing the variation in individual fitness. Recent theoretical and empirical studies concerning the adaptive potential of small populations [30,31] have considered the
effects of population size only on genetic variation, assuming that the selection differential is independent of population size. The loss of heritable variation should indeed cause the long term
response to selection to be reduced in small populations. Over the short term, though, the amplification described here should facilitate a rapid adaptive response over the first few generations.
Such an amplified selection response could contribute to population differentiation in peripheral isolates.
The even-moment effect: Populations are pulled towards phenotypes having minimum symmetrical variation in fitness
Symmetrical spread about the mean of a distribution is measured by the even central moments. In the summation on the right-hand side of Equation 3, the terms containing even moments are all negative
(since, if i + 1 is even, i is odd so (-1)^i = -1). The covariance between phenotype and these terms thus corresponds to the population being pulled towards phenotypes with minimal symmetrical
variation. This is apparent in Equations 4 and 5, in which the term containing the variance (the second moment) is negative.
This is illustrated in Figs. Figs.2A2A and and2B.2B. The even-moment effect results from the fact that the fitness of individuals (or phenotypes) with the most variable fitness covary most strongly
with $w¯$[32]. When those individuals with high variation in fitness leave many descendants, the value of $w¯$ also tends to be high, reducing the magnitude of change. Conversely, when those with
high variation leave few descendants (and therefore decrease in frequency), $w¯$ tends to be low, increasing the magnitude of the decline (Fig. (Fig.2A).2A). In a constant environment, this effect
drops off with increasing population size, since the even moments of $w¯$ are all divided by increasing powers of 1/N. As with the amplification of selection differentials discussed above, though,
the even-moment effect remains strong in large populations when variation in individual fitness is due largely to environmental variation.
The even-moment effect. (A): Case in which individuals with phenotype 0 have a higher expected fitness than do those with phenotype 1 ($w^0$ = 1.04, $w^1$ = 1) as well as a higher variance in fitness
(var(w[0]) = 0.9984, var(w[1]) = 0). (B): The mean value of ...
The tendency of populations to be pulled towards phenotypes with low variance in fitness has been noted by many authors [5,6,8,9,32-34]. Most of these studies used some form of the diffusion
approximation, and thus assumed that higher moments could be ignored (though Proulx [9] presents an equation that can be expanded to yield the effects of higher moments, and notes that these need to
be considered when the variation in fitness for each individual is not small). Equation 3 shows that, in fact, all even moments contribute to this phenomenon. To illustrate this, Figure Figure2C2C
shows a case in which the expected direction of evolution is towards the phenotype with the higher variance in fitness. The reason is that the fourth and higher even moments of the fitness
distribution associated with the phenotype with higher variance are much smaller than those associated with the other phenotype. If the variation in fitness is due to pure demographic stochasticity
alone, then in this example variance in fitness is expected to increase only in very small populations, since the fourth moment term will be divided by N^3 and so will drop off quickly as N
increases. On the other hand, if variation in fitness is primarily a result of environmental variation, then the fourth moment term will be multiplied by $fϕ3$ rather than $1N3$, so the higher
moments may have an influence even in large populations, especially when the different phenotypes have similar frequencies.
The even-moment effect has sometimes been associated with the idea that selection acts on the geometric mean of individuals' fitness distributions. [35,36]. While geometric mean fitness is
appropriate when fitness varies in a deterministic and predictable manner over time, it is not relevant in the case discussed here, where fitness is a random variable within a generation [24,27,32].
To illustrate this, Fig. Fig.2D2D shows a case in which the strategy with the lowest geometric mean fitness is the one that is expected to increase in frequency. Instead, the direction of evolution
is determined by $Ω^$. Specifically, when $Ω^i$ > 1, the descendants of individual i are expected to comprise an increasing proportion of the population. Though $Ω^$ resembles traditional "relative
fitness", the fact that $w¯$ is a random variable that is correlated with w means that $Ω^$ does not scale like relative fitness (which preserves the relative order of the fitness values of different
individuals [37]). This is why it is possible to have $Ω^0<Ω^1$ even if $w^0$ > $w^1$, in which case the trait that is expected to increase in frequency is also the one that causes individuals
possessing it to have the lowest expected reproductive output (Fig. (Fig.2A).2A). The "expected relative fitness" discussed by Lande [32] is a special case of $Ω^$ (See Methods). Note that the term
"relative fitness" is used in different ways in the literature. In some cases, relative fitness refers to the fitness of an individual (or a phenotype) divided by mean population fitness (i.e. $wiw¯$
) [10,24]. In other cases, relative fitness refers to the fitness of one individual or phenotype divided by the fitness of another individual or phenotype (this is the interpretation that suggests
the importance of geometric mean fitness [38,39]). The exact reason that these two interpretations yield different results will be discussed elsewhere. For now, Fig. Fig.2D2D is sufficient to show
that geometric mean fitness does not necessarily identify which strategy will increase. Though the even moment effect is sometimes referred to as selection acting on variance, I argue below that the
even-moment effect should not be treated as a kind of selection.
Previous discussions of the even-moment effect have treated it as a function only of population size. However, Equation 4 shows that this effect scales as 1/($w¯^2$); it is thus amplified if $w¯^$ <
1, meaning that the population is expected to decline in size. The pull towards phenotypes with minimum variance in fitness can thus be important even in larger populations if they are rapidly
declining. The degree to which declining population size amplifies the even-moment effect will depend on how the variance (and higher even moments) scales with the mean. In the extreme case in which
the variance in fitness is independent of the mean, declining populations will be strongly influenced by the even-moment effect. As an example, consider a population of 10,000 individuals that is
declining such that the expected number of individuals in the next time interval is 1000. In this case, The strength of the force pulling the population towards phenotypes with minimum variance in
fitness is the same as it would be in a stable population with the same variances in fitness and size N = 100. (since, if N = 10,000 and $w¯^$ = 0.1, N $w¯^2$ = 100.) If the fitness distributions are
approximately Poisson, then the variance will scale linearly with the mean and so dividing by $w¯^2$ will still amplify the even-moment effect, though to a lesser degree.
The fact that declining populations may be particularly prone to the even-moment effect could have consequences for the probability of extinction. Stochastic extinction – resulting from chance
fluctuations in population growth rate – is a substantial threat to very small populations [40,41]. If a declining population shifts towards phenotypes that have minimum variance in fitness, then
this could reduce the chance of stochastic extinction when the population becomes very small. Further study will be necessary to determine if this phenomenon can significantly influence extinction
The odd-moment effect: Populations are pulled towards phenotypes with maximum positive asymmetry of fitness
This follows from the fact that the odd moment terms on the right-hand side of Equation 3, which measure asymmetry of the fitness distribution, are all positive. Real fitness distributions will
almost always be asymmetrical. This follows from the fact that individual fitness can not be less than zero but could possibly be very large, and that $w^$ will usually be close to 1.
In the case of pure demographic stochasticity, the odd-moment effect will be noticeable only in very small populations, since the third moment term in Equation 3 is divided by N^2, the fifth moment
term by N^4, and so on. As with the even-moment effect discussed above, the odd-moment effect may be significant even in large populations when fitness variation is due to environmental fluctuations.
For example, a phenotype that normally has moderate fitness but does much better than others during rare good years may show a long term increase that is greater than would be expected from the mean
and variance of its fitness distribution.
Note that the asymmetry that we are considering here is in the distribution of possible fitness values of an individual (e.g. the distribution associated with ϕ = 1 in Figure Figure3A).3A). This is
quite different from the "asymmetric fitness function" often discussed in the evolutionary genetics literature [25,42], which describes a case in which the plot of fitness as a function of phenotype
is asymmetrical (i.e. fitness drops off more quickly in one direction than in the other when we move away from an optimum phenotype). It is also different from asymmetry in the distribution of
breeding values, which has long been known to influence evolution [43], as well as the asymmetry in the expected change under selection that appears in some diffusion models [44]. Rather, the
odd-moment effect is a directional evolutionary force that appears when different individuals have different degrees of asymmetry in their fitness distributions.
The odd-moment effect. (A): Fitness distributions for two phenotypes that have the same means ($w¯$ = 1) and variances (var(w) = 0.9) but different third moments. (B): Simulation results for the
change in mean phenotype given the fitness distributions ...
Associations between offspring number and offspring phenotype
Equation 2 contains two terms representing covariance between the degree to which offspring differ from their parents (δ) and contribution to population growth ($Ω^$). The first of these, cov($δ^,Ω^$
), captures the degree to which the individuals that have the highest expected contribution to population growth are also those that produce offspring that deviate most from their parents. In this
term, the covariance is over the entire population, and may result either from a direct causal influence of fitness on offspring phenotype, or any fortuitous association in which the phenotype that
confers the highest value of $Ω^$ happens to also be associated with individuals who's offspring differ most (or least) from their parents.
By contrast, the term cov[i](δ, Ω) measures the covariance within an individual between w and δ, meaning that if that individual produces more offspring than expected, then its offspring's phenotypes
are expected to deviate more (or less) from its own (In Equations 1 and 2, this property of individuals is averaged over the entire population). This term will be nonzero when there is a direct
connection between how many offspring an individual produces and the phenotypes of those offspring. One example of this would be a case in which, for any given individual, producing more offspring
directly causes those offspring to be smaller. Such "offspring-size/clutch-size tradeoffs" [45] are expected in cases in which parents provision their offspring with limited resources, so producing
more offspring necessitates giving fewer resources to each one. This term would also be nonzero in cases in which the offspring of a particular individual interact with one another in such a way that
their development is influenced by how many siblings they have (this will include in-utero interactions).
Relation between selection and directional stochastic evolution
Definition of selection
As mentioned above, I am defining selection as differential expected production of descendants that is causally influenced by variation in phenotype. Under this definition, selection is captured by
the term cov(ϕ, $w^$), assuming that the association is due to causal impacts of ϕ on $w^$. Some researchers (and reviewers) define selection differently, as any process involving differential
survival or reproduction that leads to a predictable change in allele frequency [33]. This definition runs into problems with processes like balancing selection, that do not lead to any directional
Furthermore, defining selection as everything that leads to directional change effectively precludes it from being a specific evolutionary mechanism, since it is defined as the set of all mechanisms
that produce a particular result. Defining selection in this way makes it effectively synonymous with directional evolution.
By contrast, if we define selection as differential production of descendants (or differential survival and reproduction) that is causally determined by variation in phenotype, then we have
identified a particular class of mechanisms that will produce predictably different consequences under different conditions. Balancing and stabilizing selection are easily accommodated by this
These definitional issues have no bearing on the evolutionary importance of the processes discussed above. Readers who prefer to define selection as anything that produces directional change may read
the following section as a discussion of different components of selection.
Directional stochastic effects
The even- and odd-moment effects discussed above result from the same random variation in individual reproduction that causes drift. To understand the relationship between directional stochastic
evolution, drift, and selection, it is important to distinguish between two different factors that can produce directional change: 1) the relative probabilities of the mean phenotype increasing or
decreasing, and 2) the expected magnitude of change in each direction (Fig. (Fig.4).4). In the case of pure drift, these factors exactly cancel one another out – a higher probability of moving in
one direction is exactly balanced by a larger step size in the other direction – leading to a net expected change of zero (in some special cases, such as two alleles at equal frequency, both the
probability and step size are the same in both directions). Drift is thus non-directional (E(Δ$ϕ¯$) = 0), but has a magnitude measured by the variance in Δ$ϕ¯$. Drift can occur only if there is
variation in the fitness distributions of individuals. As population size increases, the magnitude of drift decreases, approaching zero as N → ∞. (In Fig. Fig.4,4, all fitness variation results from
pure demographic stochasticity. If fitness variation results from environmental variation, then there can be directional change even in cases like that in Fig. Fig.4A4A.)
Schematic illustration of different evolutionary processes. Examples of fitness distributions corresponding to different processes. The variation in fitness for each phenotype is due to pure
demographic stochasticity. The dashed gray lines show the regression ...
Directional stochastic effects behave like drift insomuch as they require that individuals have distributions of possible fitness values. However, the probability of moving in each direction and the
expected step size in each direction do not cancel one another out. In a constant environment, the expected magnitude of change declines towards zero as N → ∞, as in the case of drift.
In the case of selection, there is both a higher probability of the mean phenotype changing in one direction and a larger expected step size in that direction. Unlike drift and directional stochastic
evolution, selection can take place even if there is no variation in any of the individual fitness distributions. As population size increases, the expected change due to selection decreases
somewhat, but does not go to zero; instead asymptotically approaching the value $cov(ϕ,w^)w¯^$. Furthermore, the probability of the population changing by this amount in the direction specified by
selection approaches 1 as population size approaches infinity.
Selection also differs from the directional stochastic terms in that it involves covariance between phenotype and the first raw (not central) moment of the fitness distribution. By contrast, all of
the directional stochastic terms involve central moments of w. Also, the denominator in the first (selection) term is the harmonic mean of $w¯$, (H($w¯$)), whereas all subsequent terms involve
dividing by powers of the expected value of $w¯$, ($w¯^$).
Consequences for adaptive landscape models
The concept a surface describing fitness as a function of a set of phenotypic traits (one version of the "adaptive landscape"), has a long history in evolutionary theory [46,47], and variants of this
idea have recently been presented as unifying concepts in evolutionary biology [48,49]. This is indeed an important kind of abstraction that both hones our intuition about evolution and allows us to
visualize an important set of formal evolutionary models. The results presented above, though, show that thinking of evolution in terms of an adaptive landscape can also lead us to miss important
evolutionary processes.
By its nature, an adaptive landscape treats $w¯$ as a number, rather than as a random variable (which has a distribution, rather than a single value). Because of this, both the amplification of
selection differentials and all directional stochastic evolutionary processes are eliminated from adaptive landscape models. Even in a stable environment with frequency independent selection,
directional stochastic effects could pull a population downhill on an adaptive landscape.
One possible way around this would be to to consider a surface of expected relative fitness, essentially plotting $Ω^$, rather than $w^$ or $w¯^$, as a function of phenotype [10,32]. However,
Equation 3 shows that $Ω^$ is itself a function of population size, meaning that such a landscape would change shape as N changes even if selection is not density dependent in the classical sense
(meaning that the fitness distribution of each individual is independent of N).
The more appropriate visual image would be an adaptive fog, with variable density and thickness corresponding to different fitness distributions for different phenotypes. The dynamics of evolution
through such a fog are described by Equations 1 and 2, and are determined not only by the slope of expected mean fitness ($w¯$) but also by variations in the thickness of the fog and by population
size (since this will influence H($w¯$)). Unfortunately, this image lacks the visual simplicity of the adaptive landscape, which remains a very useful concept but should be recognized as an
approximation based on the assumption that fitness values are fixed.
Relation between Equation 1 and the Price equation
I refer to Equation 1 (and 2, which is equivalent) as a stochastic version of the Price equation because it is derived in an analogous way. Equations 1 and 2 are not, however, equivalent to the Price
equation and can not be derived directly from it (specifically, the term $covi(δ,Ω)¯$ can not be derived simply by treating w and δ as random variables in the Price equation). The reason for this is
that the Price equation is derived by treating fitness and offspring phenotype as parameters, having numerical values, rather than as random variables, which have distributions. This is why the Price
equation is exact only in hindsight, when we know how many descendants each individual had and what their phenotypes are. (Graffen [24] derived an equation equivalent to 2 under the assumption that δ
= 0).
We can, of course, apply the Price equation to looking forward in time if we are willing to assume that expected fitness ($w^$) can be used in place of the actual number of descendants that an
individual will leave, and to further assume that we can predict the phenotypes of offspring. (Price himself appears to make this assumption in his example of students with different IQs taking a
course [12]). However, the preceding discussion shows that considering only expected fitness (instead of $Ω^$) leads us to miss an entire class of evolutionary mechanisms.
How, then, is it possible for both Equations 1 and the Price equation to be exactly true given that they are different? Any evolving system must satisfy both Equation 1 and the Price equation.
However, if we focus on change over a particular generation, these equations are appropriate at different times. Prior to reproduction, when fitness and offspring phenotype are not yet exactly
determined, Equations 1 and 2 are exact descriptors of the expected change over the coming generation. After reproduction has taken place, the Price equation will, retrospectively, be an exact
description of what just transpired.
The limitations of general theories in biology
Equations 1 and 2 and Price's equation are general in the sense that they apply exactly to any evolving system. Note, though, that this does not mean that they answer all of our questions about
evolution. Two objections that are sometimes raised about the Price equation (and which apply to Equations 1 and 2 as well) are that it is not dynamically sufficient [16], and that it does not
directly address some important evolutionary questions, such as the probability of fixation of an allele.
As discussed in the Methods section (see "Worked example"), whether or not Equation 1 can be iterated into the future (i.e. is dynamically sufficient) is determined by the kinds of phenotypes that we
are studying and what assumptions we make about them (see also [50]). In the case of a population containing two distinct phenotypes (such as a one locus haploid model with two alleles), the entire
distribution is uniquely defined by the mean. In such a case, we can iterate Equation 1 through time with no further simplifying assumptions. If there are more than two phenotypes (such as in diploid
models where genotypes take the role of ϕ), then some further assumption, such as Hardy-Weinberg equilibrium, is necessary to achieve dynamic sufficiency. In the case of a continuous phenotypic
trait, a simple way to make the model dynamically sufficient is to assume that the trait is normally distributed, meaning that we need only calculate the change in the mean and variance (change in
variance is obtained from Equations 1 or 2 by substituting (ϕ - $ϕ¯$)^2 for ϕ [13,51]).
These are exactly the same assumptions that make models in population and quantitative genetics dynamically sufficient. Thus, the general equations discussed here are no less dynamically sufficient
than any of the standard models (since these are special cases). The general equations simply apply to a much broader set of cases, some of which do not allow for a single, compact, dynamically
sufficient equation [52].
Another criticism is that these equations describe only the change over a generation, which does not, by itself, answer some evolutionary questions. However, the change in mean phenotype (of which a
special case is change in the frequency of an allele or strategy) is one of the most basic pieces of formal evolutionary theory. In some fields, such as quantitative genetics, change in $ϕ¯$ is the
primary quantity of interest. In other cases, such as evolutionary game theory, it is a key factor in evaluating the quantity of interest (evolutionary stability). In population genetics, change over
a generation is sometimes the quantity of interest, and even when it is not (such as when the goal is to calculate fixation probabilities), change in allele frequency is an essential part of the
answer (e.g. it defines M (p) in a diffusion equation). Though the general models discussed here do not answer all of our questions, their value lies in their ability to generalize and unify special
case models, and to give us insights into the mechanics of evolution that can be obscured by the assumptions necessary to predict the long term behavior of particular model systems.
The interplay of deterministic and stochastic processes is central to much of evolutionary theory. Unfortunately, our most general mathematical description of evolution, the Price equation, is not
well suited to the study of stochasticity. This is because the Price equation describes evolution exactly only after change has taken place, meaning that it contains no stochastic terms (since all
parameters are known exactly in hindsight). A general stochastic evolution equation, derived in a similar way to the Price equation but different in that fitness and offspring phenotype are treated
as random variables, reveals a number of general rules about the interaction of deterministic and stochastic processes in evolution.
One result is that variation in mean population fitness, resulting either from small population size or environmental fluctuations, tends to amplify the effects of selection. This suggests that the
adaptive potential of small populations may be greater than has been assumed. Another result is that the well known tendency for populations to be pulled towards phenotypes with minimum variance in
fitness turns out to be a special case of a general trend to minimize symmetric variation in fitness. This process can actually cause variance in fitness to increase, so long as higher even moments
decrease. This even-moment effect is matched by an odd-moment effect, which tends to pull populations towards phenotypes with maximum positive asymmetry in fitness.
Both the even- and odd-moment effects can drive a population to evolve towards phenotypes with lower expected fitness. This is consistent with (and is a generalization of) previous results showing
that differential variance in fitness can drive directional evolution. It is not, however, consistent with the idea that geometric mean fitness determines the direction of evolution. Instead, in
cases of perfect heritability, the direction of evolution is determined by the expected value of individual fitness divided by mean population fitness ($ww¯$), conditional on the population not going
extinct. This confirms the importance of "expected relative fitness" [10,32], when defined properly, as a determining factor in evolutionary dynamics.
Finally, the general equations presented here contain a term capturing the direct influence of parental fitness on offspring phenotype. This term, which has no analog in the Price equation, may be
important in the many cases in which parents provision their offspring or in which individual development is influenced by interactions with siblings. This also illustrates the value of treating
offspring phenotype, like fitness, as a random variable.
Derivation of Equation 1
In the following derivations, it is essential to distinguish between: 1) the expected value of a random variable and 2) the average value of that variable in a population. For example: before
reproduction takes place, individual fitness (w) is a random variable, meaning that each individual has a distribution of possible fitness values. The expected value of this distribution, for a
particular individual, is $w^$. It is critical to distinguish between this expected value and the average value of w in the population, denoted $w¯$, which is an important term in its own right (it
measures per capita population growth rate). $w¯$ is itself a random variable, since prior to reproduction we can not know exactly how the population will change in size. We thus have $w¯^$ as the
expected value of average fitness. An important identity is E(Ave(x)) = Ave(E(x)) or $x¯^=x^¯$ (this is easily shown by noting that $E(1N∑i=1Nxi)=1N∑i=1NE(xi)$).
Define $ϕ¯′$ as the mean phenotype in the population after one time interval, and $ϕijo$ as the phenotype of the j^th descendant of individual i in the current population. Then, conditional on $∑i=
If we denote the average phenotype of descendants of individual i as simply $ϕio$, then $∑j=1wiϕijo=ϕiowi$ and $∑i=1Nwi=Nw¯$, and Equation 6 becomes:
Using the fact that E(Ave(x)) = Ave(E(x)) and noting that the rule E(xy) = cov(x, y) + E(x)E(y) applies as well to Ave(), we can expand Equation 7 to yield:
Defining $δ¯^=ϕo¯^−ϕ¯$, noting that Ave[E(w/$w¯$)] = E[Ave(w/$w¯$)] = 1, and using the fact that $E(Δϕ¯)=E(ϕ¯′)−ϕ¯$, we get:
Defining $Ωk=(wkw¯|w¯≠0)$ and noting that $ϕio$ = ϕ[i ]+ δ[i ]yields Equation 1.
Derivation of Equation 3 and 4
For a random variable, x, denote the difference between x and its expected value as x*; so x = E(x) + x*, E(x*) = 0, and E[(x*)^n] is the n^th central moment of x (this is just the delta method). We
can now write $Ω^k$ as:
The Taylor series expansion of Equation 10 does converge (so long as we calculate all probabilities conditional on $w¯$ ≠ 0), but it contains a rather non intuitive mix of terms involving both w and
$w¯$, producing a mix of higher moments that is difficult to interpret biologically. We can make things clearer by noting that Equation 10 involves the sum of two different series. One of these
contains terms involving the mixed moments of w and $w¯$, while the other contains only moments of $w¯$. This second series can be pulled out by noting that it is the reciprocal of the harmonic mean
of $w¯$:
Combining Equations 10 and 11 yields:
Expanding $(1+w¯∗E(w¯))−1$ in a Taylor series and taking the expected value yields Equation 3:
Where μ[i+1](w[k ]$w¯i$) is the (i + 1)^st mixed central moment of w[k ]and $w¯i$. If we assume that H($w¯$) = $w¯^$ and consider only the first term in the summation, then Equation 13 yields Lande's
expected relative fitness [32] (since μ[2](w[k ]$w¯$) = cov(w[k], $w¯$)). Proulx [9,34,53] has presented a series that groups terms differently than does Equation 13, grouping them based on their
order in an approximation of small variance in offspring numbers.
If the actual number of descendants of different individuals are independent – meaning that the number of descendants of individual k is independent of whether individual j leaves more or fewer
descendants than expected – then cov(w[k], w[j ≠ k]) = 0, so $μ2(wkw¯)=1Nvar(wk)$ and $μ3(wk,w¯2)=1N2μ3(wi)$. We thus have:
Substituting Equation 14 into cov(ϕ, $Ω^$) yields Equation 4.
Derivation of Equation 5
Consider a case in which individuals with certain phenotypes are consistently influenced the same way by environmental variation across generations (e.g. wet and dry years occur at random, and wet
years influence the fitness of large individuals differently from the way that they influence small individuals). In such a case, we can write individual fitness as w[i ]= $w˜i$ + s[i], where $w˜i$
is the expected fitness, in the current environment, of individuals with the same phenotype as individual i, and s[i ]is the deviation from this expected fitness due to pure demographic
stochasticity. In this case,
If the effects of pure demographic stochasticity are independent of the environment, and N is large, then we need only consider the term $fϕi$ var($w˜i$).
Under the same assumptions, the third moment effect is captured by:
Substituting $μ2(wiw¯)=fϕivar(w˜i)$ and $μ3(wiw¯2)=fϕi2μ3(w˜i)$ into Equation 3 yields Equation 5.
Worked Example
Figure Figure55 shows a case in which directional selection is acting simultaneously with the even- and odd-moment effects. In order to analytically solve for the selection differential as a
function of population size, we need to calculate $Ω^$. The most difficult term in Equation 3 to calculate is the first one, containing the reciprocal of the harmonic mean of $w¯$. For very small
populations, we can sometimes calculate 1/H($w¯$) directly. For larger populations, though, we need to use a series approximation. Expanding the right-hand side of Equation 11 yields:
Next, we need to calculate the moments of $w¯$ from the moments of the fitness distributions associated with each phenotype (which is what we are starting out with). If the actual fitness of each
individual is independent of that of others in the same generation, then for the case in which there are P distinct phenotypes and n[i ]individuals with phenotype i, the second, third, and fourth
central moments of $w¯$ are given by:
Equation 20 is derived using the fact that $w¯$ is a sum of different values, and assuming that the fitness values of different individuals are independent. The equations for the higher moments get
large, but have a straightforward form. The number of terms in the series in Equation 17 that are needed to get a good approximation is determined by the individual fitness distributions and
population size. Figure Figure55 shows an example in which using only the first two terms yields an underestimate for small populations, but using the first four terms yields a good fit at all
population sizes. In the example in Figure Figure5,5, there are two phenotypes, scored as 0 and 1, with $w^0$ = 1, $w^1$ = 2, var(w[0]) = 2, var(w[1]) = 1.5, μ[3](w[0]) = 2, μ[3](w[1]) = 1.5, μ[4](w
[0]) = 6, μ[4](w[1]) = 4.5.
Next, we need to specify the current frequencies of each phenotype and solve for the covariance terms. For the case of only two phenotypes, assigned values 0 and 1 and having frequencies f[0 ]and f
[1], the general rule is:
cov(ϕ, μ[i](w)) = f[0]f[1 ][μ[i](w[1]) - μ[i](w[0])]
For this example, I set the frequencies to be equal, so that n[0 ]= n[1 ]= N/2. For this case, we have $w¯^$ = 1.5, cov(ϕ, $w^$) = 0.25, cov(ϕ, var(w)) = -0.125, cov(ϕ, μ[3](w)) = -0.125, cov(ϕ, μ[4]
(w)) = -0.375, The dashed curves in the figure were derived by using the moments of the fitness distributions for each phenotype to approximate 1/H($w¯$) (using Equations 18 – 20 and Equation 17) and
to calculate the covariance terms using Equation 21.
Note that, for the case of two phenotypes, we can calculate all of the necessary terms using only the fitness distributions for each phenotype and the mean phenotype (from which we can calculate the
phenotypic frequencies if there are only two). We can thus iterate this process forward in time. If there are more than two phenotypes, then iteration is not possible unless we make further
assumptions (such as assuming Hardy-Weinberg frequencies for genotypes or a normal distribution with fixed variance for a continuous trait), that allow us to specify the entire distribution given
only the mean.
It is sometimes necessary to use moments higher than μ[4]($w¯$) for very small populations with highly asymmetrical fitness distributions. As Equations 18 – 20 show, though, the higher moments of
$w¯$ contain increasing powers of $1N$. Using only the first few terms on the right-hand side of Equation 17 thus tends to give a very good approximation for populations larger than a few dozen
Monte-carlo simulations
The monte-carlo simulations used asexual individuals with non overlapping generations. The value of Δ$ϕ¯$ was calculated by looking over a single generation starting with N individuals, evenly
divided between the two phenotypic values (so initially $ϕ¯$ = 0.5). Each individual's contribution to the next generation is drawn at random from its fitness distribution and the new mean phenotype
is calculated. The curves presented are the averages of 100,000 runs for each population size.
Authors' contributions
SHR did the work and wrote the paper.
This paper benefited greatly from reviews by John Heywood, Steve Proulx, Andy Gardner, and an anonymous reviewer. This work was supported by NSF grant DEB-0616942 to the author.
• Karlin S, Levikson B. Temporal fluctuations in selection intensities: case of small population size. Theor Popul Biol. 1974;6(3):383–412. doi: 10.1016/0040-5809(74)90017-3. [Cross Ref]
• Takahata N, Ishii K, Matsuda H. effect of temporal fluctuation of selection coefficient on gene frequency in a population. Proc Nat Acad Sci USA. 1975;72(11):4541–4545. doi: 10.1073/
pnas.72.11.4541. [PMC free article] [PubMed] [Cross Ref]
• Ewens WJ. Mathematical population genetics. Berlin: Springer; 2004.
• Huerta-Sanchez E, Durrett R, Bustamante CD. Population genetics of polymorphism and divergence under fluctuating selection. Genetics. 2008;178:325–337. doi: 10.1534/genetics.107.073361. [PMC free
article] [PubMed] [Cross Ref]
• Hartl DL, Cook RD. Balanced polymorphisms of quasineutral alleles. Theor Popul Biol. 1973;4(2):163–172. doi: 10.1016/0040-5809(73)90026-9. [Cross Ref]
• Karlin S, Liberman U. Random temporal variation in selection intensities: Case of large population size. Theor Popul Biol. 1974;6(3):355–382. doi: 10.1016/0040-5809(74)90016-1. [PubMed] [Cross
• Gillespie JH. Natural selection for within-generation variance in offspring number. Genetics. 1974;76(3):601–606. [PMC free article] [PubMed]
• Frank SA, Slatkin M. Evolution in a variable environment. Am Nat. 1990;136(2):244–260. doi: 10.1086/285094. [Cross Ref]
• Proulx SR. The ESS and spatial variation with applications to sex allocation. Theor Popul Biol. 2000;58:33–47. doi: 10.1006/tpbi.2000.1474. [PubMed] [Cross Ref]
• Lande R. Adaptive topography of fluctuating selection in a Mendelian population. J Evol Biol. 2008;21:1096–1105. doi: 10.1111/j.1420-9101.2008.01533.x. [PubMed] [Cross Ref]
• Karlin S, Taylor HM. A second course in stochastic processes. San Diego, CA: Academic Press; 1981.
• Price GR. selection and covariance. Nature. 1970;277:520–521. doi: 10.1038/227520a0. [PubMed] [Cross Ref]
• Rice SH. Evolutionary theory: mathematical and conceptual foundations. Sunderland, MA: Sinauer Associates; 2004.
• Hamilton WD. In: Biosocial Anthropology. Fox R, editor. New York, NY: Wiley; 1975. Innate social aptitudes in man: an approach from evolutionary genetics; pp. 133–155.
• Queller DC. A general model for kin selection. Evolution. 1992;46(2):376–380. doi: 10.2307/2409858. [Cross Ref]
• Frank SA. Foundations of social evolution. Princeton, N.J.: Princeton University Press; 1998.
• Heywood JS. An exact form of the breeder's equation for the evolution of a quantitative trait under natural selection. Evolution. 2005;59(11):2287–2298. [PubMed]
• Price GR. Extension of covariance selection mathematics. Ann Hum Genet. 1972;35:485–490. [PubMed]
• Wade MJ. Soft selection, hard selection, kin selection, and group selection. Am Nat. 1985;125:61–73. doi: 10.1086/284328. [Cross Ref]
• Okasha S. Evolution and the levels of selection. Oxford, UK: Oxford Univ. Press; 2007.
• Loreau M, Hector A. Partitioning selection and complementarity in biodiversity experiments. Nature. 2001;412:72–76. doi: 10.1038/35083573. [PubMed] [Cross Ref]
• Fox J. Using the Price equation to partition the effects of biodiversity loss on ecosystem function. Ecology. 2007;87:2687–2696. doi: 10.1890/0012-9658(2006)87[2687:UTPETP]2.0.CO;2. [PubMed] [
Cross Ref]
• Inchausti P, Halley J. The long-term temporal variability and spectral colour of animal populations. Evolutionary Ecology Research. 2002;4:1033–1048.
• Grafen A. Developments of the Price equation and natural selection under uncertainty. Proc R Soc Lond B. 2000;267:1223–1227. doi: 10.1098/rspb.2000.1131. [PMC free article] [PubMed] [Cross Ref]
• Kirkpatrick M, Johnson T, Barton N. General models of multilocus selection. Genetics. 2002;161:1727–1750. [PMC free article] [PubMed]
• Gillespie JH. Natural selection for within-generation variance in offspring number II. Discrete haploid models. Genetics. 1975;81(2):403–413. [PMC free article] [PubMed]
• Gillespie JH. Natural selection for variance in offspring numbers: a new evolutionary principle. Am Nat. 1977;111(981):1010–1014. doi: 10.1086/283230. [Cross Ref]
• Robertson A. A theory of limits in artificial selection. Proc R Soc Lond B. 1960;153:234–249.
• Weber KE, Diggins LT. Increased Selection Response in Larger Populations. II. Selection for Ethanol Vapor Resistance in Drosophila melanogaster at Two Population Sizes. Genetics. 1990;125
(3):585–597. [PMC free article] [PubMed]
• Swindell WR, Bouzat JL. Modeling the adaptive potential of isolated populations: experimental simulations using Drosophila. Evolution. 2005;59(10):2159–2169. [PubMed]
• Willi Y, Buskirk JV, Hoffmann AA. Limits to the Adaptive Potential of Small Populations. Annu Rev Ecol Syst. 2006;37:433–458. doi: 10.1146/annurev.ecolsys.37.091305.110145. [Cross Ref]
• Lande R. Expected relative fitness and the adaptive topography of fluctuating selection. Evolution. 2007;61:1835–1846. doi: 10.1111/j.1558-5646.2007.00170.x. [PubMed] [Cross Ref]
• Gillespie JH. Natural selection with varying selection coefficients – a haploid model. Genet Res. 1973;21:115–120.
• Shpak M, Proulx SR. The role of life cycle and migration in selection for variance in offspring number. Bulletin of Mathematical Biology. 2007;69:837–860. doi: 10.1007/s11538-006-9164-y. [PubMed]
[Cross Ref]
• Stearns SC. Daniel Bernoulli (1738): evolution and economics under risk. J Biosci. 2000;25(3):221–228. doi: 10.1007/BF02703928. [PubMed] [Cross Ref]
• Orr HA. Absolute fitness, relative fitness, and utility. Evolution. 2007;61(12):2997–3000. doi: 10.1111/j.1558-5646.2007.00237.x. [PubMed] [Cross Ref]
• Burger R. The mathematical theory of selection, recombination, and mutation. Chichester: John Wiley & Sons; 2000.
• Bulmer M. Theoretical evolutionary ecology. Sunderland, MA: Sinauer Associates; 1994.
• Grafen A. Formal darwinism, the individual-as-maximizing-agent analogy and bet-hedging. Proc R Soc Lond B. 1999;266:799–803. doi: 10.1098/rspb.1999.0708. [Cross Ref]
• Pimm SL, Jones HL, Diamond J. On the risk of extinction. Am Nat. 1988;132(6):757–785. doi: 10.1086/284889. [Cross Ref]
• Lande R. Risks of population extinction from demographic and environmental stochasticity and random catastrophes. Am Nat. 1993;142:911–927. doi: 10.1086/285580. [Cross Ref]
• Barton NH, Turelli M. Natural and sexual selection on many loci. Genetics. 1991;127:229–255. [PMC free article] [PubMed]
• Falconer DS, Mackay TF. Introduction to quantitative genetics. Menlo Park, CA: Benjamin Cummings; 1996.
• Lande R. Demographic stochasticity and Allee effect on a scale with isotropic noise. Oikos. 1998;83(2):353–358. doi: 10.2307/3546849. [Cross Ref]
• Charnov EL, Ernest SKM. The offspring-Size/Clutch-Size Trade-Off in Mammals. Am Nat. 2006;167(4):578–582. doi: 10.1086/501141. [PubMed] [Cross Ref]
• Pearson K. Mathematical contributions to the theory of evolution. XI. On the influence of natural selection on the variability and correlations of organs. Phil Trans Roy Soc London A. 1903;200
:1–66. doi: 10.1098/rsta.1903.0001. [Cross Ref]
• Simpson GG. Tempo and mode in evolution. New York, NY: Columbia University Press; 1944.
• Arnold SJ, Pfrender ME, Jones AG. The adaptive landscape as a conceptual bridge between micro- and macroevolution. Genetica. 2001;112-113:9–32. doi: 10.1023/A:1013373907708. [PubMed] [Cross Ref]
• McGhee GR. The geometry of evolution: adaptive landscapes and theoretical morphospaces. Cambridge, UK: Cambridge; 2007.
• Gardner A, West SA, Barton NH. The Relation between Multilocus Population Genetics and Social Evolution Theory. Am Nat. 2007;169(2):207–226. doi: 10.1086/510602. [PubMed] [Cross Ref]
• Frank SA. George Price's contributions to evolutionary genetics. J theor Biol. 1995;175:373–388. doi: 10.1006/jtbi.1995.0148. [PubMed] [Cross Ref]
• Rice SH. Theoretical Approaches to the Evolution of Development and Genetic Architecture. Ann NY Acad Sci. 2008;1133:67–86. doi: 10.1196/annals.1438.002. [PubMed] [Cross Ref]
• Proulx SR. Sources of stochasticity in models of sex allocation in spatially structured populations. J Evol Biol. 2004;17:924–930. doi: 10.1111/j.1420-9101.2004.00723.x. [PubMed] [Cross Ref]
Articles from BMC Evolutionary Biology are provided here courtesy of BioMed Central
• PubMed
PubMed citations for these articles
Your browsing activity is empty.
Activity recording is turned off.
See more... | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2577117/?tool=pubmed","timestamp":"2014-04-16T23:20:06Z","content_type":null,"content_length":"217002","record_id":"<urn:uuid:4887c765-7eac-43e9-b88e-56e61e6fb9fd>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Operational congruences for reactive systems
Results 1 - 10 of 23
, 2003
"... A bigraphical reactive system (BRS) involves bigraphs, in which the nesting of nodes represents locality, independently of the edges connecting them; it also allows bigraphs to reconfigure
themselves. BRSs aim to provide a uniform way to model spatially distributed systems that both compute and comm ..."
Cited by 1000 (29 self)
Add to MetaCart
A bigraphical reactive system (BRS) involves bigraphs, in which the nesting of nodes represents locality, independently of the edges connecting them; it also allows bigraphs to reconfigure
themselves. BRSs aim to provide a uniform way to model spatially distributed systems that both compute and communicate. In this memorandum we develop their static and dynamic theory. In part I, we
, 2004
"... Motivated by recent work on the derivation of labelled transitions and bisimulation congruences from unlabelled reaction rules, we show how to solve this problem in the DPO (double-pushout)
approach to graph rewriting. Unlike in previous approaches, we consider graphs as objects, instead of arrows, ..."
Cited by 61 (10 self)
Add to MetaCart
Motivated by recent work on the derivation of labelled transitions and bisimulation congruences from unlabelled reaction rules, we show how to solve this problem in the DPO (double-pushout) approach
to graph rewriting. Unlike in previous approaches, we consider graphs as objects, instead of arrows, of the category under consideration. This allows us to present a very simple way of deriving
labelled transitions (called rewriting steps with borrowed context) which smoothly integrates with the DPO approach, has a very constructive nature and requires only a minimum of category theory. The
core part of this paper is the proof sketch that the bisimilarity based on rewriting with borrowed contexts is a congruence relation.
, 2004
"... A bigraphical reactive system (BRS) involves bigraphs, in which the nesting of nodes represents locality, independently of the edges connecting them; it also allows bigraphs to reconfigure
themselves. BRSs aim to provide a uniform way to model spatially distributed systems that both compute and comm ..."
Cited by 59 (6 self)
Add to MetaCart
A bigraphical reactive system (BRS) involves bigraphs, in which the nesting of nodes represents locality, independently of the edges connecting them; it also allows bigraphs to reconfigure
themselves. BRSs aim to provide a uniform way to model spatially distributed systems that both compute and communicate. In this memorandum we develop their static and dynamic theory. In Part I we
, 2005
"... Bigraphs are graphs whose nodes may be nested, representing locality, independently of the edges connecting them. They may be equipped with reaction rules, forming a bigraphical reactive system
(Brs) in which bigraphs can reconfigure themselves. Following an earlier paper describing link graphs, a c ..."
Cited by 50 (5 self)
Add to MetaCart
Bigraphs are graphs whose nodes may be nested, representing locality, independently of the edges connecting them. They may be equipped with reaction rules, forming a bigraphical reactive system (Brs)
in which bigraphs can reconfigure themselves. Following an earlier paper describing link graphs, a constituent of bigraphs, this paper is a devoted to pure bigraphs, which in turn underlie various
more refined forms. Elsewhere it is shown that behavioural analysis for Petri nets, π-calculus and mobile ambients can all be recovered in the uniform framework of bigraphs. The paper first develops
the dynamic theory of an abstract structure, a wide reactive system (Wrs), of which a Brs is an instance. In this context, labelled transitions are defined in such a way that the induced bisimilarity
is a congruence. This work is then specialised to Brss, whose graphical structure allows many refinements of the theory. The latter part of the paper emphasizes bigraphical theory that is relevant to
the treatment of dynamics via labelled transitions. As a running example, the theory is applied to finite pure CCS, whose resulting transition system and bisimilarity are analysed in detail. The
paper also mentions briefly the use of bigraphs to model pervasive computing and
- UNDER CONSIDERATION FOR PUBLICATION IN MATH. STRUCT. IN COMP. SCIENCE , 2005
"... This paper axiomatises the structure of bigraphs, and proves that the resulting theory is complete. Bigraphs are graphs with double structure, representing locality and connectivity. They have
been shown to represent dynamic theories for the #-calculus, mobile ambients and Petri nets, in a way th ..."
Cited by 36 (8 self)
Add to MetaCart
This paper axiomatises the structure of bigraphs, and proves that the resulting theory is complete. Bigraphs are graphs with double structure, representing locality and connectivity. They have been
shown to represent dynamic theories for the #-calculus, mobile ambients and Petri nets, in a way that is faithful to each of those models of discrete behaviour. While the main purpose of bigraphs is
to understand mobile systems, a prerequisite for this understanding is a well-behaved theory of the structure of states in such systems. The algebra of bigraph structure is surprisingly simple, as
the paper demonstrates; this is because bigraphs treat locality and connectivity orthogonally
, 2005
"... The theory of reactive systems, introduced by Leifer and Milner and previously extended by the authors, allows the derivation of well-behaved labelled transition systems (LTS) for semantic
models with an underlying reduction semantics. The derivation procedure requires the presence of certain colimi ..."
Cited by 36 (2 self)
Add to MetaCart
The theory of reactive systems, introduced by Leifer and Milner and previously extended by the authors, allows the derivation of well-behaved labelled transition systems (LTS) for semantic models
with an underlying reduction semantics. The derivation procedure requires the presence of certain colimits (or, more usually and generally, bicolimits) which need to be constructed separately within
each model. In this paper, we o#er a general construction of such bicolimits in a class of bicategories of cospans. The construction sheds light on as well as extends Ehrig and Konig's rewriting via
borrowed contexts and opens the way to a unified treatment of several applications.
, 2004
"... A framework is defined within which reactive systems can be studied formally. The framework is based upon s-categories, a new variety of categories, within which reactive systems can be set up
in such a way that labelled transition systems can be uniformly extracted. These lead in turn to behavi ..."
Cited by 26 (5 self)
Add to MetaCart
A framework is defined within which reactive systems can be studied formally. The framework is based upon s-categories, a new variety of categories, within which reactive systems can be set up in
such a way that labelled transition systems can be uniformly extracted. These lead in turn to behavioural preorders and equivalences, such as the failures preorder (treated elsewhere) and
bisimilarity, which are guaranteed to be congruential. The theory rests upon the notion of relative pushout previously introduced by the authors. The framework
- PROCEEDINGS OF THE INTERNATIONAL CONFERENCE OF MATHEMATICIANS , 2001
"... A notion of bigraph is proposed as the basis for a model of mobile interaction. A bigraph consists of two independent structures: a topograph representing locality and a monograph representing
connectivity. Bigraphs are equipped with reaction rules to form bigraphical reactive systems (BRSs), which ..."
Cited by 25 (6 self)
Add to MetaCart
A notion of bigraph is proposed as the basis for a model of mobile interaction. A bigraph consists of two independent structures: a topograph representing locality and a monograph representing
connectivity. Bigraphs are equipped with reaction rules to form bigraphical reactive systems (BRSs), which include versions of the -calculus and the ambient calculus. Bigraphs are shown to be a
special case of a more abstract notion, wide reactive systems (WRSs), not assuming any particular graphical or other structure but equipped with a notion of width, which expresses that agents,
contexts and reactions may all be widely distributed entities. A behavioural theory is established for WRSs using the categorical notion of relative pushout; it allows labelled transition systems to
be derived uniformly, in such a way that familiar behavioural preorders and equivalences, in particular bisimilarity, are congruential under certain conditions. Then the theory of bigraphs is
developed, and they are shown to meet these conditions. It is shown that, using certain functors, other WRSs which meet the conditions may also be derived; these may, for example, be forms of BRS
with additional structure. Simple examples of bigraphical systems are discussed; the theory is developed in a number of ways in preparation for deeper application studies.
- PNGT’04 , 2004
"... We introduce a way of viewing Petri nets as open systems. This is done by considering a bicategory of cospans over a category of p/t nets and embeddings. We derive a labelled transition system
(LTS) semantics for such nets using GIPOs and characterise the resulting congruence. Technically, our resul ..."
Cited by 23 (10 self)
Add to MetaCart
We introduce a way of viewing Petri nets as open systems. This is done by considering a bicategory of cospans over a category of p/t nets and embeddings. We derive a labelled transition system (LTS)
semantics for such nets using GIPOs and characterise the resulting congruence. Technically, our results are similar to the recent work by Milner on applying the theory of bigraphs to Petri Nets. The
two main differences are that we treat p/t nets instead of c/e nets and we deal directly with a category of nets instead of encoding them into bigraphs.
- In FOSSACS ’03, volume 2620 of LNCS , 2003
"... G-relative pushouts (GRPOs) have recently been proposed by the authors as a new foundation for Leifer and Milner’s approach to deriving labelled bisimulation congruences from reduction systems.
This paper develops the theory of GRPOs further, arguing that they provide a simple and powerful basis tow ..."
Cited by 22 (7 self)
Add to MetaCart
G-relative pushouts (GRPOs) have recently been proposed by the authors as a new foundation for Leifer and Milner’s approach to deriving labelled bisimulation congruences from reduction systems. This
paper develops the theory of GRPOs further, arguing that they provide a simple and powerful basis towards a comprehensive solution. As an example, we construct GRPOs in a category of ‘bunches and
wirings. ’ We then examine the approach based on Milner’s precategories and Leifer’s functorial reactive systems, and show that it can be recast in a much simpler way into the 2-categorical theory of | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=108417","timestamp":"2014-04-17T20:08:06Z","content_type":null,"content_length":"36823","record_id":"<urn:uuid:29731e9a-57d3-4a1d-909a-444dce749ce0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00344-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simplified Method of Calculating Evaporation From Swimming Pools
Introducing tables intended to make highly accurate formulas easier to use
Accurate calculation of evaporation from swimming pools is needed for proper sizing of ventilation and dehumidification equipment. Calculations are needed for both occupied and unoccupied conditions
for proper modulation of equipment capacity, as well as estimation of energy consumption. The most common method is the one recommended in ASHRAE Handbook—HVAC Applications.^1 It involves calculation
of evaporation using the Carrier equation^2 and correction of the output using activity factors. Comparisons with test data have shown this method is inaccurate. The American Society of Heating,
Refrigerating and Air-Conditioning Engineers (ASHRAE) is undertaking a research program to test it and other methods.
In 2002 and 2003,^3,4 the author developed formulas for occupied and unoccupied pools that were verified with a wide range of data. Those formulas were summarized in an article in HPAC Engineering.^5
Despite the formulas' proven accuracy, many engineers found them difficult to use, as computer programming is needed for accurate calculation of air properties. To simplify use of the formulas, this
article presents tables that give the results of calculations over a wide range of conditions. Additionally, the article introduces a formula for the rare condition in which natural convection does
not occur.
UPDATE: New Method for Calculating Evaporation From Occupied Swimming Pools
Although the article is written in inch-pound (I-P) units, results are given in both I-P and Systeme International (SI) units in the tables.
Author's Formulas
Unoccupied pools. For unoccupied pools, evaporation is the larger of the results of the following equations:
E[0] = evaporation from unoccupied pool, pounds per hour per square foot
D[w] = density of air saturated at water temperature, pounds per cubic foot of dry air
D[r] = density of air at room condition, pounds per cubic foot of dry air
W[w] = humidity ratio, air saturated at water temperature, pounds per pound
W[r] = humidity ratio, air at room condition, pounds per pound
p[w] = water-vapor pressure in air, air saturated at water temperature, inches of mercury
p[r] = water-vapor pressure in air, air at room condition, inches of mercury
Equation 1 gives the rate of evaporation caused by natural convection. It was obtained from the analogy between heat and mass transfer without any empirical factor. Its full derivation can be seen in
Shah (2008).6 Equation 2 gives the rate of evaporation attributed to forced convection by air currents generated by a building ventilation system. It was obtained by analyzing test data for
conditions in which the density of air at the surface of water was greater than the density of air in the room.
This method of calculation differs from the one given in Shah (2004)^5, as it includes negative-density differences. Further, the 2004 method uses only Equation 1, increasing the calculated
evaporation at very low density differences by 15 percent. The present method was compared with the same extensive database as the earlier method. The overall mean deviation was the same, although
individual data points had higher or lower deviations. The ranges of the test data are given in Table 1.
Occupied pools. For occupied pools, the author gave an analytical formula, as well as an empirical formula. The empirical formula produces much closer agreement. For fully occupied pools:
E = evaporation from pool, pounds per hour per square foot
U = utilization factor (number of people in pool area multiplied by 48.4 divided by pool area), the applicable range of which is 0.1 (10-percent occupied) to 1 (fully occupied). The ranges of test
data are given in Table 1.
Table 2 gives values of evaporation from unoccupied pools calculated using the method above. (Table 3 gives the corresponding values in SI units.) Values are given at 2-degree intervals. For
in-between temperatures, linear interpolation can be performed. This table is applicable to all types of pools with an undisturbed water surface.
Table 4 gives values of evaporation from fully occupied pools calculated using Equation 3. (Table 5 gives the corresponding values in SI units.) The data are applicable to pools with air temperatures
of 76°F to 90°F and water temperatures of 76°F to 86°F.
Use of the tables can be illustrated with two examples:
Example 1. A 10,000-sq-ft public swimming pool has a water temperature of 80°F, an air temperature of 78°F, and relative humidity of 50 percent.
From Table 2:
• Evaporation when pool is unoccupied: 0.0291 lb per hour per square foot.
• Total evaporation: 0.0291 × 10,000 = 291 lb per hour. From Table 4:
• Evaporation when pool is fully occupied: 0.0455 lb per hour per square foot.
• Total evaporation: 0.0455 × 10,000 = 455 lb per hour.
Example 2. A 10,000-sq-ft public swimming pool has a water temperature of 79°F, an air temperature of 78°F, and relative humidity of 50 percent.
Table 2 does not list 79°F water temperature, so interpolation is required. Evaporation at 78°F water temperature is 0.0232 lb per hour per square foot, while evaporation at 80°F water temperature is
0.0291 lb per hour per square foot. Evaporation at 79°F water temperature, then, is:
(0.0232 + 0.0291) ÷ 2 = 0.026 lb per hour per square foot
The method for unoccupied pools presented here has been verified with a wide range of test data and has a firm theoretical foundation. It can be used with confidence for all types of pools.
The method for occupied pools presented here was verified with test data from four public pools. The ASHRAE Handbook1 method was found to have a mean deviation of 36.9 percent, while the method
presented here had a mean deviation of only 16.2 percent.
Download this article in .PDF format
This file type includes high resolution graphics and schematics when applicable.
1) ASHRAE. (2007). ASHRAE handbook—HVAC applications. Atlanta: American Society of Heating, Refrigerating and Air-Conditioning Engineers.
2) Carrier, W.H. (1918). The temperature of evaporation. ASHVE Transactions, 24, 25-50.
3) Shah, M.M. (2002). Rate of evaporation from undisturbed water pools: Evaluation of available correlations. International Journal of HVAC&R Research, 8, 125-132.
4) Shah, M.M. (2003). Prediction of evaporation from occupied indoor swimming pools. Energy & Buildings, 35, 707-713.
5) Shah, M.M. (2004, March). Calculating evaporation from indoor water pools. HPAC Engineering, pp. 21, 22, 24, 26.
6) Shah, M.M. (2008). Analytical formulas for calculating water evaporation from pools. ASHRAE Transactions, 114.
Mirza M. Shah, PhD, PE, long has been active in design, analysis, and research in the areas of HVAC, refrigeration, energy systems, and heat transfer. His formulas for boiling and condensation heat
transfer are widely used and included in many engineering reference books. He can be contacted at mshah.erc@gmail.com.
Did you find this article useful? Send comments and suggestions to Executive Editor Scott Arnold at scott.arnold@penton.com. | {"url":"http://hpac.com/humidity-control/simplified-method-calculating-evaporation-swimming-pools","timestamp":"2014-04-17T04:19:04Z","content_type":null,"content_length":"97706","record_id":"<urn:uuid:d7bde7a0-c684-4d39-b95e-f9e9c84ca6b0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00390-ip-10-147-4-33.ec2.internal.warc.gz"} |
, and Apparent Power
Home Learning Center Articles Real, Reactive, and Apparent Power
Real, Reactive, and Apparent Power
The apparent power is the vector sum of real and reactive power
Engineers use the following terms to describe energy flow in a system (and assign each of them a different unit to differentiate between them):
• Real power (P) [Unit: W]
• Reactive power (Q) [Unit: VAR]
• Complex power (S)
• Apparent Power (|S|) [Unit: VA]: i.e. the absolute value of complex power S.
In the diagram, P is the real power, Q is the reactive power (in this case negative), S is the complex power and the length of S is the apparent power.
The unit for all forms of power is the watt (symbol: W). However, this unit is generally reserved for the real power component. Apparent power is conventionally expressed in volt-amperes (VA) since
it is the simple product of rms voltage and rms current. The unit for reactive power is given the special name "VAR", which stands for volt-amperes reactive (since reactive power flow transfers no
net energy to the load, it is sometimes called "wattless" power). Note that it does not make sense to assign a single unit to complex power because it is a complex number and it is therefore defined
as a pair of two units: W and VAR.
Understanding the relationship between these three quantities lies at the heart of understanding power engineering. The mathematical relationship among them can be represented by vectors or expressed
using complex numbers,
(where j is the imaginary unit).
The complex value S is referred to as the complex power.
Consider an ideal alternating current (AC) circuit consisting of a source and a generalized load, where both the current and voltage are sinusoidal. If the load is purely resistive, the two
quantities reverse their polarity at the same time, the direction of energy flow does not reverse, and only real power flows. If the load is purely reactive, then the voltage and current are 90
degrees out of phase and there is no net power flow. This energy flowing backwards and forwards is known as reactive power.
If a capacitor and an inductor are placed in parallel, then the currents flowing through the inductor and the capacitor oppose and tend to cancel out rather than adding. Conventionally, capacitors
are considered to generate reactive power and inductors to consume it. This is the fundamental mechanism for controlling the power factor in electric power transmission; capacitors (or inductors) are
inserted in a circuit to partially cancel reactive power of the load. A practical load will have resistive, inductive, and capacitive parts, and so both real and reactive power will flow to the load.
The apparent power is the product of voltage and current. Apparent power is handy for sizing of equipment or wiring. However, adding the apparent power for two loads will not accurately give the
total apparent power unless they have the same displacement between current and voltage.
Power factor:
Power factor measures the efficiency of an AC power system. Power factor is the real power per unit of apparent power. (pf = Wh/VAh) A power factor of one is perfect, and 99% is good. Where the
waveforms are purely sinusoidal, the power factor is the cosine of the phase angle (φ) between the current and voltage sinusoid waveforms. Equipment data sheets and nameplates often will abbreviate
power factor as "cosφ" for this reason.
Power factor equals 1 when the voltage and current are in phase, and is zero when the current leads or lags the voltage by 90 degrees. Power factors are usually stated as "leading" or "lagging" to
show the sign of the phase angle, where leading indicates a negative sign. For two systems transmitting the same amount of real power, the system with the lower power factor will have higher
circulating currents due to energy that returns to the source from energy storage in the load. These higher currents in a practical system will produce higher losses and reduce overall transmission
efficiency. A lower power factor circuit will have a higher apparent power and higher losses for the same amount of real power transfer.
Purely capacitive circuits cause reactive power with the current waveform leading the voltage wave by 90 degrees, while purely inductive circuits cause reactive power with the current waveform
lagging the voltage waveform by 90 degrees. The result of this is that capacitive and inductive circuit elements tend to cancel each other out.
Reactive power flow:
In power transmission and distribution, significant effort is made to control the reactive power flow. This is typically done automatically by switching inductors or capacitor banks in and out, by
adjusting generator excitation, and by other means. Electricity retailers may use electricity meters which measure reactive power to financially penalise customers with low power factor loads. This
is particularly relevant to customers operating highly inductive loads such as motors at water pumping stations.
Intelligent Battery:
Output current depends upon the battery's state. An intelligent charger may monitor the battery's voltage, temperature and/or time under charge to determine the optimum charge current at that
instant. Charging is terminated when a combination of the voltage, temperature and/or time indicates that the battery is fully charged.
For Ni-Cd and NiMH batteries, the voltage across the battery increases slowly during the charging process, until the battery is fully charged. After that, the voltage decreases, which indicates to an
intelligent charger that the battery is fully charged. Such chargers are often labeled as a ΔV, or "delta-V," charger, indicating that they monitor the voltage change.
A typical intelligent charger fast-charges a battery up to about 85% of its maximum capacity in less than an hour, then switches to trickle charging, which takes several hours to top off the battery
to its full capacity.
Volt Amperes:
A volt-ampere in electrical terms, means the amount of apparent power in an alternating current circuit equal to a current of one ampere at an emf of one volt. It is equivalent to watts for
non-reactive circuits.
● 10 kV·A = 10,000 watts capability (where the SI prefix k equals kilo)
● 10 MV·A = 10,000,000 watts capability (where M equals mega)
While the volt-ampere and the watt are dimensionally equivalent one may find products rated in both VAs and watts with different numbers. This is common practice on UPSs (Uninterruptible Power
Supplies). The VA rating is the apparent power that a UPS is capable of producing, while the watt rating is the real power (or true power) it is capable of producing, as opposed to reactive power.
Reactive power arises due to the effects of capacitance and inductance of components in the load to be powered by the AC circuit. In a purely resistive load (incandescent lights for example), the
apparent power is equal to the true power and the amount of VAs and watts used would be equivalent. However, in more complex loads, such as computers (which UPSs are intended to power) the apparent
power used (VAs) will be larger than the true power used (watts). The ratio of these two quantities is called the power factor. | {"url":"http://www.cableorganizer.com/articles/real-reactive-apparent-power.html","timestamp":"2014-04-20T15:53:34Z","content_type":null,"content_length":"30646","record_id":"<urn:uuid:96731eef-52be-4829-9b0d-375d484f736e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: TWO-POINT BVP
Consider the two-point boundary value problem of a
second-order linear equation:
Y 00(x) = p(x) Y 0(x) + q(x) Y (x) + r(x)
a x b
Y (a) = g1; Y (b) = g2
Assume the given functions p, q and r are continuous
on [a; b]. Unlike the initial value problem of the equa-
tion that always has a unique solution, the theory of
the two-point boundary value problem is more com-
plicated. We will assume the problem has a unique
smooth solution Y ; a su cient condition for this is
q(x) > 0 for x 2 [a; b].
In general, we need to depend on numerical methods
to solve the problem.
We derive a nite di erence scheme for the two-point
boundary value problem in three steps.
Step 1. Discretize the interval [a; b].
Let N be a positive integer, and divide the interval | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/697/2062568.html","timestamp":"2014-04-19T04:32:58Z","content_type":null,"content_length":"7950","record_id":"<urn:uuid:691a9ddc-516a-47d0-9ba7-6a1d5a98e9ef>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
8000 gbp in us dollars
You asked:
8000 gbp in us dollars
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/8000_gbp_in_us_dollars","timestamp":"2014-04-21T07:23:07Z","content_type":null,"content_length":"57675","record_id":"<urn:uuid:c15e2060-8ef6-4754-a7cb-9223aa144566>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
first-class polymorphism beats rank-2 polymorphism
Simon Peyton-Jones simonpj@microsoft.com
Fri, 8 Mar 2002 04:40:41 -0800
| So I would claim that these two types are the same:
| forall x. Class x =3D> (forall y. Class y =3D> y -> y) -> x -> x
| (forall y. Class y =3D> y -> y) -> (forall x. Class x =3D> x -> x)
| ...so you should be able to do this:
| combinator :: (forall y. Class y =3D> y -> y) -> (forall x.=20
| Class x =3D> x -> x)
| combinator f x =3D combinator' f x
| but for some reason GHC 5.02.2 complains. I think this is a bug.=20
Indeed the two types are the same. In fact GHC does "forall-lifting"
on type signatures to bring the foralls to the front. But there's a bug
in 5.02's forall-lifting... it doesn't bring the constraints to the
front too.
I fixed this in 5.03 a while ago, but didn't back-propagate the fix to=20
5.02. And indeed, 5.03 is happy with the pure rank-2 program.
class Class x where
combinator' :: (forall y. Class y =3D> y -> y) -> x -> x
combinator :: (forall y. Class y =3D> y -> y)
-> (forall x. Class x =3D> x -> x)
combinator f =3D combinator' f
It's quite a bit of extra work propagating fixes into the 5.02 branch,
so I probably won't do so for this one, since only a small minority
of people will trip over it. Perhaps you can try the 5.03 snapshot | {"url":"http://www.haskell.org/pipermail/haskell/2002-March/009109.html","timestamp":"2014-04-21T07:31:37Z","content_type":null,"content_length":"3459","record_id":"<urn:uuid:e5d8575b-9e61-4dcb-bd89-41c18497aa51>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
Instant CIC
Posted by Markus Nentwig on May 8 2012 under Matlab | Basics | Tips and Tricks
A floating point model for a CIC decimator, including the frequency response.
A CIC filter relies on a peculiarity of its fixed-point implementation: Normal operation involves repeated internal overflows that have no effect to the output signal, as they cancel in the following
One way to put it intuitively is that only the speed (and rate of change) of every little "wheel" in the clockworks carries information, but its absolute position is arbitrary.
Modeling a CIC filter without use of bit-accurate numbers is not completely straightforward. Here, I'll show some "instant" solution for the decimating variant. Just add water and stir.
The good. The bad. The ugly.
Let's start with "ugly".
The "time domain" implementation (complete code below) is what I get when I close my eyes to the numerical overflow problem and simply write the CIC difference equations with floats.
Now double precision arithmetics are quite forgiving, and it works well enough.
It should do fine for a classroom demo (and please don't use it in the next Mars mission).
Then, the "frequency domain" implementation:
• Take the FFT of the signal (assuming for simplicity it wraps around at the end - one cycle of an infinitely periodic signal - Fourier theory applies)
• evaluate the known frequency response of a CIC filter at the FFT frequency bins. Reference: [1], eq. 3
• Multiply
• Use IFFT to go back to the time domain
• Decimate
This is only "bad", as taking the FFT of the whole signal all at once might be inconvenient.
Finally, what I'd consider a "good" solution:
• Evaluate the impulse response before decimation
• Apply the impulse response using an off-the-shelf FIR filter component, i.e. filter() command in Octave / Matlab
• Decimate
The approach is accurate, as despite its internal recursion, the impulse response of a CIC filter has a finite length (it is of FIR type, not IIR).
To apply the impulse resonse b to an input sequence x, use y = filter(b, 1, x);
The plot shows that all three methods give exactly the same impulse response (accurate to ~10^-15). Decimating discards three samples out of four, leaving only red-trace samples that coincide with
the other two traces.
Frequency response:
The frequency response is evaluated on the higher (input) rate. Therefore, it is straightforward to investigate the rejection on input frequencies that will cause aliasing, once decimated.
The example shows how to model a CIC decimator in floating point as a conventional FIR filter by sampling the finite-length impulse response.
% CIC decimator example
% includes
% - time domain implementation
% - frequency domain model via z-domain transfer function
% Parameters:
% R = rate change factor
% N = nStages
% M = differential delay
% reference: [1] http://www.altera.com/literature/an/an455.pdf
function instantCIC()
close all;
testvec = zeros(1, 43);
R = 4;
testvec(1) = 1;
N = 3;
M = 2;
a = CICdec_timeDomainModel(testvec, N, M, R);
[b, H] = CICdec_freqDomainModel(testvec, N, M, R, false);
[c, H] = CICdec_freqDomainModel(testvec, N, M, R, true);
figure(1); grid on; hold on;
plot(1:R:R*numel(a), a, 'bx');
plot(b, 'ro');
plot(1:R:R*numel(c), c, 'k+');
title('CIC decimator impulse response');
legend('time domain', 'freq. domain', 'freq. domain decimated');
% plot
H = fft([ifft(H), zeros(1, 1000)]); % zero-padding
figure(2); clf(); grid on; hold on;
plot(linspace(-0.5, 0.5, numel(H)), fftshift(20*log10(abs(H) + 1e-15)), 'b');
xlim([-0.5, 0.5]);
title('frequency response');
function [vec, H] = CICdec_freqDomainModel(vec, N, M, R, doDecim)
flag = isreal(vec);
% evaluate frequency response (z-domain transfer function)
n = numel(vec);
zInv = exp(-2i*pi*(0:(n-1))/n);
b = ones(1, R * M);
H = polyval(b, zInv) .^ N;
H = H / H(1);
% apply frequency response
vec = ifft(fft(vec) .* H);
% decimate
if doDecim
vec = vec(1:R:end);
% don't let FFT roundoff error turn real signal into complex
if flag
vec = real(vec);
function vec = CICdec_timeDomainModel(vec, N, M, R)
nLeadIn = M * N * R;
nLeadOut = nLeadIn / R;
ix = mod(-nLeadIn:-1, numel(vec)) + 1;
% prepend end of cyclic signal
vec = [vec(ix) vec];
% integrator
for ix = 1:N
vec = cumsum(vec);
% decimator
vec = vec(1:R:end);
% differentiator
for ix = 1:N
vec = vec - circshift(vec, [0, M]);
% remove the added length
vec = vec(nLeadOut+1:end);
% scale gain
gain = (R * M) .^ N;
vec = vec / gain;
Rate this article:
posted by Markus Nentwig
Markus received his Dipl. Ing. degree in electrical engineering / communications in 1999. Work interests include RF transceiver system design, implementation, modeling and verification. He works as
senior architect for Renesas Mobile Europe in Finland.
Previous post by Markus Nentwig:
Design study: 1:64 interpolating pulse shaping FIR
Next post by Markus Nentwig:
Weighted least-squares FIR with shared coefficients all articles by Markus Nentwig
It looks like the image instantCIC3.png is too wide for the blog container. Great post Markus!
2 years ago
Sorry, you need javascript enabled to post any comments. | {"url":"http://www.dsprelated.com/showarticle/163.php","timestamp":"2014-04-19T02:05:03Z","content_type":null,"content_length":"38881","record_id":"<urn:uuid:ab9502d2-ea96-4285-a7a6-c21b19dc59fb>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
visual c++ samples and examples
Basic algebra 0.1. - BASIC LOGIC
Is any statement of which is true if it is true or false, but not both at once. If a statement is true is assigned the letter V (or value 1) and if false, the letter F (or 0).
autology i
s a proposition that is always true.
a proposition that is always false.
Is any statement of which is true if it is true or
false, but not both at once. If a statement is true is assigned the letter V (or value 1) and if false, the letter F (or 0).
• Disjunction: p or q (p ∨ q).
• Conjunction: p and q (p ∧ q).
• Implication: p ⇒ q (p → q). It reads: If p then q.
• Double implication: p ⇔ q (p ↔ q). It reads: p iff q.Los operating results propositions are logically
determined by the truth values assigned to them, which depend on having the starting propositions. These are usually represented in so-called "truth tables". Truth tables defined above propositions
Propositional function:
If p (x) is an expression that becomes proposition by substituting x by a mathematical object, then we say that p is a propositional function. There is also propositional functions over a
mathematical object (p (x, y) p(x, y, z), ...).
: If p (x) is a proposition that is true for any x, then write ∀ x, p (x), which is read "for all x is verified p"
a) If p (x) is a true proposition ever for at least some x, then write x, p (x), which is read "there exists at least one x for which verifies p"
b)If p (x) is a true proposition for a single x, then write ∃ x, p (x), which reads "there is only one x for which it verifies p"
In a mathematical theory, called theorems true propositions concerning mathematical objects of the same. Theorems are often enunciate as implications, and the implications that being true
propositions are theorems. Generally, the terms theorem and involvement are considered equivalent. o DEMONSTRATION: The logical process, from a proven proposition p, leads to the truth of another
proposition q, is the proof of the theorem p ⇒ q. The proof of a theorem can be done in several ways:
• Direct
: If certain py and p ⇒ q, then so is q.
: involves in proving that (not q) ⇒ (not p), rather than to prove p ⇒ q.
• Reduction to the absurd: is to assume that q is false, so the hypothesis becomes p (not q) and prove that it follows a contradiction. '
o Refutation
Disproving theorem alleging p ⇒ q ,is proven to be false, what is indicated by putting p ⇒ Not q. To do this, we may assume that p and q are true and find from this a contradiction, or finding a
counterexample, ie given a particular case in which p is true and q is false.
2 Set Theory TEORÍA DE CONJUNTOS
It defines a set as a collection or gathering of well-defined and distinct elements. Usually capital letters are used to designate a set. Can be expressed in two ways:
• By extension: Indicating each and every one of its elements. Usually write the elements of the set in braces ({...}).
• By realization: Giving a property that met all the elements of the set and only them. To indicate that an item "belongs" to a set using the symbol of belonging, ∈.
A set S is a subset of another set C if all elements of S are elements of C. In this case the set S is also called "part" of C. To indicate this we use the symbol of inclusion, ⊂ ⊃; (S ⊂C , C ⊃ S).
The inclusion of sets may be understood in two ways:
• Broadly: C ⊆ S.
• Strictly: C ⊂ S.
Empty set: The set that has no elements. It is denoted by ∅
It is the Set to which contains"all" item, which all sets are subsets. It is denoted by I (belong to U).
Note: This is a reference that owns all the elements of the sets emerged in a situation or problem. If a universal joint is regarded also what are all sets that contain it.
They are represented in the level of the joint.
Set Operations:
• UNION A∪B = { x / x ∈A o x ∈ B } • INTERSECTION A ∩ B = {x / x ∈ A and x ∈ B } • DIFFERENCE A−B = {x / x ∈ A And ∉ B } • SYMMETRIC DIFFERENCE A ∆ B= ( A − B) ∪( B − A)
Properties of the union and intersection:
∎ Associative A ∪ B = {x / x ∈ A OR x ∈B }
∎ INTERSECTION A ∩ B = {x / x ∈ A AND x ∈B}
∎ DIFFERENCE : A − B = {x /x ∈ A AND x ∉B }
∎ SYMMETRIC DIFFERENCE A ∆ B= ( A − B) ∪( B − A)
Properties of the union and intersection:
∪ (
∪ C ) = (A ∪ B) ∪ C
∩ ( B ∩ C ) = (A ∩ B) ∩ C •
A ∪ B = B ∪ A
A ∩ B = B ∩ A
• Idempotent
A ∪ A = A
A ∩ A = A
• Absolution
A ∪ I = I A ∩ ∅ = ∅ •
∪ ( A ∩ B) = A
A ∩ ( A ∪ B) = A •
A ∪( B ∩ C) = (A ∪ B)
∩ (A
∪ B)
A ∩ (B ∪ C ) = (A ∩ B)
∪ (A ∩ C)
Disjoint and Complementary Sets Disjoint sets:
Two or more sets are called disjoint sets if their intersection is Empty set
( A ∩ B ∩ ......) = ∅
Complementary Set
Two or more sets are called complementary when their union
equal the universal Set
(A∪B ∪ .....= I) . The complement Set A is denoted by
A^c Properties •Complementarities:
A ∪
= I
A ∩
A^c =∅ • Package of parts of a Set
Given a set A, is called set of parts of A and is denoted by
(A) to another set whose elements are all possible subsets or portions of A
( A ) = {B / B ⊆ A}
Covering set
Given a set and a family of subsets
(finite or not) .It is said this family is covering set if
∪ A
∪ .....∪ A
= A
Partition of a set
A partition of a set A is also a covering verifying that
∩ A
j = ∅ ∀ i ≠ j .
Disjoint sets and complementary
Cardinal of a finite set:
It is the number of elements have the assembly .It is generally represented
) or card
The most common formulas, not the only, that relate the cardinal and operations between sets are
N (A ∪ B) =N( A) + N( B) − N(A
∩ B)
A^c ) = N(I) - N(A) N( A − B )=N(A)- N(A
Being N(I) the total number of elements of of reference and taking note that N (∅) = 0. 0.3. - APPLICATIONS
Cartesian product: Set is defined in Cartesian product by two sets A and B and is denoted by A*B set of ordered pairs(a,b) where a ∈ A and b ∈ B that is to say: A× B ={(a,b), ∀a ∈A and ∀b ∈B}
The elements of the first set "source" correspond to elements of the second set "image":
Is any subset of the Cartesian product: A× B ⊇ G = {(a, / b) a ∈ A, b ∈B}
The elements of the first set "source" correspond to elements of the second set "image":
It is a correspondence that verifies: x ∈ A ∃ | y ∈B /(x, y) ∈ G
Applications, also called "tasks" are usually represent by the letters f, g,, h, ...,,, F,G,H, ..., and are represented by:
f f A: → B Or A → B
so we can express the application definition: x ∈A ∃ | y ∈ B y = f ( x)
Application domain: It is the subset of source array elements for which there is an image:
Domf =x ∈ {A / y, ∃∈B, y = f(x)}
Application Image :Is the subset of the set of image elements which have their origin
Im f = {y ∈ B /∃ x ∈ A , y = f (x) } = f(x)
surjective: Every element of the whole image has "at least" one source:
∀y ∈ B , ∃x ∈ A / y = f (x) injective: Different elements in the original group have distinct images in the image set:
∀x, z, ∈ A / x ≠ z ⇒ f (x) ≠ f (z)
bijective: When it is surjective and injective at once: ∀y ∈ B , ∃ | x ∈ A / y = f (x)
Given the applications, f: A → B And g:B → C,verifying that Im f⊂ D ⊂ omg ,You can define the Application Composition h= g D f in the form
h: A →C/∀ x ∈ A , h(x) = (g D f )(x) = g[ f (x)]
Applications, also called "functions" are usually represented by the letters letters, and are represented by:
Domain of the application: Is the subset of the set of elements for which there is home image
The application image: Is the subset of the set of image elements which have their origin
Types of Applications
surjective Every element of the whole picture has "at least" one source:
Structure of applications
Inverse mapping: Given an application, f A: → B, if you can define another application ƒ^−1: B → A such that if y f(x) ⇔ x = ƒ^−1 (y) . The ƒ^−1^ application is called inverse or reciprocal f is
0.4 Binary relations Binary relation defined on a set to any subset of the Cartesian product AxA and
R⊂ AxA
Properties The binary relations can have the following propertiesif
•Reflexive:: a Ra , ∀a ∈ A •Symmetric: if a Rb ⇔ bRa •Antisymmetric: if a Rb and bRa ⇒ a = b •Transitive :if a Rb and bRc ⇒ aR c•Associated : if a Rb ó bRa ,∀a ,b ∈ A
Equivalence relationships:
A binary relation is said equivalence, if it satisfies the properties reflexive, symmetric and transitive
Equivalence classes: Equivalence class is called an element If a ∈ A and is denoted by cl(a) the subset consisting of all elements of the set the subset consisting of all elements of the set
associated with it.
cl(a) = { x ∈ A / xRa}
• No type is empty, with a ∈ cl () • Items belonging to the same class are interrelated. • Classes are disjoint, since if two classes have a common element is that they are the same class. Therefore,
an equivalence class is defined by "any" of its elements.
Conclusion: When defining an equivalence relation on a set, every element of the set is in a class and only one.
Quotient set: Any equivalence relation defined on a set, it creates a partition into equivalence classes. The set consists of all equivalence classes is called the quotient set. Designating porA / R.
Order relationships
They can be of two types
BROAD: If you meet the properties reflexive, antisymmetric and transitive. STRICT: If you meet antisymmetric and transitive properties.
Also, if the property meets related is said to be a total order, otherwise it is a partial order.
Bounded Sets
Given a set ordered by the relation ≡ ≤ RA and SA ⊂ is then:
The subset S is said to be upper bounded and / or lower bounded if they have any upper limit and / or below respectively
EXTREMES:Let A be an ordered set and let S ⊂ A a bounded set, then it is called:
• Higher end : The lower of all upper bounds. • Bottom end :The majority of all lower bounds.
MAXIMUM AND MINIMUM: If ∃k higher end /k ∈ S ⇒k is a maximum
∃k bottom end /k ∈ S ⇒k is a minimum
Let A and B be two ordered sets ( A,≤) And (B,≤) and the application is f: A → B, then we say that
INCREASING :x, y ∈ A /x<y ⇒f(x) ≤ f(y)
DECREASING:x, y ∈ A /x<y ⇒f(x) ≥ f(y)
By substituting the signs ≤ y ≥ by < y > respectively, f is said to be strictly monotonous
Bounded function:.
Let f: A → B/ (B,≤) is ordained, it says that the application is bounded upwards or downwards if the image set f (A) is bounded upwards or downwards respectively in B
0.5. - ALGEBRAIC STRUCTURES
Internal composition law:
Given a nonempty set A, is called internal composition law or "internal operation" to the application:
f : A × A → A/c = f (a ,b ) , a ,b, c ∈ A
Generally, these laws of composition or operations are denoted with symbols ∗, ⊥, ∆,+,⋅,… and is written for example c = f(a,b) = a∗b .
Any law of internal composition, for example *, can have the following properties:
• Associative: a∗(b∗c) = (a∗b)∗c ∀a, b,c ∈ A •Commutative: a∗b = b∗a ,∀a ,b ∈ A •Neutral element: ∃e ∈ A/ e∗a = a∗e = a , ∀a ∈ A •Symmetric element : ∀a ∈A ,∃a′ ∈ A/ a∗a′ = a′∗a = e •Idempotent
elements: Those elements of the set that verify x∗ x = x •Regular elements:
Left if x∗a= x∗b ⇒a=b Right if a∗y =b∗y ⇒ a=b
•Distributive:Given two laws of composition,for example ∗ y ∆:
a∗(b∆c) = (a∗b) ∆ (a∗c), ∀a, b,c ∈ A ⇒∗ distributive over ∆
a∆(b∗c) = (a∆b)∗(a∆c), ∀a, b,c ∈ A ⇒∗ distributive over ∗
A set equipped one or more internal composition law and / or external.
MORFISMOS among structures: homomorphisms:
Given two structures (E,∗) and (F,∆) ,the application is called a homomorphism
f :(E,∗)→ (F,∆) /f (x∗ y) =f ( x) ∆ f ( y), ∀x, y ∈ E
•The image of the neutral element of (E,∗) ,exists is the image of the neutral element (Imf ,∆) : ∀ x∈ E, f (x) =f (x∗e) =f ( x)∆f(e) •The image of the symmetrical element of any element (E,∗),if
there is the element of the image symmetric element (Imf ,∆) : ∀ x∈ E/ ∃x′,f (e) = f ( x∗ x′)= f ( x)∆f ( x′).
Depending on the type of application and the structures, the homomorphisms can be classified into:
∎ MONOMORPHISM: If f is injective. ∎ EPIMORPHISM: If f is surjective ∎ ISOMORPHISM: If f is bijective.
∎ ENDOMORPHISM: If (E,∗) ≡ (F,∆) .
∎ AUTOMORPHISM: If (E,∗) ≡ (F,∆) and f is bijective
Given a set G equipped with an internal operation ∗.It is said that the structure (G,∗) ,the group is if it satisfies the associative
neutral element existence occurrence of symmetric element .
If you also check the commutative property states that (G, *) is an abelian group or commutative.
• The neutral element is unique.
• ∀a ∈G,∃
a′ ∈G /a∗a′ = a′∗a = e
• (a ∗b)′ =b′∗a′, ∀a,b ∈G
•(a′)′ =a
•All the elements of G are regular.
A non-empty subset S of a group (G,∗) ,it is a subgroup if have group structure. That is:
1 ∀a,b ∈
,a∗b ∈
2 The element e of (G,∗) belongs to
3 ∀a ∈
S, ,
a′ ∈
CHARACTERISTICS OF SUBGROUPS :A nonempty subset S of a group is a subgroup (G,∗) ,if and only if:
∀a,b ∈ S,a∗b ∈
S } ⇔
∀a,b ∈
S , a ∗b′
2 ∀a ∈ S ,a′ ∈ S
Given a set A equipped with two internal operations,+ And ×.It is said that the structure ( A,+, ×) ring is a commutative group if in order to the law +and the law × satisfies the associative and
distributive over +.If you also check the commutative property to the latter,If further verifies the commutative property to the latter, says (A,+, ×) it is abelian or commutative ring.Whether the
law in order to ×, there is a neutral element, then it is a unitary ring.
• ∀a ∈A,a ×0 ,0 × a = 0, (0 is the neutral element of +) •
∀a,b ∈A
a ×(-b)= (-a) × b = −(a× b)
(-a)×(-b)= a×b
•Invertible elements : In a unitary ring ( A,+, ×) elements that symmetrical in order the law x .namely ∃a 777777 1 being the unitary element of the ring. •Idempotent elements: These are the elements
that verify ring,a× a = a .
Nilpotent elements:
These are the elements that verify Ring
a^n = a ×a× .......× a =0,n∈N
•Zero divisors: Are nonzero elements that verify ring, a×b= b× a =0.
Is a ring without zero divisors. If a ring is unitary and integrity is called commutative integral domain.
A non-empty subset S of a ring
( A,+, ×) is a subring if ( s,+, ×) has ring structure. That is:
1 (S,+) is a commutative subgroup 2 The law × is closed on S. Namely ,∀a ,b ∈ S, a × b ∈ S
∎ SUBRING CHARACTERIZATION : A subset S is nonempty subring (A,+, ×) is a subring if only if:
∀a,b ∈ S BODIES:
a-b ∈ S a × b ∈ S
A body (K,+, ×) is a unitary commutative ring in which all elements are reversed, except 0 (neutral element of +).That is:
(K,+, ×)is⇔ (K,+) is
(K,-{0},x) is group commutative
The law ×is distributive over the law +
Properties :
•In a body there are divisors of zero because all elements other than 0 are inverse,therefore, it can be stated that (K,+, ×) body ⇒ (K,+, ×) is integral domain. •In an equation body a x × = b/a, b
∈K ; a ≠ 0 and always have this solution is unique.
A non-empty subset S of a body (K,+, ×) is a subfield if it has body structure (S,+, ×) . That is:
1.(S,+, ×) is a subring of (K,+, ×) 2. ∀a∈S / a≠ 0 ⇒ ∃a−1∈S
∎CHARACTERIZATION subfields: A non-empty subset S of a body (K,+, ×) is a subfield if only if:
∀a,b ∈
a -b
∈S a× b−1 ∈S/
≠ 0 | {"url":"http://visualcsamples.blogspot.com/2012_12_01_archive.html","timestamp":"2014-04-21T01:59:33Z","content_type":null,"content_length":"198183","record_id":"<urn:uuid:7b3a5a62-4114-4630-b82b-c1c07790bbce>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
Strings with Maximally Many Distinct Subsequences and Substrings
A natural problem in extremal combinatorics is to maximize the number of distinct subsequences for any length-$n$ string over a finite alphabet $\Sigma$; this value grows exponentially, but slower
than $2^n$. We use the probabilistic method to determine the maximizing string, which is a cyclically repeating string. The number of distinct subsequences is exactly enumerated by a generating
function, from which we also derive asymptotic estimates. For the alphabet $\Sigma=\{1,2\}$, $\,(1,2,1,2,\dots)$ has the maximum number of distinct subsequences, namely ${\rm Fib}(n+3)-1 \sim \left
((1+\sqrt5)/2\right)^{n+3} \! / \sqrt{5}$.
We also consider the same problem with substrings in lieu of subsequences. Here, we show that an appropriately truncated de Bruijn word attains the maximum. For both problems, we compare the
performance of random strings with that of the optimal ones.
Full Text: | {"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v11i1r8","timestamp":"2014-04-17T04:08:32Z","content_type":null,"content_length":"15788","record_id":"<urn:uuid:7753d032-fc6f-4d90-871c-5619531c3ebf>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
San Jacinto Calculus Tutors
...I have a solid understanding of not only the material, but what concepts students struggle with and how to help them "get it." My background goes even deeper than the high school level, as
geometry was the field I focused on for my senior thesis in college. I have tutored 4 students in SAT math...
40 Subjects: including calculus, Spanish, reading, chemistry
...I specialize in rudimental snare drum. I am my school's Drum Captain and Percussion Drum Major. I have also been my school's Pit Master in past years, specializing in playing marimba.
8 Subjects: including calculus, geometry, algebra 1, algebra 2
I tutored for two years in elementary Geometry/Algebra all the way through to advanced placement calculus and physics. I am a good tutor because I have an inherent passion for the topics
themselves and find that I can communicate their intricacies thoroughly to my students. I have taken: Different...
14 Subjects: including calculus, physics, statistics, geometry
...COURSE MATERIALS : Basic Text, Geometry, Concepts and Applications . Glencoe / McGraw-Hill, 2006. 12. COURSE MATERIALS : Basic Text, Geometry, Concepts and Applications . Glencoe / McGraw-Hill,
2006. TOOLS - Scientific Calculator, compass, protractor, ruler and straightedge three-ring binder, paper, and pencil(s). 13.
38 Subjects: including calculus, English, chemistry, reading
...I studied mathematics at UCLA, under a strict teacher preparation program. Through this program, I received an exemption for the CSET, a requirement for all teachers in California. I excel in
tutoring all math subjects, but have a specialty in Geometry, Algebra, and Calculus both within the sch...
16 Subjects: including calculus, chemistry, physics, geometry | {"url":"http://www.algebrahelp.com/San_Jacinto_calculus_tutors.jsp","timestamp":"2014-04-19T07:22:43Z","content_type":null,"content_length":"25025","record_id":"<urn:uuid:c51c3860-8a54-4f10-9378-aed892fd6665>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Five Famous Fractals
This Demonstration presents a sample from the varied world of fractals. The fractals presented here are images of the complex plane where each pixel is colored differently. Points are colored based
on their behavior under iteration of a complex function, with different functions used to generate the different fractals shown here. Sometimes the sequence of iterates diverges by growing without
bound in magnitude, and sometimes it converges to a fixed point.
The values and give the coordinates of the current image's center and is half the width of this image. The status bar at the bottom of the controls shows the progress as a new image is generated and
the number of seconds required for the calculation. Use a high number for the resolution when moving about and then choose 1 to get a finer image.
Mandelbrot set
was the first of its type to be visualized using a computer. It has become famous for its beautiful and complex structures. It is produced by the iteration of the simple function .
The Mandelbox is a fractal recently discovered by attempting to expand the Mandelbrot set into three dimensions. The iterated function involves geometric folding operations and can be applied to
points of any dimensions.
A Newton attraction basin is created by using the iterating function from Newton's
numerical method
of finding roots . Here the basin is for .
The magnet fractal comes from formulas describing magnetic phase transitions, which give a fractal related to the Mandelbrot set; the map is . The fractal contains both convergent and divergent
Burning Ship
fractal could perhaps be described as an incorrectly implemented attempt at the Mandelbrot set. Zoom in on the right-hand side around to see the evocative image that gives the fractal its name. | {"url":"http://demonstrations.wolfram.com/FiveFamousFractals/","timestamp":"2014-04-16T07:14:31Z","content_type":null,"content_length":"44173","record_id":"<urn:uuid:64b129be-c72c-4f4d-a965-615dd56fb49c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
LINE_STIPPLE problem
12-16-2012, 12:57 PM
LINE_STIPPLE problem
I am drawing an array of lines.
I have some lines that need to be stippled and the rest just regular lines.
Im doing something like this (i just hand coded this obviously)
//draw lines
glenable stipple
//draw stipple lines
gldisable stipple
If i comment out the enable stipple part then the program runs just fine just without my stippled lines....
If i leave the enable stipple in there, then it draws everything just fine but crashes....
Im using VC++ 6.0 over a remote desktop connection. Im assuming its crashing my laptop video card and not able to do much debugging. i won't have access to the main PC till after the new year. I
can't seem to get any debug info. At the office it was giving me some kind of nVidia errors... I can't remember what the error was because i was working on other parts of the software at the
12-17-2012, 04:06 PM
As far as I know, the problem is that you can't glEnable(GL_LINE_STIPPLE) inside of a glBegin()/glEnd() block. In immediate mode OpenGL doesn't like it much when you do anything except pass
specific drawing commands.
12-18-2012, 09:14 AM
Besides that, is there a problem with calling the glBegin/glEnd say 6000 times? Otherwise, is there another way with doing the line stipple? I have thousands of lines to draw all in an array and
a lot of them need to be stippled.
12-18-2012, 10:18 AM
There's no reason you should call glBegin()/glEnd() that many times. OpenGL is a state machine at heart; when you call glBegin(GL_LINES) you go into immediate mode, where all subsequent calls are
interpreted accordingly until glEnd(). In your case, if you must use immediate mode, you need to place your for loop inside glBegin() and glEnd().
I don't have any definite reference to point you to, but generally it's not good to call any OpenGL functions more than a few hundred times per frame, since each call takes some CPU time and some
bus transfer time. The point of OpenGL is hardware acceleration, so too many OpenGL function calls defeats the purpose, since it occupies the CPU.
I'm bad at explaining in words, so I'll write code. You can do it somewhat like this (I'm assuming a lot of things, like orthographic projection). Unfortunately the closest thing I can write to
C++ is C89, so please bear with me.
Code :
// Given line data in the following struct
typedef struct {
int x0;
int y0;
int x1;
int y1;
char stippled; // Functions as a boolean, 1 true, 0 false
} line;
Code :
// Given an array of lines "lines" and length "length"
int i;
for (i = 0; i < length; i++) {
if (lines[i].stippled == 0) {
glVertex2f(lines[i].x0, lines[i].y0); // Each pair of vertices is interpreted as the beginning and end of a line,
glVertex2f(lines[i].x1, lines[i].y1); // so you can give OpenGL multiple pairs in each block
// Do the glBegin()/glEnd() block + for loop again but testing for stipple flag
Of course, this way requires two semi-redundant passes, but since you can't enable stipple from inside a glBegin()/glEnd() block, it's pretty much the only immediate mode approach (assuming I
guessed your structure correctly).
However, a better way to do it would require restructuring your data. Separate the stippled lines from the regular lines, and make a primitive (probably float) array for each where (assuming
lines from 0 to 1, 2 to 3, etc):
Code :
{x0, y0, x1, y1, x2, y2, x3, y3, ...}
Then draw it with vertex arrays:
Code :
// Given float arrays "stippled" and "not_stippled" with lengths "length_s" and "length_n" respectively
glVertexPointer(2, GL_FLOAT, 0, not_stippled); // Where 2 is number of floats per vertex, GL_FLOAT is the type, and 0 is the "stride" (how many to skip between vertices)
glDrawArrays(GL_LINES, 0, length_n / 2); // Where GL_LINES is draw mode, 0 is start index, and length_s / 2 is how many vertices to draw
glVertexPointer(2, GL_FLOAT, 0, stippled);
glDrawArrays(GL_LINES, 0, length_s / 2);
You could take a step further and use VBOs, but I've gone on long enough... I'll leave that to you to figure out, I suppose ;) | {"url":"http://www.opengl.org/discussion_boards/printthread.php?t=180224&pp=10&page=1","timestamp":"2014-04-19T22:27:44Z","content_type":null,"content_length":"10548","record_id":"<urn:uuid:6525317d-32bf-4855-a227-8c5a70009fbe>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00364-ip-10-147-4-33.ec2.internal.warc.gz"} |
And Other Mathematical Tales of Chance
With essays on chance, fate, anticipation, chaos, risk, and statistics,
The Broken Dice
is as much philosophy as mathematics. Each of its essays begins with a story taken from the
Icelandic sagas
or the Bible and is illustrated with examples from elsewhere in literature, history, and everyday life.
In "Chance", Olaf Haroldsson's dicing for an island with the king of Sweden (from the Heimskringla) is connected, via a Borgesian tale of a now-lost monastic manuscript, to the problem of
pseudo-random number generation. "Fate" looks at contingency and determinism in mathematics, exploring regularity in sequences, entropy, and contingent series and touching on the possible contingency
of mathematics itself. "Anticipation" considers expectations and how they influence their own accuracy; Ekeland weaves examples from markets, poker, and soccer in with stories from The Saga of Olaf
Trygvesson and Rabelais. The longest essay, "Chaos", is a brief introduction to chaos theory, to the concepts of exponential instability and attractors; examples include the stability of the solar
system and Gylippus' meeting of the Peloponnesian fleet in 413 BC. Gunnar's abrupt decision not to go into exile in Njal's Saga is the starting point for "Risk", which examines the psychology of
human risk-avoidance and risk-taking, ranging from the Athenian decision to fight at Marathon to the hazards of nuclear technology. And a brief piece on "Statistics" exposes its philosophical and
mathematical foundations — having started with Joseph, Pharaoh, and the seven years of famine and plenty.
The Broken Dice is a wonderful work, an exploration of the ties between mathematics and philosophy (outside mathematical logic). The essays vary in their assumption of mathematical knowledge: anyone
totally unfamiliar with chaos theory will not find "Chaos" an ideal introduction, for example, but no mathematics at all is needed to follow "Risk" and the other essays assume no more than basic high
school mathematics. The Broken Dice will entertain mathematicians and non-mathematicians alike, but especially those with a humanities background, who will most appreciate the historical and literary
June 1997
External links:
- buy from Amazon.com or Amazon.co.uk
- details at the University of Chicago Press
Related reviews:
- more French literature
- books about mathematics
- books about philosophy
- more popular science
- books published by The University of Chicago Press | {"url":"http://dannyreviews.com/h/Broken_Dice.html","timestamp":"2014-04-18T00:35:12Z","content_type":null,"content_length":"7258","record_id":"<urn:uuid:5c64a60f-32a3-4afc-93bb-e74906b57556>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to calculate deadweight loss; easy 4 step method
Deadweight loss occurs when an economy’s welfare is not at the maximum possible. Many times, professors will ask you to calculate the deadweight loss that occurs in an economy when certain conditions
unfold. These conditions include different market structures, externalities, and government regulations. Review this
past post for more information on deadweight loss
The trick to remember when calculating deadweight loss, is that deadweight loss occurs whenever
marginal benefit is not equal to marginal cost. In order to get the total deadweight loss for the economy you must consider every unit that is produced where marginal cost is greater than marginal
benefit (a net loss to the economy if MC>MB). Also, it is possible that more should be produced if marginal benefit is greater than marginal cost, this results in foregone welfare because we are not
producing enough in the economy even though MB>MC. (Review info on why marginal benefit should equal marginal cost)
Calculating deadweight loss can be done in a few easy steps:
1) Identify where what amount of a good or service is currently being produced (we will call this Q1).
2) Identify where the societal optimum should be and figure out the quantity produced in this equilibrium (should occur where society’s MC = society’s MB, we will call this Q2).
3) Because of the nature of the MC (supply) and MB (demand) curves, we should get a triangle shape, with the two curves (supply and demand) crossing at Q2. This triangle shape will have a base (the
difference between Q2 and Q1) as well as a height (the difference between MC and MB at Q1 (most common the difference in prices)).
4) The equation for the area of a triangle is ½(base*height). We know what the base and the height are in this scenario so we can calculate the deadweight loss by figuring out the area of this
triangle: ½(difference between Q1 and Q2 * the difference between MC and MB at the wide end).
Now let’s go through an example to demonstrate how these four steps can be used to actually calculate the deadweight loss.
Looking at the example above, we see that equilibrium in this market occurs at a price of 5, and a quantity of 5. If we have a tax imposed on the economy, then we see equilibrium quantity go down to
4. This means that our Q1 is 4, and our Q2 is 5. So the base of our deadweight loss triangle will be 1. The difference between supply and demand curve (with the tax imposed) at Q1 is 2. So our
equation for deadweight loss will be ½(1*2) or 1. So here, when we calculate deadweight loss for this example, we get a deadweight loss equal to 1.
Summary: Deadweight loss is generally triangular shaped and will be located between the two equilibrium quantities. Remember that the equation for a triangle is 1/2(base*height).
Spread the knowledge!
0 comments: | {"url":"http://www.freeeconhelp.com/2011/10/how-to-calculate-deadweight-loss-easy-4.html","timestamp":"2014-04-21T07:05:30Z","content_type":null,"content_length":"74979","record_id":"<urn:uuid:45279808-645d-4d59-bff9-dce7f0def9d2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Teisha on Tuesday, March 9, 2010 at 10:33pm.
1. What pressure would be needed to compress 25.1 mL of hydrogen at 1.01 atm to 25% of its original volume?
2. If the pressure on a 1.04-L sample of gas is doubled at constant temperature, what will be the new volume of the gas?
3. A 1.04-L sample of gas at 759 mm Hg pressure is expanded until its volume is 2.24 L. What will be the pressure in the expanded gas sample (at constant temperature)?
4. What pressure would have to be applied to a 27.2-mL sample of gas at 25 degrees celsius and 1.00 atm to compress its volume to 1.00 mL without a change in temperature?
5. How would you describe absolute zero?
6. A favorite demonstration in introductory chemistry is to illustrate how the volume of a gas is affected by temperature by blowing up a balloon at room temperature and then placing the balloon into
a container of dry ice or liquid nitrogen (both of which are very cold). Suppose a balloon containing 1.15 L of air at 25.2 degrees Celsius is placed into a flask containing liquid nitrogen at -78.5
degrees celsius. What will the volume of the sample become (at constant pressure)?
7.Suppose a 1.25 L of argon is cooled from 291 K to 78 K. What will be the new volume of the argon sample?
8. Suppose a gas sample is cooled from 600 K to 300 K. Ho will the new volume of the gas be related to its original volume?
9. Th label of an aerosol spray can contains a warning that the can should not be heated to over 130 degrees F because of the danger of explosion due to the pressure increase as it is heated.
Calculate the potential volume of the gas contained in 500-mL aerosol can when it is heated from 25 degrees C to 54 degrees C (approx. 130 degrees F) assuming a constant pressure.
10. A sample of gas has a volume of 127 mL in a boiling water bath at 100 degrees celsius. Calculate the volume of the sample of gas 10 degrees celsius intervals after the heat source is turned off
and the gas sample begins to cool down to eh temperature of the laboratory, 20 degrees celsius.
11. At conditions of constant temperature and pressure, the volume of a sample of ideal gas is _____ proportional to the number of moles of gas present.
12. A mathematical expression that summarizes Avogadro's law is ________.
13. If 0.105 mol of helium gas occupies a volume of 2.35 L at a certain temperature and pressure, what volume would 0.337 mol of helium occupy under the same conditions?
14. If 1.00 mol of helium occupies a volume of 22.4 L at 273 K at 1.00 atm, what volume will 1.00 g of helium will occupy under the same conditions?
15. If 3.25 mol of argon gas occupies a volume if 100. L at a particular and temperature and pressure, what volume does 14.15 mol of argon occupy under the same conditions?
16. If 2.71 g of argon gas occupies a volume of 4.21 L, what volume will 1.29 mol of argon occupy under the same conditions?
17. If gaseous mixture is made of 2.41 g of He and 2.79 g of Ne in an evacuated 1.04-L container at 25 degrees celsius, what will be the partial pressure of each gas and the total pressure in the
18. A tank contains a mixture of 3.0 mol of N2, 2.0 mol of O2, and 1.0 mol of CO2 at 25 degrees celsius and a total pressure of 10.0 atm. Calculate the partial pressure (in torr) of each gas in the
19. How many moles of helium gas would be required 2.14-L container to a pressure of 759 mm Hg at 25 degrees celsius? How many moles of neon gas would be required to fill a similar tank to the same
pressure at 25 degrees celsius?
20. Calculate what mass of argon gas is required to fill a 20.4-L container to a pressure of 1.09 atm at 25 degrees celsius.
21. What is the pressure in a 245-L tank that contains 5.21 kg of helium at 27 degrees celsius?
22. What mass of helium gas is needed to pressurize a 100.0-L tank to a 255 atm at 25 degrees celsius? What mass of oxygen gas would be needed to pressurize a similar tank to the same specifications?
23. At what temperature will a 1.0 g sample of neon gas exert a pressure of 500. torr in a 5.0 L container?
These are some review questions that can help me study so all the help would be great so I can study from it :) Thank you!
Please help me, I will be grateful!
• sciencee - Ms. Sue, Tuesday, March 9, 2010 at 10:37pm
If these are review questions for an exam, you will be far, far better off to review your text book and work out the answers yourself.
• sciencee - DrBob222, Wednesday, March 10, 2010 at 12:08am
I agree with Ms. Sue; however, I can give you a couple of tips.
I notice that most of these review questions are for gases with pressure, temperature, moles (or grams) and volume changes.
(P1V1)/T1 = (P2V2)/T2 will work those that don't have grams or moles listed.
For the others, most can be worked with the universal gas law equation, PV = nRT. When using this equation don't forget to change temperature to Kelvin. Kelvin = 273 + degrees celsius.
If you are given grams, you can change to moles by moles = grams/molar mass.
If you run into problems, post A (not dozens) problem and show your work. Please explain what you don't understand about the problem and why you are confused about the next step to take.
• chemistry - Asah, Tuesday, May 11, 2010 at 11:56am
i need help
• chemsitry - Asah, Tuesday, May 11, 2010 at 11:57am
i need help for this quation
Related Questions
Chem hw! - 1. What pressure would be needed to compress 25.1 mL of hydrogen at 1...
Chem hw! - 1. What pressure would be needed to compress 25.1 mL of hydrogen at 1...
Chem - Would you please tell me if these are right? A sample of helium has a ...
Chemistry - Need help with #2 & # 3 1. Record the pairs of data for pressure (...
chemistry - If the pressure on a 2.50 mL gas sample were doubled from 0.500 atm ...
ap chemistry - Suppose that a sample of gas occupies 117 mL of volume at 25C ...
CHEMISTRY - 1. If 29.0 L of methane, CH4, undergoes complete combustion at 0.961...
chemistry - what pressure would have to be applied to a 27.2 ml sample of gas at...
AP Chemistry - Suppose that a sample of gas occupies 140 mL of volume at 25C ...
Chemistry - Please help and explain each problem. PLEASE!!!! 1. If 29.0 L of ... | {"url":"http://www.jiskha.com/display.cgi?id=1268191998","timestamp":"2014-04-18T07:22:21Z","content_type":null,"content_length":"13774","record_id":"<urn:uuid:1f5d3a4f-46b2-4cc5-879f-c20340e6e221>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00130-ip-10-147-4-33.ec2.internal.warc.gz"} |
New tool could unpick complex cancer causes and help sociologists mine Facebook
Researchers at the University of Warwick's Department of Statistics and Centre for Complexity Science have devised a new research tool that could help unpick the complex cell interactions that lead
to cancer and also allow social scientists to mine social networking sites such as Facebook for useful insights.
An approach called "graphical models" can be used by researchers to gain an understanding of a range of systems with multiple interacting factors. These models use mathematical objects called graphs
to describe and depict the probability of relationships between each of the components. When used to study molecular biology researchers may be interested in saying something about which molecules
influence one another; in the social sciences researchers would use them to understand the relationships between various economic and demographic factors.
However gaining such information from a graphical model can be a very challenging exercise, because of the vast range of possible graphs needed for even a relatively small number of variables. For
instance the relatively small network studied by the University of Warwick led team for this research paper had just 14 proteins which were implicated in the development of a form of cancer, but
those 14 proteins had a vast number of combinations of possible mutual interactions.
Such tasks would be made much easier if the mathematical tools used to undertake the analysis could somehow embody all the current knowledge of what was likely, and or probable, in the networks they
were analysing. Such a mathematical method could be viewed as mimicking how human researchers learn from data, in effect interpreting new information in light of what is already known.
The Warwick researchers led by Dr Sach Mukherjee of Warwick's Department of Statistics and Centre for Complexity Science have devised just such a method that embeds current knowledge in the
mathematical analysis to cut through the vast complexity of this type of analysis using a mechanism called "Informative Priors".
The researchers took the 14 protein network and created a mathematical tool that was able to incorporate all of what the interactions, and limits on interactions, that were likely and/or probable in
such a network of these particular proteins. This allowed a rapid and accurate analysis of the probabilities of interactions between each on the 14 proteins. The technique even able to cope with
misconceptions in current understanding of particular networks as it the was designed to "overturn" any reject any data included in the "Informative Priors" that was consistently at odds with any
observed new data.
Analysis with these network models was much better able to resolve complex interactions than simple, correlation-based methods. Moreover using informative priors, gave much more accurate results than
analysis that incorporated no prior understanding of the network (so called "flat priors").
The researchers will now use their new technique to examine the network of proteins behind the development of breast cancer but they are also looking at how the tool could be used in social science
to mine a vast amount of useful anonymised data from social networking sites such as Facebook to gain significant new understandings of large scale interactions and relationships in society at large.
The research paper is entitled Network inference using informative priors by Dr Sach Mukherjee of the University of Warwick, Terence P. Speed of the University of California, Berkeley. It has just
been published in PNAS.
Source: University of Warwick | {"url":"http://phys.org/news148561729.html","timestamp":"2014-04-20T11:10:23Z","content_type":null,"content_length":"66541","record_id":"<urn:uuid:8c9a3a52-4d5e-4dce-939d-0b378c8dc742>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
Andy successfully defended his masters thesis on June 11, 2009
Neural Networks and Atmospheric Scattering Calculations
For the majority of the particles in the atmosphere, calculations of scattering energy loss are increasingly accurate in proportion to the computing time afforded them. Accurate radiative transfer
calculations, even with the most efficient numerical methods, are computationally expensive. This becomes a serious problem for multi-decadal climate simulations for which an accurate representation
of the radiative impact of atmospheric constituents is crucial. This thesis presents one method for reducing the computational expense radiative transfer calculations of aerosol scattering
properties, which are used in chemical models. The goal of this research is to develop a fast scattering code using a neural network that is trained on input and output data derived from an accurate
T-Matrix scattering algorithm. The input space to the neural net consists of scattering parameters that describe the atmospheric scattering conditions such as wavelength of incoming light, effective
particle radius, and index of refraction. The output space consists of the coefficients of a Legendre polynomial expansion of the phase function. The neural net finds the nonlinear mapping between
the input and output spaces for a training set and can subsequently be used to generate the phase function for arbitrary wavelength, particle radius and index of refraction. In this research, a
neural network applicable to both Lorenz and Mie scattering is developed and tested for both accuracy and speed. The accuracy of the neural net is found to be excellent, with errors well below 10%,
and runtime testing shows that the neural net is approximately 5 times faster than a lookup table. | {"url":"http://www.umbc.edu/blogs/physics/2009/06/ms_defense_andrew_rickert.html","timestamp":"2014-04-16T13:37:01Z","content_type":null,"content_length":"17545","record_id":"<urn:uuid:c9c9aa74-6b2c-4362-8d74-bb6168a4b4a9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00272-ip-10-147-4-33.ec2.internal.warc.gz"} |
Miller's theorem and virtual ground
1. 8th February 2010, 13:20 #1
Full Member level 3
Join Date
Feb 2007
Rio de Janeiro
10 / 10
Miller's theorem and virtual ground
There are many misinterpretations of such a simple theorem, including textbooks by Razavi, Sedra, this forum, etc. For example, Razavi writes that Miller's theorem can give wrong gain. Sedra
does not suggest finding the output impedance after applying Miller's theorem. A number of people on this forum say that zeroes of the gain are lost after applying Miller's theorem.
All the above is caused by the misuse of a virtual ground. Miller's concept is actually based on the substitution of an impedance Zµ with two series ones, Zµin and Zµo, in such a way that
their total impedance equals Zµ, and the potential at their interconnection equal zero, which is a virtual ground. If one does not connect the virtual ground to the real ground, he or she has
no problems with gain, zeroes, etc. This is so because the current entering the virtual ground from Zµin should continue to flow through Zµo. This does not happen when the virtual ground and
the real ground are short circuted. Short circuit a virtual ground and the real ground in any circuit and it will puzzle you, so why to do it in the case of Miller?
Regarding the zeroes of the gain, consider a CE amplifier. In this amplifier, the Miller gain, µ=-[1+gm*(Rc||ro)]/[1+j*ω*(Rc||ro)*Cµ]+1, can easily be found from the following equation with
one unknown: µ=-gm*[Rc||ro||1/(j*ω*Cµ*(1-1/µ)]. Look at µ, does it become zeroes for an ω?
Advanced Member level 2
Join Date
Jul 2009
35 / 35
Re: Miller's theorem and virtual ground
hi Jasmin ur question is not clear can you reform??
Advanced Member level 5
Join Date
May 2008
1425 / 1425
Re: Miller's theorem and virtual ground
Hi jasmin !
Interesting contribution!
However, there is one question from my side:
Don't you think that application of equ. 6-3 from the Razavi excerpt leads to Z2=-R2 only (instead of the parallel connection as shown in Fig. 6-3) ?
And - what is the consequence on the gain for the example (voltage divider) ?
Added somewhat later: JASMIN, , please forgot the above question. I know my mistake now. Sorry!
Advanced Member level 5
Join Date
May 2008
1425 / 1425
Re: Miller's theorem and virtual ground
HI jasmin!
thank you again for your contribution because it directs our attention to a formulation from RAZAVI (explanation to his Fig. 6.3) which sounds a bit "strange".
I think the purpose and the motivation of Miller`s theorem is to allocate a current flowing through an element and which is driven by TWO sources to only ONE single source. Thus, the parmeter
"input impedance" can be computet.
With other words, the Miller theorem can and should be applied only to a signal path between two nodes (X resp. Y) which are connected to TWO signals sources.
Therefore, it makes really no sense (and does not simplify any calculations) to apply ("misuse") the theorem to a simple voltage divider.
Even if (perhaps by accident) this may lead to some correct results (input resistance and "gain"), it is - as mentioned by you - a misuse of the sentence (what about the output resistance for
the voltage divider example?).
Question to the community: Does anybody has a counter example?
That means: Any circuitry with only one source which can be analyzed in a simplified manner using the Miller theorem ? | {"url":"http://www.edaboard.com/thread169384.html","timestamp":"2014-04-20T00:39:42Z","content_type":null,"content_length":"69830","record_id":"<urn:uuid:3b656cb7-2eb5-43e4-9964-47ac56d6c3b8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elliptic captures and describes a collection of terms commonly used in the security and cryptography field.
Many customers find security to be a fascinating and complex world. Elliptic hopes to simplify and clarify options for customers through helpful links on the site and this collection of terms
commonly used in security.
The Advanced Encryption Standard is a security standard which is recommended for all new designs by the National Institute of Science and Technology (NIST). It has many different variants including
CBC, CCM and GCM.
American National Standards Institute.
Application Programming Interface.
An attempt at breaking part or all of a cryptosystem. Examples include algebraic attack, birthday attack, brute force attack, chosen ciphertext attack, chosen plaintext attack, differential
cryptanalysis, known plaintext attack, linear cryptanalysis, middleperson attack.
The action of verifying information such as identity, ownership or authorization.
A sequence of bits of fixed length; longer sequences of bits can be broken down into blocks.
block cipher
A symmetric cipher which encrypts a message by breaking it down into blocks and encrypting each block.
Cipher block chaining. AES-CBC and 3DES-CBC are the most common ciphers used in IPsec.
Content Protection for Recordable Media and Content Protection for Pre-Recorded Media are mechanisms for controlling the copying, moving and deletion of digital media on a host device, such as a
personal computer, or other digital player. It is a form DRM developed by the 4C Entity, LLC (IBM, Intel, Matsushita and Toshiba). The use of the CPRM specification and access to the IP and
cryptographic details required to implement it requires a license from 4C Entity, LLC.
Content Scramble System (CSS) is an encryption system used on most DVDs. It uses a weak, proprietary 40-bit encryption stream cipher algorithm. The CSS key sets are licensed to manufacturers who
incorporate them into products such as DVD drives, DVD players and DVD movie releases.
certificate or cert
An electronic document binding some pieces of information together, such as a user's identity and public-key. Certifying Authorities (CA's) provide certificates.
certificate revocation list
A list of certificates that have been revoked before their expiration date.
Certifying Authority (CA)
A person or organization that creates certificates.
An encryption-decryption algorithm.
The art and science of using mathematics to secure information and create a high degree of trust in electronic design.
Data Encryption Standard or DES
Data Encryption Standard, a block cipher developed by IBM and the U.S. government in the 1970's as an official standard.
dictionary attack
A brute force attack that tries passwords and or keys from a precompiled list of values.
Diffie-Hellman key exchange
A key exchange protocol allowing the participants to agree on a key over an insecure channel.
Commonly used to refer to the output of a hash function, e.g. message digest refers to the hash of a message.
digital signature
The encryption of a message digest with a private key.
discrete logarithm problem
The problem of finding r such that gr = d, where d and g are elements in a given group. For some groups, the discrete logarithm problem is a hard problem used in public-key cryptography.
DRM Digital Rights Management
Security designs aimed at preserving the integrity of content such as music and films when such content is distributed over digital media such at Firewire, USB and IP networks.
Digital Signature Algorithm. DSA is a public-key method based on the discrete logarithm problem.
Digital Transmission Content Protection. A DRM design created by Hitachi, Intel, Matsushita, Sony, and Toshiba.
Elliptic Curve Cryptography; A public-key cryptosystem based on the properties of elliptic curves.
elliptic curve
The set of points (x, y) satisfying an equation of the form
y2 = x3 + ax + b
for variables x, y and constants a, b Î F, where F is a field. The National Security Agency has recommended curves and fields for use in public key cryptography to replace the RSA algorithm.
elliptic curve discrete logarithm (ECDL) problem
The problem of finding m such that m•P = Q, where P and Q are two points on an elliptic curve.
The transformation of plaintext into an apparently less readable form (called ciphertext) through a mathematical process. The ciphertext may be read by anyone who has the key that decrypts (undoes
the encryption) the ciphertext.
export licensing
Encryption, in any form which leaves its country of origin requires a license from the government as encryption is dual-use technology, i.e. technology which can be used for either commercial or
military purposes.
In the U.S., export licensing of cryptography is governed by the Bureau of Industry and Security (BIS) and their web site can be found at www.bis.gov. This link will take you right to the page that
explains the export licensing laws relating to cryptography. It is important to distinguish between the export licensing laws as they apply to Elliptic versus those that apply to the final product.
Elliptic licenses cryptography technology in the form of semiconductor IP or software. Elliptic customers transform the IP into an end product which is the form that the export license considerations
are applied to in the licensing process. In many cases, the final product such as an integrated circuit or final product may or may not require a license depending on how the cryptography is used.
The only way to find out is to apply for an export permit through BIS and they are by law required to provide responses to requests in a 30 day period.
In the United Kingdom, the export of products containing cryptography is governed by the Department for Business Enterprise and Regulatory Reform. The web page dealing with export controls of
products containing cryptography can be found through the following link www.berr.gov.uk.
Federal Information Processing Standards
Forward Lock
A DRM method which locks content to a specific device or user preventing content from being further distributed
A mathematical relationship between two values. For example, f defined on the set of real numbers as f(x) = x3 is a function with input any real number x and with output the cube of x.
Galois Counter Mode is a block cipher mode of operation that uses universal hashing over a binary Galois field to provide authenticated encryption.
Galois field
A field with a finite number of elements. The size of a finite field must be a power of a prime number.
A mathematical structure consisting of a finite or infinite set together with a binary operation called group multiplication satisfying certain axioms.
High-Bandwidth Digital Content Protection (HDCP) is a form of DRM developed by the Intel Corporation to control digital audio and video content as it travels across Digital Visual Interface (DVI) or
High Definition Multimedia Interface (HDMI) connections. The HDCP specification is proprietary and an implementation of HDCP requires a license from Digital Content Protection, LLC, a subsidiary of
hash-based MAC
A message authentication counter that uses a hash function to reduce the size of the data it processes.
hash function
A function that takes a variable sized input and derives a fixed size output based upon an algorithm such as SHA-1 or MD5.
Institute of Electrical and Electronics Engineers, a body that creates standards that frequently includes security. 802.16 or WiMAX is an example of a wireless standard created and ratified by the
Internet Engineering Task Force. A body that creates standards for us in the Internet. RFC 4301 for example is the IETF standard that specifies the security design for the Internet - IPsec.
A process through which one ascertains the identity of another person or entity.
International Telecommunications Union - Telecommunications standardization sector.
A string of bits used widely in cryptography, allowing people to encrypt and decrypt data; a key can be used to perform other mathematical operations as well. Given a cipher, a key determines the
mapping of the plaintext to the ciphertext.
key agreement
A process used by two or more parties to agree upon a secret symmetric key.
key exchange
A process used by two more parties to exchange keys in cryptosystems.
key expansion
A process that creates a larger key from the original key.
key generation
The act of creating a key.
key management
The various processes that deal with the creation, distribution, authentication, and storage of keys.
key pair
The full key information in a public-key cryptosystem, consisting of the public key and private key.
key recovery
A special feature of a key management scheme that allows messages to be decrypted even if the original key is lost.
key space
The collection of all possible keys for a given cryptosystem.
linear cryptanalysis
A known plaintext attack that uses linear approximations to describe the behavior of the block cipher.
linear feedback shift register. Used in many hardware implementation of security algorithms because of its ability to cost effectively implement mathematical functions.
The IEEE considering the LRW-AES cipher for storage security. Unfortunately, several security holes were found in the cipher and it dropped from the standard in favor of XTS-AES.
MAC or Message Authentication Code
A MAC is a function that takes a variable length input and a key to produce a fixed-length output.
message digest
The result of applying a hash function to a message.
Millions of Instructions Per Second, a measurement of computing speed.
modular arithmetic
A form of arithmetic where integers are considered equal if they leave the same remainder when divided by the modulus.
National Institute of Standards and Technology, a United States agency that produces security and cryptography related standards which are then published as FIPS documents.
A property of a cryptosystem. Non-repudiation cryptosystems are those in which the users cannot deny actions they performed.
National Security Agency. A security-conscious U. S. government agency whose mission is to decipher and monitor foreign communications.
Public-key Infrastructure. PKIs are designed to solve the key management problem.
Extra bits concatenated with a key, password, or plaintext.
Public-key cryptography Standards. A series of cryptographic standards dealing with public-key issues, published by RSA Laboratories.
The data to be encrypted.
prime factor
A prime number that is a factor of another number is called a prime factor of that number.
prime number
Any integer greater than 1 that is divisible only by 1 and itself. The first twelve primes are 2,3,5,7,11,13,17,19,23,29,31, and 37.
private key
In public-key cryptography, this key is the secret key. It is primarily used for decryption but is also used for encryption with digital signatures.
A series of steps that two or more parties agree upon to complete a task.
provably secure
A property of a digital signature scheme stating that it is provably secure if its security can be tied closely to that of the cryptosystem involved.
pseudo-random number
A number extracted from a pseudo-random sequence.
public exponent
The public key in the RSA public-key cryptosystem.
public key
In public-key cryptography this key is made public to all, it is primarily used for encryption but can be used for verifying signatures.
public-key cryptography
Cryptography based on methods involving a public key and a private key.
RSA algorithm
A public-key cryptosystem based on the factoring problem. RSA stands for Rivest, Shamir and Adleman, the developers of the RSA public-key cryptosystem and the founders of RSA Data Security (now RSA
random number
As opposed to a pseudo-random number, a truly random number is a number produced independently of its generating criteria. For cryptographic purposes, numbers based on physical measurements, such as
a Geiger counter, are considered random.
The number of times a function, called a round function, is applied to a block in a Feistel cipher.
Secure Socket Layer. An application layer protocol used for secure Internet communications.
secret key
In secret-key cryptography, this is the key used both for encryption and decryption.
secure channel
A communication medium safe from the threat of eavesdroppers.
A typically random bit sequence used to generate another, usually longer pseudo-random bit sequence.
session key
A key for symmetric-key cryptosystems which is used for the duration of one message or communication session.
stream cipher
A secret-key encryption algorithm that operates on a bit at a time. This is compared to a block cipher which operates on multiple bits (the block) at a time.
symmetric cipher
An encryption algorithm that uses the same key is used for encryption as decryption.
A DRM method which allows individuals to transfer content they have acquired to other users (ie friends and family) who in turn retrieve their rights to play content from the appropriate license
tamper resistant
In cryptographic terms, this usually refers to a hardware device that is either impossible or extremely difficult to reverse engineer or extract information from.
tamper reaction
A hardware device which has mechanical devices and electronic circuitry to respond to an attempt to compromise the device. The reaction usually includes the immediate erasure of private information
such as keys or constants used in the security design.
The act of recognizing that a person or entity is who or what it claims to be.
weak key
A key giving a poor security implementation, or causing regularities in encryption which can be used by cryptanalysts to break codes.
A binary bitwise operator yielding the result one if the two values are different and zero otherwise. XOR is an abbreviation for exclusive-OR.
The IEEE P1619 Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices mandates the use of XTS-AES cipher for disk security. XTS-AES is a narrow-block tweakable cipher and has
the unique characteristic that the ciphertext is the same size of the plaintext making it ideal for storage applications. | {"url":"http://www.elliptictech.com/index.php/knowledge-center/glossary","timestamp":"2014-04-20T00:39:08Z","content_type":null,"content_length":"84753","record_id":"<urn:uuid:cd5ab772-7bcf-44ba-b4a4-93008a2f6b8f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantifying Exposure
05-09-2013 09:14 PM #21
Join Date
May 2006
Aurora, IL
Is it not reading the illuminance (intensity) of reflected light and then using Exposure = Illuminance x Time?
Edit: I feel like I'm missing something here...
As Ben said, some of us can get unnecessary too technical but if you feel like you're missing something then yes you do. While it's simple Exposure=Illumninance x Time but the meter reads the
luminance( and not Illuminance) of reflected light and thus there is a relatively complex process to arrive at what will be the Illuminance at the film plane.
That isn't what I meant by missing something. I know the variables involved. I meant it might be a Benskin-esque "plain sight" trap.
Stephen Benskin
Take a look at his flare graph. He starts it at 0.10 over Fb+f and has the amount of flare based from that point, when it is actually based off a stop below. A flare factor of 2 will not double
the exposure at 0.10. His exposure testing also uses Zones and stopping down 4 stops.
What about the technique of shooting a step tablet with a camera, whether it is inside like with Shaffer's method or outside like WBM? The camera is the exposing device and the exposure meter
determines the exposure. Wouldn't it be beneficial to understand what the film plane exposure should be (Hg)?
Let's say we are testing a 125 speed film. What should the exposure be at the speed point? What should the exposure be at the metered exposure point? What is the difference between the speed
point constant and the exposure constant?
Stephen, for the speed point we know the exposure should be Hm in S=0.8/Hm when the ISO conditions are satisfied. Determing Hm is what I find tricky. Simplifying without ISO requirements, even
when just targetting an arbitrary fixed density speed point with a step tablet test (in camera, out of camera, whatever), I find it somewhat less than straight forward.
Michael R 1974
Stephen, for the speed point we know the exposure should be Hm in S=0.8/Hm when the ISO conditions are satisfied. Determing Hm is what I find tricky. Simplifying without ISO requirements, even
when just targetting an arbitrary fixed density speed point with a step tablet test (in camera, out of camera, whatever), I find it somewhat less than straight forward.
I'm speaking more theoretically. Under the ISO conditions, what should Hm be if the film speed was 125? What would the metered exposure point be? And what is the difference between the two?
This isn't as hard or overly technical as some are suggesting, and by understanding a few basic rules of exposure, it's possible to evaluate the validity of a test method like Schaffer's or WBM.
To start with, what are the exposure instructions for the WBM method? It's a simple job of comparing the expected results with the two known exposure points in the above question.
Books don't seem to cover this. The more technical books assume their readers are familiar with the values of the variables and don't bother to show examples. More general photography books
usually don't attempt to cover it, so the opportunity of working with actual numbers associated with exposure falls through the cracks. I think this deprives people of a very useful tool for
analysis or simply for a better understanding of the process. How can someone think to properly analyze something like the ISO speed standard when they don't have the necessary tools.
Isn't it just 10 times? So instead of .8 it's 8...
Bill Burk
Isn't it just 10 times? So instead of .8 it's 8...
Yup. So what would Hm and Hg be for a 125 speed then?
So, the speed point would be..
125 = 0.8 / Hm
multiply both sides of equation by Hm... 125 Hm = 0.8
divide both sides of equation by 125... Hm = 0.8 / 125
Hm = 0.0064
Likewise, the metered point would be...
Hg = 0.064
Ten times... Seems extremely arbitrary or lucky to be such a round easy to remember number.
Bill Burk
So, the speed point would be..
125 = 0.8 / Hm
multiply both sides of equation by Hm... 125 Hm = 0.8
divide both sides of equation by 125... Hm = 0.8 / 125
Hm = 0.0064
Likewise, the metered point would be...
Hg = 0.064
Ten times... Seems extremely arbitrary or lucky to be such a round easy to remember number.
So it takes 0.064 Lux.Sec to produce a density of .10 plus fog on the film?
When the rest of the conditions are met, but yes, that is the idea.
I should have labeled the exposure units...
05-10-2013 07:02 AM #22
Join Date
Feb 2010
Montreal, Canada
Multi Format
05-10-2013 07:12 AM #23
Join Date
Feb 2010
Montreal, Canada
Multi Format
05-10-2013 07:25 AM #24
Join Date
Jan 2005
Los Angeles
4x5 Format
05-10-2013 01:16 PM #25
Join Date
Jan 2005
Los Angeles
4x5 Format
05-10-2013 01:44 PM #26
05-10-2013 02:04 PM #27
Join Date
Jan 2005
Los Angeles
4x5 Format
05-10-2013 02:39 PM #28
05-10-2013 03:38 PM #29
Join Date
May 2006
Aurora, IL
05-10-2013 03:45 PM #30 | {"url":"http://www.apug.org/forums/forum48/118352-quantifying-exposure-3.html","timestamp":"2014-04-21T07:14:31Z","content_type":null,"content_length":"85258","record_id":"<urn:uuid:13ccf845-3946-4b32-9f4e-7e3173a139a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
In trying to prove some interesting properties about theories relating to multimethods, I stumbled upon this fact that completely invalidates them. I'm embarassed to post it here, since it is so
simple and obvious that I should have seen it sooner. But, alas, I was cursed by my hubris, not verifying that the most essential properties held, and just assuming that they did. This is why
mathematicians insist on being rigorous!
So, let's illustrate by example. Let's say we have a theory Ring{^T} that requires that ^T have a ring structure. We have a role Int for integers, and a role Nat for naturals. Let's also call the set
of integers Z and the set of naturals N. Clearly N is a subset of Z.
Given Ring{Int} (the integers form a ring),
Int{Z} and Nat{N} by definition
Ring{^T} <= Int{^T} # expand model definition
N subset Z implies Int{N}, so Int{N} # role relation
Ring{N} since Int{N}.
But the naturals do not form a ring (under integer operations), though I seemed to have proven that they do. The error came in when I "expanded the model definition" from Ring{Int} to Ring{^T} <= Int
This can be fixed. The changes necessary to fix this are very minor from the point of view of the theory user, but very major from the point of view of the theorist. In particular, they imply that
theory expressions can no longer be reduced to first-order constraints on unnamed types. Instead, the system becomes unbounded-order, which is a bummer for the implementor.
However, I have that unbounded-order system almost entirely formulated in my mind. I'm pretty sure I can keep the benefits of theories while maintaining the solvability of an explicitly annotated
system (we know that completely inferring a system this complex is equivalent to the halting problem and thus undecidable). I hope that it is also inferrable "most of the time". Updates soon.
theory.pod sucks More | Login | Reply
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
theory.pod sucks 2 Comments /
Loading... please wait.
OK, I'm at a loss here. Naturals don't form a ring because they lack the number zero? Is that right? If so, that seems straightforward enough. However, I don't know what you mean when you wrote that
even though Naturals are rings, you seem to have proven they are. I certainly don't know what that means in relation to the code you posted.
Taking a guess, it seems what you're saying is that for a given set S which satisfies condition C, no arbitrary subset of S is necessarily guaranteed to satisfy C but
• > Naturals don't form a ring because they lack the
> number zero?
This is somewhat beside the point, but that is one reason, yes. The other reason is that there are elements (namely, all of them :-) that don't have additive inverses.
> Taking a guess, it seems what you're saying is
> that for a given set S which satisfies condition
> C, no arbitrary subset of S is necessarily
> guaranteed to satisfy C but you've accidentally
> implied that in your theory.
Precisely. It turns out that that's h | {"url":"http://use.perl.org/use.perl.org/_luqui/journal/27646.html","timestamp":"2014-04-18T23:21:44Z","content_type":null,"content_length":"27800","record_id":"<urn:uuid:a61fb236-fcb7-48f2-be8d-10b147cb4c20>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
SICP Lecture 2A: Higher-order Procedures
Structure and Interpretation of Computer Programs
Higher-order Procedures
Covers Text
Section 1.3
You can
download the video lecture
or stream it on
MIT's OpenCourseWare site
. It's also available on
This lecture is presented by MIT's
Professor Gerald Jay Sussman
So far in the SICP lectures and exercises, we've seen that we can use procedures to describe compound operations on numbers. This is a first-order abstraction. Instead of performing primitive
operations on the values to solve a distinct instance of a problem, we can write a procedure that performs the correct operations, then tell it what numbers to operate on when we need it. For
example, instead of writing
(* 5 5)
when we want to find the value of 5 squared, we can define the square procedure:
(define (square x) (* x x))
then just invoke it on any number we need to square.
In this section we'll start to think about higher-order procedures. Instead of simply manipulating data (or more precisely, what we are used to
of as data), as in the procedures we've written so far, higher-order procedures manipulate other procedures by accepting them as arguments or returning them as values.
Procedures as Arguments
The video lecture explains this concept by looking at three very similar procedures, then showing how one idea can be refactored out of them to create a procedure that can be reused. The three
procedures are the sum of the integers from a to b, the sum of the squares from a to b (the text book uses the sum of cubes), and Leibniz's formula for finding π/8.
(define (sum-int a b)
(if (> a b)
(+ a (sum-int (+ a 1) b))))
(define (sum-sq a b)
(if (> a b)
(+ (square a) (sum-sq (+ a 1) b))))
(define (pi-sum a b)
(if (> a b)
(+ (/ 1.0 (* a (+ a 2)))
(pi-sum (+ a 4) b))))
As you can see, these procedures are all nearly identical. The only things that are different are the function being applied to each term in the sequence, how you get to the next term of the
sequence, and the names of the procedures themselves. We can use the similarities of these procedures to create a general pattern that describes all three.
(define (<name> a b)
(if (> a b)
(+ <term> (<name> (<next> a) b))))
The general pattern is that the procedure:
• is given a name
• takes two arguments (a lower bound and an upper bound)
• performs a test to see if the upper bound is exceeded
• applies some procedure to the current term and adds the result to the recursive call to get the next term
• computes of the next term
The important thing to note here is that some of the things that change are
, not just numbers.
There's nothing very special about numbers. Numbers are just one kind of data.
Since procedures are also a type of data in Scheme, we can use them as arguments to other procedures as well. Now that we have a general pattern, we use it to define a
(define (sum term a next b)
(if (> a b)
(+ (term a)
(sum term (next a) next b))))
The arguments
are procedures that are passed in to the
procedure, and will be used to compute the value of the current term and how to get to the next one.
We can use the procedure above to rewrite
as follows:
(define (sum-int a b)
(define (identity x) x)
(sum identity a inc b))
First, the
procedure simply takes an argument and returns that argument. We need this because the
procedure doesn't apply any function to each term of the sequence, it just sums up the raw terms. Our
procedure is expecting a procedure as its second argument, so we have to give it one. Next we simply call the
procedure, passing in the
procedures for
procedures can be rewritten as well (which is kind of the point).
(define (sum-sq a b)
(define (identity x) x)
(sum square a inc b))
(define (pi-sum a b)
(sum (lambda (i) (/ 1.0 (* x (+ x 2))))
(lambda (i) (+ x 4))
Note that in the
example, we use lambda notation to define the procedures for
as we are passing them in to
. We can redefine
without lambda notation by explicitly defining named procedures.
(define (pi-sum a b)
(define (pi-term x)
(/ 1.0 (* x (+ x 2))))
(define (pi-next x)
(+ x 4))
(sum pi-term a pi-next b))
This makes no difference in how the code is evaluated. In the first example we passed a procedure by its definition, in the second we passed it by its name. In future articles on lectures and
exercises, I'll be using more and more lambda notation since we should be becoming more familiar with it.
Separating the abstract idea of
into its own procedure allows us to reuse it in several situations where that idea is needed. Another important point is that we could reimplement
itself, in perhaps a more efficient way, and we would benefit from it everywhere it is used without having to rewrite all the procedures that use it. Here's an iterative implementation of
(define (sum term a next)
(define (iter j ans)
(if (> j b)
(iter (next j)
(+ (term j) ans))))
(iter a 0))
Decomposing procedures in this way allows us to change one abstraction without needing to change the every procedure where it is used. If we wanted to use the iterative approach to summation in the
original three procedures at the top, we'd have to rewrite all three of them individually. Putting the summation code in its own procedure allows you to change it in one place, but use it everywhere
the procedure is called.
section 1.1.8
we looked at Heron of Alexandria's method for computing a square root.
(define (sqrt x)
(define tolerance 0.00001)
(define (good-enough? y)
(< (abs (- (* y y) x)) tolerance ))
(define (improve y)
(average (/ x y) y))
(define (try y)
(if (good-enough? y)
(try (improve y))))
(try 1))
The algorithm for computing the square root of x is not intuitively obvious from looking at the code in this procedure. In this section we'll show how abstraction can be used to clarify how this
procedure works.
The procedure iteratively improves a guess until it is within a pre-defined tolerance of the correct answer. The function for improving the guess for the square root of x is to average the guess with
x divided by x. In mathematical terms
If you substitute in √x for y, you'll notice that we're looking for a fixed point of this function. (A
fixed point
of a function is a point that is mapped to itself by the function. If you put a fixed point into a function, you get the same value out.)
f(y) = (y + x/y) / 2
f(√x) = (√x + x/√x) / 2
f(√x) = (√x + √x) / 2
f(√x) = 2 * √x / 2
f(√x) = √x
We can use this to rewrite the
procedure in terms of computing a fixed point. (Even though we don't have a procedure to compute a fixed point yet. That will be coming up next.)
(define (sqrt x)
(lambda (y) (average (/ x y) y))
This procedure shows how to compute the square root of x in terms of computing a fixed point, but from this we only know that what we pass to the
procedure is another procedure and an initial guess. At this point, fixed-point is only "wishful thinking." Here's one way to compute fixed points of a function:
(define (fixed-point f start)
(define tolerance 0.00001)
(define (close-enuf? u v)
(< (abs (- u v)) tolerance))
(define (iter old new)
(if (close-enuf? old new)
(iter new (f new))))
(iter start (f start)))
This procedure computes:
the fixed point of the function computed by the procedure whose name will be "f" in this procedure.
This is a key quote from the lecture because it illustrates how procedures can treated like any other data in Scheme. Here, the variable
can take on the value of any procedure you want to pass in.
The fixed point of the function passed in to
is computed by iteratively applying the function to its own result, starting at the initial guess, until the change in the result is smaller than some tolerance. Note how the function that is passed
in as the parameter
in the iteration loop.
If you define an
procedure you can run the new
procedure in a Scheme interpreter to test it out.
(define (average x y)
(/ (+ x y) 2))
Procedures as Returned Values
There's another abstraction that we can pull out of the previous
procedure. Before we get to that, a simpler procedure for computing functions whose fixed point is the square root is:
g(y) = x/y
This has the same property as the function we looked at before, if you insert √x in for y, you get √x back. The reason that we didn't use this simpler function before is that for some inputs it will
oscillate. If x is 2, and your initial guess is 1, then the old and new values will oscillate between 2 and 1, never getting any closer together. The original function that uses the average procedure
is just damping out this oscillation. We can pull this damping concept out by first defining sqrt as a function of a fixed-point procedure that is itself a function of average damping.
(define (sqrt x)
(average-damp (lambda (y) (/ x y)))
takes a procedure as its argument and returns a procedure as its value. When given a procedure that takes an argument,
returns another procedure that computes the average of the values before and after applying the original procedure to its argument.
(define average-damp
(lambda (f)
(lambda (x) (average (f x) x))))
This is special because it's the first time we've seen a procedure that produces a procedure as its result.
Newton's method
A general method for finding roots (zeroes) of functions is called
Newton's method
. To find a y such that
f(y) = 0
the general procedure is to start with a guess y
, then iterate the following expression using the function:
y[n+1] = y[n] - f(y[n]) / df/dy | y = y[n]
This is a difference equation. Each term is the difference between the previous term and the function applied to the previous term divided by the derivative with respect to y of f evaluated at the
previous term. (The Scheme representation of this should be a lot more clear, but for now we just need to remember that the derivative of f with respect to y is a function.) We'll start the same way
we did before, by applying the method before we define it.
We can define
in terms of Newton's method as follows:
(define (sqrt x)
(newton (lambda (y) (- x (square y)))
The square root of x is computed by applying Newton's method to the function of y that computes the difference between x and the square of y. If we had a value of y for which the difference between x
and y
returned 0, then y would be the square root of x.
Now we have to define a procedure for Newton's method. Note that we're still using a method of iteratively improving a guess, just as we did in the earlier
procedures. Look again at the expression for Newton's method:
y[n+1] = y[n] - f(y[n]) / df/dy | y = y[n]
We'd like to find some value for y
such that when we plug it in on the right hand side of this expression, we get the same value back out on the left hand side (within some small tolerance). So once again, we are looking for a fixed
(define (newton f guess)
(define df (deriv f))
(lambda (x) (- x (/ (f x) (df x))))
This procedure takes a function and an initial guess, and computes the fixed point of the function that computes the difference of x and the quotient of the function of x and the derivative of the
function of x.
Wishful thinking is essential to good engineering, and certainly essential to good computer science.
Along the way we have to write a procedure that computes the derivative (a function) of the function passed to it.
(define deriv
(lambda (f)
(lambda (x)
(/ (- (f (+ x dx))
(f x))
(define dx 0.0000001)
You may remember from Calculus or a past life that the derivative of a function is the function of x that computes
f(x + Δx) - f(x) / Δx
for some small value of Δx. You might also remember (and perhaps a bit more readily) that the derivative of x
is 2x. So if we apply our
procedure to
, we should expect to get back a procedure that computes 2x. Unfortunately when I tried it in a Scheme interpreter I got back:
> (deriv square)
That's not very revealing, so we're not done experimenting yet. How can we figure out what that returned procedure does? Let's just apply it to some values.
> ((deriv square) 2)
> ((deriv square) 5)
> ((deriv square) 10)
> ((deriv square) 25)
> ((deriv square) 100)
That looks like a fair approximation to the expected procedure.
Abstractions and first-class procedures
The lecture wraps up with the following list by computer scientist
Christopher Strachey
, one of the inventors of
denotational semantics
The rights and privileges of first-class citizens:
• To be named by variables.
• To be passed as arguments to procedures.
• To be returned as values of procedures.
• To be incorporated into data structures.
We've seen that both values and procedures in Scheme meet the first three requirements. In later sections we'll see that they both meet the fourth requirement as well.
Having procedures as first class data allows us to make powerful abstractions that encode general methods like Newton's method in a very clear way.
For links to all of the SICP lecture notes and exercises that I've done so far, see
The SICP Challenge
2 comments:
Tim Kington said...
Now we're getting to the good stuff. Great write-up!
Bill the Lizard said...
Things are definitely getting interesting. It's amazing to see how little code it takes to express such powerful concepts in Scheme. This was my favorite section of the book so far. (I liked it
even more than the section on primality testing, if you can believe that!) Thanks for recommending such a great book so long ago. You're batting 1.000 on those so far. :) | {"url":"http://www.billthelizard.com/2010/04/sicp-lecture-2a-higher-order-procedures.html","timestamp":"2014-04-19T01:48:48Z","content_type":null,"content_length":"111055","record_id":"<urn:uuid:8c3f2be0-f77b-4746-a811-79b08fe005df>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Discretely-normed ring
From Encyclopedia of Mathematics
discrete valuation ring, discrete valuation domain
A ring with a discrete valuation, i.e. an integral domain with a unit element in which there exists an element Witt vector)
A discretely-normed ring may also be defined as a local principal ideal ring; as a local one-dimensional Krull ring; as a local Noetherian ring with a principal maximal ideal; as a Noetherian
valuation ring; or as a valuation ring with group of values
The completion (in the topology of a local ring) of a discretely-normed ring is also a discretely-normed ring. A discretely-normed ring is compact if and only if it is complete and its residue field
is finite; any such ring is either isomorphic to
is called the residue degree. This situation arises when one considers the integral closure
is valid. If
The theory of modules over a discretely-normed ring is very similar to the theory of Abelian groups [3]. Any module of finite type is a direct sum of cyclic modules; a torsion-free module is a flat
module; any projective module or submodule of a free module is free. However, the direct product of an infinite number of free modules is not free. A torsion-free module of countable rank over a
complete discretely-normed ring is a direct sum of modules of rank one.
[1] N. Bourbaki, "Elements of mathematics. Commutative algebra" , Addison-Wesley (1972) (Translated from French)
[2] J.W.S. Cassels (ed.) A. Fröhlich (ed.) , Algebraic number theory , Acad. Press (1967)
[3] J. Kaplansky, "Modules over Dedekind rings and valuation rings" Trans. Amer. Math. Soc. , 72 (1952) pp. 327–340
Let valuation is then defined by
How to Cite This Entry:
Discretely-normed ring. V.I. Danilov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Discretely-normed_ring&oldid=15824
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Discretely-normed_ring","timestamp":"2014-04-18T14:10:15Z","content_type":null,"content_length":"25454","record_id":"<urn:uuid:6f2fc437-b47d-41b1-82a1-0d5b8994795c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
convergent for the values of x
Hi, The following series is convergent for the values of x? $\sum_{n=o}^{\infty }\frac{n}{x^n}$ thanks
The ratio test states that the series will be convergent where \displaystyle \begin{align*} \left|\frac{a_{n+1}}{a_n}\right| < 1 \end{align*}, divergent where \displaystyle \begin{align*} \left|\frac
{a_{n+1}}{a_n}\right| > 1 \end{align*}, and inconclusive where \displaystyle \begin{align*} \left|\frac{a_{n+1}}{a_n}\right| = 1 \end{align*}. So a good place to start would be to evaluate where this
ratio is less than 1. | {"url":"http://mathhelpforum.com/calculus/197989-convergent-values-x.html","timestamp":"2014-04-20T13:39:56Z","content_type":null,"content_length":"42725","record_id":"<urn:uuid:a4923c99-0097-4ef9-80a6-e4705eabb588>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00534-ip-10-147-4-33.ec2.internal.warc.gz"} |
IT Skills for Successful Study
Home > Study skills > Personal effectiveness > IT Skills for Successful Study
Information and communication technology forms part of all aspects of education. It is now expected that students will word process their essays, analyse their findings using a spreadsheet, take part
in online discussion through email and present information using visual aids. IT Skills for Successful Study covers:
The example below illustrates how a spreadsheet model can help you produce excellent results.
Example - How to investigate the capacity of the university library?
The method was to count the number of students entering and leaving the library from opening at 9am to close at 9pm each day. Counting the number of students entering and leaving the library is easy
but it is a large task then to analyse the meaning of the mass of figures you have produced. Table 1 shows the basic numbers who enter and leave the library each hour. However, what do you need to do
in order to consider the capacity of the library? You will need to know:
1. How many students use the library each day?
2. How many students are in the library in each hour?
3. What is the pattern of using the library (i.e. are some days of parts of days mmore popular than others)?
4. What trends are you able to identify?
Addressing these questions would allow you to organise the library to meet student needs (e.g. how many staff to have on duty at specific times?), assess if the library is large enough and ensure the
safety of users (i.e. avoid overcrowding and decide on fire escape procedures).
You could undertake the analysis with a piece of paper and a calculator but this would take a long time and each time you wanted to change part of the approach it would mean starting again. A
spreadsheet allows you to create a model of the library in which you could explore change. It also provides a method of ensuring accuracy when undertaking lots of calculations.
Table 1 Entering and leaving the library
Monday Tuesday Wednesday Thursday Friday
Time Enter Leave Enter Leave Enter Leave Enter Leave Enter Leave Enter Leave
The next steps are:
1. Enter the information into a spreadsheet
2. Total the number of students who enter the library each day
3. Calculate how many students are in the library each hour
4. Analyse the daily use of the library
1. Enter the information into a spreadsheet
Figure 1 illustrates the basic spreadsheet. Notice that we have included a row and column totals as well as a column for the running total of students in the library.
Figure 1 Basic Spreadsheet
2. Total the number of students who enter the library each day
Microsoft Excel provides the Sum function to total columns and rows of figures. If you also use the function to total the leaving column, you have a check on the accuracy of your data since the
number entering and leaving should be equal over a day.
The total formula will be =Sum (C3:C14)
To get the total each hour involves two different calculations. The first is the total of students entering and the second of students leaving.
Formula Entering = (C3+F3+I3+L3+O3+R3)
Formula Leaving = (D3+G3+J3+M3+P3+S3)
3. Calculate how mant students are in the library each hour
This is slightly more complex in that in the first hour it is simply the difference between those entering and leaving. However, after the first hour it needs to consider the students who are already
in the library.
First Hour = Student entering minus students leaving (e.g. = C3-D3)
Second and subsequent hours = Student entering minus students leaving plus students already in library (e.g. =F3 + (C4-D4))
4. Analyse the daily use of the library
Figure 2 Analysis
This is the start of using the sheet to analyse the problem. You might wish to:
1. Present the information visually since this is often helpful in identifying trends. Figure 3 visually illustrates students entering and leaving in the library on Monday. What does it show you?
2. Identify and consider average values of attendance (e.g. average number of students entering and leaving the library).
3. Show the minimum and maximum numbers of students.
4. Consider options such as changing the opening hours on each day to reflect use or to save funds.
A spreadsheet provides you with a wide number of options. Try to consider some of the issues or other ones that you prefer.
Figure 4 Monday
For more advice on using IT skills for your study, see also Research using IT and e-learning skills.
This content has been written by Alan Clarke, author of IT Skills for Successful Study. | {"url":"http://www.palgrave.com/skills4study/studyskills/personal/study.asp","timestamp":"2014-04-19T23:00:39Z","content_type":null,"content_length":"48460","record_id":"<urn:uuid:238d85bd-bc77-4a8e-9964-c59ba28f9846>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using Encoders to Drive Straight
From ROBOTC API Guide
Why is the robot not going straight?
You have probably noticed by now that your robot is often having trouble driving straight. This is because the robot's wheels do not go the same speed. If each wheel does not go at exactly the same
speed, the robot is not driving straight but it is in fact executing a very wide swing turn. "But I am telling them to go the same speed!" you may say. "The power value I am sending is the same for
each wheel!"
Unfortunately, even if you send the same power value to each wheel, this will absolutely not guarantee that each wheel will move at the same speed. Why? Because of tiny variables such as friction
resistance and manufacturing differences, one motor at power 50 will not go at the same speed as another motor at power 50.
How can I fix this?
Luckily, your robot is now equipped with encoders which means that you know how much your wheels turn. If you take the amount the wheels turn within a specific timeframe, say 1 second, and find they
go a certain amount of encoder ticks, say 300, you can easily determine the speed of the wheel. Now that you know the speed of each wheel, we can compare them, and apply more or less power to one
wheel to catch up with/slow down to the level of the other motor. This will make your robot drive much more straight.
The Theory
This is called 'proportional control' and is one of the more useful things you will learn when programming a robot. The important thing to accept is that we will need to use one motor as a 'master
motor', to which a constant power is applied, as we usually do, and a 'slave motor', whose power we will change to make sure it goes at the same speed as the master motor. It doesn't matter which
side you choose, but we will use the left motor as the master motor and the right motor as the slave.
Basically, think of it this way: the master is going along at his own pace. This pace might change slightly when he gets tired or has to climb over an obstacle. The slave's job is to keep alongside
the master by speeding up when he falls behind, and slowing down when he goes too far ahead.
The difference between the master's speed and the slave's speed is called the 'error'. If they are going along at exactly the same pace, the error value will be zero. If the slave is too slow, the
error is positive. If the slave is going too fast, the error is negative. Error is very simple to calculate, and can be defined as:
Error = Speed of master - Speed of slave.
After the error is found, it should be added onto the power value of the slave motor, so that it will go faster by an appropriate amount. However, there is something important to do first - the error
should be multiplied by a number called the Constant of Proportionality, usually expressed as 'kp'. What this does is converts the difference in encoder speeds (error) into something that can be used
to adjust the motor power.
For example: say we find that the master encoder is ticking at 300 ticks per second, and the slave encoder is ticking at 250. We apply the error formula and find that the error = 300 - 250 = 50.
However, we don't want to add 50 straight onto the motor power! It will overcompensate far too much and zoom ahead, and the next time it calculates the error it will speed backwards, overcompensating
again. To fix this, we need to multiply the error by a value, which here is kp. If kp is, say, 0.2, we multiply the error value (50) by kp (0.2) and get the result: 50 * 0.2 = 10. If we were to add
10 to the slave motor power instead of 50, this would give us a much more reasonable increase in speed.
There is no magic way to determine the best value of kp. You simply need to use trial and error to find a value that will result in neither overcompensation nor under compensation. This is refereed
to as 'tuning' kp.
In our circumstances, using the Arduino UNO with ROBOTC, we cannot use floating point numbers, which means that a value of, say, 0.2 for kp is not possible. However, if you consider that multiplying
by 0.2 is the same as dividing by 5, then we can use this property and set kp so that the error is divided by a whole number instead of multiplied by a decimal.
So, we have added a suitable value to the power of the slave motor to compensate for the error. Now, we need to do it again, looping around a certain amount of times per second. For this application,
ten times per second is plenty.
The Code
Let's write a program encompassing the concepts we have just talked about. This will make the robot drive straight indefinitely.
task main()
//The powers we give to both motors. masterPower will remain constant while slavePower will change so that
//the right wheel keeps the same speed as the left wheel.
int masterPower = 30;
int slavePower = 30;
//Essentially the difference between the master encoder and the slave encoder. Negative if slave has
//to slow down, positive if it has to speed up. If the motors moved at exactly the same speed, this
//value would be 0.
int error = 0;
//'Constant of proportionality' which the error is divided by. Usually this is a number between 1 and 0 the
//error is multiplied by, but we cannot use floating point numbers. Basically, it lets us choose how much
//the difference in encoder values effects the final power change to the motor.
int kp = 5;
//Reset the encoders.
SensorValue[leftEncoder] = 0;
SensorValue[rightEncoder] = 0;
//Repeat ten times a second.
//Set the motor powers to their respective variables.
motor[leftServo] = masterPower;
motor[rightServo] = slavePower;
//This is where the magic happens. The error value is set as a scaled value representing the amount the slave
//motor power needs to change. For example, if the left motor is moving faster than the right, then this will come
//out as a positive number, meaning the right motor has to speed up.
error = SensorValue[leftEncoder] - SensorValue[rightEncoder];
//This adds the error to slavePower, divided by kp. The '+=' operator literally means that this expression really says
//"slavePower = slavepower + error / kp", effectively adding on the value after the operator.
//Dividing by kp means that the error is scaled accordingly so that the motor value does not change too much or too
//little. You should 'tune' kp to get the best value. For us, this turned out to be around 5.
slavePower += error / kp;
//Reset the encoders every loop so we have a fresh value to use to calculate the error.
SensorValue[leftEncoder] = 0;
SensorValue[rightEncoder] = 0;
//Makes the loop repeat ten times a second. If it repeats too much we lose accuracy due to the fact that we don't have
//access to floating point math, however if it repeats to little the proportional algorithm will not be as effective.
//Keep in mind that if this value is changed, kp must change accordingly.
This may look like a long program, but without the comments it is only 18 lines long.
Run the program and you will see your robot drive much straighter than it has before.
Driving straight for a distance
We are now going to use this knowledge and apply it to the previous lesson by writing a function that will drive straight for a certain distance - very useful, indeed.
Recall the function driveDistance from the previous lesson:
void driveDistance(int tenthsOfIn, int power)
sensorValue[leftEncoder] = 0; // It is good practice to reset encoder values at the start of a function.
//Calculate tenths of an inch by multiplying the ratio we determined earlier with the amount of
//tenths of inches to go, then divide by ten as the ratio used is for an inch value.
//Since we don't want to calculate every iteration of the loop, we will find the clicks needed
//before we begin the loop.
int tickGoal = (42 * tenthsOfIn) / 10;
while(abs(SensorValue[leftEncoder]) < tickGoal)
motor[leftServo] = power; // We can now set the power from the function's second parameter.
motor[rightServo] = power;
motor[leftServo] = 0; // Stop the loop once the encoders have counted up the correct number of encoder ticks.
motor[rightServo] = 0;
Remember to put this at the top of the program, below the configuration code:
#define abs(X) ((X < 0) ? -1 * X : X)
See how, within the while loop in driveDistance, we simply set the motor powers? If we merge this with our drive straight code, we will get the new function:
void driveStraightDistance(int tenthsOfIn, int masterPower)
int tickGoal = (42 * tenthsOfIn) / 10;
//Initialise slavePower as masterPower - 5 so we don't get huge error for the first few iterations. The
//-5 value is based off a rough guess of how much the motors are different, which prevents the robot from
//veering off course at the start of the function.
int slavePower = masterPower - 5;
int error = 0;
int kp = 5;
SensorValue[leftEncoder] = 0;
SensorValue[rightEncoder] = 0;
//We still only have to monitor only one encoder as we have made it so that they will have the same values anyway.
while(abs(SensorValue[leftEncoder]) < tickGoal)
//Proportional algorithm to keep the robot going straight.
motor[leftServo] = masterPower;
motor[rightServo] = slavePower;
error = SensorValue[leftEncoder] - SensorValue[rightEncoder];
slavePower += error / kp;
SensorValue[leftEncoder] = 0;
SensorValue[rightEncoder] = 0;
motor[leftServo] = 0; // Stop the loop once the encoders have counted up the correct number of encoder ticks.
motor[rightServo] = 0;
Will this work? No. The while loop can never trigger! We reset both encoder values every iteration, so the encoder tick count never even gets near the threshold.
To fix this, we will need to add another variable called 'totalTicks', which will add the encoder values every time the loop iterates. Therefore, at any time, it will have a value equal to the total
encoder ticks since the function was called. It is very simple to implement:
void driveStraightDistance(int tenthsOfIn, int masterPower)
int tickGoal = (42 * tenthsOfIn) / 10;
//This will count up the total encoder ticks despite the fact that the encoders are constantly reset.
int totalTicks = 0;
//Initialise slavePower as masterPower - 5 so we don't get huge error for the first few iterations. The
//-5 value is based off a rough guess of how much the motors are different, which prevents the robot from
//veering off course at the start of the function.
int slavePower = masterPower - 5;
int error = 0;
int kp = 5;
SensorValue[leftEncoder] = 0;
SensorValue[rightEncoder] = 0;
//Monitor 'totalTicks', instead of the values of the encoders which are constantly reset.
while(abs(totalTicks) < tickGoal)
//Proportional algorithm to keep the robot going straight.
motor[leftServo] = masterPower;
motor[rightServo] = slavePower;
error = SensorValue[leftEncoder] - SensorValue[rightEncoder];
slavePower += error / kp;
SensorValue[leftEncoder] = 0;
SensorValue[rightEncoder] = 0;
//Add this iteration's encoder values to totalTicks.
totalTicks+= SensorValue[leftEncoder];
motor[leftServo] = 0; // Stop the loop once the encoders have counted up the correct number of encoder ticks.
motor[rightServo] = 0;
And that is our function! We now have the ability to easily drive any number of tenths of an inch, and the robot will drive straight throughout the distance. Let's use this in a program, where the
robot should eventually end up back at its starting point.
#pragma config(CircuitBoardType, typeCktBoardUNO)
#pragma config(UART_Usage, UART0, uartSystemCommPort, baudRate200000, IOPins, dgtl1, dgtl0)
#pragma config(Sensor, dgtl2, rightEncoder, sensorQuadEncoder)
#pragma config(Sensor, dgtl7, leftEncoder, sensorQuadEncoder)
#pragma config(Motor, servo_10, rightServo, tmotorServoContinuousRotation, openLoop, reversed, IOPins, dgtl10, None)
#pragma config(Motor, motor_11, leftServo, tmotorServoContinuousRotation, openLoop, IOPins, dgtl11, None)
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//
#define abs(X) ((X < 0) ? -1 * X : X)
void driveStraightDistance(int tenthsOfIn, int masterPower)
int tickGoal = (42 * tenthsOfIn) / 10;
//This will count up the total encoder ticks despite the fact that the encoders are constantly reset.
int totalTicks = 0;
//Initialise slavePower as masterPower - 5 so we don't get huge error for the first few iterations. The
//-5 value is based off a rough guess of how much the motors are different, which prevents the robot from
//veering off course at the start of the function.
int slavePower = masterPower - 5;
int error = 0;
int kp = 5;
SensorValue[leftEncoder] = 0;
SensorValue[rightEncoder] = 0;
//Monitor 'totalTicks', instead of the values of the encoders which are constantly reset.
while(abs(totalTicks) < tickGoal)
//Proportional algorithm to keep the robot going straight.
motor[leftServo] = masterPower;
motor[rightServo] = slavePower;
error = SensorValue[leftEncoder] - SensorValue[rightEncoder];
slavePower += error / kp;
SensorValue[leftEncoder] = 0;
SensorValue[rightEncoder] = 0;
//Add this iteration's encoder values to totalTicks.
totalTicks+= SensorValue[leftEncoder];
motor[leftServo] = 0; // Stop the loop once the encoders have counted up the correct number of encoder ticks.
motor[rightServo] = 0;
task main()
//Distances specified in tenths of an inch.
wait1Msec(500); //Stop in between to prevent momentum causing wheel skid.
Beautiful, isn't it? Extremely accurate and consistent, regardless of surface friction or other variables. If you are using the robot on a smooth surface and/or decide to increase the speed, you may
notice a bit of inaccuracy as your tires may skid after the sudden stop. Overall, however, this is a superior method for driving accurately. | {"url":"http://www.robotc.net/w/index.php?title=Tutorials/Arduino_Projects/Mobile_Robotics/VEX/Using_encoders_to_drive_straight&oldid=5604","timestamp":"2014-04-19T03:13:53Z","content_type":null,"content_length":"49426","record_id":"<urn:uuid:228e7b11-2a65-4a34-bd4e-c5cfede3fc2a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Master
Author Message
*kin* Posted: Fri Feb 02, 1990 21:47
I am in desperate need of help in completing a homework in math work sheets for 6th grade. I need to finish it by next week and am having a difficult time trying to figure out a few
tricky problems. I tried some of the online help sites but have not gotten much help so far. I would be really glad if anyone can help me.
Registered: Dec
29, 2006
From: australia
ameich Posted: Sun Feb 04, 1990 00:34
Have you checked out Algebra Master? This is a great software and I have used it several times to help me with my math work sheets for 6th grade problems. It is really very
straightforward -you just need to enter the problem and it will give you a complete solution that can help solve your homework. Try it out and see if it is useful .
Registered: Mar
21, 2005
From: Prague,
Czech Republic
alhatec16 Posted: Sun Feb 04, 1990 10:11
I am a frequent user of Algebra Master and it has really helped me comprehend math problems better by giving detailed steps for solving. I recommend this software to help you with
your algebra stuff. You just need to follow the instructions given there.
Registered: Mar
10, 2002
From: Notts,
Posted: Tue Feb 06, 1990 14:37
Thank you very much for your help ! Could you please tell me how to get hold of this program? I don’t have much time on hand since I have to finish this in a few days.
Vild Posted: Wed Feb 07, 1990 10:48
A truly piece of math software is Algebra Master. Even I faced similar problems while solving quadratic inequalities, point-slope and like denominators. Just by typing in the problem
from homework and clicking on Solve – and step by step solution to my math homework would be ready. I have used it through several math classes - Algebra 2, Pre Algebra and College
Algebra. I highly recommend the program.
Registered: Jul
3, 2001
Sacramento, CA
DVH Posted: Thu Feb 08, 1990 08:57
You can download this software from http://www.softmath.com/algebra-test/order.htm. There are some demos available to see if it is really what want and if you find it good , you can
get a licensed version for a small amount.
Registered: Dec
20, 2001 | {"url":"http://www.algebra-test.com/algebra-answers/math-work-sheets-for-6th-grade.html","timestamp":"2014-04-20T10:54:02Z","content_type":null,"content_length":"18331","record_id":"<urn:uuid:195c1b99-31f4-40a8-85fe-3bdad48541ef>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
2005.26: Vector Spaces of Linearizations for Matrix Polynomials
2005.26: D. Steven Mackey, Niloufer Mackey, Christian Mehl and Volker Mehrmann (2005) Vector Spaces of Linearizations for Matrix Polynomials.
There is a more recent version of this eprint available. Click here to view it.
Full text available as:
PDF - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader
294 Kb
The classical approach to investigating polynomial eigenvalue problems is linearization, where the polynomial is converted into a larger matrix pencil with the same eigenvalues. For any polynomial
there are innitely many linearizations with widely varying properties, but in practice the companion forms are typically used. However, these companion forms are not always entirely satisfactory, and
linearizations with special properties may sometimes be required. Given a matrix polynomial P, we develop a systematic approach to generating large classes of linearizations for P. We show how to
simply construct two vector spaces of pencils that generalize the companion forms of P, and prove that almost all of these pencils are linearizations for P. Eigenvectors of these pencils are shown to
be closely related to those of P. A distinguished subspace is then isolated, and the special properties of these pencils are investigated. These spaces of pencils provide a convenient arena in which
to look for structured linearizations of structured polynomials, as well as to try to optimize the conditioning of linearizations [7], [8], [12].
Available Versions of this Item
Download Statistics: last 4 weeks
Repository Staff Only: edit this item | {"url":"http://eprints.ma.man.ac.uk/95/","timestamp":"2014-04-18T08:18:28Z","content_type":null,"content_length":"10151","record_id":"<urn:uuid:5b1192ea-024a-4351-b1b6-00bc33ea87bc>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00116-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum - Problems Library - Middle School, Geometry
This page:
About Levels
of Difficulty
exponents & roots
factors, factoring, & prime numbers
fractions, decimals &
ratio & proportion
Discrete Math
combinations &
graph theory
circumference &
Browse all
M.S. Problems
Middle School
About the
PoW Library
Middle School: Geometry
In grades 6-8, students learn to use the defining properties of two- and three-dimensional objects to describe, classify, and understand the relations that exist among them. Students also explore
the relations among the angles, side lengths, perimeters, areas, and volumes of similar objects, and study such geometric ideas as congruence, similarity, and the Pythagorean relationship.
Middle-school students use coordinate geometry to study properties of geometric shapes (such as regular polygons), and they investigate transformations such as flips, turns, slides, and scaling,
using such transformations to learn about congruence, similarity, and line or rotational symmetry of objects. Finally, students learn to recognize geometry in other areas such as art, science,
and everyday life.
Middle School Problems of the Week that require students to use geometry to solve them are listed below. They address the NCTM Geometry Standard for Grades 6-8.
For background information elsewhere on our site, explore the Middle School Geometry area of the Ask Dr. Math archives. For relevant sites on the Web, browse and search Geometry in our Internet
Mathematics Library; to find middle-school sites, go to the bottom of the page, set the searcher for middle school (6-8), and press the Search button.
Access to these problems requires a Membership. | {"url":"http://mathforum.org/library/problems/sets/middle_geometry.html","timestamp":"2014-04-19T09:33:14Z","content_type":null,"content_length":"21402","record_id":"<urn:uuid:13691813-12c2-4ed8-848e-d6c36752b07c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00637-ip-10-147-4-33.ec2.internal.warc.gz"} |
An optimal message routing algorithm for circulant networks
Tomaž Dobravec, Janez Žerovnik and Borut Robič
Journal of Systems Architecture Volume 52, Number 5, , 2006. ISSN 1383-7621
A k-circulant network G(n; h1, h2, … , hk) is an undirected graph where the node set is $Z_n$ and the edges are given by $\{v,v\pm hi\}$. We present an optimal (i.e. using shortest paths) dynamic
two-terminal message routing algorithm for k-circulant networks, $k\geq 2$. Instead of computing the shortest paths in advance or using routing tables, our algorithm uses only the address of the
final destination to determine the next node to which the message must be sent in order to stay on one of the shortest paths to its destination. We introduce the restricted shortest paths, which are
used by our routing algorithm, and present an efficient algorithm for their construction in 2-circulant graphs. | {"url":"http://eprints.pascal-network.org/archive/00002412/","timestamp":"2014-04-20T20:58:07Z","content_type":null,"content_length":"6279","record_id":"<urn:uuid:1791a7c1-92f5-4e1c-b026-a323eb18d452>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
Test Tomorrow.....Help!!!....Fermat's Little Theroem
Try adding multiples of p to p-1... there isn't really any more help anyone can give.
Well that's great....Just so I'm on firm ground with the question:
1 + 1 + 1...........1 (is congruent to) ______ (mod p)
adding up all those one's will give me:
(p-1) (congruent to) ______ (mod p) | {"url":"http://www.physicsforums.com/showthread.php?p=3788145","timestamp":"2014-04-16T16:09:22Z","content_type":null,"content_length":"68885","record_id":"<urn:uuid:a5266d12-17af-4821-96a9-b91792ac026f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00542-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fernando Gil International Prize
2010 Winner
Patterns of Change, Linguistic Innovations in the Development of Classical Mathematics
Ladislav Kvasz
About the book
This book offers a reconstruction of linguistic innovations in the history of mathematics; innovations which changed the ways in which mathematics was done, understood and philosophically
interpreted. It argues that there are at least three ways in which the language of mathematics has been changed throughout its history, thus determining the lines of development that mathematics has
The book also offers tools of analysis by means of which scholars and students of the history and philosophy of mathematics can attain better understanding of the various changes, which the subject
of their study underwent in the course of history. The book brings also important insights for mathematics education connecting growth of language with the development of mathematical thought.
Birkhäuser Basel
ISBN 978-3-7643-8839-3
261 pages · 261
About the author
Ladislav Kvasz graduated in 1986 in mathematics at the Comenius University in Bratislava. In 1995 he received a PhD in philosophy with the thesis Classification of Scientific Revolutions.
Since 1986 he has been employed at the Faculty of Mathematics and Physics of Comenius University. In 1993 he won the Herder Scholarship and spent one year at the University of Vienna studying the
philosophy of Wittgenstein. In 1995 he won the Masaryk Scholarship of the University of London and spent one year at King’s College London working on the philosophy of Imre Lakatos. In 1997 he won
the Fulbright Scholarship and spent one semester at the University of California at Berkeley, working on philosophy of geometry. In 2000 he won the Humboldt Scholarship and spent two years at the
Technical University in Berlin working on the scientific revolution. In 2007 he moved to Prague, where he became in 2010 a professor of mathematics education.
He was the co-editor of Appraising Lakatos (Kluwer 2002) and author of Patterns of Change (Birkhauser 2008).
The First Fernando Gil Prize for Philosophy of Science
The jury has decided to award the first Fernando Gil prize for philosophy of science to Ladislav Kvasz , who is professor at the Charles University in the Czech Republic for his book: Patterns of
Change, Linguistic Innovations in the Development of Classical Mathematics, published in 2008.
Professor Kvasz graduated in 1986 in mathematics at the Comenius University In Bratislav, which was then part of the communist world. He first thought of following a career in applied mathematics,
and went to Moscow where he worked on some complicated problems in astrophysics. However, after the fall of communism, his interest shifted to the philosophy of mathematics. From 1993 to 2002, he won
a series of prestigious scholarships to pursue his research in this area in the main centres of the Western world: Vienna, London, Berkeley in Calfornia, and Berlin. During these years, he developed
the ideas which appear in his book.
The main criteria which led the jury to decide in favour of Professor Kvasz’s book for the prize was the originality of his work and the scholarly way in which he supported his position. His book
concerns the way in which mathematics develops, and it formulates three important patterns of change which are named: recoding, relativization, and reformulation. The first two of these are entirely
novel, and show the great originality of Professor Kvasz’s work. However, Professor Kvasz is not content merely to state that these patterns occur. He demonstrates his thesis by numerous examples
from the history of mathematics of which he has a profound and scholarly knowledge.
When new concepts are introduced into mathematics, this nearly always involves the introduction of a new symbolic language, and Professor Kvasz’s book discusses how these new languages are developed.
When discussing his pattern of relativization, he makes use of some of the ideas of Wittgenstein, but gives these a dynamic development. Wittgenstein in his early thinking on language claimed that
there is a form of the language which cannot be expressed in the language itself. Professor Kvasz’s idea is that one can, however, create a new language by adding the form of the old language to the
language. He shows that it was in this way that major conceptual advances such as the introduction of non-Euclidean geometry took place. This dynamic and historical account of the way in which
language develops gives Professor Kvasz’s work an interest which goes beyond mathematics into general questions of language and thought.
Altogether Professor Kvasz’s book is an exciting and stimulating one, which should help to make the Fernando Gil prize a significant factor in future developments of philosophy of science. | {"url":"http://fernando-gil.org.pt/en/nominees/2010/winner/","timestamp":"2014-04-17T07:14:27Z","content_type":null,"content_length":"9536","record_id":"<urn:uuid:db28c44e-f9a7-4553-9381-6568799684bd>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shaking Things Up: The Harmonic Oscillator
Shaking Things Up: The Harmonic Oscillator
Moving beyond the particle in the box, a model for simple molecular vibrations is constructed. The solutions depend on the Hermite polynomials and exhibit parity. Wavefunctions for states with an
even quantum number have even parity (are symmetric about the y axis) and those with odd quantum numbers are odd. Parity considerations can simplify quantum chemical calculations.
MP3 podcastscreencastMathematica notebook | {"url":"http://chemistry221.blogspot.com/2005/09/shaking-things-up-harmonic-oscillator.html","timestamp":"2014-04-19T11:58:38Z","content_type":null,"content_length":"16124","record_id":"<urn:uuid:08a2fd67-d340-4728-a357-de8035136a67>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
Delivering Papers
Date: 11/15/1999 at 15:45:54
From: Kyersten
Subject: Delivering Papers
Uri delivers 5 newspapers in the same time Juni delivers 4 newspapers.
If they have a total of 54 papers, how many will Uri deliver? How many
will Juni deliver?
Date: 11/15/1999 at 19:14:24
From: Doctor Ian
Subject: Re: Delivering Papers
Hi Kyersten,
There is a general way to solve this, but what your teachers don't
often tell you is that the 'general' method isn't always the best.
(For example, if you want to add 49 and 2, you can carry the one, or
you can say to yourself: '50, 51.')
In this case, because the numbers are small, you can just do a
Time Uri Juni Total
---- --- ---- -----
1 5 4 = 9
2 10 8 = 18
3 15 12 = 27
4 20 16 = 36
Can you complete this on your own?
Another way to solve the problem, which doesn't involve so much
writing, is to realize that in each unit of time, a total of 9 papers
will get delivered: 5 by Uri, 4 by Juni.
Since you know that the total number of papers to be delivered is 54,
you can divide 54 by 9 to get the number of units of time. Then you
can multiply this by 5 to find out how many were delivered by Uri, and
multiply by 4 to find out how many were delivered by Juni.
For example, if they were delivering 72 papers, they would finish in
72/9 = 8 units of time. That would mean that Uri would deliver
8*5 = 40 papers, and Juni would deliver 8*4 = 32 papers.
Can you take it from here?
Be sure to write back if you're still stuck, or if you have any other
- Doctor Ian, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/58711.html","timestamp":"2014-04-17T21:33:10Z","content_type":null,"content_length":"6660","record_id":"<urn:uuid:5f5ae3b7-407f-4c7c-8b6f-b86b1d24a6f3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] Need Advice With Arrays and Calculating Eigenvectors
Rich Shepard rshepard@appl-ecosys....
Thu Feb 22 12:17:15 CST 2007
As a newcomer to python, NumPy, and SciPy I need to learn how to most
efficiently manipulate data. Today's need is reducing a list of tuples to
three smaller lists of tuples, creating a symmetrical matrix from each list,
and calculating the Eigenvector of each of the three symmetrical matrices.
The starting list contains 9 tuples. Each tuple has 30 items: a category
name, a subcategory name, and 28 floats. These were selected from a
database, and a tuple looks like this one:
(u'soc', u'pro', 1.3196923076923075, 3.8109999999999999, 1.6943846153846154,
2.7393076923076922, 3.825538461538462, 5.0640769230769234,
3.609923076923077, 3.1429999999999998, 1.5936153846153849,
1.4893846153846153, 2.6563076923076929, 2.2156923076923074,
3.7973076923076921, 2.6884615384615387, 2.7008461538461543,
3.4992307692307687, 2.3813846153846154, 3.2199230769230769,
1.7726923076923078, 2.9855384615384613, 2.8829230769230771,
3.7862307692307695, 2.3791538461538462, 4.0949230769230773,
2.8703846153846153, 2.8296923076923073, 3.319230769230769,
The three 'soc' subcategories need to have each of the 28 floats averaged
and assigned to another tuple for 'soc'; same with the other two categories.
That produces a list of three tuples, each with 29 items.
Each of these 28 floats represents the average of a pair-wise comparison
of values (in the non-numeric sense). So the first float above represents
the cell (1,2), the second float represents the value of the cell (1,3) and
so on. The diagonal of the matrix is 1.
When I have these three symmetrical matrices, I want to call eigen() on
each one to calculate the principal Eignevector.
I can think of indirect ways of doing all this, but I'm sure that there
are much more efficient approaches known to those who have done this before.
So, I'd like your suggestions and recommendations. Of course, if I've not
clearly explained my needs, please ask.
Richard B. Shepard, Ph.D. | The Environmental Permitting
Applied Ecosystem Services, Inc. | Accelerator(TM)
<http://www.appl-ecosys.com> Voice: 503-667-4517 Fax: 503-667-8863
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2007-February/011030.html","timestamp":"2014-04-18T16:58:08Z","content_type":null,"content_length":"4801","record_id":"<urn:uuid:1b29f8cd-9326-406d-af83-8c780a56f9c5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Burtonsville Algebra 1 Tutor
...I have had unenthusiastic middle school readers picking up Tolstoy and Shakespeare in one semester. I help families to establish habits that enhance the test prep and guarantee future academic
success. I am comfortable teaching writing - from basic spelling, vocabulary, and sentence construction to research papers and creative fiction.
32 Subjects: including algebra 1, reading, English, chemistry
...It is possible to truly understand chemistry! I have worked through my own struggles with chemistry from the simple details (atomic structure) to the complex theories (electrochemistry). I am
looking forward to getting to know you so I can tailor your tutoring experience to meet your particular needs. For now, I travel by Metro from Silver Spring.
5 Subjects: including algebra 1, chemistry, algebra 2, geometry
"Tutoring all varieties of Math & SAT in English and Persian" I have a bachelor's degree in math and I have 15+ years of experience in teaching mathematics courses at the high school and
undergraduate level including trigonometry, algebra I and II, pre-calculus and calculus. The key strengths tha...
11 Subjects: including algebra 1, calculus, geometry, GRE
...I have a PhD in Organic Chemistry and over 10 years tutoring experience. I also offer study skills for sciences and maths. Classes Offered are Organic Chemistry, Chemistry for nursing students,
General/introductory Chemistry to college students, High school Chemistry, and AP Chemistry.I have a PhD in Organic Synthesis.
7 Subjects: including algebra 1, chemistry, organic chemistry, ACT Math
I have been working as a personal tutor since November 2007 for the George Washington University (GWU) Athletic Department. I have hundreds of hours of experience and am well-versed in explaining
complicated concepts in a way that beginners can easily understand. I specialize in tutoring math (from pre-algebra to differential equations!) and statistics.
16 Subjects: including algebra 1, calculus, statistics, geometry | {"url":"http://www.purplemath.com/Burtonsville_algebra_1_tutors.php","timestamp":"2014-04-19T05:15:54Z","content_type":null,"content_length":"24346","record_id":"<urn:uuid:e6c4135b-1d4a-478b-98c2-c0970dc98353>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00288-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abelian subvarieties of abelian varieties --- reference request
up vote 5 down vote favorite
This question may be too naive, in which case I apologise in advance. Anyway, it is a well-known fact (see e.g. Milne's notes) that any abelian variety A has only finitely many direct factors up to
automorphisms of A. (Here a direct factor of A is an abelian subvariety B for which there exists another abelian subvariety C of A such that $A \cong B \times C$.)
My question is: how much is known about the corresponding question for arbitrary abelian subvarieties, rather than direct factors? That is, is it known whether every abelian variety A has finitely
many abelian subvarieties, up to automorphisms of A? If not, what's the best known result in this direction?
I've asked a couple of people about this, and their opinion seems to be that it's "more or less" known. But I would like something a little more concrete, if possible. Any relevant references would
be appreciated!
ag.algebraic-geometry abelian-varieties reference-request
1 Over the complex numbers this is just a purely elementary linear algebra question in disguise. Did you try thinking about it this way? – Kevin Buzzard Dec 3 '10 at 16:21
According to Poincaré's Complete reducibility theorem every abelian variety is isogenous to the product of simple abelian varieties with the reasonably expected uniqueness condition on the
factors. You should be able to find this theorem in any standard book on abelian varieties, say, Mumford's or Birkenhake-Lange. (I moved this here since it is more of a comment than an answer and
it certainly was not meant to be an answer.) – Sándor Kovács Dec 3 '10 at 19:56
Here is a copy of BCnrd's comment that would probably disappear with the deletion of the answer that he made the comment to. So this is a comment to the above comment: $$\quad $$ Since the
1 question is sensitive to the distinction between End(A) and End0(A), even reducing the problem to the isotypic cases seems an unwise strategy. But the Poincare reducibility theorem does show that
the abelian subvarieties are the images of endomorphisms, so the problem reduces to a question about orders in finite-dimensional semisimple Q-algebras (and more specifically, Albert algebras). –
BCnrd 1 hour ago – Sándor Kovács Dec 3 '10 at 19:58
Thanks for all the helpful comments. @BCnrd: I got that far, but the sticking point was that it seems necessary to know that the endomorphisms involved have bounded eigenvalues. I don't see how to
get that yet, so I'd better think some more. – Artie Prendergast-Smith Dec 6 '10 at 8:44
add comment
1 Answer
active oldest votes
For the benefit of others who might look at this question, let me mention that I found the following reference proving exactly what I wanted. (More precisely, I was told about it by
David Ploog.)
up vote 4 down
vote Lenstra, H; Oort, F; Zarhin, Yu. Abelian subvarieties. J. Algebra 180 (1996), no. 2, 513–516.
2 math.leidenuniv.nl/~hwl/PUBLICATIONS/1996b/art.pdf – Dmitri Dec 13 '10 at 9:56
Now, Kani shows in (mast.queensu.ca/~kani/papers/hum-msm.pdf) a result that was already well known, that if an abelian surface contains at least 3 elliptic subgroups (i.e. abelian
subvarieties), then it contains infinitely many. – Robert Auffarth Mar 11 '13 at 18:51
1 An example of this would be the self product of an elliptic curve $E$; just take $E_n:=\{(z,nz):z\in E\}$ in $E^2$; this gives an abelian subvariety for every $n$. – Robert Auffarth
Mar 11 '13 at 22:31
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry abelian-varieties reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/48180/abelian-subvarieties-of-abelian-varieties-reference-request","timestamp":"2014-04-16T22:36:40Z","content_type":null,"content_length":"60117","record_id":"<urn:uuid:0642962c-9917-48bf-a672-eb4a33cdd5e2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number Theory, Vol.2, Algebraic Number Theory
von A.N. Parshin
Kategorie: Algebraische Zahlentheorie
ISBN: 3540533869
Kurzbeschreibung Modern number theory, according to Hecke, dates from Gauss's quadratic reciprocity law. The various extensions of this law and the generalizations of the domains of study for number
theory have led to a rich network of ideas, which has had effects throughout mathematics, in particular in algebra. This volume of the Encyclopaedia presents the main structures and results of
algebraic number theory with emphasis on algebraic number fields and class field theory. Koch has written for the non-specialist. He assumes that the reader has a general understanding of modern
algebra and elementary number theory. Mostly only the general properties of algebraic number fields and related structures are included. Special results appear only as examples which illustrate
general features of the theory. A part of algebraic number theory serves as a basic science for other parts of mathematics, such as arithmetic algebraic geometry and the theory of modular forms. For
this reason, the chapters on basic number theory, class field theory and Galois cohomology contain more detail than the others. This book is suitable for graduate students and research mathematicians
who wish to become acquainted with the main ideas and methods of algebraic number theory.
Synopsis Modern number theory dates from Gauss's quadratic reciprocity law. This law and other developments in number theory have led to a rich network of ideas, which has had effects throughout
mathematics, in particular in algebra. This volume of the Encyclopaedia presents the main structures and results of algebraic number theory with emphasis on algebraic number fields and class field
theory. Koch has written for the non-specialist. He assumes a general understanding of modern algebra and elementary... | {"url":"http://www.uni-protokolle.de/buecher/isbn/3540533869/","timestamp":"2014-04-16T07:28:20Z","content_type":null,"content_length":"7150","record_id":"<urn:uuid:d92664f5-119e-487e-adf1-7afdc6a17dae>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
Long Beach, CA Algebra 2 Tutor
Find a Long Beach, CA Algebra 2 Tutor
...I also require my students to give at least 24 hours before cancellation. I look forward to hearing from you and assisting you on your journey of success. I have taught and tutor Algebra 1 for
more than 3 years.
7 Subjects: including algebra 2, geometry, algebra 1, precalculus
...I provided tutoring services as a youth counselor, working with junior high and high school children. I am very competent in the areas of Algebra, Geometry, and Statistics. I am very patient
and I am here to serve to the best of my abilities.
9 Subjects: including algebra 2, geometry, algebra 1, elementary math
...In high school, I had my first exposure to physics, where I gained a strong understanding of the subject and used it to ace the course as well as aid my classmates in understanding the
material. In college, I completed my physics courses while on a study abroad program in England and again aided...
9 Subjects: including algebra 2, chemistry, physics, geometry
...Since high school, I have regularly practiced Spanish with friends who are native speakers. Further, I have always had a great interest in grammar, so my Spanish grammar continues to improve
in advanced grammar topics. I am a math major at Caltech currently doing research in graph theory and combinatorics with a professor at Caltech.
28 Subjects: including algebra 2, Spanish, chemistry, French
...I help clients look inward to identify the values that resonate with them and subsequently align their most heartfelt values with their passions so they can find or create careers that lead
them to their deepest fulfillment. When you develop your talents and master a skill set, you can become th...
41 Subjects: including algebra 2, English, writing, reading
Related Long Beach, CA Tutors
Long Beach, CA Accounting Tutors
Long Beach, CA ACT Tutors
Long Beach, CA Algebra Tutors
Long Beach, CA Algebra 2 Tutors
Long Beach, CA Calculus Tutors
Long Beach, CA Geometry Tutors
Long Beach, CA Math Tutors
Long Beach, CA Prealgebra Tutors
Long Beach, CA Precalculus Tutors
Long Beach, CA SAT Tutors
Long Beach, CA SAT Math Tutors
Long Beach, CA Science Tutors
Long Beach, CA Statistics Tutors
Long Beach, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/long_beach_ca_algebra_2_tutors.php","timestamp":"2014-04-20T02:07:33Z","content_type":null,"content_length":"24162","record_id":"<urn:uuid:a6ef750b-33c4-45de-9f86-8ebfc854a522>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Josh on Wednesday, September 13, 2006 at 10:49pm.
I really would like to give some work on this problem, but I have no idea how to do it! Please help!
An airplane is dropping bales of hay to cattle stranded in a blizzard on the Great Plains. The pilot releases the bales at 120 m above the level ground when the plane is flying at 80.0 m/s 55.0
degrees above the horizontal.
How far in front of the cattle should the pilot release the hay so that the bales will land at the point where the cattle are stranded?
Assuming constant velocity and altitude, which is 80.0m/s and 120m, how long does it take for the bales to fall to the ground and how far do they travel horizontally is that time period?
We know that
s=(1/2)gt^2, g=9.8m/s^2, s=120m so 120m=4.9m/s^2 * t^2 Now solve for t
The plane is flying 80.0m/s. How far does bale travel in the time t you just determined?
The location makes an angle of 55deg from the target; the altitude is 120m. What is the horizontal distance of the plane from the target? How does this distance compare with the distance you just
Ok, the plane is flying at an angle, so has an upward velocity component and a horizontal component.
For the upward component, you get a lift. How long does it take for the bale to hit the ground?
yf=yi + vy*time - 1/2 9.8 time^2
where yf=ground level=0
yi=120 m
vy= 80m/s * sin55
solve this second degree equation for time.
Now, having time in the air, how far does the bale travel horizontally?
xf=xi + vh*time
where xi=0, vh= 80cos55, and time is above. xf will be the answer to the question.
Related Questions
mastering physics - An airplane is dropping bales of hay to cattle stranded in a...
freefall - An airplane is dropping bales of hay to cattle stranded in a blizzard...
Physics - An airplane is dropping bales of hay to cattle stranded in a blizzard ...
math - you have five bales of hay. for some reason,instead of being weighed ...
math help - Thank you for helping me with the last problem Reiny and Bosnian. ...
Geometry - A machine for baling hay produces cylindrical bales that are 5 feet ...
Physics - Someone attaches a 27 kg bale of hay to one end of a rope passing over...
Compliments - Hey, Im trying to find lots of compliments. For example, if ...
Science - For my science class I have to make a machine that attempts to convert...
algebra 1 - Can someone please help me with this inequality problem -3-8>(4x+... | {"url":"http://www.jiskha.com/display.cgi?id=1158202156","timestamp":"2014-04-17T20:00:23Z","content_type":null,"content_length":"9578","record_id":"<urn:uuid:27602703-24a5-4948-89e1-a2b270dae52c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
TMHMM2.0 User's guide
This server is for prediction of transmembrane helices in proteins.
The method (version 1) is described in
Erik L.L. Sonnhammer, Gunnar von Heijne, and Anders Krogh:
A hidden Markov model for predicting transmembrane helices in protein sequences.
In Proc. of Sixth Int. Conf. on Intelligent Systems for Molecular Biology, p 175-182
Ed J. Glasgow, T. Littlejohn, F. Major, R. Lathrop, D. Sankoff, and C. Sensen
Menlo Park, CA: AAAI Press, 1998
Download compressed postscript file
Please cite.
Press here to see other material (model, training data, etc). .
Version 2 is very similar to version one, but it builds on a new model, so predictions are not identical. The web server has been improved (hopefully) a little. A paper is submitted describing TMHMM
in more detail (publication details not available yet).
The program takes proteins in FASTA format. It recognizes the 20 amino acids and B, Z, and X, which are all treated equally as unknown. Any other character is changed to X, so please make sure the
sequences are sensible proteins
This is an example (one protein):
>5H2A_CRIGR you can have comments after the ID
How to run it
Either give the name of the local file in which you have the proteins in the top half of the window, or paste the sequence(s) into the lower part of the window. Then press `Submit'. (It should be
possible to both give it a local file and paste sequences if you really want.)
There are two output formats: Long and short.
Long output format
For the long format (default), the server gives some statistics and a list of the location of the predicted transmembrane helices and the predicted location of the intervening loop regions.
Here is an example:
# COX2_BACSU Length: 278
# COX2_BACSU Number of predicted TMHs: 3
# COX2_BACSU Exp number of AAs in TMHs: 68.6888999999999
# COX2_BACSU Exp number, first 60 AAs: 39.8875
# COX2_BACSU Total prob of N-in: 0.99950
# COX2_BACSU POSSIBLE N-term signal sequence
COX2_BACSU TMHMM2.0 inside 1 6
COX2_BACSU TMHMM2.0 TMhelix 7 29
COX2_BACSU TMHMM2.0 outside 30 43
COX2_BACSU TMHMM2.0 TMhelix 44 66
COX2_BACSU TMHMM2.0 inside 67 86
COX2_BACSU TMHMM2.0 TMhelix 87 109
COX2_BACSU TMHMM2.0 outside 110 278
If the whole sequence is labeled as inside or outside, the prediction is that it contains no membrane
helices. It is probably not wise to interpret it as a prediction of location. The prediction gives the most probable location and orientation of transmembrane helices in the sequence. It is found by
an algorithm called N-best (or 1-best in this case) that sums over all paths through the model with the same location and direction of the helices.
The first few lines gives some statistics:
Length: the length of the protein sequence.
Number of predicted TMHs: The number of predicted transmembrane helices.
Exp number of AAs in TMHs: The expected number of amino acids intransmembrane helices. If this number is larger than 18 it is very likely to be a transmembrane protein (OR have a signal peptide).
Exp number, first 60 AAs: The expected number of amino acids in transmembrane helices in the first 60 amino acids of the protein. If this number more than a few, you should be warned that a
predicted transmembrane helix in the N-term could be a signal peptide.
Total prob of N-in: The total probability that the N-term is on the cytoplasmic side of the membrane.
POSSIBLE N-term signal sequence: a warning that is produced when "Exp number, first 60 AAs" is larger than 10.
Plot of probabilities
The plot shows the posterior probabilities of inside/outside/TM helix. Here one can see possible weak TM helices that were not predicted, and one can get an idea of the certainty of each segment in
At the top of the plot (between 1 and 1.2) the N-best prediction is shown.
The plot is obtained by calculating the total probability that a residue sits in helix, inside, or outside summed over all possible paths through the model. Sometimes it seems like the plot and the
prediction are contradictory, but that is because the plot shows probabilities for each residue, whereas the prediction is the over-all most probable structure. Therefore the plot should be seen as a
complementary source of information.
Below the plot there are links to
• The plot in encapsulated postscript
• A script for making the plot in gnuplot.
• The data for the plot.
Short output format
In the short output format one line is produced for each protein with no graphics. Each line starts with the sequence identifier and then these fields:
"len=": the length of the protein sequence.
"ExpAA=": The expected number of amino acids intransmembrane helices (see above).
"First60=": The expected number of amino acids in transmembrane helices in the first 60 amino acids of the protein (see above).
"PredHel=": The number of predicted transmembrane helices by N-best.
"Topology=": The topology predicted by N-best.
For the example above the short output would be (except that it would be on one line):
The topology is given as the position of the transmembrane helices separated by 'i' if the loop is on the inside or 'o' if it is on the outside. The above example 'i7-29o44-66i87-109o' means that it
starts on the inside, has a predicted TMH at position 7 to 29, the outside, then a TMH at position 44-66 etc.
Final remarks
Predicted TM segments in the n-terminal region sometime turn out to be signal peptides.
One of the most common mistakes by the program is to reverse the direction of proteins with one TM segment.
Do not use the program to predict whether a non-membrane protein is cytoplasmic or not. | {"url":"http://www.cbs.dtu.dk/services/TMHMM/TMHMM2.0.guide.html","timestamp":"2014-04-20T19:01:25Z","content_type":null,"content_length":"9480","record_id":"<urn:uuid:d5a0d37c-6ec6-462f-8e41-4128b473349e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
United States History A Unit 5: Review & Final PLEASE!!
• one year ago
• one year ago
Best Response
You've already chosen the best response.
i can try and help what u need?
Best Response
You've already chosen the best response.
What school do you go to?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
no i meant @Elizabethknight , sorry if im being rude, @kittykat96.
Best Response
You've already chosen the best response.
Yes CCA!
Best Response
You've already chosen the best response.
ok i have no purpose here then bye!
Best Response
You've already chosen the best response.
what grade are u?
Best Response
You've already chosen the best response.
10th grade
Best Response
You've already chosen the best response.
well im 12 but ill help how i can this is my first year cca lol
Best Response
You've already chosen the best response.
in 12th grade*
Best Response
You've already chosen the best response.
Lol cool
Best Response
You've already chosen the best response.
so what do u need help with?
Best Response
You've already chosen the best response.
Albigence Waldo was a physician at Valley Forge. What does he say about the quality of care that sick soldiers received at Valley Forge? The soldiers receive the same treatment as if they were at
home. Many don’t suffer in the cold despite being housed in tents. Many soldiers are treated with remedies that do nothing to cure them. Despite receiving different treatment from the normal, few
of the sick die.
Best Response
You've already chosen the best response.
....give me a few minutes ok i think i remember reading something like that
Best Response
You've already chosen the best response.
c. many soldiers are treated with remedies that do nothing to cure them... i believe thats the answer
Best Response
You've already chosen the best response.
Thank you so much!
Best Response
You've already chosen the best response.
your welcome
Best Response
You've already chosen the best response.
.... i just looked up my answer i might be wrong it looks like u have a 50 /50 with c and d but it is deff one of thoes two ok i didnt want to end up giving u the wrong answer
Best Response
You've already chosen the best response.
ok thanks a lot . :)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50fa32a0e4b022b322700bda","timestamp":"2014-04-17T09:37:12Z","content_type":null,"content_length":"70629","record_id":"<urn:uuid:1d52d34c-ef07-4fba-8b24-9ee29fc3ba71>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
Godwin Ubanyionwu
Godwin I. Ubanyionwu, P.E.
BS Civil Engineering
University of Texas at El Paso
MS Civil Engineering
University of Texas at El Paso
Transportation Engineer
Texas Department of Transportation
Mathematics Instructor
El Paso Community College
The interest I have in mathematics was instrumental in my choice to study engineering in college. I am a licensed Professional Engineer, and I have practiced engineering in the areas of Civil,
Geotechnical and Transportation engineering. I started teaching mathematics at the El Paso Community College as an adjunct Faculty member from spring of 1986 to present. As I joggle these two
interesting and rewarding careers, I was able to bring to the classroom the discussion about the relationship between mathematics and engineering and how mathematical concepts can be applied to
I currently practice Transportation engineering with the Texas Department of Transportation where I oversee a myriad of roadway schematic projects. I develop roadway schematic design for various
functional roadway classifications such as Freeway design, Major and Minor arterial roadways. The schematic design essentially delineates design speed, right-of-way needs, traffic volumes, location
of retaining wall, location of interchanges, main lanes, grade separation, ramps, bridges and bridge class culverts, storm water drainage, and roadway geometrics such as pavement cross slope, slope
ratio for fills and cuts.
As it is the case in all engineering projects, estimation of construction cost is a major factor that must be considered. In roadway projects, quantities of roadway elements are calculated and
applied to either state wide bids or local bids unit prices to figure out the construction costs. The quantities in most cases are calculated utilizing the basic mathematical concepts of finding
areas of geometric or composite figures or volume of geometric or composite solids. The knowledge of Calculus with respect to integration can be useful in estimating volumes of embankment needed for
roadway project. Hence, in the Excel spreadsheet, an engineer can quantify the required concrete riprap in square yard, roadway excavation in cubic yard, cement in tons, concrete pavement in square
yard, retaining wall (mechanical stabilized earth) in square feet, concrete sidewalk in square yard and so on.
Over the years of my teaching profession, I have watched with keen interest how often students have consistently asked me where this math stuff will lead them in real world situation. The examples of
what I do as an engineer tends to some degree address some of their concerns. For instance, if an engineer is interested in calculating the discharge through a trapezoidal conveyance system, the area
and velocity need to be calculated, and the knowledge with regard to finding the area of the trapezoid will be useful in this case. I am currently overseeing US 54 roadway widening project in El
Paso, Texas from Hondo Pass to Transmountain road that will require the relocation of existing entrance and exit ramps as a result of proposed construction of direct connectors between US 54 and
Transmountain road. Some of the challenges include access management and proper placement of ramps to provide access to Transmountain campus of the El Paso Community College and the base ball
stadium. On the other hand, I am also involved in the roadway project to provide access point to Mission del Paso campus of the El Paso Community College through the proposed Old Hueco Tanks road. I
proposed a wide turning radius for this access point because this campus utilizes semi-trailer as one of its driving school training vehicles.
In any event, the applicability of mathematical concepts in our everyday endeavor is alive and well. It is incumbent on all mathematics instructors to strive and bridge the gap between conceptual
theory and practical applications. Students should be encouraged to take up summer internships in their respective field of study to experience first-hand the role to which mathematics could play in
their chosen careers. Mathematics is the key to all engineering disciplines and is sine-qua- non to most careers especially those in the areas of science and technology. | {"url":"http://www.maa.org/careers/career-profiles/engineering/godwin-ubanyionwu?device=mobile","timestamp":"2014-04-18T13:41:11Z","content_type":null,"content_length":"23688","record_id":"<urn:uuid:4c2cd200-42bc-45f2-8605-11313ca9cb2b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |