Lefty Grove's Peak Vs. Sandy Koufax's Peak
I used a 5-year period for each guy. For Grove, it was 1928-32. For Koufax, it was 1962-66. The table below has some comparisons:
RSAA means "runs saved above average." It comes from Lee Sinins' Complete Baseball Encyclopedia. The numbers are park adjusted. So Grove has a big lead here, both in total and per 9 IP. I will come
back to these numbers later when I plug them into the Pythagorean formula.
Grove was 60% better than average at preventing HRs (that is what the 160 means). He gave up 49 HRs while the average pitcher would have allowed 78 (100*(78/49) is about 160). This gives him a pretty
big edge over Koufax. But they are not park adjusted. If they were, Grove would have an even bigger edge. Here are the HR park factors for the Philadelphia A's from 1928-32 from the
STATS, Inc. All-Time Baseball Sourcebook
: 126, 165, 153, 104, 199 (the 126 means that Shibe gave up 26% more HRs than the average park). Now Shibe Park may have had some asymmetries, so that lefties hit a lot more HRs. With Grove more
likely to face righties (being a lefty himself), it is possible the park did not hurt him as much as these factors suggest. But A's righties Foxx, Miller and Dykes generally had much higher slugging
percentages at home than on the road (from Retrosheet). So my guess is that Grove certainly was not aided by his park in preventing HRs.
Koufax allowed 89 HRs while the league average was 124, and he had the following HR park factors in his years: 50, 63, 62, 49, 70 (meaning Dodger Stadium allowed far fewer HRs than average). So he was helped quite a bit, yet Grove still has the big edge here.
Relative SO/BB is each pitcher's strikeout-to-walk ratio divided by the league average. Grove had a 2.67 strikeout-to-walk ratio while the league average was 0.95. The 2.67/0.95 is multiplied by 100
to get 281. That beats the 225 of Koufax or 100*(4.57/2.03).
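As a sanity check, the relative-rate arithmetic described above can be sketched in a few lines of Python (this is just my reading of the post; all numbers are the ones quoted in it):

```python
# Relative rates as described in the post: league value over player value
# for HRs allowed (lower is better), player value over league value for
# SO/BB (higher is better), scaled so that 100 = league average.

def relative_hr_rate(hr_allowed, league_hr):
    return 100 * league_hr / hr_allowed

def relative_so_bb(so_bb, league_so_bb):
    return 100 * so_bb / league_so_bb

print(round(relative_hr_rate(49, 78)))    # 159 -- "about 160" for Grove
print(round(relative_hr_rate(89, 124)))   # 139 for Koufax
print(round(relative_so_bb(2.67, 0.95)))  # 281 for Grove
print(round(relative_so_bb(4.57, 2.03)))  # 225 for Koufax
```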
The ERA+ comes from Baseball Reference. It is ERA relative to the league average but also adjusted for park effects. Grove only has a slight edge here.
WAR comes from Baseball Reference (and they get it from Sean Smith at Baseball Projections). It is "Wins Above Replacement for Pitchers. A single number that presents the number of wins the player
added to the team above what a replacement player (think AAA or AAAA) would add. This value includes defensive support and includes additional value for high leverage situations."
It is not clear to me how Koufax beats Grove here. Grove has a lot more RAR or "runs above replacement." It might have something to do with the leverage adjustments. None are made for Grove since the
play-by-play data has not been posted at Retrosheet. The WAR and RAR numbers imply that for Grove's years, it took 11 extra runs to win a game (441/40.1 = 11) and only 8.26 for Koufax (347/42).
Baseball Projections says that it normally takes about 10 extra runs to get a win. I wonder if they are using the formula which says it takes 10 times the square root of the number of runs scored per
inning by both teams. For Grove's years I calculated that to be 10.7 and for Koufax got 9.54. That would give Grove a WAR of 41.21 (441/10.7) and Koufax 36.37 (347/9.54).
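The runs-per-win arithmetic works out as stated; here is a Python sketch assuming the "10 times the square root of runs per inning by both teams" rule, using only figures quoted above:

```python
import math

def runs_per_win(runs_per_inning_both_teams):
    # The square-root rule mentioned in the text.
    return 10 * math.sqrt(runs_per_inning_both_teams)

# Implied runs-per-win from the posted RAR and WAR figures:
print(round(441 / 40.1, 2))   # 11.0 (Grove)
print(round(347 / 42.0, 2))   # 8.26 (Koufax)

# If both teams score at the 1928-32 AL rate of 5.12 R/G over 9 innings,
# the rule lands close to the 10.7 quoted for Grove's years:
print(round(runs_per_win(2 * 5.12 / 9), 2))   # 10.67

# Recomputed WAR using the post's square-root-rule estimates:
print(round(441 / 10.7, 2))   # 41.21 (Grove)
print(round(347 / 9.54, 2))   # 36.37 (Koufax)
```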
Pitching Runs is "Adjusted Pitching Runs." It comes from Baseball Reference. It is "A set of formulas developed by Gary Gillette, Pete Palmer and others that estimates a pitcher’s total contributions
to a team’s runs total via linear weights." Lee Sinins told me it might also be based on decisions, but I am not really sure. Anyway, Grove has a big lead here, too.
Now to come back to RSAA and try to calculate the Pythagorean pct for each guy using RSAA per 9 IP. The AL of 1928-32 averaged 5.12 runs per game (yearly averages weighted by Grove's IP) and 5.12 -
1.98 = 3.14. So if Grove allows 3.14 while his team scored 5.12, he would have a winning pct of .727. Koufax would allow 2.78 while his team would score 4.05 runs per game. That gives him a pct of .680.
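Here is the same Pythagorean arithmetic in Python; the exponent of 2 is my assumption, but it reproduces the .727 figure for Grove:

```python
def pyth_pct(runs_scored, runs_allowed, exponent=2):
    # Classic Pythagorean expectation.
    rs, ra = runs_scored**exponent, runs_allowed**exponent
    return rs / (rs + ra)

print(round(pyth_pct(5.12, 3.14), 3))   # 0.727 (Grove: 5.12 scored, 3.14 allowed)
print(round(pyth_pct(4.05, 2.78), 3))   # 0.68 (Koufax: 4.05 scored, 2.78 allowed)
```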
One thing I have not mentioned yet or tried to take into account is integration. Last January, I compared Grove's career to Randy Johnson's. See
How Might Integration Have Affected The Lefty Grove/Randy Johnson Debate?
I tried to estimate how much better the hitters would have been during Grove's time if the percentage of players who were non-white was about the same as during Johnson's. I also tried to adjust for
the number of non-white pitchers and non-white fielders. I came up with Grove's ERA going up about 10%. What if I did that here?
Then Grove would allow 3.45 runs per game and his pct would fall to .688. That is still higher than Koufax.
But if we use the adjusted pitching runs, Grove allows 3.32 runs per game (5.12 - 1.8). He would have a pct of .704. Koufax would allow 2.63 runs per game (4.05 - 1.42). He would have a pct of .703.
That would make the two about even. Grove would get the edge due to more IP.
But if we raise Grove's runs per game by 10%, to 3.65, his pct would be only .663. That would put Koufax ahead.
Finally, if we knock down Grove's ERA+ from Baseball Reference of 172 by 10%, he would be at 155, below Koufax's 167. The 10% adjustment for integration is just an estimate. It is the same one I used
when comparing Grove to Johnson. The % of players and pitchers who were non-whites during Koufax's time was probably lower than during Johnson's time. So adding 10% to Grove's ERA is probably too
much. I don't think I know the right adjustment to make. But this gives us some idea of what the effect of integration might be.
If I lowered Grove's strikeouts per 9 IP by 10% from 5.91 to 5.32 and raised his walks per 9 IP from 2.21 to 2.43, his new strikeout-to-walk ratio would be 2.19. That divided by 0.95 would be 2.30.
So his relative SO/BB would be 230, still higher than Koufax's 225.
If I raised Grove's HRs by 10%, he would have allowed 54 HRs. Then 78/54 = 1.44. That times 100 is 144. That is still higher than Koufax's relative HR rate of 139.
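Chaining the 10% adjustments together (all inputs are from the post):

```python
# The 10% integration adjustments from the text, applied step by step.

adj_so9 = 5.91 * 0.9            # strikeouts per 9 IP knocked down 10%
adj_bb9 = 2.21 * 1.1            # walks per 9 IP raised 10%
adj_ratio = adj_so9 / adj_bb9
print(round(adj_so9, 2), round(adj_bb9, 2))   # 5.32 2.43
print(round(adj_ratio, 2))                    # 2.19
print(round(100 * adj_ratio / 0.95))          # 230, still above Koufax's 225

adj_hr = round(49 * 1.1)                      # 54 HRs after a 10% bump
print(round(100 * 78 / adj_hr))               # 144, still above Koufax's 139
```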
2 comments:
Well and good all this math might be, but let's talk facts: for a five-year span there has never been anyone better than Sandy Koufax. Grove was great and one hell of a competitor, of this there can
be no doubt, but how can anyone compare Grove (0 no-hitters, 0 perfect games, no 300-strikeout seasons and lastly no Major League Triple Crowns)? If more needs to be said, try 4 world championships. Oh,
by the way, Sandy's election to the Baseball HOF pct was 92-plus compared to Grove's 74 pct. Now don't get me wrong, I think the world of Robert Grove, but to say without laughing that he or any
lefthander is better than Koufax is ridiculous at best and out-and-out fantasy otherwise...
Cyril Morong said...
You're the one in fantasy land. You don't think stats should be adjusted for the league average or for park effects.
The triple crowns, no-hitters, etc. by themselves are not the issue. It is overall effectiveness as a pitcher. Besides, Grove did win the major league triple crown in 1930 and 1931 (if you mean
wins, ERA and strikeouts).
Grove had a 2.06 ERA in a league that had an ERA of 5.14 in 1931. He beat the league by 3.08. Koufax's best difference was 1.88 (ERA 1.73 league ERA 3.61) in 1966. And Koufax had a good pitcher's
park and Grove had a good hitter's park.
So what if there are no 300 K seasons? Koufax did not do that until they expanded the strike zone, and in Grove's day there were not so many free swingers; players did anything they could to
make contact with two strikes. And again, Grove's SO/BB ratio, adjusted to the league average, was better than Koufax's.
Grove pitched on two world champions by the way. But again, that has more to do with the team. It is only a fantasy that one player determines everything.
Sportswriters are fallible. So the Hall of Fame voting is not that relevant.
Margaret Kepner
I enjoy exploring the possibilities for conveying ideas in new ways, primarily visually. I have a background in mathematics, which provides me with a never-ending supply of subject matter. My
lifelong interest in art gives me a vocabulary and references to utilize in my work. I particularly like to combine ideas from seemingly different areas.
Some years ago I coined the term “visysuals” to describe what I do, meaning the “visual expression of systems” through attributes such as color, geometric forms, and patterns. My creative process
involves moving back and forth between a math concept that intrigues me, and the creation of visual images that interpret that concept in interesting ways. I intend to continue to explore the
expression of my ideas in a range of media including prints, books, and textiles.
The Zen of the Z-Pentomino
This piece is based on six different tilings of the Z-pentomino and is influenced by traditional Japanese patterns. Each of the 12 pentominoes will tile the plane -- all but one of them in infinitely
many ways. If, however, reflections are not allowed and only directly congruent tilings are considered, the Z-pentomino tiles in six, and only six, distinct ways. The resulting tilings are
reminiscent of Japanese sashiko pieces, which typically feature white stitching in geometric patterns on indigo-colored cloth. This piece presents the six tiling patterns in “sashiko style” blocks,
using color differences to emphasize the various roles the Z-pentomino plays. For example, examination of the darker ”stand-up” Zs and their immediate neighbors reveals intrinsic differences among
the six patterns. In addition, at the center of each block a minimal group of Zs is highlighted that will tile by translation alone.
The traditional quilt pattern “Nine Patch” is based on a 3x3 grid of nine squares, usually colored in a checkerboard fashion. This piece uses the pattern as a point of departure and includes other
references to “nine-ness.” The basic 9-patch pattern is generalized to produce additional “odd” patch formats including one-patch, 25-patch, 49-patch, and 81-patch squares. These in turn are
displayed in an overall 9x9 grid. The small outer squares provide the key for determining the coloring of each patch square. For example, the central 81-patch square is in the yellow row and the
purple column, resulting in a pattern of small yellow squares against a purple background. The 9-patch structure can be found at several different scales in this piece. It is the basis for 9 symbols
that are used to represent the elements in the order-9 group tables for C9 and C3 x C3. These tables are displayed via small monochrome squares that float in front of the overall grid of patch squares.
MaMuMo2 - Study in Yellow
There are sixteen 2x2 matrices composed of the elements zero and one. These matrices are closed under matrix multiplication modulus 2. This piece is a visual representation of the multiplication
table for these 16 matrices. Circles are used for zeros and squares represent ones; initially the shapes are black on white. A “super” zero resulting from reduction modulus 2 (1+1 = 2 → 0) is shown
larger than a simple zero, and has a dot at its center. Six of the 16 matrices have inverses, and this subset forms a group. The portion of the table representing the product of these six matrices is
shown with black and white inverted, and falls into four sections at the center. This order-6 subgroup is non-abelian, and therefore must be isomorphic to the dihedral group D3. The yellow shading of
the shapes in the subgroup table highlights the identity elements. The intensity of the shading relates to cycles and reveals the mapping of the six matrices to the elements of D3.
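The counting claims in this statement are easy to verify; the Python sketch below (my own, not part of the artwork) checks that there are sixteen such matrices, that exactly six are invertible, and that those six do not all commute:

```python
from itertools import product

def matmul2(A, B):
    # 2x2 matrix product, entries reduced modulo 2.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

mats = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]
print(len(mats))   # 16

# A matrix over GF(2) is invertible iff its determinant is odd.
units = [M for M in mats if (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % 2 == 1]
print(len(units))  # 6

# The six invertible matrices do not all commute (the group is non-abelian).
print(any(matmul2(A, B) != matmul2(B, A) for A in units for B in units))   # True
```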
perplexus.info :: Just Math : N-Divisibility II
Consider four base ten positive integers – 42^98, 70^70, 105^42 and 154^28 – and determine the total number of positive integers dividing:
(I) At least one of the four given numbers.
(II) At least two of the four given numbers.
(III) At least three of the four given numbers.
(IV) Each of the four given numbers.
Physics Forums - View Single Post - Verlinde scores goal for LQG
Just a thought.
Set [c]=1. Then
[itex][G]=L / M [/itex] and
[itex][h]=L . M [/itex]
So a lot of relationships can be just naive dimensional analysis when M=1 or when you can somehow disregard masses or lengths. This is a peril in these kinds of papers, and so they are more careful
than usual about doing all the steps explicit.
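A toy way to do this bookkeeping is to track each constant's (length, mass) exponents with c = 1, so that time counts as length; the SI exponents in the comments are standard, and the helper itself is just an illustration:

```python
# Track each quantity's dimensions as (length_power, mass_power),
# with c = 1 so a time dimension counts as a length dimension.

def dim(l_pow, m_pow, t_pow):
    return (l_pow + t_pow, m_pow)

G = dim(3, -1, -2)     # SI: G ~ m^3 kg^-1 s^-2  ->  (1, -1), i.e. [G] = L / M
hbar = dim(2, 1, -1)   # SI: hbar ~ kg m^2 s^-1  ->  (1, 1),  i.e. [h] = L . M
print(G, hbar)
```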
I thought about it. There are different relations, but each of them has its meaning. The question is to find the proper meaning.
For example I studied the Planck length and the Compton length. I assume it has something to do with space curvature, but is it really so?
We calculate in 3 spatial dimensions. What are the 3 dimensions? Do they exist on the fundamental quantum level?
I assume the space we observe is made of information. How many dimensions are there between two quantum pieces of information? Do they need any dimension at all?
Czes, I don't want you to be put at a disadvantage by not knowing some relevent background which is familiar to the rest of us. Other people here are aware of an interesting paper on arxiv that
touches on elementary dimensional analysis, involving the Planck and Compton lengths, because some of the contributory material was worked on here at PhysicsForums, back in 2005 and 2006.
Another thing Czes, do you normally use Tex in your writing? Tex is available here at PF. Just write a formula like L^2 or M_{Planck}
and put the symbols "tex" and "/tex" around it, except use square brackets [...] instead of quotes "..."
In other words, copy this without the underline
and you will get
Copy this without the underline
and you will get
Re: st: RE: Shrinkage factor
Re: st: RE: Shrinkage factor
From Evans Jadotte <evans.jadotte@uab.es>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: Shrinkage factor
Date Thu, 15 Oct 2009 15:37:47 +0200
Robert A Yaffee wrote:
Sorry my salutation and message got left-truncated. I suspect it is some inadvertently programmed
hot-key that snipped the beginning of my email. Nonetheless, I think you understand the idea.
Because the notion applies to multiple levels, the notion
of pooling may avoid some ambiguity. Best, Bob
Robert A. Yaffee, Ph.D.
Research Professor
Silver School of Social Work
New York University
Biosketch: http://homepages.nyu.edu/~ray1/Biosketch2009.pdf
CV: http://homepages.nyu.edu/~ray1/vita.pdf
----- Original Message -----
From: Evans Jadotte <evans.jadotte@uab.es>
Date: Thursday, October 15, 2009 4:48 am
Subject: Re: st: RE: Shrinkage factor
To: statalist@hsphsun2.harvard.edu
Hi Bob,
In fact, this is the formula I have. My problem is the malleability of n_j. Since I have 496 clusters at level-2 with many unbalanced observations within each cluster, the manual calculation
can make one go crazy! I think I will have to dedicate some gooood time to this! In any case many thanks for your output.
Robert A Yaffee wrote:
Gelman and Hill prefer the use of "a pooling factor" in multilevel models to indicate the degree to which elements are pooled together.
They use the same formula for the residual intraclass coefficient that
is used for the shrinkage factor on population distribution a,
but refer to 1-B as the pooling factor
when B = 1 - [ sigma^2/(sigma^2 + sigma_y^2/n_j)]
for them, a_j (multilevel) = B mu_a + (1-B) ybar_j
ybar_j = avg of the y's within each group j
mu_a = average of the population
B = 0 when there is no pooling a_j=ybar_j
= 1 when there is complete pooling a_j = mu_a
- Hope this helps. This comes from Gelman and Hill Data
Analysis using regression and multilevel/hierarchical models, Cambridge University Press, p. 477.
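A quick numeric sketch of that pooling formula (the variance components and group size below are made up, chosen only to exercise the two limits B = 0 and B = 1):

```python
# Pooling factor as quoted from Gelman & Hill:
#   B = 1 - sigma2_a / (sigma2_a + sigma2_y / n_j)
#   a_j = B * mu_a + (1 - B) * ybar_j
# sigma2_a = group-level variance, sigma2_y = within-group variance.

def pooling_B(sigma2_a, sigma2_y, n_j):
    return 1 - sigma2_a / (sigma2_a + sigma2_y / n_j)

def multilevel_estimate(B, mu_a, ybar_j):
    return B * mu_a + (1 - B) * ybar_j

B = pooling_B(sigma2_a=4.0, sigma2_y=16.0, n_j=8)      # hypothetical values
print(round(B, 3))                                     # 0.333
print(multilevel_estimate(B, mu_a=10.0, ybar_j=14.0))  # pulled toward ybar_j

print(pooling_B(0.0, 16.0, 8))   # 1.0: no group-level variance -> complete pooling
print(round(pooling_B(4.0, 16.0, 10**9), 6))   # 0.0: huge n_j -> no pooling
```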
Robert A. Yaffee, Ph.D.
Research Professor
Silver School of Social Work
New York University
Biosketch: http://homepages.nyu.edu/~ray1/Biosketch2009.pdf
CV: http://homepages.nyu.edu/~ray1/vita.pdf
----- Original Message -----
From: Robert A Yaffee <bob.yaffee@nyu.edu>
Date: Wednesday, October 14, 2009 8:18 pm
Subject: Re: RE: st: RE: Shrinkage factor
To: statalist@hsphsun2.harvard.edu
Elan, Evans,
Carlin and Louis in their 3rd edition of Bayesian Methods for Data Analysis
describe the Bayesian shrinkage factor B = sigma^2/(sigma^2 + tau^2)
where tau^2 would be the variance of the prior distribution while sigma^2
would be the variance of the normal density of the sample (or likelihood), p.
B is also used to compute the posterior mean = B (mu) + (1-B)y,
a weighted average of the prior mean and that of the sample. Regards,
Robert A. Yaffee, Ph.D.
Research Professor
Silver School of Social Work
New York University
Biosketch: http://homepages.nyu.edu/~ray1/Biosketch2009.pdf
CV: http://homepages.nyu.edu/~ray1/vita.pdf
----- Original Message -----
From: "Cohen, Elan" <cohened@upmc.edu>
Date: Wednesday, October 14, 2009 1:40 pm
Subject: RE: st: RE: Shrinkage factor
To: "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Just based on the index, the following book may be helpful:
- Elan
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Evans Jadotte
Sent: Wednesday, October 14, 2009 1:07 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: RE: Shrinkage factor
Nick Cox wrote:
If there were, then a simple search would almost certainly find it.
-findit shrinkage- yields no hits. Did you try a Stata or
Google search?
Nick n.j.cox@durham.ac.uk
Evans Jadotte
I am estimating a three-level hierarchical model using
xtmixed and want
to get the 'shrinkage factor' (Rj) to help me with the
calculation of
the variance for an empirical Bayes estimate. My model has many covariates and clusters and this makes a manual calculation
of the Rj
not malleable. Is there any user-written command to get the Rj?
Hope my request is not too confusing and can receive some help.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
I tried under both findit shrinkage and findit reliability (this
one took me to xtmepoisson but no further help), with no luck.
Hello Bob,
Certainly I gathered the idea from your email!
Thanks again for your input,
Graphs of Inverse Functions
Date: 07/25/2002 at 16:57:07
From: Jonida
Subject: Graphs of inverse functions
Suppose y=f(x) and y=g(x) are inverses. It turns out that the graph
of one function is the graph of the other, but reflected across the
line y=x. Explain why this is the case.
Date: 07/26/2002 at 12:43:16
From: Doctor Peterson
Subject: Re: Graphs of inverse functions
Hi, Jonida.
There are several ways to think about this. I'll suggest a couple.
First, let's take a graph:
          y          y=f(x)
          |            /
          |           /
          |          /
          |         /
          |        /
          |       /
The inverse function means that
y = f(x) whenever x = g(y)
x = g(y) whenever y = f(x)
That means that the point (x,y) is on the graph of y = f(x) whenever
the point (x,y) is on the graph of x = g(y). That is, the graph we
have drawn is also the graph of x = g(y):
          y          x=g(y)
          |            /
          |           /
          |          /
          |         /
          |        /
          |       /
But that's not the graph we want; we want to see the graph of y=g(x).
How do we get that? We have to swap the variable names:
          x          y=g(x)
          |            /
          |           /
          |          /
          |         /
          |        /
          |       /
Now we have the right variables, but the axes are in the wrong places.
Let's make these two graphs, for y=f(x) and y=g(x). If you've drawn
the second graph on sufficiently transparent paper, you can just flip
the paper over and place it on top of the first, so that the x and y
axes on the two graphs line up together. Essentially what you have
done is to rotate the paper around the line y=x:
          y    y=x                  x    y=x
          |    /                    |    /
          |   /                     |   /
          |  /        ----->        |  /
          | /                       | /
          |/                        |/
          +-------------x           +-------------y
That's the same as reflecting every point on the paper in that line,
so our graph of g is now the reflection of the graph of f we started with.
Here's a less visual and more analytical way to see it. We'll start
the same way:
We make the graph:
          y          y=f(x)
          |            /
          |           /
          |          /
          |         /
          |        /
          |       /
The inverse function means that
y = f(x) whenever x = g(y)
y = g(x) whenever x = f(y)
That means that the point (a,b) is on the graph of y = f(x) whenever
the point (b,a) is on the graph of y = g(x), since b=f(a) means the
same thing as a=g(b). So we can look at these two points:
     y
     |            y=x
    a+---* Q(b,a)   /
     |   |         /
     |   |        /
     |   |       /
    b+---|------/---* P(a,b)
     |   |     /    |
     |   |    /     |
     +---+---/------+----- x
         b          a
Now, what does it mean to say that Q(b,a) is the reflection of P(a,b)
in the line y=x? One way you can think of that is that the line
segment PQ between the points is (1) perpendicular to y=x, and (2)
bisected by y=x. Can you see that? You might want to imagine folding
the paper along that line, so that P and Q coincide, and think about
what the line PQ will look like.
Now to show that PQ is perpendicular to y=x, we need to show that its
slope is -1. That's true, right?
And to show that the midpoint of PQ is on y=x, we need to find the
midpoint and show that its coordinates are equal. You can do that
too, right? Then you've proved it!
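Both facts (slope -1, midpoint on y=x) also check out numerically; here is a small sketch using f(x) = e^x and its inverse g(x) = ln(x) as a concrete pair:

```python
import math

# For P = (a, f(a)) and Q = (f(a), a): segment PQ has slope -1 and its
# midpoint lies on y = x. Concrete pair: f(x) = e^x, g(x) = ln(x).
a = 1.3
P = (a, math.exp(a))
Q = (math.exp(a), a)

slope_PQ = (Q[1] - P[1]) / (Q[0] - P[0])
print(slope_PQ)   # -1.0

midpoint = ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)
print(midpoint[0] == midpoint[1])   # True: the midpoint is on y = x

# And Q really is on the graph of the inverse: g(Q[0]) equals Q[1].
print(math.isclose(math.log(Q[0]), Q[1]))   # True
```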
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum
Potential Difference Across Resistors in Circuit
Almost forgot to mention:
This is a DC Circuit!
In the diagram above, find the potential difference across the 3-ohm resistor.
a) 1.3 V
b) 3.0 V
c) 4.0 V
d) 5.0 V
e) 6.0 V
Relevant equations:
So far, I used: I = V/R to find current from the battery.
Then there's E = IR.
(Sorry, LaTeX doesn't like me.)
The attempt at a solution:
Using the first equation, I found that the current measures three Amperes.
I also tried using the second formula where I = 3 and R = 3, but that gave me 9 Volts, which isn't an answer.
Any help would be much appreciated!
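Since the circuit diagram did not come through here, all that can be offered is a generic sketch of the series voltage-divider step such problems usually need; the EMF and resistor values below are hypothetical, not from this problem:

```python
# The diagram is missing, so this is only a generic sketch:
# in a series circuit the battery EMF divides in proportion to resistance.
# All values below are hypothetical, not taken from the problem.

def series_voltage_drops(emf, resistances):
    total = sum(resistances)
    current = emf / total   # the same current flows through every series resistor
    return [current * r for r in resistances]

drops = series_voltage_drops(emf=12.0, resistances=[2.0, 4.0])
print(drops)        # [4.0, 8.0]
print(sum(drops))   # 12.0 -- the drops add back up to the EMF (Kirchhoff)
```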
Meaning of Sommerfeld radiation conditions
Have a look at the "multipole expansion". This is an expansion of the em. field from a charge-current distribution that is confined to a finite space region at points far away from this region. The
expansion goes by using spherical coordinates in Maxwell's equations and expansion in powers of [itex]1/r[/itex].
The general idea is to separate the wave equation in spherical coordinates, leading to an expansion in spherical harmonics [itex]\mathrm{Y}_{lm}(\vartheta,\varphi)[/itex]. You'll find that for the
em. fields the spherically symmetric part, i.e. the term with [itex]l=m=0[/itex], is necessarily a static contribution and at far distances reduces to a Coulomb field of a point charge, i.e., a
scalar [itex]1/r[/itex] potential.
The wave solutions start at [itex]l=1[/itex], which is the dipole term. For wave fields there are electric and magnetic dipole contributions. For the wave fields they start with a spherical wave
[itex]\vec{E} \propto \exp(\mathrm{i} k r)/r[/itex], where the temporal part of the wave is assumed as harmonic, [itex]\vec{E} \propto \exp(-\mathrm{i} \omega t)[/itex]. With this sign convention the
spherical wave [itex]\propto \exp(\mathrm{i} k r)[/itex] is a spherical wave running outwards, i.e., away from the source, and that's the solution you are looking for. If you keep the time dependence
general, these are precisely the retarded solutions, i.e., in Lorenz gauge (in Heaviside-Lorentz units):
[tex]A^{\mu}(t,x)=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x}' \frac{J^{\mu}(t-r/c,\vec{x}')}{4 \pi c r} \quad \text{with} \quad r=|\vec{x}-\vec{x}'|.[/tex]
The Sommerfeld radiation conditions are needed to impose these retardation conditions for the Helmholtz equation, where you already assumed the harmonic time dependence. They basically force the
fields to run outwards from the sources when looking from a far distance from the sources, choosing the solution [itex]\propto \exp(+\mathrm{i} k r)[/itex] rather than [itex]\propto \exp(-\mathrm{i}
k r)[/itex], which describe waves running towards the sources, being absorbed there.
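A tiny numeric illustration of why [itex]\exp(+\mathrm{i} k r)[/itex] is the outgoing choice (my own sketch, with arbitrary k and omega): with time dependence [itex]\exp(-\mathrm{i}\omega t)[/itex], the surface of constant phase kr - omega*t = 0 sits at ever larger r as t grows.

```python
# With total phase +k*r - omega*t (outgoing convention), the zero-phase
# radius r = omega*t/k moves outward as t increases; with -k*r the
# corresponding radius would shrink instead (an incoming wave).

k, omega = 2.0, 3.0

def phase_out(r, t):
    return k * r - omega * t

for t in (0.0, 1.0, 2.0):
    r0 = omega * t / k                 # radius where the outgoing phase is zero
    print(t, r0, phase_out(r0, t))     # r0 grows: 0.0, then 1.5, then 3.0
```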
Physics Forums - View Single Post - Energy-momentum for a point particle and 4-vectors
Quote by
You are right, that I am wrong. You see, I can accept when I have made a mistake :P
I would be interested to see how you got to:
[tex]\gamma(v')=\gamma(v) \gamma(u) (1+v_xu/c^2)[/tex]
[tex]v'_y=\frac{v_y \sqrt{1-u^2/c^2}}{1+v_xu/c^2}[/tex]
Start with [tex]1-(\frac{v'}{c})^2[/tex].
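Before grinding through the algebra, the quoted identity can be checked numerically (c = 1; the velocities below are arbitrary test values):

```python
import math

def gamma(speed):
    # Lorentz factor with c = 1.
    return 1.0 / math.sqrt(1.0 - speed**2)

vx, vy, u = 0.3, 0.4, 0.5   # arbitrary test velocities (c = 1)

# Relativistic velocity addition for a boost u along x:
vpx = (vx + u) / (1 + vx * u)
vpy = vy * math.sqrt(1 - u**2) / (1 + vx * u)
vp = math.hypot(vpx, vpy)

lhs = gamma(vp)
rhs = gamma(math.hypot(vx, vy)) * gamma(u) * (1 + vx * u)
print(math.isclose(lhs, rhs))   # True: gamma(v') = gamma(v) gamma(u) (1 + vx*u/c^2)
```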
More graphed triangles..
July 20th 2006, 03:15 PM
More graphed triangles..
Summer school can really be a drag, ya know?
"ABC's vertices A(-1, 1), B(5,1), and C(2,4)
Determine the perimeter of ABC. Express your answer as an exact value in simplest form.
What is the slope of AC?
Find the measure of $\angle$A
What is the area of ABC?
So, pretty straightforward, I'd like to compare my work with somebody elses though.
By the way, thanks for all the help. You guys have no idea the amount of work I'm facing in quite a few subjects.
July 20th 2006, 03:32 PM
Originally Posted by zoso
Summer school can really be a drag, ya know?
"ABC's vertices A(-1, 1), B(5,1), and C(2,4)
Determine the perimeter of ABC. Express your answer as an exact value in simplest form.
the distance between two points is: $d=\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}$
so let's find the distances...
$\overline{AB}=\sqrt{(-1-5)^2+(1-1)^2}=\sqrt{36}=6$
$\overline{BC}=\sqrt{(5-2)^2+(1-4)^2}=\sqrt{18}=3\sqrt2$
$\overline{CA}=\sqrt{(2-(-1))^2+(4-1)^2}=\sqrt{18}=3\sqrt2$
now add those together: $\overline{AB}+\overline{BC}+\overline{CA}=$$6+3\sqrt2+3\sqrt2=6+2(3\sqrt2)$$=6+6\sqrt2$$=\boxed{6(1+\sqrt2)}$
July 20th 2006, 03:43 PM
slope and angle
Originally Posted by zoso
What is the slope of AC?
slope formula: $\frac{y_2-y_1}{x_2-x_1}$ which gives $\frac{4-1}{2-(-1)}=\frac{3}{3}=1$
Find the measure of $\angle$A
Since AC has slope 1 and AB is horizontal, $\angle A=45^o$
July 20th 2006, 03:49 PM
Originally Posted by zoso
What is the area of ABC?
So, pretty straightforward, I'd like to compare my work with somebody elses though.
By the way, thanks for all the help. You guys have no idea the amount of work I'm facing in quite a few subjects.
The triangle happens to be a right triangle (angle C is 90 degrees) therefore you multiply the sides and divide by two...
$A=\frac{1}{2}\times\overline{BC}\times\overline{CA}$
substitute: $A=\frac{1}{2}\times3\sqrt2\times3\sqrt2$
multiply: $A=\frac{1}{2}\times9\times2$
multiply: $A=\frac{1}{2}\times18$
multiply: $A=9$
~ $Q^u_u\!u^i_i\!i^c_c\!c^k_k\!k$
July 20th 2006, 04:53 PM
K, everything looks good, with a few discrepancies that were my fault.
What was the method you used for obtaining angle a?
July 20th 2006, 05:38 PM
Originally Posted by zoso
K, Everything looks good, with a few discrepncies that were my fault.
What was the method you used for obtaining angle a?
To tell you the truth, I don't know the method for obtaining angles in graphs, I just know that angle A is 45 degrees because it's between two lines of slope 1 and 0
July 20th 2006, 06:07 PM
Nice responses and Good graph Quick.
Let me tell you how to find angles. Since AB is parallel to the x-axis, the angle at A is the same as the angle AC creates with the x-axis when extended, because of parallel lines. But the slope of a line is the
same as the tangent of the angle it forms with the x-axis, thus
$\tan \theta =1$ thus $\theta=45^o$
July 20th 2006, 06:12 PM
Originally Posted by ThePerfectHacker
Nice responses and Good graph Quick.
Thanx :D
Let me tell you how to find angles. Since AB is parallel to the x-axis, the angle at A is the same as the angle AC makes with the x-axis when extended, because of the parallel lines. But the slope of a line is the same as the tangent of the angle it forms with the x-axis, thus
$\tan \theta =1$ thus $\theta=45^\circ$
Is there an equation to solve for theta? It seems to me that no mathematician (especially you) would be willing to do trial and error until they found the answer.
July 20th 2006, 06:27 PM
Originally Posted by Quick
Thanx :D
Is there an equation to solve for theta? It seems to me that no mathematician (especially you) would be willing to do trial and error until they found the answer.
It seems to me that many people do not accept trial and error as a solution; why? You guess a solution and then you know it is true, proved. Yes, you cannot solve it in a finite number of steps, which is why closed-form solutions to equations are used. But the basic angles are 0, 30, 45, 60, 90 degrees and larger, and all others can be built from their sums and differences, halves and thirds. This is one of those angles you need to have memorized.
You can use a calculator feature called "arc-tangent"; basically it is the opposite of tangent, like square root is the opposite of squaring. But then you are going to ask another question: how can you do it without a calculator? There really is no way to do it (in fact, most math problems are like that). You can, however, approximate solutions as closely as you want (as I am writing this I came up with a way; I might show it to you).
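The arc-tangent step can be checked on any system with a math library (an editorial sketch, not part of the original thread):

```python
import math

# "arc-tangent" undoes the tangent, so tan(theta) = 1 gives theta = 45 degrees
theta = math.degrees(math.atan(1))
```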
About what I said before, "in fact most math problems are like that": let me explain. Mathematicians are mostly concerned not with finding what satisfies a certain equation, but with proving that a solution exists. For example, the fundamental theorem of algebra guarantees the existence of a solution for any non-constant polynomial, yet it does not provide a way to find it (in fact, it is impossible to have such a general method!). Yet it never bothered mathematicians; all they needed to know is that such a solution exists.
Knowing you, you will probably ask much about what I just wrote, but that is okay. I would love to have you as a student.
Geometry: Theorems
Tangent Segments
Given a point outside a circle, two lines can be drawn through that point that are tangent to the circle. The tangent segments whose endpoints are the points of tangency and the fixed point outside
the circle are equal. In other words, tangent segments drawn to the same circle from the same point (there are two for every circle) are equal.
Figure %: Tangent segments that share an endpoint not on the circle are equal
Chords within a circle can be related in many ways. Parallel chords in the same circle always cut congruent arcs. That is, the arcs whose endpoints include one endpoint from each chord have equal measures.
Figure %: Arcs AC and BD have equal measures
When congruent chords are in the same circle, they are equidistant from the center.
Figure %: Congruent chords in the same circle are equidistant from the center
In the figure above, chords WX and YZ are congruent. Therefore, their distances from the center, the lengths of segments LC and MC, are equal.
A final word on chords: Chords of the same length in the same circle cut congruent arcs. That is, if the endpoints of one chord are the endpoints of one arc, then the two arcs defined by the two
congruent chords in the same circle are congruent.
Intersecting Chords, Tangents, and Secants
A number of interesting theorems arise from the relationships between chords, secant segments, and tangent segments that intersect. First of all, we must define a secant segment. A secant segment is a segment with one endpoint on a circle, one endpoint outside the circle, and a second point of intersection with the circle between these two endpoints. Three theorems exist concerning the above segments.
Theorem 1
When two chords of the same circle intersect, each chord is divided into two segments by the other chord. The product of the segments of one chord is equal to the product of the segments of the other chord.
Figure %: Chords of the same circle that intersect
In the figure above, chords QR and ST intersect. The theorem states that the product of QB and BR is equal to the product of SB and BT.
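Theorem 1 can be checked numerically. A sketch in Python, using the circle x^2 + y^2 = 25 and two chords through the interior point P = (1, 2) (the circle and point are my own choices, not from the text):

```python
import math

# Circle x^2 + y^2 = 25 (radius 5); chords through the interior point P = (1, 2)
r, P = 5.0, (1.0, 2.0)

# Horizontal chord y = 2: endpoints (+-sqrt(21), 2); P splits it into two segments
x_end = math.sqrt(r**2 - P[1]**2)
prod_horizontal = (x_end - P[0]) * (x_end + P[0])

# Vertical chord x = 1: endpoints (1, +-sqrt(24))
y_end = math.sqrt(r**2 - P[0]**2)
prod_vertical = (y_end - P[1]) * (y_end + P[1])
```

Both products equal r^2 minus the squared distance from the center to P (the "power of the point"), which is why they agree.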
Theorem 2
Every secant segment is divided into two segments by the circle it intersects. The internal segment is a chord, and the external segment is the segment with one endpoint at the intersection of the
secant segment and the circle, and the other endpoint at the fixed point outside the circle. Given these conditions, a theorem states that when two secant segments share an endpoint not on the
circle, the products of the lengths of each secant segment and its external segment are equal.
Figure %: Secant segments that share an endpoint not on the circle
In the figure above, the secant segments DE and FE share an endpoint, E, outside the circle. The theorem states that the product of the lengths of DE and ME is equal to the product of the lengths of
FE and NE.
Theorem 3
A similar theorem exists when a secant segment and a tangent segment share an endpoint not on the circle. This theorem states that the length of the tangent segment squared is equal to the product of
the secant segment and its external segment.
Figure %: A secant segment and a tangent segment that share an endpoint not on the circle
In the figure above, secant segment QR and tangent segment SR share an endpoint, R, not on the circle. The theorem states that the length of SR squared is equal to the product of the lengths of QR
and KR.
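Theorem 3 can also be verified numerically; a sketch with a circle of radius 3 centered at the origin and an external point at distance 5 along the x-axis (values chosen for illustration):

```python
import math

# Circle of radius 3 centered at the origin; external point R at distance d = 5
r, d = 3.0, 5.0

tangent_len = math.sqrt(d**2 - r**2)   # tangent segment from R (= 4 here)
secant_whole = d + r                   # whole secant segment along the x-axis
secant_external = d - r                # its external segment
```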
Quadratic Inequality
April 8th 2011, 07:10 AM
Find the range of values of $k$ if $kx^2 + 8x > 6 - k$ for all real values of $x$.
The answer given is k > 8.
April 8th 2011, 07:58 AM
Find the discriminant and solve the equation discriminant = 0 for k; then test a value of k which (i) is smaller than the smaller root, (ii) is larger than the larger root, and (iii) lies between the two roots. Keep the region(s) that satisfy the inequality you mentioned. Remember also that the coefficient of x^2 must be positive, so that the parabola opens upward.
April 8th 2011, 08:23 AM
I am getting the answer as k > -2 or k > 8. Why is only k > 8 specified in the answer?
April 8th 2011, 09:27 AM
You have two errors: the roots of the discriminant equation are k = -2 and k = 8, so the discriminant condition gives k < -2 or k > 8 (not two ">" conditions joined by "or"). You also need k > 0 so that the parabola opens upward; combining the two conditions, k must be greater than 8.
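A quick numeric spot-check of the condition (an editorial sketch, not part of the original thread): the inequality holds for all real x exactly when the parabola opens upward and its discriminant is negative. Note that k = 8 itself fails, since equality is attained at one x.

```python
# k*x^2 + 8x > 6 - k for all real x  <=>  k*x^2 + 8x + (k - 6) > 0 for all x
def holds_for_all_x(k):
    """Upward-opening parabola with strictly negative discriminant."""
    return k > 0 and 8**2 - 4 * k * (k - 6) < 0

results = {k: holds_for_all_x(k) for k in (-3, 0, 7, 8, 9, 100)}
```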
A Puzzling Cube
Copyright © University of Cambridge. All rights reserved.
'A Puzzling Cube' printed from http://nrich.maths.org/
Why do this problem?
This problem is a little more difficult than it looks. It requires children to visualise the adjoining faces of the cube and transfer this to a net of the cube.
Possible approach
You could start by showing the group the problem on an interactive whiteboard or data projector. When you have discussed it and what needs to be done children could work in pairs so that they are
able to talk through their ideas with a partner. They could use a print-out of
this sheet or draw the faces of the cube for themselves. Scissors and sticky tape would be useful!
When they have built the cube they should then transfer it to a net. This can be any arrangement which can be folded into a cube, not necessarily just the conventional cross given on the sheet. At the end of the lesson, the children could show the whole group both their cubes and the nets they have drawn. The class's work would make a great display, along with a copy of the challenge itself.
This sheet
gives larger coloured faces of the cube which can be made from card or stuck onto six square "Polydron" pieces so the puzzle can be done again and again.
Key questions
Why do you think these two faces are next to each other on the cube?
Look at these two faces. Which other one goes near them?
Possible extension
Those who found this task straightforward could try to make the net of the octahedron from
this sheet
Possible support
Suggest making a net from
this sheet
and leaving it so it can be folded and unfolded. Then draw or paste on the faces one by one. If "Polydron" squares are available the cube can be built up using the pieces from
this sheet.
Archives of the Caml mailing list > Message from John Harrison
From: John Harrison <John.Harrison@c...>
Subject: Re: Polymorphic comparison
Judicael Courant writes:
| I think this polymorphic comparison is quite easy to implement in the
| following way:
| #open "hashtbl";;
| let c x y = (hash x) <= (hash y);;
| #infix "c";;
I should have pointed out that I want a true ordering, i.e. something
antisymmetric. (Presumably the above isn't, since several different items might
yield the same hash value). The idea is to be able to sort a list of elements
of any type into an arbitrary but fixed order.
Pierre Weis adds:
| In the next 0.7 version of Caml Light, we plan to extend comparisons to a
| polymorphic function (i.e. prefix < : 'a -> 'a -> bool, instead of the
| now available prefix < : int -> int -> bool).
That would be all I want, I think.
| To extend comparisons to unrelated pairs of values, that is defining
| prefix < with type scheme 'a -> 'b -> bool seems a bit strange to me.
| What do you plan to do with such a general type scheme for comparisons ?
I don't foresee any use for such a general mechanism, although that's how it
was in Classic ML.
The applications I have in mind are in theorem proving; for example
canonicalizing expression trees based on an associative-commutative operation.
However I can envisage some other uses, e.g. set/multiset comparison. I suspect
that if you can only do pairwise comparison this is O(n^2), whereas just
sorting both sets first then comparing should be O(n log n).
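The sort-then-compare point can be illustrated in Python (an analogy only; the thread itself is about Caml Light, and the function name is mine):

```python
# Comparing two collections as multisets: sort both (O(n log n)) and compare,
# instead of all-pairs matching (O(n^2)).
def multiset_equal(xs, ys):
    return sorted(xs) == sorted(ys)

same = multiset_equal([3, 1, 2, 2], [2, 3, 2, 1])   # True: same elements, same counts
diff = multiset_equal([3, 1, 2], [3, 1, 1])         # False: counts differ
```

Note this presupposes exactly what Harrison asks for: a total order on the element type.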
Science Speakers {An Application of Mathematics in Manufacturing}
4:00 pm, Wednesday, September 11, 2013
Science 106
Science Speakers {An Application of Mathematics in Manufacturing}
Bruce Sellars, Rotational Molding Technologies Inc. Manufacturing companies have used, and are increasingly using, NC/CNC machines to remove material from their "parts" or from the "tooling parts" that they use to make their "parts". The motion of NC/CNC machines is based on geometry which is typically prepared "upstream" by geometric software applications on a separate computer. One fundamental geometric task, sometimes implemented in such software, is the task of pocketing a design/shape and removing all material within that shape to a given depth in the original material. This presentation will examine a software implementation of this task. The presentation will be a step-by-step set of problems, followed by a solution for each problem. For some steps the solution will be very simple. On the other hand, sometimes a surprisingly simple-sounding problem step requires a fairly strong mathematical result. A familiarity with calculus will make the presentation more interesting but will not impede overall understanding of the presentation. There will be a number of pictures as illustration. The presenter has developed, implemented, and subsequently used, on a regular basis, software generating automatic geometric "fills". This presentation will be an attempt to see a "real life" mathematical application.
Contact: David Housman, phone 7061, email dhousman@goshen.edu
Kenilworth, NJ Science Tutor
Find a Kenilworth, NJ Science Tutor
...I show you how to arrange paragraphs and choose the right sentence to correct the paragraph. Let me help you, I'll show you how to boost your score! Does your child need to learn his/her math?
56 Subjects: including biology, sociology, philosophy, anthropology
...This passion has driven me to get students to love science as well. I try and use the best materials that I can to teach the information, reinforce the main concepts, and check their
understanding. I like to use feedback from my lessons with students to figure out how to improve because whether teaching or tutoring, I am always learning as well.
15 Subjects: including zoology, botany, organic chemistry, sociology
...I encourage and motivate. I am patient and caring. Let's talk more!
9 Subjects: including biology, microbiology, English, writing
...I use a combination of prompt dissection, strategic outlining, and improving organizational skills, in order to make essay writing an enjoyable and un-daunting process. Environmental Science I
graduated with high honors from NYU with a degree in Environmental Science am pursuing a Master's of S...
12 Subjects: including nutrition, ecology, sociology, biology
I have always had a love for science since I was a high school student but my passion for food developed as a child during family functions that involved a big and tasty celebratory feast. I soon
learned that there was a way to balance delicious food with nutritious living. As a Registered Dietiti...
1 Subject: nutrition
Related Kenilworth, NJ Tutors
Kenilworth, NJ Accounting Tutors
Kenilworth, NJ ACT Tutors
Kenilworth, NJ Algebra Tutors
Kenilworth, NJ Algebra 2 Tutors
Kenilworth, NJ Calculus Tutors
Kenilworth, NJ Geometry Tutors
Kenilworth, NJ Math Tutors
Kenilworth, NJ Prealgebra Tutors
Kenilworth, NJ Precalculus Tutors
Kenilworth, NJ SAT Tutors
Kenilworth, NJ SAT Math Tutors
Kenilworth, NJ Science Tutors
Kenilworth, NJ Statistics Tutors
Kenilworth, NJ Trigonometry Tutors
Please help new to trigonometry
May 2nd 2009, 07:20 PM #1
Apr 2009
Hi, I am learning trigonometry. If someone can do these for me, I'll be able to do the rest. Thank you very much.
These three questions. Thank you very much.
Hi there. If $\sin(\theta) = \frac{3}{11}$,
you must consider a right-angled triangle and the ratio
$\sin(\theta) = \frac{opposite}{hypotenuse}$
Using Pythagoras' theorem, the third side must be as follows: $adjacent=\sqrt{11^2-3^2}=\sqrt{112}=4\sqrt{7}$
Now you can use these to find
$\cos(\theta) = \frac{adjacent}{hypotenuse}$
$\tan(\theta) = \frac{opposite}{adjacent}$
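A quick numeric check of these ratios (an editorial sketch, not from the original thread):

```python
import math

# From sin(theta) = 3/11: opposite = 3, hypotenuse = 11
opp, hyp = 3.0, 11.0
adj = math.sqrt(hyp**2 - opp**2)   # Pythagoras: sqrt(121 - 9) = sqrt(112) = 4*sqrt(7)

cos_theta = adj / hyp              # adjacent / hypotenuse
tan_theta = opp / adj              # opposite / adjacent
```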
May 3rd 2009, 02:00 AM #2
Mathematical cousins and collaboration numbers
Posted by: Dave Richeson | April 17, 2010
A couple days ago Michael Lugo at God Plays Dice shared a link to a mathematical relationships search. Enter the names of two people with PhDs in mathematics and it will spit out their academic
relationship. For example, my advisor, John Franks, is my academic father and my good friend and collaborator Jim Wiseman is my academic brother. It draws the information from the Mathematics
Genealogy Project.
It is fun to play with. For example, Euler is my great-great-great-great-great-great-great-great-great-great-grandfather and Erdős is my tenth cousin four times removed. By the way, I think I finally understand what it means to be an nth cousin m times removed.
Interestingly when I look at my relationship to different mathematicians, it is often the case that our common ancestor is Simeon Poisson who had only three advisees: Chasles, Dirichlet, and
There is another interesting website that performs a similar function. While the mathematical relationships search pulls information from the genealogical tree, the AMS runs a website that pulls
information from the graph of collaborators (these data come from MathSciNet).
The collaboration distance generalizes the well known Erdős number. My Erdős number is 4; that means that I wrote a paper with someone who wrote a paper with someone who wrote a paper with someone
who wrote a paper with Erdős. Similarly, my collaboration number with John Forbes Nash, Jr., Grisha Perelman, Steven Strogatz, and Terence Tao is 5 and my collaboration number with John H. Conway is
Posted in Academic Technology, Links, Math | Tags: collaboration, genealogy, graph, tree
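The collaboration distance described in the post is just a shortest path in the coauthorship graph, computable by breadth-first search. A sketch in Python on a made-up toy graph (all author names besides Erdős are hypothetical):

```python
from collections import deque

# Toy coauthorship graph; collaboration distance = BFS shortest path
coauthors = {
    "Erdos":   {"AuthorA"},
    "AuthorA": {"Erdos", "AuthorB"},
    "AuthorB": {"AuthorA", "Me"},
    "Me":      {"AuthorB"},
}

def collaboration_distance(graph, src, dst):
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # no path: the collaboration distance is infinite

my_erdos_number = collaboration_distance(coauthors, "Me", "Erdos")  # 3 in this toy graph
```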
The inverse of a one to one function
November 6th 2012, 01:12 PM #1
The inverse of a one to one function
Hello forum,
I personally believe, though I'm not sure, that this topic should be in the Algebra forum. Though, because I'm taking this from a Precalculus class I wasn't entirely sure whether to place it
Find the inverse of
$f(x) = \frac{1}{x-2}$
To my understanding I must find $f^{-1}$.
I have the answers, but I'm interested in knowing how to find the inverse:
$f^{-1}(x) = \;?$ (How can I find this?)
The $\circ$ stands for "composed with":
$f \circ f^{-1}$
$f(f^{-1}(x)) = \frac{1}{f^{-1}(x)-2}$
Re: The inverse of a one to one function
Re: The inverse of a one to one function
Quote (Plato): Note that 2 is not in the domain of $f$.
Try $g(x)=\frac{2x+1}{x}$.
Hello Plato,
I remember you are always actively helping me. I dont know if I portray the following well when I ask for help. But I'm eager to learn the process, the concept.
I'm on a rush to school I will solve it as soon as I have a break. Thanks for replying ^_^
Re: The inverse of a one to one function
Re: The inverse of a one to one function
Let's see if I can get it right this time. I forgot some of the rules for multiplying fractions with variables; fear not, there are many resources.
$x = \frac{1}{y-2}$
$x(y-2) = 1$
$xy - 2x = 1$
$xy = 1 + 2x$
$y = \frac{1+2x}{x}$
Wouldn't the two x cancel out?
My best answer is because we are adding 1 to it and therefore the values will change
Re: The inverse of a one to one function
When proving that
$f \circ f^{-1}$ and $f^{-1} \circ f$ both give back $x$, I had a little trouble.
$f(f^{-1}(x)) = \frac{1}{\frac{1+2x}{x}-2} = \frac{1}{\frac{1+2x-2x}{x}} = \frac{1}{\frac{1}{x}} = x$
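The round trip can also be checked numerically (an editorial sketch; the sample points are chosen to avoid x = 0 and x = 2, which are outside the domains):

```python
# Numeric check that g(x) = (1 + 2x)/x really inverts f(x) = 1/(x - 2)
def f(x):
    return 1 / (x - 2)

def g(x):
    return (1 + 2 * x) / x

samples = [-5.0, -1.0, 0.5, 3.0, 10.0]
round_trips_fg = [f(g(x)) for x in samples]   # should return each x
round_trips_gf = [g(f(x)) for x in samples]   # should return each x
```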
Re: The inverse of a one to one function
What is $\frac{1+ 2(3)}{3}$? Is that the same thing as $\frac{1+2}{1}$?
My best answer is because we are adding 1 to it and therefore the values will change
I'm not sure I can make sense out of that sentence! Your best answer to what question? The question "Wouldn't the two x cancel out?" requires a "yes" or "no" answer, not "because".
In any case, the only time something in the numerator and denominator cancels is when it is multiplied by the rest of the numerator and denominator: you can cancel the "x"s in $\frac{x(a+ b)}{x(c+ d)}$ but not in $\frac{x+ b}{x(c+ d)}$ or $\frac{x+ b}{x+ d}$.
Bounding derivative of a function
Consider $a(t)\in\mathbf{L}^{2}(\mathbb{R})$ with $a(t)>0$, a low-pass smooth function with $\hat{a}(f)=0$ for $|f|>f_{max}$. Can we find an upper bound on $\Big|\frac{a'(t)}{a(t)}\Big|$?
Using Bernstein's theorem we can upper-bound $|a'(t)|$ alone based on $f_{max}$, but how can we upper-bound the ratio mentioned here? Any suggestions are welcome.
mp.mathematical-physics fa.functional-analysis
Please edit your question for English and math. What is $a(t)$? is this the same as $x(t)$? What sort of estimate do you want? $x(t)$ can be zero at some points where $x'(t)/x(t)=\infty$. –
Alexandre Eremenko Nov 28 '12 at 22:42
Thanks Alexandre, I have corrected the notation and the question. – Neeks Nov 28 '12 at 22:51
You did not tell me what sort of estimate you want. There is no uniform estimate, of course: $a$ can have complex zeros as close as you wish to the real line, condition $a>0$ does not help, and $a'
/a$ can be arbitrarily large at some points. – Alexandre Eremenko Nov 29 '12 at 2:07
In the case I am dealing with, only real zeros of $a(t)$ are of interest. But you brought out an interesting possibility. Now taking a simple example, $a(t)=1+\mu\sin\omega t$ with $0<\mu<1$, we have $\Big|\frac{a'(t)}{a(t)}\Big|=\frac{\mu\omega|\cos\omega t|}{1+\mu\sin\omega t}\leq\frac{\mu\omega}{\sqrt{1-\mu^2}}, \forall t$. So in this simple example it can be bounded. Can we have a bound for a sum of harmonic sinusoids and generalize it? (Polynomials are entire functions too, but they are not band-limited, so this does not apply to them.) Please correct me. – Neeks Nov 29 '12 at 6:24
$a(t)=\cos(t+i\epsilon)\cos(t-i\epsilon)$ is a low-pass signal and $a'/a$ can be as large as you wish at the point $t=\pi/2$, if $\epsilon$ is sufficiently small.
Please see my above comment. – Neeks Nov 29 '12 at 6:26
1. What "real zeros" are you talking about if one of your conditions is that $a(t)>0$ ? 2. Example which I gave shows that in general there is NO upper bound for $a'/a$ under your
conditions. What else are you asking? – Alexandre Eremenko Nov 29 '12 at 23:38
Sorry for the delayed reply; I had missed the comment. Under what further conditions on $a(t)$ can we have an upper bound on the quantity mentioned? For example, in the case I gave we have a bound dependent on $\omega$. – Neeks Dec 5 '12 at 17:23
Lacey Math Tutor
Find a Lacey Math Tutor
...I also tutored a student in Algebra 2 who received A's on every test following my instruction. I enjoy working one on one with students, whether helping them with homework or preparing for an
exam. I am willing to create practice tests for students to ensure their success.
19 Subjects: including calculus, reading, statistics, ACT Math
...During my biology requirements all my electives were in ecology class. I have tutored ecology in the past as well as teaching this subject in the high school setting preparing students for
assessments and everyday class material. I have a BS in chemistry and while in college I passed all organic chemistry classes with high marks.
16 Subjects: including algebra 1, prealgebra, chemistry, biology
...Anna, I have a PhD in infectious diseases & immunity from UC Berkeley and currently work as an infectious disease researcher at the Seattle BioMedical Research Institute. Despite pursuing an
active career in biomedical research, I have continued to develop my skills in teaching biology (microbio...
25 Subjects: including algebra 2, ACT Math, prealgebra, precalculus
...In addition to my academic skills I also have worked in a machine shop and have glass blowing experience. I have taken a full year of AutoCAD Courses and I am familiar with AutoCAD 2010 and
Inventor 2010. I am a novel thinker with ability to think outside the box, and I am looking to utilize my education laboratory experience and my work ethic.
19 Subjects: including algebra 1, algebra 2, biology, calculus
Looking for a tutor? I am an experienced certificated K-12 teacher in the areas of history and social studies as well as K-8 teacher in most subjects including: history, social studies, math,
English/Language Arts/grammar, reading, writing, science, spelling, vocabulary, penmanship, and I have also...
19 Subjects: including prealgebra, reading, geometry, writing
An orchestra consists of 25 members. The youngest member is 32 years old. She leaves and is replaced by a new member who is 30 years old. By how much does the replacement change the average age of
the members of the orchestra?
How did you get this answer?
Can someone explain this?
I don't know the answer.
Let A = old average and B = new average.
The old average was found by dividing the sum of the ages S by 25: S/25 = A.
The new average is found by subtracting 32 from that sum S and adding in 30. Then you still divide by 25 (since you still have 25 people):
(S - 32 + 30)/25 = B
(S - 2)/25 = B
S/25 - 2/25 = B
A - 2/25 = B
B = A - 0.08
So B = A - 0.08, which shows us that the average age decreased by 0.08 years.
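The arithmetic can be condensed: replacing one member changes the average by (new age - old age)/n. A quick check (an editorial sketch):

```python
# Replacing a 32-year-old with a 30-year-old in a 25-member orchestra
n = 25
change = (30 - 32) / n   # -0.08: the average age drops by 0.08 years
```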
Limit Proofs
Date: 08/23/98 at 15:39:41
From: Amanda Donovan
Subject: Calculus III
My problem is that I quite simply don't understand where all things
come from when doing a proof for a limit. My professor writes things on
the board, such as the statement:
If |f(x) - L| < E, then 0 < |x-a| < delta ....
I don't understand the process. Is there a systematic way to do these proofs?
The specific question I'm working on is: show that the limit as x
approaches 0 of 1/x-1 = -1. I come out with a number and no delta in
my answer.
Any help you can give will be greatly appreciated.
Amanda Donovan
Date: 08/24/98 at 08:04:54
From: Doctor Jerry
Subject: Re: Calculus III
Hi Amanda,
First, you must mean 1/(x-1), right?
I'll give an argument to show that the limit of 1/(x-1) as x->0 is -1,
using the definition of limit you outlined above.
We want to force |1/(x-1) - (-1)| < E by controlling the size of |x-0|.
Let's work with |1/(x-1) + 1| < E, in an attempt to get |x-0|:
|1/(x-1) - (-1)| = |1/(x-1) + 1| = |x|/|x-1|
This will be small when |x| is small, provided we can control 1/|x-1|.
Let's first decide to make delta < 1/2. If this is true, then when
|x| < delta, -1/2 < x < 1/2. Thus, x is no closer to 1 than 1/2, that
is, |x-1| > 1/2. Draw a small figure if this isn't clear. If
|x-1| > 1/2, then 1/|x-1| < 2. Now back to |x|/|x-1|. Since
1/|x-1| < 2, we can say that:
|1/(x-1) - (-1)| = |1/(x-1) + 1| = |x|/|x-1| < 2|x|
To make this less than E, we need 2|x| < E, which means that
|x| < E/2.
So, we choose delta this way:
For any E > 0, let delta be the smaller of E/2 and 1/2.
Now, all of the above arguments work.
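A numeric sanity check of this choice of delta (an editorial sketch, not part of the original exchange):

```python
# For each E, delta = min(E/2, 1/2) should force |1/(x-1) + 1| < E
# whenever 0 < |x| < delta.
def delta_works(E, trials=5000):
    delta = min(E / 2, 0.5)
    for i in range(1, trials):
        for x in (delta * i / trials, -delta * i / trials):
            if abs(1 / (x - 1) + 1) >= E:
                return False
    return True

all_ok = all(delta_works(E) for E in (0.5, 0.1, 0.01))
```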
- Doctor Jerry, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Talk Tennis - View Single Post - The physics of a dropweight tensioner
Originally Posted by
I was just using my brand new drop-weight machine and admiring the design. Due to the design, there is very little tension variance between the bar being horizontal and a little off horizontal. As the OP pointed out, the tension is a function of the cosine of the angle off horizontal. But it's useful to note the small-angle approximation for cosine: 1 - ((x/57.3)^2)/2, where x is measured in degrees and 57.3 is approximately the number of degrees per radian. Because the angle off horizontal is divided by 57.3 and then squared, for small angles the cosine is relatively invariant to the angle (you're on the flat top of the cosine curve).
Approximate tension deviation (in percent) for error off horizontal (whether up or down):
1 degree: 0.015%
5 degrees: 0.38%
10 degrees: 1.52%
A 5 degree tilt is extremely noticeable because one end of a 2 foot bar would be about 2 inches higher or lower than the other at 5 degrees (and about 4 inches at 10 degrees). The drop weight is a
simple and elegant design for achieving a precise tension.
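The figures in the table can be reproduced in a couple of lines (a sketch, using the exact cosine rather than the series approximation):

```python
import math

# Percent tension deviation for a bar tilted deg degrees off horizontal:
# tension scales with cos(deg), so the fractional deviation is 1 - cos(deg).
def tension_deviation_pct(deg):
    return (1 - math.cos(math.radians(deg))) * 100

for deg in (1, 5, 10):
    print(f"{deg:2d} degrees: {tension_deviation_pct(deg):.3f}%")
```

This reproduces the 0.015% / 0.38% / 1.52% figures quoted above; the small-angle series 1-((x/57.3)^2)/2 agrees to the digits shown.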
danno... saw an earlier post from you that you play out of the MAC... What age and rating?? I used to live in East Lansing. | {"url":"http://tt.tennis-warehouse.com/showpost.php?p=5277716&postcount=49","timestamp":"2014-04-16T17:25:48Z","content_type":null,"content_length":"18290","record_id":"<urn:uuid:1772c039-b9ef-4539-8bde-4ea3aca83811>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> Path analyses vs separate regressions
Path analyses vs separate regressions
gibbon lab posted on Tuesday, December 04, 2012 - 8:34 pm
Dear Professor,
I was just wondering what the difference is between a path analysis and a set of separate regressions. I tried a simulation to understand the difference.
First, I generated three variables x, y and z from a multivariate normal distribution with 1000 observations. Then I fit two separate simple regression models y=a+b*x+epsilon, z=alpha+beta*y+eta.
After that I ran a path analysis in Mplus specifying x->y->z. I noticed that I actually got the same results for estimating a,b,alpha and beta in the path analysis. However, if I specify "y with z"
in mplus, I get different results in the path analysis for estimating alpha and beta.
I understand that the command "y with z" correlates epsilon and eta, which cannot be done in separate regressions. That is why I got different results.
My question: is correlating different residuals the only difference between a path analysis and seperate regression models? Do you recommend any references if I want to know more about this topic?
Thanks a lot.
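[Editor's sketch of the simulation described above, done outside Mplus with plain least squares; the coefficient values a=1, b=2, alpha=0.5, beta=1.5 are illustrative. With independent residuals, the two separate fits estimate the same parameters a recursive path model x -> y -> z would report; only allowing the residuals of y and z to correlate ("y with z") makes the estimates diverge.]

```python
import random

random.seed(1)

def ols(xs, ys):
    # Least squares fit of ys ~ a + b*xs; returns (a, b).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / \
        sum((xi - mx) ** 2 for xi in xs)
    return my - b * mx, b

n = 100_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 + 2.0 * xi + random.gauss(0, 1) for xi in x]  # y = a + b*x + epsilon
z = [0.5 + 1.5 * yi + random.gauss(0, 1) for yi in y]  # z = alpha + beta*y + eta

a, b = ols(x, y)
alpha, beta = ols(y, z)
# With independent residuals, these two separate fits estimate exactly the
# parameters a recursive path model x -> y -> z (no "y with z") would report.
print(round(a, 2), round(b, 2), round(alpha, 2), round(beta, 2))
```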
Linda K. Muthen posted on Wednesday, December 05, 2012 - 11:01 am
A good book is Introduction to Structural Equation models by Otis Dudley Duncan.
Back to top | {"url":"http://www.statmodel.com/discussion/messages/11/11299.html?1354734113","timestamp":"2014-04-19T02:20:44Z","content_type":null,"content_length":"18066","record_id":"<urn:uuid:28f5b1e9-922d-4645-83cf-5d75411bff55>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contemporary Mathematics
2000; 569 pp; softcover
Volume: 259
ISBN-10: 0-8218-1950-X
ISBN-13: 978-0-8218-1950-0
List Price: US$126
Member Price: US$100.80
Order Code: CONM/259
Among all areas of mathematics, algebra is one of the best suited to find applications within the frame of our booming technological society. The thirty-eight articles in this volume encompass the
proceedings of the International Conference on Algebra and Its Applications (Athens, OH, 1999), which explored the applications and interplay among the disciplines of ring theory, linear algebra, and
coding theory.
The presentations collected here reflect the dialogue between mathematicians involved in theoretical aspects of algebra and mathematicians involved in solving problems where state-of-the-art research
tools may be used and applied.
This Contemporary Mathematics series volume communicates the potential for collaboration among those interested in exploring the wealth of applications for abstract algebra in fields such as
information and coding. The expository papers would serve well as supplemental reading in graduate seminars.
Advanced graduate students and researchers in ring theory, linear algebra, and algebraic coding theory.
• G. Abrams and J. J. Simón -- Isomorphisms between infinite matrix rings: A survey
• T. Albu and M. L. Teply -- The double infinite chain condition and generalized deviations of posets and modules
• R. B. Bapat and D. M. Kulkarni -- Minors of some matrices associated with a tree
• G. F. Birkenmeier, J. Y. Kim, and J. K. Park -- On quasi-Baer rings
• M. Brešar -- Functional identities: A survey
• G. Brookfield -- The Grothendieck group and the extensional structure of Noetherian module categories
• J. Dauns and Y. Zhou -- Some non-classical finiteness conditions of modules
• G. D'Este -- Free modules obtained by means of infinite direct products
• E. E. Enochs and O. M. G. Jenda -- Gorenstein injective, projective, and flat dimensions over Cohen-Macaulay rings
• A. Facchini and D. Herbera -- Projective modules over semilocal rings
• S. M. Fallat and C. R. Johnson -- Determinantal inequalities: Ancient history and recent advances
• K. R. Fuller -- Ring extensions and duality
• J. L. García and L. Marín -- Some properties of tensor-idempotent rings
• K. R. Goodearl and J. T. Stafford -- The graded version of Goldie's theorem
• M. Greferath -- On Artinian and Noetherian projective lattice geometries
• B. Huisgen-Zimmermann -- The phantom menace in representation theory
• L. Kadison and A. A. Stolin -- Separability and Hopf algebras
• P. Kanwar -- Quadratic residue codes over the integers modulo \(q^m\)
• D. Keskin -- Characterizations of right perfect rings by \(\oplus\)-supplemented modules
• P. Körtesi and J. Szigeti -- The adjacency matrix of a directed graph over the Grassmann algebra
• L. A. Kurdachenko and I. Ya. Subbotin -- On Artinian modules over hyperfinite groups
• T. Y. Lam and A. Leroy -- Principal one-sided ideals in Ore polynomial rings
• L. S. Levy -- Modules over hereditary Noetherian prime rings (Survey)
• A. Li -- Prime elements of birational extensions of a Noetherian UFD
• C. Lomp -- On the splitting of the dual Goldie torsion theory
• J. J. McDonald and M. Neumann -- The Soules approach to the inverse eigenvalue problem for nonnegative symmetric matrices of order \(n \leq 5\)
• C. J. Moreno -- Harmonic analysis on finite rings and applications
• B. L. Osofsky -- A lattice invariant for modules, II
• A. Özcan -- Modules having \(^*\)-radical
• C. J. Pappacena -- The "generalized class group" of a left Noetherian ring
• J. L. Gómez Pardo and P. A. Guil Asensio -- Indecomposable decompositions of \(\aleph\)-\(\Sigma\)-CS-modules
• L. H. Rowen and Y. Segev -- The multiplicative group of a division algebra of degree 5 and Wedderburn's factorization theorem
• C. Santa-Clara and P. F. Smith -- Modules which are self-injective relative to closed submodules
• I. Siap and D. K. Ray-Chaudhuri -- On \(r\)-fold complete weight enumerators of \(r\) linear codes
• A. I. Singh and M. A. Swardson -- Levels of quotient rings of rings of continuous functions
• P. F. Smith -- Commutative domains whose finitely generated projective modules have an injectivity property
• R. Wisbauer -- Decompositions of modules and comodules
• J. M. Zelmanowitz -- Density for polyform modules | {"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-259","timestamp":"2014-04-19T15:03:12Z","content_type":null,"content_length":"18485","record_id":"<urn:uuid:4f87138c-1fbb-4324-a6a6-175168d39082>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Efficient conditional preparation of single photons for quantum-optical networks
Recent progress in quantum information processing highlights the fundamental necessity for a reliable single photon source that exhibits a well-defined single-mode character and well-defined photon
numbers. A very high degree of photon-number correlation is vital for linear optical quantum computation [Knill2001] and quantum communication [Browne2003] including networking via quantum repeaters;
lack thereof implies the need for post-selection, which precludes scaling. Moreover, both aspects of quantum information processing can be enhanced by exploiting the "non-Gaussian" photon statistics
associated with states containing higher photon numbers.
The generation of single photons on demand can be accomplished by several means. Single emitter sources typically provide emission on demand, but into modes that are not well-matched to detectors,
and not in pure states. Consequently the photons are detected randomly. On the other hand, heralded sources based, say, on parametric downconversion, have random emission events, but the heralded
photon can be detected with high probability.
We have developed a novel source of conditionally-prepared single photons based on PDC that overcomes a number of barriers to pure single photon wavepacket generation.[Uren2004] First, waveguiding
leads to emission in well-defined modes, which significantly enhances photon collection efficiencies, leads to higher generation rates due to mode confinement, removes space-time coupling and allows
high-visibility interference between single photons from distinct sources. The photons are easily separable, and have precise timing, paving the road towards the concatenation of multiple waveguides
for quantum networking. We have demonstrated a conditional detection efficiency of 51.5% at a brightness of 0.85 million coincidences/(s mW).
Another important technology for LOQC and state preparation is a photon number resolving detector. We have implemented a time multiplexed detector (TMD) capable of photon number resolution
[Achilles2003], built from standard optical elements, and which allows both source verification and higher-order conditional state preparation. We have used this device to prove the nonclassical
nature of the heralded single photon source.
[Uren2004] A.B. U'Ren et al., quant-ph/0312118, to appear in Phys. Rev. Lett.
[Knill2001] E. Knill et al., Nature 409, 46 (2001)
[Browne2003] D.E. Browne et al., Phys. Rev. A 67, 062320 (2003)
[Achilles2003] D. Achilles et al., Opt. Lett. 28, 2387 (2003) | {"url":"http://www.newton.ac.uk/programmes/QIS/Abstract2/walmsley.html","timestamp":"2014-04-19T04:37:38Z","content_type":null,"content_length":"5209","record_id":"<urn:uuid:028be21d-4cf9-4af8-b1f1-151eb55069ee>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Portability portable
Stability experimental
Maintainer Aleksey Khudyakov <alexey.skladnoy@gmail.com>
Different implementations of approximate equality for floating point values. There are multiple ways to implement approximate equality. They have different semantics, and it's up to the programmer to
choose the right one.
:: (Fractional a, Ord a)
=> a      -- ^ Relative precision
-> a
-> a
-> Bool
Relative difference between two numbers are less than predefined value. For example 1 is approximately equal to 1.0001 with 1e-4 precision. Same is true for 10000 and 10001.
This method of comparison doesn't work for numbers which are approximately 0; eqAbsolute should be used instead.
:: (RealFloat a, Ord a)
=> a      -- ^ Relative precision
-> Complex a
-> Complex a
-> Bool
Relative equality for complex numbers.
:: (Num a, Ord a)
=> a      -- ^ Absolute precision
-> a
-> a
-> Bool
Difference between values is less than specified precision.
:: Int    -- ^ Number of ULPs of accuracy desired.
-> Double
-> Double
-> Bool
Compare two Double values for approximate equality, using Dawson's method.
The required accuracy is specified in ULPs (units of least precision). If the two numbers differ by the given number of ULPs or less, this function returns True.
Algorithm is based on Bruce Dawson's "Comparing floating point numbers": http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm | {"url":"http://hackage.haskell.org/package/numeric-tools-0.2.0.0/docs/Numeric-ApproxEq.html","timestamp":"2014-04-18T02:09:17Z","content_type":null,"content_length":"8547","record_id":"<urn:uuid:84d86e87-022f-47a8-b611-598bca2b4d15>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00159-ip-10-147-4-33.ec2.internal.warc.gz"} |
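The three comparison styles documented in this module can be sketched in Python (an illustrative translation, not the package's code; the function names here are mine):

```python
import struct

def eq_relative(eps, a, b):
    # Relative difference: 1 ~ 1.0001 at 1e-4 precision, likewise 10000 ~ 10001.
    return abs(a - b) <= eps * max(abs(a), abs(b))

def eq_absolute(eps, a, b):
    # Absolute difference: the right tool when values are near zero.
    return abs(a - b) <= eps

def ulp_distance(a, b):
    # Dawson's trick: reinterpret IEEE-754 doubles as integers ordered
    # lexicographically, so adjacent floats differ by exactly 1.
    def key(x):
        i = struct.unpack("<q", struct.pack("<d", x))[0]
        return i if i >= 0 else -(i & 0x7FFFFFFFFFFFFFFF)
    return abs(key(a) - key(b))

def within(ulps, a, b):
    # True when a and b differ by at most the given number of ULPs.
    return ulp_distance(a, b) <= ulps
```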
JOHANNES KEPLER
Johannes Kepler (1571-1630)
Johannes Kepler was born on December 27, 1571, in Weil der Stadt in Swabia, a wine region in south west Germany not far from France. His paternal grandfather, Sebald Kepler, was a respected craftsman
who served as mayor of the city; his maternal grandfather Melchior Guldenmann, was an innkeeper and mayor of the nearby village of Eltingen. His father, Heinrich Kepler, was "an immoral, rough and
quarrelsome soldier," according to Kepler, and he described his mother in similar unflattering terms.
As a 7-month-old child (Kepler was sickly from birth) he contracted smallpox. His vision was severely defective, and he suffered various other illnesses throughout childhood.
From 1574 to 1576 Johannes lived with his grandparents; in 1576 his parents moved to nearby Leonberg, where Johannes entered the Latin school. In 1584 he entered the Protestant seminary at Adelberg,
and in 1589 he began his university education at the Protestant University of Tübingen. At Tübingen he studied mainly theology and philosophy, but also mathematics and astronomy. At the university,
Kepler's exceptional intellectual abilities became apparent. Kepler's teacher in mathematical subjects was Michael Maestlin, whom Kepler admired greatly. Maestlin was one of the earliest astronomers
to believe in Copernicus's heliocentric theory, the theory that the sun, not the earth, was the center of the universe and that all planets, including the earth, orbited it, although he did not teach
it to his students because Martin Luther denounced it and he would lose his job if he did. He was forced to teach the Ptolemaic system, or the geocentric theory which said that the earth was the
center of the universe and that everything orbited the earth. This was named after Ptolemy, the man who first arrived at this conclusion. Maestlin, however, was able to influence some of his students
to subscribe to the Copernican system, among them was Kepler.
After graduation, Kepler was offered a professorship of astronomy in faraway Graz (in the Austrian province in Styria), where he went in 1594. One of his duties of this professorship was to make
astrological predictions. Despite earlier failures at predictions, he predicted a cold winter, and an invasion by the Turks. Both predictions turned out to be correct, he was treated with new
respect, and his salary was raised.
While lecturing to his math class in Graz, contemplating some geometric figure involving concentric (having a common center) circles and triangles on the blackboard, Kepler suddenly realized that the
figures of the type shown (Illus. 3-1) determined a definite fixed ratio between the sizes of the two circles, provided the triangle has all sides equal, and a different ratio of sizes will occur for
a square between the two circles, another for a regular (having all sides equal) pentagon, and so on.
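The fixed ratio for each figure is easy to compute: for a regular n-sided polygon placed between two concentric circles as described, the outer (circumscribed) radius exceeds the inner (inscribed) radius by a factor of 1/cos(pi/n). A quick illustration (not from the original text):

```python
import math

# Ratio of circumscribed to inscribed circle radius for a regular n-gon.
def radius_ratio(n):
    return 1 / math.cos(math.pi / n)

for n, name in [(3, "triangle"), (4, "square"), (5, "pentagon")]:
    print(f"{name}: R/r = {radius_ratio(n):.3f}")
```

So the equilateral triangle forces the outer circle to be exactly twice the inner one, the square about 1.41 times, and the pentagon about 1.24 times.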
He thought this might be the key to the Solar System. He truly believed in the Copernican system, so he saw the planetary orbits as six concentric circles (only six planets had been discovered then),
meaning the planets revolve around the sun and have the sun as their common center but have different radii, or distances to the sun. Disappointingly, he found it just didn't work out---the ratios
were wrong. Then he had real inspiration. The universe was really three-dimensional, and instead of thinking of circles, he should be thinking about spheres, with the planetary orbits being along the
equators. Thinking in three dimensions, the analogue of the above diagram (Illus. 3-1) would be two concentric spheres with a tetrahedron, or a pyramid shaped four planed figure, between them, so
that the outer sphere passes through the vertices, or points, of the tetrahedron, and the inner sphere touches all its sides, but is completely contained in the tetrahedron. There were just six
planets, so five spaces between spheres, and there are just five regular solids. Thus, if the distances came out right, the theory provided a complete explanation in terms of a geometric model of why
there were just six planets, and why they are spaced as we find them. Actually, the distances still didn't come out right, especially for Jupiter, but Kepler was so sure of the rightness of his work,
that he blamed the discrepancies on errors in Copernicus' tables. He titled his work Mysterium Cosmographicum--the Mystery of the Universe (explained). The crucial illustration from his book (also
his model) is shown (Illus. 4-1), the outer sphere being the orbit of Saturn.
Except for Mercury, Kepler's construction produced remarkably accurate results. Because of his talent as a mathematician, displayed in his work and his book, Kepler was invited by the great Tycho
Brahe to Prague to become his assistant and calculate new orbits from Tycho's observations. Kepler moved to Prague in 1600.
Kepler and Brahe did not get along well. Brahe apparently mistrusted Kepler, fearing that his bright young assistant might eclipse him as the prominent astronomer of his day. He therefore only let
Kepler see part of his numerous data.
He set Kepler to the task of understanding the orbit of the planet Mars, which was particularly troublesome. It is believed that part of the reason for giving the Mars problem to Kepler was that it
was difficult, and Brahe hoped it would occupy Kepler while Brahe worked on his theory of the Solar System. Ironically, it was precisely the Martian data that allowed Kepler to formulate the correct
laws of planetary motion, thus eventually achieving a place in the development of astronomy far surpassing that of Brahe. When Brahe died in 1601, Kepler stole the data Brahe had been keeping from
him, and began to work with it.
Once Kepler had secured Tycho's data, he set himself to the task of once and for all determining the exact orbit of Mars. A preliminary analysis showed the orbit to be very close to a circle, a
radius about 142 million miles, but the sun was not at the center of the circle---it was at a point 13 million miles away from the center. Also, it was clear that Mars varied in speed as it went
around this orbit, moving fastest when it was closest to the sun (at perihelion) and slowest when it was furthest from the sun (aphelion). Everybody (including Kepler) believed that the motion of
planets must be a simple steady motion, or at least made up of simple steady motions, if only it were looked at in the right way. They wondered how the motion of Mars described above could be seen as
some kind of steady motion.
Actually, a possible solution to this problem had been given long before by Ptolemy. The method was to introduce another point, called the equant, on the line through the sun and the center of the
circular orbit, the equant being on the opposite side of the center from the sun.
The idea is to try to position this point so that the planet moves around the equant at a steady angular speed. This steady motion about the equant is somewhat believable, because the planet is
observed to be moving slowest when it is furthest from the sun, which is when it is closest to the equant, and vice versa, so if you imagine a spoke going out from the equant point to the planet and
sweeping around with the planet, maybe this spoke could be turning at a steady rate.
Ptolemy had shown that observations of the movement of Mars in its orbit were in fact well accounted for by a model of this sort, with the equant point the same distance from the center as the sun,
but on the other side (as in Illus. 5-1). Kepler, however, had far more accurate records of the movement of Mars, and he was interested in seeing if the model still held up under this closer
scrutiny. He found it didn't. Even adjusting independently the radius of the orbit, the distance of the sun from the center, and the distance of the equant from the center, he found the best
possible orbit of this type was still in error by eight minutes of arc (8/60 of a degree) in accounting for the observations. Such an error could not have been detected before Tycho's work. Kepler
knew Tycho's work was accurate to about one minute, and so the model had to be thrown out.
Having thrown out the equant model, though, it was difficult to see what to do next. The natural thought for an astronomer at that time would have been to add an epicycle, that is, to imagine Mars to
be going in a small circular orbit about a point which itself goes along the orbit shown above. But Kepler didn't like that approach. The whole business with cycles and epicycles was purely
descriptive---trying to account for the observed planetary motions with a suitable combination of circular motions. Kepler, in contrast, was trying to think dynamically, that is, to understand the
planetary motions somehow in terms of a force stemming from the sun sweeping them around in their orbits. Thinking in those terms, adding an epicycle looks unattractive--what force could be pushing
the planet around the small circle, which has nothing at its center?
Kepler realized that to get the kind of precision he needed in analyzing the orbit of Mars, he first needed to have a very accurate picture of the earth's orbit, since all measurements of Mars'
position were conducted from the earth. So to pin down Mars' position relative to the sun, it was necessary to know the earth's position relative to the sun to the required precision. He wondered how
he could pin down the earth's position in space accurately. This is like being in a boat some distance from shore. If you can see only one landmark, such as a lighthouse, and you have both a compass
and a map, that is not enough to really fix your position, because you cannot tell very accurately just how far away the lighthouse is. On the other hand, if you can see two landmarks in different
directions, and measure with your compass the exact directions they lie from your boat, that is enough to fix your position exactly without guessing about distances. You just take out your map, draw
lines through the two landmarks on the map in the direction your boat lies from each of them in turn, and the point where the two lines intersect on the map is your location. Essentially, this is
just Thales' method used in geometry-- the two landmarks form the base of a triangle, and we know the direction of the boat from the two ends of the base, so we can construct a triangle on this base
with the boat at the other vertex. Just knowing the angle between the two lines isn't enough, we have to know their individual directions relative to the base, which is what we can find with the map
and the compass.
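The two-landmark fix described here is a small linear-algebra exercise (a sketch; the function name and the convention that each bearing is the map direction of the boat as seen from that landmark are mine):

```python
import math

# Fix the boat's position from two known landmarks and the (map) direction
# in which the boat lies from each landmark -- the two-lighthouse idea.
def fix_position(p1, bearing1, p2, bearing2):
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # The boat lies at p1 + t*d1 and at p2 + s*d2; solve for t (Cramer's rule).
    det = -d1[0] * d2[1] + d2[0] * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (-rx * d2[1] + d2[0] * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# A boat at (3, 4) seen from landmarks at (0, 0) and (10, 0):
x, y = fix_position((0, 0), math.atan2(4, 3), (10, 0), math.atan2(4, -7))
```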
The idea was to use this same technique repeatedly to find the location of the earth, and thereby map out its orbit. The problem was, he needed two fixed lighthouses to form the base, and he only had
one, the sun. The fixed stars wouldn't do, they were infinitely far away and just play the role of the compass, giving a fixed direction. Kepler solved the problem of the second lighthouse by a very
clever trick. He used Mars. Of course, Mars is moving all the time, and the orbit of Mars is what he was trying to find, so this didn't seem to be a promising approach. But one thing Kepler did know
is that if Mars was in a certain location at a certain time, it would be in exactly that same place 687.1 days later. Kepler was able to use Brahe's volumes of data to find the exact direction of
Mars from the earth at a whole series of times at 687.1 day intervals. By finding the direction of Mars and that of the sun at those times, he had a steady Mars-sun base to use in constructing the
earth's orbit.
In contrast to the orbit of Mars, Kepler found the earth's orbit to be essentially a perfect circle. (It is actually off by about one part in 10,000.) However, the center of the circle is about 1.5
million miles away from the sun, and the speed of the earth in its orbit varies, being greatest at the closest approach to the sun. At the furthest point, the earth is 94.5 million miles from the
sun, and it is moving around its orbit at a speed of 18.2 miles per second. At the point of closest approach to the sun, the earth is 91.4 million miles from the sun, and moving around at a speed of
18.8 miles per second. Kepler noticed that there was an interesting relationship among these numbers. The ratio of speeds, 18.8/18.2= 1.03, is the inverse of the ratio of the corresponding distances,
91.4/94.5= 1/1.03. Kepler's interpretation of this was that the force he believed to be emanating from the sun, pushing the planets around, was weaker at the greater distance, and that was why the
earth was being pushed more slowly.
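The numerical claim is easy to verify (a quick check, using the figures quoted above):

```python
# At the extremes of the earth's orbit, the ratio of speeds is (nearly)
# the inverse of the ratio of distances, i.e. distance * speed is constant.
aphelion_dist, aphelion_speed = 94.5, 18.2      # million miles, miles/sec
perihelion_dist, perihelion_speed = 91.4, 18.8  # million miles, miles/sec

speed_ratio = perihelion_speed / aphelion_speed   # about 1.03
distance_ratio = aphelion_dist / perihelion_dist  # about 1.03
print(round(speed_ratio, 3), round(distance_ratio, 3))
```

The two ratios agree to about a tenth of a percent, which is the regularity that led Kepler toward his second law.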
This relationship between speeds and distances at the extreme points of the orbit enabled Kepler to develop his second law of planetary motion (described later). Along with his knowledge about the
orbit of the earth, he was able to give a sufficiently precise account of the earth's position in space as a function of time to be able to go on to the main business, the plotting of the more
interesting Martian orbit.
Kepler knew the orbit of Mars was not a circle. In fact, he had plotted it and found it to be an oval shape that could fit inside a circle, as shown here, and deviated from the circle by at most
0.00429 of the oval's half-breadth (half-width) MC, about one-half of one percent (see Illus. 10-1).
This means the ratio AC/MC=1.00429. Kepler's figure was constructed directly from Tycho's data. He also measured the angle CMS that Mars subtended on the baseline consisting of the sun and the center
of the orbit when the planet was in the position shown. The value of this angle was 5° 18' (5 degrees 18 minutes). He stumbled entirely by chance on the fact that the ratio of lengths SM/CM was the secant of this angle, 1.00429.
Kepler felt that this could not just be a coincidence--there must be a similar relationship between angle SMC and the distance from the sun at all points on the orbit. He found from data that this
was so, but was still unable to figure out what the curve must be. Still, he had stared at his plot long enough to believe that the curve was an ellipse with the sun at one focus; he then constructed
an ellipse by a different approach. At this point it dawned on him that his original analysis also led to an ellipse.
In fact, with hindsight, it is not difficult to show how the numerical coincidence Kepler encountered follows for an ellipse. Kepler found AC/MC=1.00429=MS/MC. This meant AC=MS.
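The "coincidence" is easy to reproduce from the numbers above (a quick check):

```python
import math

# The secant of the measured angle CMS = 5 degrees 18 minutes
# reproduces Kepler's ratio MS/MC = 1.00429.
angle = math.radians(5 + 18 / 60)
ms_over_mc = 1 / math.cos(angle)   # the secant of the angle
print(round(ms_over_mc, 5))        # 1.00429
```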
The analysis of the Martian orbit, with many wrong turns and dead ends, took Kepler six years and thousands of pages of calculations. It led to two simple laws. | {"url":"http://www.kidsastronomy.com/kepler.htm","timestamp":"2014-04-21T14:40:19Z","content_type":null,"content_length":"84392","record_id":"<urn:uuid:d9096ee8-719a-48a4-8f3f-39f201bfe892>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] chain rule with functions related by equations?
October 19th 2009, 03:43 PM
[SOLVED] chain rule with functions related by equations?
I just don't get this problem, I feel I'm missing something obvious.
Find $dg/du$ if
$1) x+y= uv$
$2) xy=u-v$
$x=g(u,v)$
$y=h(u,v)$
Poor attempt of solution:
I try to obtain an expression for x as a function of u and v, but I can't seem to be able to get anything from that system of equations.
From 1) I get $x= uv-y$ , but from 2) $y=(u-v)/x$
If I replace into 1) $x=uv-(u-v)/x$
I don't think I'm getting something from that
If $x=g(u,v)$ then
$g(u,v)=uv-y$
$g(u,v)=uv-h(u,v)$
Another thought from there: $dg/du=v-dh/du$
$dg/du=v-x$ , but $x=g(u,v)$
and I think I'm running in circles here.
October 20th 2009, 12:20 PM
What if I tried to obtain an F(x,y)=0 which I could then differentiate to get an expression for dg/du? The problem would be getting rid of u and v, but the truth is I suck at solving systems of
equations. Maybe adding and multiplying the equations...
Another idea would be that y could be defined implicitly as a function of x.
It would be like F(x,y)=F(x,f(x))=0, where x is defined by u and v and keeps holding the condition.
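[Editor's sketch of the approach that eventually resolves the thread: differentiate both given equations with respect to u, treating x = g(u,v) and y = h(u,v), then solve the resulting 2x2 linear system. Variable names and the numeric spot check are mine.]

```python
# Differentiating x + y = u*v and x*y = u - v with respect to u
# (with x = g(u,v), y = h(u,v)) gives the linear system
#   g_u + h_u = v
#   y*g_u + x*h_u = 1
# whose solution for g_u is (x*v - 1)/(x - y).
def dg_du(x, y, v):
    return (x * v - 1) / (x - y)

# Spot check with a finite difference: for given (u, v), x and y are roots
# of t^2 - (u*v)*t + (u - v) = 0, since their sum and product are known.
def solve_xy(u, v):
    s, p = u * v, u - v
    disc = (s * s - 4 * p) ** 0.5
    return (s + disc) / 2, (s - disc) / 2   # take x as the larger root

u, v, h = 3.0, 1.0, 1e-6
x, y = solve_xy(u, v)
x2, _ = solve_xy(u + h, v)
assert abs((x2 - x) / h - dg_du(x, y, v)) < 1e-4
```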
October 21st 2009, 07:51 AM
I finally got the resolution for this exercise: differentiate both equations with respect to u, then, having a system of equations, solve for dg/du. | {"url":"http://mathhelpforum.com/calculus/109081-solved-chain-rule-functions-related-equations-print.html","timestamp":"2014-04-18T09:15:03Z","content_type":null,"content_length":"8218","record_id":"<urn:uuid:3e27a196-8189-4361-8b32-27cfba685a85>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
SUMIF between two dates (or a specific Month & Year)
As I wrote, my formula assumes you have a true date in A10.
When you enter, for example, 4/2007, Excel will parse that into 1 Apr 2007. By
the way, if you just enter 4/07, and you are using US Regional Settings
(control panel stuff), Excel will parse that as 7 Apr 2007, so you might want
to be careful how you enter a date.
In any event, with any date of a month in A10, the formula =A10-DAY(A10) will
always give a date that is the last day of the preceding month.
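The same trick translated to Python, for comparison (a sketch):

```python
from datetime import date, timedelta

# A10 - DAY(A10): subtracting the day-of-month number from any date
# always lands on the last day of the preceding month.
def last_day_of_preceding_month(d):
    return d - timedelta(days=d.day)

print(last_day_of_preceding_month(date(2007, 4, 15)))  # 2007-03-31
```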
So the criteria argument ">"&A10-DAY(A10) will evaluate, in words, to "any date
that is greater than the last day of the preceding month".
If the date were, indeed, text, the formula would not work as written. | {"url":"http://www.excel-answers.com/microsoft/Excel/30476752/sumif-between-two-dates-or-a-specific-month--year.aspx","timestamp":"2014-04-16T22:28:22Z","content_type":null,"content_length":"12193","record_id":"<urn:uuid:58d6e4ae-89f8-4460-bca1-20a72e7ff806>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nicolas T. Courtois - UCL Home Page
Dr Nicolas T. Courtois
Contact Details:
Computer Science Room 7.06a.
Malet Place Engineering Building
University College London
Gower Street
London WC1E 6BT
Tel: +44 20 7679 3713
Fax: +44 20 7387 1397
Mobile/text: +44 789 4334 773
Email: Initial.FamilyName (ATsign) ucl.ac.countrycode [My PGP key]
I have been lecturing at University College London since 2006. At UCL we have a specialist M.Sc. programme in Information Security.
Currently I teach the Applied Cryptography course. I also run a student Smart Cards Lab. Previously I taught Computer Security.
[Publications @DBLP] [In UCL Database] [ @Personal Page]
The "Courtois Dark Side" attack on MiFare Classic (see slides and paper) is more than 10 times faster than the best attack in this category, by the Dutch university of Nijmegen, and does not require a
costly pre-computation. In practice the best known attack on MiFare Classic is obtained by combining this "Dark Side" attack, to recover one key, with the Nijmegen "Nested Authentication" attack, to
efficiently recover more keys. Here is a DETAILED explanation of how to recover the cryptographic keys of MiFare Classic cards at home with the ACR122 reader: do it yourself: hacking
MiFare Classic cards. It works, for example, for all London Oyster cards issued before December 2009 and for about 70% of access cards used in buildings around the world. To learn more about the practical
feasibility and impact, see also this paper from 2013 and these slides. Many companies actually use the same cryptographic keys in every card, so that once the keys for one card are recovered, all the
other cards can be read and written.
Attacks on KeeLoq and car locks]
Experimental algebraic attacks on ciphers] [Tools for algebraic cryptanalysis] [Hard probems]
Research Interests - Cryptology:
• Computational cryptanalysis of symmetric and asymmetric ciphers.
□ Algebraic Attacks: recover the secret key of a cipher by solving a very large system of multivariate equations over small finite fields.
☆ Special properties that make systems efficiently solvable (e.g. sparsity).
☆ Conversion and solving algebraic equations with SAT solvers.
☆ Computing Gröbner bases and designing simpler and frequently much better/faster algorithms: Gröbner bases require a fixed polynomial ordering. In many real-life cryptanalysis problems
this is a VERY bad idea, and better results are obtained with ad-hoc elimination algorithms which optimize sparsity, such as ElimLin and its practical implementations.
□ Design and feasibility of algebraic attacks: for example some stream ciphers will be broken if a certain multivariate polynomial equation exists (sometimes finding one such equation is
sufficient to break the cipher!). Cryptanalysis of some block ciphers greatly depends on whether they can be written in a certain way.
☆ Define what kind of equations are useful/interesting. Find out if such equations exist, prove they exist (or not), compute these equations.
□ Experimental algebraic cryptanalysis.
☆ Automation of symmetric cryptanalysis. Finding special properties of ciphers.
☆ Implementation of algebraic attacks. Manipulating very large systems of multivariate equations. Fast linear algebra, in particular when RAM is scarce. Specialised memory management,
parallel computing, use of specialised hardware.
□ Number theory and lattices.
□ Side channel attacks on smart cards.
• Post-quantum cryptography and very efficient public key schemes for special needs:
□ Very short digital signatures (that can be transmitted and verified with human interaction). Unforgeability and third-party verifiable authenticity of paper documents (bank notes, cheques,
ID cards, electronic airline tickets, etc.).
□ Very fast digital signatures (much faster than RSA) for low-cost devices.
Research Interests - Information Security:
• Markets and Information Security:
□ Security in complex commercial systems. For example electronic bank cards + terminals + back-end applications + supporting infrastructure + user adoption + usability + legal and regulatory
drivers + economics + fraud + crime science + moral and ethical considerations. Compliance.
□ Smart cards and smart card protocols.
□ Proprietary cryptography.
□ Crypto currencies.
□ Economics of security and economics of insecurity, insurance, prices, bets and future markets in information security.
□ Risk management. Fraud in financial markets and financial institutions. Data security and compliance in financial institutions.
Last update 23/02/2010
Conductors and insulators
Conduction mechanisms
Conduction of electricity in materials is by means of ‘charge carriers’, of which there are three types:
• The best-known example is the electron, with a negative charge of 0.16 × 10^-18 C. Electron conduction is the mechanism seen in metals, which have an ‘electron cloud’
• A concept that is less easy to understand is the lack of an electron in an electron cloud, which is referred to as a ‘hole’. Hole conduction is very important in semiconductors. Each missing
electron is equivalent to a positive charge of 0.16 × 10^-18 C.
• In ionic materials, the ions can take part in conduction. Each ion will be associated with one or more charges of 0.16 × 10^-18 C: ‘anions’ carry negative charge; ‘cations’ carry positive charge.
The conductivity of a material depends on three factors:
• How many charge carriers^1 there are
• How much charge each carries
• How mobile the charge carriers are, which will depend on the strength of the local electric field and the structure of the material.
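The three factors above combine multiplicatively: conductivity σ = n·q·μ, where n is the carrier density, q the charge per carrier, and μ the mobility. As an illustrative sketch, the standard relation can be evaluated for a metal; the numeric values below are approximate textbook figures for copper, not taken from this module:

```python
# Conductivity from carrier density, carrier charge and mobility: sigma = n * q * mu
n = 8.5e28      # free electrons per m^3 in copper (approximate)
q = 0.16e-18    # charge per electron in coulombs (as quoted in the text)
mu = 4.3e-3     # electron mobility in m^2/(V*s) (approximate)

sigma = n * q * mu                  # conductivity in siemens per metre
print(f"sigma = {sigma:.2e} S/m")   # close to the familiar ~6e7 S/m for copper
```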
It is clear from Figure 1 that, as regards conductivity, different types of material fall into three radically different categories:
Figure 1: Electrical conductivity ranges for typical materials
These three groups of materials, with their vastly different properties, are very important in electronics. Whilst detailed consideration of the physics^2 behind their conductivity behaviour is
beyond the scope and needs of this module, a simple summary is that:
• Conductors are metallic materials with loosely attached valence electrons, which can drift freely between the atoms.
• Insulators have structures in which all the electrons are tightly bound to atoms by ionic or covalent bonds, so that almost no current flows.
• Semiconductors are a class of insulating materials where little energy is needed for the bonds to be broken. This can be supplied by a small applied voltage, releasing valence electrons and
creating conductivity as electrons move from one vacated valence site to another.
With semiconductor materials, the intrinsic conductivity is altered greatly by the presence of very small amounts (parts per thousand million) of foreign atoms. During the semiconductor ‘doping’ process, impurities are added intentionally, either to make more electrons available for conduction (creating an ‘n-type’ semiconductor) or to create holes into which electrons can move (a ‘p-type’ semiconductor).
Current density
Without an applied electric field, the charge carriers move randomly, at a rate dictated by the temperature. However, when an electric field is applied to a conductor, a gradual drift is superimposed
on this random movement.
Figure 2: Current flow in a conductor
Figure 2 is a simplification of the situation in a conductor – to start with, there is no indication of any random movement. The current flowing is the rate of movement of charge, so, if there are n free charge carriers in unit volume, the amount of charge moved through a cross section of area A in the conductor during time t is given by the equation

Q = nAvtq

q = charge of the charge carrier
v = drift velocity^3

Since the current is I = Q/t, this equation is often converted into the form

J = I/A = nqv

where J is called the ‘current density’.
This current density concept is very important for the designer, because there are limitations on the achievable current density – what may seem a small current in a cable is an extremely high
current density in a fine track. You know what happens to fuses!
^3 Metals have very high numbers of free electrons, so drift velocities are actually very small, of the order of microns (µm) per second.
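To see why a modest current can mean a very high current density in a fine track, the calculation J = I/A can be sketched as follows. The track dimensions and current below are invented for illustration (they are not the figures in the self-assessment question that follows):

```python
# Current density J = I / A for a rectangular PCB track
width = 1.0e-3        # track width: 1 mm (hypothetical)
thickness = 35e-6     # copper thickness: 35 um
current = 0.1         # 100 mA (hypothetical)

area = width * thickness          # cross-sectional area in m^2
J = current / area                # current density in A/m^2
print(f"J = {J:.2e} A/m^2")       # ~2.9e6 A/m^2 even for this small current
```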
Self Assessment Question
You have designed a board with a 0.5 mm wide track in 35 µm thick copper. When this is carrying a current of 10 mA, what is the current density in the track?
Sheet and volume resistivity
The volume resistivity of a material, symbol ρ (Greek letter rho), is the resistance between opposite faces of a ‘unit cube’. For a test piece, the relationship between resistance and resistivity is given by the equation:

ρ = RA/l (equivalently, R = ρl/A)

ρ = volume resistivity in ohm.cm
R = resistance in ohms between faces
A = area of the faces
l = distance between faces
Note that this is not resistance per unit volume, which would be ohm/cm^3, although this term is sometimes incorrectly used.
The surface resistivity, symbol σ (Greek letter sigma), is the resistance between two opposite edges of a square of film. Using the equation above, where l is the length of each side, and t the thickness of the film:

σ = ρl/(l × t) = ρ/t

For a constant film thickness, the resistance is independent of the length of the path. The units of surface resistivity are actually ohms, but values are more frequently quoted in ‘ohms per square’ (Ω/sq.) to avoid confusion with usual resistance values.
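The two relations above (resistance of a bar from volume resistivity, and sheet resistance ρ/t of a film) can be sketched numerically. The copper resistivity figure below is a standard approximate value (quoted here in ohm.m rather than the module's ohm.cm), not taken from this module:

```python
# Volume resistivity: R = rho * l / A; sheet resistance of a film: R_sq = rho / t
rho = 1.7e-8          # resistivity of copper in ohm.m (approximate)
l = 0.1               # conductor length: 10 cm (hypothetical)
t = 35e-6             # foil thickness: 35 um
A = 1e-3 * t          # cross-section of a 1 mm wide, 35 um thick track

R = rho * l / A       # end-to-end resistance of the track, in ohms
R_sq = rho / t        # sheet resistance of the foil, in ohms per square
print(f"R = {R*1e3:.1f} mOhm, sheet resistance = {R_sq*1e3:.2f} mOhm/sq")
```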
The electrical resistance of an insulating material, like that of a conductor, is the resistance offered by the conducting path to the passage of current. Insulating materials are very poor
conductors when dry, so that resistance values tend to be in Megohms, rather than ohms. However, the concept of surface resistivity is equally applicable, and you will find this concept when you read
about ESD. In the case of insulators, the thickness of the film may be constant, but it is poorly defined – typically conduction takes place in the top layers of the surface and in any contamination
or moisture on top of it.
You will also come across the term insulation resistance, which is a measurement of ohmic resistance for a given configuration, rather than a specific resistivity test. For example, insulation
resistance is often measured at high voltage, in order to check for electrical safety. In that case, the term ‘proof test’ is also used.
Resistance changes with environment
Whether we are referring to volume resistivity, surface resistivity or insulation resistance, the values we obtain will depend on a number of factors, including temperature, humidity, moisture
content, applied voltage, and the duration of voltage application. Comparing or interpreting data is difficult unless the test is controlled and defined, especially when a specimen is drying out
after being subjected to moist or humid conditions. Results can be particularly affected by surface wetting or contamination, which greatly reduce surface resistivity.
Some general points can, however, be made about the likely changes in resistivity and insulation resistance as temperature rises:
• For conductors, increased movement of the atoms interferes more with the electron drift, resulting in an increase in resistance.
• Whilst a very large temperature increase would be necessary to free electrons within their tightly-bound structure, insulators have some slight conductivity, which is caused by the diffusion of
ions in the applied field. As the temperature increases, so does the chance of an ion having enough energy to break free, so high temperatures increase diffusion and hence conductivity^4 .
• With semiconductors, an increase in temperature will enable electrons to break free, in the same way as will a small applied voltage. Though the mobility of the electrons decreases with
temperature in the same way as for metals, a much larger contribution comes from their greatly increased number, which depends exponentially^5 on the absolute temperature.
^4 A particular case of an insulator which becomes a conductor of electricity when heated is soda-silica glass, which contains enough sodium ions for the resistivity at 300°C to be six orders of
magnitude lower than at room temperature
^5 The rate of change of temperature gives useful information about the basic physics of the device
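The exponential dependence mentioned for semiconductors can be sketched numerically. Keeping only the exponential factor (and ignoring the slowly varying prefactor), the intrinsic carrier concentration goes roughly as exp(−Eg/2kT); the figures below (silicon band gap, a 50 K temperature rise) are illustrative assumptions:

```python
import math

Eg = 1.12          # band gap of silicon in eV (approximate)
k = 8.617e-5       # Boltzmann constant in eV/K
T1, T2 = 300.0, 350.0

# Ratio of intrinsic carrier concentrations, exponential factor only
ratio = math.exp(-Eg / (2 * k) * (1 / T2 - 1 / T1))
print(f"carriers at {T2:.0f} K / carriers at {T1:.0f} K ~ {ratio:.0f}x")
```

Even this crude estimate shows a roughly twenty-fold increase in carriers for a 50 K rise, which is why semiconductor conductivity climbs so steeply with temperature.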
Self Assessment Question
A protective bag for boards has an internal static-dissipative layer of polyethylene whose surface resistivity is quoted as being <10^12 ohms. What does this measurement mean, and how would you
expect it to change with the conditions in the room?
Dielectric properties
Dielectric strength
All insulating materials fail at some level of applied voltage, and ‘dielectric strength’ is the voltage a material can withstand before breakdown occurs. Dielectric strength is measured through the
thickness of the material (taking care to avoid surface effects) and is normally expressed as a voltage gradient (volts per unit length). Note that the voltage gradient at breakdown is much higher
for very thin test pieces (<100µm thick) than for thicker sections.
The value of dielectric strength for a specimen is also influenced by its temperature and ambient humidity, by any voids or foreign materials in the specimen, and by the conditions of test, so that
it is often difficult to compare data from different sources.
Test variables include electrode configuration and specimen geometry, and the frequency and rate of application of the test voltage. Standard strategies include:
• The ‘short-time’ test, increasing the voltage from zero at a predetermined rate (usually between 100 and 3,000V/sec) until breakdown occurs
• The ‘step-by-step’ test, initially applying half the short-time breakdown voltage, and then increasing this in equal increments, holding each level for a set period of time.
Intrinsic dielectric strength
Another test term sometimes used is ‘intrinsic dielectric strength’, which is the maximum voltage gradient a homogeneous substance will withstand in a uniform electric field. This shows the ability
of an insulating material to resist breakdown, but practical tests produce lower values for a number of reasons:
• Defects, voids, and foreign particles introduced during manufacture which lower the dielectric strength locally, having the effect of reducing the test values as the area tested is increased
• The presence of a stress concentration at the electrode edges or points where the electric field is higher than average
• The damaging effect of an electric discharge during testing
• Dielectric heating, which raises the temperature and lowers the breakdown strength.
Another failure mode related to voltage stress failure is ‘corona’, which is ionisation under voltage stress of air inside or at the interfaces of insulating materials. Breakdown occurs at edges,
points, interfaces, voids or gaps at voltages which depend on the materials and part geometries.
Corona erodes the insulator surface by electron bombardment, associated heat, and sometimes secondary effects from the formation of chemical oxidising agents such as ozone and oxides of nitrogen.
This effect begins immediately, and even fractions of a second of exposure at AC voltages near to breakdown will significantly reduce the breakdown strength. Corona-induced breakdown will also occur
at lower voltages, but the time required will be longer.
Self Assessment Question
You are designing a high-voltage circuit which contains an optoelectronic isolator. What issues relating to material breakdown should you be aware of both in your layout and in specifying materials
and quality standards?
Dielectric constant and permittivity
The simplest capacitor structure is a pair of parallel conducting plates separated by a medium called the ‘dielectric’. The value of the capacitance between the plates is given by the equation:

C = εA/t

A = the area of the plates
t = the separation between the plates
and ε (Greek letter epsilon) is the absolute permittivity of the dielectric, which is a measure of the electrostatic energy stored within it and therefore dependent on the material.
A more usual way of writing the equation is to replace the absolute permittivity of the dielectric by the product term ε[0]ε[r], where ε[0] is the permittivity of free space (that is, of a vacuum), which has a value of 8.85×10^-12 Fm^-1, and ε[r] is the relative permittivity, more usually called the ‘dielectric constant’. In some literature, you will also find this dimensionless quantity (it is a ratio) referred to as κ (Greek letter kappa).
The dielectric constant of an insulating material is therefore numerically the ratio of the capacitance of a capacitor containing that material to the capacitance of the same electrode system with
vacuum replacing the insulation as the dielectric medium.
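A quick numeric sketch of the parallel-plate relation C = ε[0]·ε[r]·A/t described above. The plate dimensions are invented for illustration; ε[r] = 4.4 is the FR-4 figure quoted in Table 1 later in this section:

```python
# Parallel-plate capacitance: C = eps0 * epsr * A / t
eps0 = 8.85e-12      # permittivity of free space in F/m
epsr = 4.4           # relative permittivity of FR-4 (Table 1)
A = 0.01 * 0.01      # 1 cm x 1 cm plates (hypothetical)
t = 0.2e-3           # 0.2 mm separation (hypothetical)

C = eps0 * epsr * A / t
print(f"C = {C*1e12:.1f} pF")    # a few tens of picofarads
```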
Nothing is going to have a relative permittivity less than that of a vacuum! All materials will therefore have a dielectric constant greater than 1. Dielectric constants of polymers at room
temperature are normally in the range 2 to 10, the lower values generally being associated with the lowest electrical loss characteristics.
The dielectric constant of any given material varies with temperature, and for polymers a rapid increase begins near their glass transition temperature. Dielectric constants also vary as a function
of frequency, and this aspect will be important when you look at high frequency designs.
Most materials used for capacitors have substantially higher dielectric constants than polymers, sometimes many tens of thousands. However, this is often achieved at the expense of stability. Most of these high-permittivity dielectrics are ceramics, such as barium titanate, and these can be used as fillers in polymers to increase the dielectric constant if this is required.
Dielectric loss
As well as dielectrics breaking down, as described above, most capacitors lose a fraction of the energy when an alternating current is applied. In other words, the dielectric is less than perfect.
The simplest model for a capacitor with a lossy dielectric is a capacitor with a perfect dielectric in parallel with a resistor that gives the power dissipation. The current now leads the voltage by slightly less than 90°, where the difference δ (Greek letter delta) is termed the dielectric loss angle, as seen in Figure 3.
Figure 3: Equivalent circuit for a lossy dielectric
Without bothering about the equations, the fraction of the maximum energy lost each cycle, divided by 2π is termed the ‘loss factor’ and its value is given by tan δ (‘tan delta’): typically it is
values of tan δ that you will find quoted in reference material.
Make sure that you understand the difference between losses in the dielectric which happen when alternating current is applied, and for which tan δ is the measure, and insulation resistance, which is
a function of the direct current that flows when a voltage is applied.
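One practical consequence of tan δ can be sketched using the parallel-RC model of Figure 3: the equivalent parallel loss resistance is R_p = 1/(ω·C·tan δ), and the power dissipated under an RMS voltage V is V²/R_p. The component values below are invented for illustration; the tan δ figure is the FR-4 value from Table 1:

```python
import math

C = 100e-12          # 100 pF capacitor (hypothetical)
tan_delta = 0.035    # FR-4 loss factor at 1 MHz (Table 1)
f = 1e6              # 1 MHz
V = 10.0             # 10 V RMS applied (hypothetical)

omega = 2 * math.pi * f
R_p = 1 / (omega * C * tan_delta)   # equivalent parallel loss resistance, ohms
P = V**2 / R_p                      # dissipated power in watts
print(f"R_p = {R_p/1e3:.0f} kOhm, P = {P*1e3:.2f} mW")
```

A few milliwatts sounds small, but in a physically tiny dielectric it can produce the dielectric heating mentioned earlier.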
Table 1 shows some typical values of dielectric constant, loss factor, and dielectric strength. The AC values are measured at 1 MHz.

Table 1: Some typical values of dielectric parameters

material          ε[r]     tan δ      dielectric strength (MVm^-1)
air               1.0006   0          3
polycarbonate     2.3      0.0012     275
FR-4              4.4      0.035      70
alumina           8.8      0.00033    12
barium titanate   1200+    0.01       2
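The dielectric-strength column can be turned into an estimated withstand voltage for a given thickness: V = strength × thickness. A minimal sketch (the film thickness is chosen arbitrarily, and recall from above that very thin test pieces actually show higher breakdown gradients than the bulk figure suggests):

```python
# Breakdown voltage estimate: V = dielectric strength * thickness
strength = 70e6      # FR-4 dielectric strength from Table 1, in V/m
thickness = 100e-6   # a 100 um laminate layer (hypothetical)

V_breakdown = strength * thickness
print(f"estimated breakdown ~ {V_breakdown:.0f} V")  # 7000 V for this thickness
```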
Self Assessment Question
For your microwave application, why might the electronic designer with whom you are working be interested in using a special laminate (Rogers 3003) instead of FR-4 (see http://
Author: Martin Tarr
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Licence.
Terms and conditions apply.
Solving PDE with Cauchy - Kowalewski Theorem
I have the following PDE that I am trying to solve via the Cauchy-Kowalewski Theorem. But I have no idea how to do it or if it's possible. Maybe one of you has an idea. Here is the problem: Let $U \subset \mathbb{C}^{n}$ be some open subset which contains zero. You can shrink $U$ arbitrarily if you wish. Let $z_{j} = x_{j} + i y_{j}$ be the coordinates on $U$. I am looking for a real-valued analytic function $\beta$ defined on $U$ such that the following equations are satisfied: $\frac{\partial \beta}{\partial x_{j}} = F_{j}(x,y,\beta)$ and $\frac{\partial \beta}{\partial y_{j}} = G_{j}(x,y,\beta)$, with $j = 1, ..., n$, where $F,G$ are also real analytic functions, and with initial condition $\beta(x,0) = 0$, $\forall x \in U \cap \mathbb{R}^{n}$. Is it possible to solve such a system of equations via Cauchy-Kowalewski? If yes, how? If no, why not, and is there any other method that can give me a solution? I hope for a lot of answers, and please excuse me if the question is too trivial. Thanks in advance.
Greetings, Andrei
If you are specifying the partial derivatives, you necessarily need compatibility conditions. As stated, the PDE can be overdetermined. Do you know that $\partial_{y_k} F_j = \partial_{x_j} G_k$
for example (they need to be since partial derivatives commute on $\beta$)? Also, typically Cauchy-Kowelewski is used to solve a "initial value problem" where the data is prescribed on a
co-dimension 1 set. You need something closer to Cartan-Kahler. – Willie Wong Oct 3 '12 at 7:18
Deane Yang's paper "Local Solvability of Overdetermined Systems Defined by Commuting First-Order Differential Operators" may help (1986, CPAM), I'll see if I can get him to say a few words here. –
Willie Wong Oct 3 '12 at 7:19
Thanks that would be very nice! – Andrei Oct 3 '12 at 7:57
@Willie: You have to be a bit careful with the integrability conditions. Since $F$ and $G$ involve the unknown $\beta$, you can't just check whether $\partial_{y^k}F_j=\partial_{x^j}G_k$, since
expanding these out will involve the partials of the unknown $\beta$, and you won't know whether you have equality until you know the solution $\beta$ that you are hoping to find, since these
equations only have to hold for the solution $\beta$ that also satisfies the given initial conditions (assuming that it actually exists). – Robert Bryant Oct 3 '12 at 14:22
@Robert: can one not plug in the equation there? $$\partial_{y^k}(F_j(x,y,\beta)) = (\partial_{y^k}F_j)(x,y,\beta) + (\partial_\beta F_j)(x,y,\beta)G_j(x,y,\beta)$$ Of course this still depends on
the unknown $\beta$, but if the integrability condition is only satisfied for isolated values of $\beta$ one may expect trouble. – Willie Wong Oct 3 '12 at 15:12
1 Answer
You don't need the Cauchy-Kowalewski Theorem for your problem. In fact, real-analyticity is a red herring here. What you are asking for is a function $\beta(x,y)$ such that the graph $\
bigl(x,y,\beta(x,y)\bigr)$ is an integral manifold of the $1$-form $$ \theta = d\beta - F_j(x,y,\beta)\ dx^j - G_j(x,y,\beta)\ dy^j $$ (sum on $j$ in both terms) defined on $\mathbb{R}^
{2n+1} = \mathbb{C}^n\times \mathbb{R}$ (or some open neighborhood of $0$ in this space). This makes sense for smooth functions of course, and whether there is a solution or not doesn't
depend on real-analyticity.
In fact, you have added the requirement that the $2n$-dimensional graph contain the $n$-dimensional submanifold defined by $(x,y,\beta) = (x, 0, 0)$, and you can see from the above that $\
theta$ vanishes on that graph if and only if the $F_j$ satisfy $F_j(x,0,0)\equiv0$, so this is certainly a necessary condition.
A sufficient condition, after that, would be, for example, that $d\theta\wedge\theta=0$, for then the Frobenius Theorem would apply. However, this is not necessary if all you are asking is that there be a solution to the specific 'initial value problem' you have posed. To get sufficient conditions, what you should do is use ODE to, for example, construct $\beta_1(x,y^1)$
satisfying the equation $$ \frac{\partial\beta_1}{\partial y^1} = G_1(x,y^1,0,\ldots,0,\beta_1) $$ with the initial condition $\beta_1(x,0) = 0$. Then you need to check that $\theta$
vanishes on the $(n{+}1)$-dimensional graph $\bigl(x,y^1,0,\ldots,0,\beta_1(x,y^1)\bigr)$. Next, you construct $\beta_2(x,y^1,y^2)$ by solving the equation $$ \frac{\partial\beta_2}{\
partial y^2} = G_2(x,y^1,y^2,0,\ldots,0,\beta_2) $$ with the initial condition $\beta_2(x,y^1,0) = \beta_1(x,y^1)$, and so on. At each stage, you'll get more conditions on the functions
$G_j$ and $F_j$ in order for the constructed graph to be an integral of $\theta$. When you get to the end, these will be the necessary and sufficient conditions for this particular initial
value problem.
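For reference, the compatibility conditions hinted at in the comments can be spelled out along a putative solution graph. This is a sketch: applying the chain rule to $F_j(x,y,\beta(x,y))$ and $G_k(x,y,\beta(x,y))$ and requiring equality of mixed partials of $\beta$ gives

```latex
% Mixed partials \partial_{y^k}\partial_{x^j}\beta = \partial_{x^j}\partial_{y^k}\beta
% expand, by the chain rule along the graph (x,y,\beta(x,y)), to
\frac{\partial F_j}{\partial y^k} + G_k\,\frac{\partial F_j}{\partial \beta}
  = \frac{\partial G_k}{\partial x^j} + F_j\,\frac{\partial G_k}{\partial \beta},
\qquad 1 \le j,k \le n,
% together with the analogous symmetric conditions within the x- and y-families:
\frac{\partial F_j}{\partial x^k} + F_k\,\frac{\partial F_j}{\partial \beta}
  = \frac{\partial F_k}{\partial x^j} + F_j\,\frac{\partial F_k}{\partial \beta},
\qquad
\frac{\partial G_j}{\partial y^k} + G_k\,\frac{\partial G_j}{\partial \beta}
  = \frac{\partial G_k}{\partial y^j} + G_j\,\frac{\partial G_k}{\partial \beta}.
```

These only have to hold along the particular solution $\beta$, which is exactly why they cannot simply be checked in advance.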
Find `g'(4)` given that `f(4)=5` and `f'(4)=1` and `g(x)=f(x)/x`.
To solve for `g'(4)`, take the derivative of `g(x)`. To do so, apply the quotient rule, which is `(u/v)'=(v*u'-u*v')/v^2`.

With `u=f(x)` and `v=x`, this gives `g'(x)=(x*f'(x)-f(x))/x^2`.

Since the expression for the function f(x) is not given, its derivative is expressed as f'(x) only.

Then, plug in x=4 to get the value of g'(4).

Also, plug in the given values of f(4) and f'(4), which are 5 and 1, respectively: `g'(4)=(4*1-5)/16=-1/16`.

Hence, `g'(4)=-1/16`.
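The arithmetic can be checked in a few lines of Python. This is a generic sketch of the quotient-rule formula; the function name `g_prime` is just for illustration:

```python
def g_prime(x, f_x, fp_x):
    """Derivative of g(x) = f(x)/x by the quotient rule:
    g'(x) = (x*f'(x) - f(x)) / x**2,
    given the values f_x = f(x) and fp_x = f'(x)."""
    return (x * fp_x - f_x) / x**2

# Plug in x = 4 with f(4) = 5 and f'(4) = 1:
print(g_prime(4, 5, 1))  # -0.0625, i.e. -1/16
```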
2.1 Map colorings
A famous problem in mathematics concerns coloring adjacent planar regions. As with cartographic maps, it is required that, whatever colors are actually used, no two adjacent regions may have the same color. Two regions are considered adjacent provided they share some boundary line segment. Consider the following map.
Fig. 2.1.1
We have given numerical names to the regions. To represent which regions are adjacent, consider also the following graph.
Fig. 2.1.2
Here we have erased the original boundaries and have instead drawn an arc between the names of two regions, provided they were adjacent in the original drawing. In fact, the adjacency graph will
convey all of the original adjacency information. The adjacency information could be represented in Prolog by the following unit clauses, or facts.
adjacent(1,2). adjacent(2,1).
adjacent(1,3). adjacent(3,1).
adjacent(1,4). adjacent(4,1).
adjacent(1,5). adjacent(5,1).
adjacent(2,3). adjacent(3,2).
adjacent(2,4). adjacent(4,2).
adjacent(3,4). adjacent(4,3).
adjacent(4,5). adjacent(5,4).
If these clauses were loaded into Prolog, we could observe the following behavior for some goals.
?- adjacent(2,3).
yes
?- adjacent(5,3).
no
?- adjacent(3,R).
R = 1 ;
R = 2 ;
R = 4 ;
no
One could also declare colorings for the regions in Prolog using unit clauses.
color(1,red,a). color(1,red,b).
color(2,blue,a). color(2,blue,b).
color(3,green,a). color(3,green,b).
color(4,yellow,a). color(4,blue,b).
color(5,blue,a). color(5,green,b).
Here we have encoded 'a' and 'b' colorings. We want to write a Prolog definition of a conflictive coloring, meaning that two adjacent regions have the same color. For example, here is a Prolog
clause, or rule to that effect.
conflict(Coloring) :-
   adjacent(X,Y),
   color(X,Color,Coloring),
   color(Y,Color,Coloring).

For example,
?- conflict(a).
no
?- conflict(b).
yes
?- conflict(Which).
Which = b
Here is another version of 'conflict' that has more logical parameters.
conflict(R1,R2,Coloring) :-
   adjacent(R1,R2),
   color(R1,Color,Coloring),
   color(R2,Color,Coloring).
Prolog allows and distinguishes the two definitions of 'conflict'; one has one logical parameter ('conflict/1') and the other has three ('conflict/3'). Now we have
?- conflict(R1,R2,b).
R1 = 2 R2 = 4
?- conflict(R1,R2,b),color(R1,C,b).
R1 = 2 R2 = 4 C = blue
The last goal means that regions 2 and 4 are adjacent and both are blue. Grounded instances like 'conflict(2,4,b)' are said to be consequences of the Prolog program. One way to demonstrate such a
consequence is to draw a program clause tree having the consequence as the root of the tree, use clauses of the program to branch the tree, and eventually produce a finite tree having all true
leaves. For example, the following clause tree can be constructed using fully grounded instances (no variables) of clauses of the program.
Fig. 2.1.3
The bottom leftmost branch drawn in the tree corresponds to the unit clause

adjacent(2,4).

which is equivalent in Prolog to the clause
adjacent(2,4) :- true.
Now, on the other hand, 'conflict(1,3,b)' is not a consequence of the Prolog program because it is not possible to construct a finite clause tree using grounded clauses of the program containing all 'true' leaves. Likewise, 'conflict(a)' is not a consequence of the program, as one would expect. We will have more to say about program clause trees in subsequent sections.
We will revisit the coloring problem again in Section 2.9, where we will develop a Prolog program that can compute all possible colorings (given colors to color with). The famous Four Color
Conjecture was that no planar map requires more than four different colors. This was proved by Appel and Haken (1976). The solution used a computer program (not Prolog) to check on many specific
cases of planar maps, in order to rule out possible troublesome cases. The map in Fig. 2.1.1 does require at least four colors; for example ...
Fig. 2.1.4
Exercise 2.1 If a map has N regions, then estimate how many computations may have to be done in order to determine whether or not the coloring is in conflict. Argue using program clause trees.
Prolog Code for this section.
Prolog Tutorial Contents
Math Education Month
Math Education Month
Writing Prompt
Persuasive Essay:
What if your school decided not to teach math anymore? Have students write essays explaining why it is important to learn math. Tell them to include examples from their daily lives.
Grades K–2: Mathematics
Sorting Fun
Learning the basics of sorting is essential to your students' math success. Have students create sorting challenges for one another by putting objects in groups in which one object does not belong.
Grades K–2: Mathematics/Art
Best Bugs
Have a small group of students ask their classmates the names of their favorite creepy-crawly creatures. Then have another small group make a bar graph of the responses on poster board. Other
students may want to decorate the graph with drawings.
Grades K–5: Mathematics/Art
The Art of Math
Students will have fun creating patterns as they color triangles and hexagons. You might have students cut the shapes and use them to create new patterns.
Grades K–6: Mathematics
Name That Term!
Do your students know their math terms? Challenge them with this fun game.
Grades 1–2: Mathematics
Sharing Treats
Turn snack time into math time. Have students add, subtract, multiply, and divide as they make sure that everyone gets an equal share.
Grades 1–2: Mathematics
Math Detective
Have students practice looking for patterns in an interactive format.
Grades 1–6: Mathematics
Test-taking Tips and Strategies
Have your students explore animated illustrations of test-taking strategies.
Grades 1–6: Mathematics
Computational Game
Can your students string together a series of computations to reach a given result? See if they can get the frog safely across the pond.
Grades 1–6: Mathematics
Geometry Game
Your students can give their spatial skills a workout with this fun game.
Grades 1–6: Mathematics
Sequencing Challenge
See if your students can put together their sequencing, computational, and strategizing skills to achieve success.
Grades 2–3: Mathematics/Social Studies/Art
Color Map
Coloring maps is easy, right? Challenge your students to color maps and explore the complex math involved in the process.
Grades 2–3: Mathematics
Strategy Game
In math, there's usually more than one way to solve a problem. Have students play a fun game, applying basic problem-solving strategies.
Grades 2–4: Mathematics
Pattern Building
Tessellation Town is a place where the main occupation is making mosaics. Have students build their own personal Tessellation Town.
Grades 2–4: Mathematics
Puzzling Pieces
Piecing puzzles together helps to keep the brain nimble and promotes spatial reasoning, an important math skill. Have students beat the clock to put an image of Sacagawea back together.
Grades 2–5: Mathematics/Social Studies
Sets of Social Groups
Integrate social studies into your math class by having students use math sets to compare social groups.
Grades 2–8: Language Arts
Word Finds
Have students find math words hidden in a puzzle.
Grades 3–5: Science/Math
Dino Math
Introduce students to Sue the Dinosaur, and have them learn from math problems based on her statistics.
Grades 3–5: Math
Pie Slicing Challenge
Have students try their hand at slicing an apple pie (figuratively speaking) while applying mathematical logic.
Grades 3–5: Language Arts/Mathematics
Poems About Shapes
Have your students explore the world of geometric shapes through writing poetry.
Grades 3–5: Mathematics/Science
How It Adds Up...
Have your students use basic mathematical operations to gain a deeper understanding of the effects of overfishing on a threatened species, the Patagonian toothfish (also known as the Chilean sea bass).
Challenging Word Problems
Challenge your students' problem-solving skills with Brain Teasers. Each week a new Brain Teaser is posted for each of three grade ranges: 3–4, 5–6, and 7–8. Hints are provided, and answers are given
the following week. Problems are archived for three weeks.
Grades 3–8: Mathematics
Math Answers 24/7
When your students have a math question, they can check Dr. Math's amazing archives. If they don't find the question in the archives or in the frequently asked questions section, they can e-mail the
question to Dr. Math.
Grades 4–8: Mathematics/Social Studies
Express Mail
The men and women who rode for the Pony Express covered hundreds of miles on horseback through all sorts of terrain and danger. Have students plot the route of the Pony Express, discover the kinds of
terrain it crossed, and calculate the number of shifts needed to cover the distance of its total route.
Grades 4–8: Mathematics/Science/Social Studies
Open for Business!
Turn your students into neighborhood entrepreneurs. Have them open up a lemonade stand and do the math that will bring them success.
Grades 5–8: Mathematics/Language Arts
Pig Math
So, your students think they know the whole story of the Three Little Pigs. Have your students solve several word problems based on the classic tale.
Grades 6–8: Science/Math
The Math of Global Warming
Have students develop a better understanding of global warming through math problems.
Grades 6–8: Mathematics/Social Studies
Real-Life Math
One way to show students the value of math is to take a close look at how people in their community use math every day. Have students conduct an interview with a family member or a family friend to
find out how they use math at work or at play. Then have students chart their findings. Finally, hold a class discussion on the results.
Grades 6–8: Mathematics
Shifting the Tower
This puzzle, which is based on an ancient legend, will bring out your students' problem-solving skills.
Grades 7–8: Mathematics/Language Arts
Crossword Puzzle
Challenge students to identify the polygons.
Grade 8: Mathematics
Ratio Race
Have students practice computing ratios in a timed, interactive format. | {"url":"http://www.eduplace.com/monthlytheme/april/math.html","timestamp":"2014-04-17T09:36:27Z","content_type":null,"content_length":"28717","record_id":"<urn:uuid:303c6e4b-7fb7-4093-ac7d-b8a3e0d8c642>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
One-way ANOVA Example
Analysis of Variance Example
A manager wishes to determine whether the mean times required to complete a certain task differ for the three levels of employee training. He randomly selected 10 employees with each of the three
levels of training (Beginner, Intermediate and Advanced). Do the data provide sufficient evidence to indicate that the mean times required to complete a certain task differ for at least two of the
three levels of training? The data is summarized in the table.
│Level of Training│n │Mean│s^2 │
│Advanced │10│24.2│21.54│
│Intermediate │10│27.1│18.64│
│Beginner │10│30.2│17.76│
H[a]: The mean times required to complete a certain task differ for at least two of the three levels of training.
H[o]: The mean times required to complete a certain task do not differ across the three levels of training. ( µ[B] = µ[I] = µ[A])
Assumptions: The samples were drawn independently and randomly from the three populations. The time required to complete the task is normally distributed for each of the three levels of training. The
populations have equal variances.
Test Statistic: F = MST/MSE, the ratio of the treatment mean square to the error mean square.
Calculations: SST = 10(24.2 - 27.167)^2 + 10(27.1 - 27.167)^2 + 10(30.2 - 27.167)^2 = 180.067, where 27.167 is the grand mean of all 30 observations.
│Source │df│SS │MS │F │
│Treatments │2 │180.067 │90.033│4.662│
│Error │27│521.46 │19.313│ │
│Total │29│702.527 │ │ │
Decision: Reject H[o].
Conclusion: There is sufficient evidence to indicate that the mean times required to complete a certain task differ for at least two of the three levels of training.
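The ANOVA table above can be reproduced from the summary statistics alone. A minimal Python sketch (the variable names are my own, not from the source):

```python
# Summary statistics from the table: level -> (n, mean, sample variance)
groups = {
    "Advanced":     (10, 24.2, 21.54),
    "Intermediate": (10, 27.1, 18.64),
    "Beginner":     (10, 30.2, 17.76),
}

N = sum(n for n, _, _ in groups.values())                    # 30 observations
k = len(groups)                                              # 3 treatment levels
grand_mean = sum(n * m for n, m, _ in groups.values()) / N   # ~27.167

# Between-group (treatment) and within-group (error) sums of squares
sst = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups.values())
sse = sum((n - 1) * s2 for n, _, s2 in groups.values())

mst = sst / (k - 1)   # treatment mean square, ~90.033
mse = sse / (N - k)   # error mean square, ~19.313
F = mst / mse
print(round(sst, 3), round(sse, 2), round(F, 3))   # 180.067 521.46 4.662
```

These match the SS, MS, and F entries in the table above.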
Which pairs of means differ?
The Bonferroni Test is done for all possible pairs of means.
Decision rule: Reject H[o] for a pair if the confidence interval for the difference between that pair of means does not contain 0.
c = # of pairs; c = p(p-1)/2 = 3(2)/2 = 3
t[.0083] = 2.554
(This value is not in the t table; it was obtained from a computer program.)
Since t[.010] < t[.0083] < t[.005] (2.473 < t[.0083] < 2.771), use t[.005] when using a table. If you reject the null hypothesis when t = 2.771, you will also reject it for t[.0083].
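The three Bonferroni intervals can be sketched directly from the MSE and the critical t above (helper names are my own; equal group sizes of n = 10 are assumed, as in the table):

```python
import itertools
import math

means = {"Advanced": 24.2, "Intermediate": 27.1, "Beginner": 30.2}
n, mse = 10, 19.313          # common group size and error mean square
t_crit = 2.554               # t for alpha/(2c) = .0083 with 27 df, from software

for a, b in itertools.combinations(means, 2):
    diff = means[a] - means[b]
    # Half-width of the Bonferroni interval for one pair of means
    margin = t_crit * math.sqrt(mse * (1 / n + 1 / n))
    lo, hi = diff - margin, diff + margin
    verdict = "differ" if not lo <= 0 <= hi else "no evidence of a difference"
    print(f"{a} vs {b}: ({lo:.2f}, {hi:.2f}) -> {verdict}")
```

Only the Advanced-vs-Beginner interval excludes 0, matching the stated conclusion.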
There is sufficient evidence to indicate that the mean response time for the advanced level of training is less than the mean response time for the beginning level. There is not sufficient evidence
to indicate that the mean response time for the intermediate level differs from the mean response time of either of the other two levels. | {"url":"http://www2.fiu.edu/~howellip/exanova.htm","timestamp":"2014-04-18T08:03:23Z","content_type":null,"content_length":"7580","record_id":"<urn:uuid:505e4141-8430-4b28-865a-1d4ae463ff5a>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gambrills SAT Math Tutor
...In the Fall of 2011, I took the PRAXIS I and II tests, and received better-than-average scores on each test. On the Content tests (Series II) my scores on the Math and the Physics tests earned
me a Recognition of Excellence (ROE) designation. Only the best 1 in 7 test takers get that designation.
39 Subjects: including SAT math, English, calculus, physics
...WyzAnt does not allow changes once submitted.*I have been tutoring children for over 2 years now, mainly in mathematics. I'm already approved in elementary math and include some exercises in
writing and language as well. I have great intuition in solving the issues children have in learning, especially in attention spans.
31 Subjects: including SAT math, English, reading, writing
...I have taught math to sixth and seventh grade students. I am certified to teach all subjects in elementary school and mathematics at the middle school level. Throughout the past few years as a
middle school math teacher, I have helped many students strengthen their basic skills to allow them to be successful in other areas of math.
4 Subjects: including SAT math, algebra 1, elementary math, prealgebra
...This approach does not promote conceptual understanding! Students need to be able to think critically and creatively when they face a new problem. They need to be challenged with rich problems,
while being provided with the tools to tackle those problems with creativity and confidence.
16 Subjects: including SAT math, English, writing, calculus
...I don't mean to hire that high schooler down the street who scored well on the SATs and got an A in trigonometry. I mean hire a professional teacher. Professional teachers aren't just
knowledgeable in their field; they have training and experience working with students.
19 Subjects: including SAT math, reading, physics, English | {"url":"http://www.purplemath.com/Gambrills_SAT_Math_tutors.php","timestamp":"2014-04-16T07:27:04Z","content_type":null,"content_length":"24052","record_id":"<urn:uuid:0f80bdc5-13e3-4d61-b0a6-f9d7091fd7b9>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 1: (a) The CPT symmetry can be likened to a mirror that reflects spatial coordinates, flips charge and other additive quantum numbers, and reverses time. To test for cracks in this CPT mirror,
physicists check whether the magnetic moment of the proton (left) has the same magnitude as that of the antiproton (right). (Technically, the moments have opposite signs due to the way magnetic
moment is defined relative to the spin.) (b) To measure the antiproton’s magnetic moment, the ATRAP Collaboration measures the cyclotron and spin-flip frequencies, f[c] and f[s], respectively. The
ratio of these frequencies gives the antiproton’s magnetic moment, μ[p̅] = -(f[s]/f[c]) μ[N], in terms of the nuclear magneton μ[N]. | {"url":"http://physics.aps.org/articles/large_image/f1/10.1103/Physics.6.36","timestamp":"2014-04-17T05:16:43Z","content_type":null,"content_length":"1936","record_id":"<urn:uuid:c4377440-d9c0-4d5c-96e2-3a28a857347d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
Granular Flow in a Tilted Hopper
John Wambaugh
"We must go forward, not backward, upward, not forward, and always twirling, twirling, twirling towards freedom." - Clin-ton
The flow of granular materials in a symmetric, vertical hopper is successfully described by the Jenike radial solutions, which model particle paths as lines converging upon the vertex of a hopper in
two-dimensions. [1] Mathematicians at Duke and NC State have generalized these equations to three-dimensions by developing a finite-element solution for an elasto-plastic model using the Jenike
radial solutions as a basis. These new solutions allow for the possibility of secondary-circulation -- azimuthal swirling of the grains as the hopper drains. This effect should be dependent on the
degree of asymmetry of the hopper relative to an axis aligned with gravity, as well as material properties. The purpose of this experimental study is to test for secondary-circulation in a tilted
hopper for different tilt angles and hopper wall frictions. Preliminary results indicate the existence of secondary-circulation but the magnitude of the flow is too large to fit with the mathematical
1. A. W. Jenike, Powder Tech. 50, 229-235 (1987)
2. P. Gremaud and J. V. Matthews, J. Comp. Phys. 166, 63-83 (2001)
Back to seminar homepage | {"url":"http://www.phy.duke.edu/graduate/studentseminar/2003/dec5.php","timestamp":"2014-04-17T06:57:41Z","content_type":null,"content_length":"1818","record_id":"<urn:uuid:b5439546-0f3e-4eae-9054-9cf4743ecbfa>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
GNU units and units.dat; Units of Measurement and Unit Conversion
Physics Forum. Discuss and ask physics questions, kinematics and other physics problems. | {"url":"http://www.molecularstation.com/forum/physics-forum/36444-gnu-units-units-dat%3B-units-measurement-unit-conversion.html","timestamp":"2014-04-16T10:58:01Z","content_type":null,"content_length":"264804","record_id":"<urn:uuid:e3262755-5cb8-4f36-823e-cd740cc567f9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
(1/3)(3x^2+12x+9)(x+3) how do i solve this???????
A: There is no equals sign, so this is not even an equation.
Q: I'm trying to find the volume of a rectangular pyramid; that whole expression equals V.
A: So do you have any input values for x? As it sits, (1/3)(3x^2+12x+9)(x+3) = V can only be solved if you input an x. Maybe you just want to find the roots; you need to be more specific.
Q: Find an expression for the volume of a rectangular pyramid whose base has an area of 3x^2 + 12x + 9 square feet and whose height is x + 3 feet.
A: Divide each term in the first expression by 3, because of the \(\frac{1}{3}\) out front. That should be easy enough, because each term is divisible by \(3\).
A: When you are done, you have to multiply \[(x^2+4x+3)(x+3)\] which requires 6 multiplications via the distributive property.
A: This part is a pain, but not really that bad with some practice. First multiply everything in the first parentheses by \(x\), which really just means raising each power by 1; then multiply everything in the first parentheses by \(3\), and then combine like terms.
A: Once you get the answer, post it and someone will check.
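The expansion worked through above can be checked symbolically. A minimal sketch assuming sympy is available:

```python
import sympy as sp

x = sp.symbols('x')
# Volume of the pyramid: (1/3) * base area * height
V = sp.Rational(1, 3) * (3*x**2 + 12*x + 9) * (x + 3)

print(sp.expand(V))   # x**3 + 7*x**2 + 15*x + 9
print(sp.factor(V))   # (x + 1)*(x + 3)**2
```

The intermediate step from the thread, (x^2+4x+3)(x+3), expands to the same cubic.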
| {"url":"http://openstudy.com/updates/503ec52ae4b00ea57898247c","timestamp":"2014-04-16T04:11:18Z","content_type":null,"content_length":"45292","record_id":"<urn:uuid:18e63ce6-cb92-4476-99f9-896292e65bb4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Help: CONTINUITY and removable discontinuity: (attached below)
A: So you have two conditions for a removable discontinuity. The first one is simply that the value must not be defined at that point, which is pretty obvious here because of the denominator. The second condition is basically the difference between whether you have an asymptote at your undefined point or simply a hole. The way to tell whether it's an asymptote or a hole is this: if you factored your function, would the factor that is undefined cancel or remain? If the factor on the bottom cancels, it is merely a hole; if it does not cancel, then it is an asymptote and CANNOT be removed. So I would factor your numerator and see if the denominator cancels. If it does, then simply plug 4 into your result and you'll get the value you're looking for.
Q: Can we work on this together? I've tried doing this on my own but seem to be getting something wrong.
A: Sure. So I would factor the numerator and see if x - 4 cancels. If it cancels, you have a hole; if it does not cancel, you have an asymptote and it cannot be removed. You know how to factor the numerator, right?
Q: Let me try.
A: Alright, sure.
Q: I got 2(x^2 + 3x - 25) @Psymon
A: -28, you mean? And after that, you still have to further factor the x^2 + 3x - 28.
A: Find factors of -28 that will add up to positive 3.
Q: Yes, -28.
A: Right. So when you further factor the numerator, you should have 2(x ± __)(x ± __). It's just a matter of remembering how to factor that x^2 + 3x - 28 portion.
Q: 2(x + 7)(x - 4)
A: Exactly. In turn, that cancels out the denominator, which means the discontinuity is removable and not just an asymptote. Not only that: since the term that would make your function undefined disappeared, you can simply plug in 4 to find the value you need.
Q: We're left with 2(x + 7). So we plug in 4?
A: That's all you do.
Q: Does this apply to every problem with continuity?
A: Yes. Most of the work is finding a way to eliminate the portion that would give you an undefined answer. Once you manage that, you just plug in the value that the limit is approaching. Of course there are times when you cannot remove the undefined portion, in which case the limit does not exist and you are merely approaching an asymptote.
A: Sure :3
Q: Can I practice a little, then ask you for a tricky problem like the one you just mentioned? It would really help.
A: That works, I have no problem with that.
Q: What about a JUMP DISCONTINUITY? Same thing?
A: That's new terminology for me. I probably know what it is but have not heard it said that way. Could you give me an example?
A: One moment.
A: Ah, just one-sided limits. We never really gave a name for it before, just that it had to do with one-sided limits.
Q: Oh, so how do we begin with this one? Sketch a graph? I can do graphs, but I'd like a simpler way since I won't have enough time when it comes to a final.
A: I apologize; I guess I was making sure what "jump" meant since I did not have a picture. But yes, all you really need to confirm is that at x = 6 the two pieces have different limits. It is pretty obvious that they do. As far as the left and right part goes, you need to see which function is approaching from the left and which one is approaching from the right. Since 8x - 8 is the piece defined for x < 6, that must be the function approaching from the left. Therefore, f(6) of 8x - 8 is the value you will use for the limit from the left. Since the other function is the one coming from the right, its value at x = 6 is the value you use for the right-sided limit. Basically, you just need to see which function to plug the value into.
A: The delay was graphing and looking at it so I knew exactly what to say.
Q: It's okay. So how can we work this problem out?
A: |dw:1375046904734:dw| I know you know how to graph, but I wanted the visual up before we move on. You just want to see which function is from the left and which one is from the right. When it gives you a piecewise function, it's pretty obvious which one is coming from the left and which one from the right. Once you determine that, plug x = 6 into the piece that comes from the left and use that value for lim x->6^-. Then plug x = 6 into the other function, since you know it's the one on the right, and find lim x->6^+.
A: A lot of the time you may have to determine the behavior on your own, but the piecewise definition just gave you the behavior so you didn't have to worry about it. From there it was just picking the correct function for the left and for the right.
A: In regard to removable or non-removable, it works the same way: if one of the functions has an asymptote at the jump point, then one or both sides may have a limit that does not exist on that side. Maybe another example?
Q: So for this problem we HAVE to sketch only?
A: In this case, no, because the piecewise definition tells us which function to use on the left and on the right. The sketching would only come in if you had a single function that wasn't piecewise, like 1/(2x^2 - 3). For something like that you would have to test points on the left and right and see if they're going up to infinity, down to negative infinity, etc. Any of these piecewise functions won't require a sketch; I just wanted to draw a visual aid.
A: Holy cow, my typing and grammar are horrible today.
A: Maybe if you have another jump problem we can look at it.
A: I'm going to need to head out, I apologize. Good luck with the limits.
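Both checks discussed in this thread can be done symbolically. A sketch assuming sympy is available; the numerator here is reconstructed from the factoring in the thread, and the right-hand branch of the piecewise example was only in an attachment, so x**2 below is purely a placeholder:

```python
import sympy as sp

x = sp.symbols('x')

# Removable discontinuity: the numerator factors as 2(x + 7)(x - 4),
# so the (x - 4) in the denominator cancels, leaving only a hole.
f = 2*(x**2 + 3*x - 28) / (x - 4)
print(sp.cancel(f))                   # 2*x + 14
print(sp.limit(f, x, 4))              # 22, the value that fills the hole

# Jump discontinuity: compare one-sided limits at the break point x = 6.
# Left branch 8x - 8 is from the thread; the right branch is a placeholder.
left, right = 8*x - 8, x**2
print(sp.limit(left, x, 6, dir='-'))  # 40
print(sp.limit(right, x, 6, dir='+')) # 36, so the one-sided limits disagree
```

Unequal one-sided limits mean the jump cannot be removed by redefining a single point.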
| {"url":"http://openstudy.com/updates/51f57bf6e4b00daf471c6ad2","timestamp":"2014-04-21T16:06:51Z","content_type":null,"content_length":"167083","record_id":"<urn:uuid:0065afae-bf08-4711-8a2e-db7cd3dea76f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Large-Scale Surveys and Cosmic Structure - J.A. Peacock
5.2. The 2dFGRS power spectrum and CDM models
Perhaps the key aim of the 2dFGRS was to perform an accurate measurement of the 3D clustering power spectrum, in order to improve on the APM result, which was deduced by deprojection of angular
clustering (Baugh & Efstathiou 1993, 1994). The results of this direct estimation of the 3D power spectrum are shown in figure 5 (Percival et al. 2001). This power-spectrum estimate uses the
FFT-based approach of Feldman, Kaiser & Peacock (1994), and needs to be interpreted with care. Firstly, it is a raw redshift-space estimate, so that the power at high k is severely damped by smearing due to peculiar velocities, as well as being affected by nonlinear evolution. Finally, the FKP estimator yields the true power convolved with the window function. This modifies the power
significantly on large scales (roughly a 20% correction). An approximate correction for this has been made in figure 5.
Figure 5. The 2dFGRS redshift-space dimensionless power spectrum, Δ^2(k), estimated according to the FKP procedure. The solid points with error bars show the power estimate. The window function correlates the results at different k values, and also distorts the large-scale shape of the power spectrum; an approximate correction for the latter effect has been applied. The solid and dashed lines show various CDM models, all assuming n = 1. For the case with non-negligible baryon content, a big-bang nucleosynthesis value of Ω_b h^2 = 0.02 is assumed, together with h = 0.7. A good fit is clearly obtained for Ω_m h ≃ 0.2.

The power at k > 0.15 h Mpc^-1 will be boosted by nonlinear effects, but damped by small-scale random peculiar velocities. It appears that these two effects very nearly cancel, but model fitting is generally performed only at k < 0.15 h Mpc^-1 in order to avoid these complications.
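The dimensionless spectrum Δ^2(k) plotted in figure 5 is related to the ordinary power spectrum by Δ^2(k) = k^3 P(k) / 2π^2. A minimal sketch of that conversion, using a toy value rather than the survey data:

```python
import numpy as np

def delta_sq(k, P):
    """Dimensionless power spectrum: Delta^2(k) = k^3 P(k) / (2 pi^2)."""
    return k**3 * P / (2 * np.pi**2)

# Toy check (not survey data): P(k) = 2 pi^2 at k = 1 h/Mpc gives Delta^2 = 1
print(delta_sq(1.0, 2 * np.pi**2))   # 1.0
```

The same function applies elementwise to arrays of k and P(k) values.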
The fundamental assumption is that, on large scales, linear biasing applies, so that the nonlinear galaxy power spectrum in redshift space has a shape identical to that of linear theory in real
space. This assumption is valid for k < 0.15 h Mpc^-1; the detailed justification comes from analyzing realistic mock data derived from N-body simulations (Cole et al. 1998). The free parameters in
fitting CDM models are thus the primordial spectral index, n, the Hubble parameter, h, the total matter density, Ω_m, and the baryon fraction, Ω_b / Ω_m. Note that the vacuum energy does not enter.
Initially, we show results assuming n = 1; this assumption is relaxed later.
An accurate model comparison requires the full covariance matrix of the data, because the convolving effect of the window function causes the power at adjacent k values to be correlated. This
covariance matrix was estimated by applying the survey window to a library of Gaussian realisations of linear density fields, and checked against a set of mock catalogues. It is now possible to
explore the space of CDM models, and likelihood contours in [b] / [m] versus [m]h are shown in figure 6. At each point in this surface we have marginalized by integrating the likelihood surface over
the two free parameters, h and the power spectrum amplitude. We have added a Gaussian prior h = 0.7± 10%, representing external constraints such as the HST key project (Freedman et al. 2001); this
has only a minor effect on the results.
Figure 6. Likelihood contours for the best-fit linear CDM fit to the 2dFGRS power spectrum over the region 0.02 < k < 0.15. Contours are plotted at the usual positions for one-parameter confidence of 68%, and two-parameter confidence of 68%, 95% and 99% (i.e. -2 ln(L/L_max) = 1, 2.3, 6.0, 9.2). We have marginalized over the missing free parameters (h and the power spectrum amplitude). A prior on h of h = 0.7 ± 10% was assumed. This result is compared to estimates from X-ray cluster analysis (Evrard 1997) and big-bang nucleosynthesis (Burles et al. 2001). The second panel shows the 2dFGRS data compared with the two preferred models from the maximum-likelihood fits convolved with the window function (solid lines). The unconvolved models are also shown (dashed lines). The Ω_b / Ω_m = 0.42, h = 0.7 model has the higher baryonic bump; the smoother Ω_b / Ω_m = 0.15, h = 0.7 model is a better fit to the data because of the overall shape. A preliminary analysis of the complete final 2dFGRS sample yields a slightly smoother spectrum than the results shown here (from Percival et al. 2001), so that the high-baryon solution becomes disfavoured.
Figure 6 shows that there is a degeneracy between Ω_m h and the baryonic fraction Ω_b / Ω_m. However, there are two local maxima in the likelihood: one at Ω_m h ≃ 0.2 with a modest baryon fraction, and a second, high-baryon solution at larger Ω_m h.

The 2dFGRS data are compared to the best-fit linear power spectra convolved with the window function in figure 6. The low-density model fits the overall shape of the spectrum with relatively small `wiggles', while the high-baryon solution matches the baryonic bump better but fits the overall shape less well. A preliminary analysis of P(k) from the full final dataset shows that P(k) becomes smoother: the high-baryon solution becomes disfavoured, and the uncertainties narrow slightly around the lower-density solution: Ω_m h = 0.18 ± 0.02; Ω_b / Ω_m = 0.17 ± 0.06. The lack of large-amplitude oscillatory features in the power spectrum is one general reason for believing that the universe is dominated by collisionless nonbaryonic matter. In detail, the constraints on the collisional nature of dark matter are weak, since all we require is that the effective sound speed for modes of 100-Mpc wavelength is less than about 0.1c. Nevertheless, if a pure-baryon model is ruled out, the next simplest alternative would arguably be to introduce a weakly-interacting relic particle, so there is at least circumstantial evidence in this direction from the power spectrum.
It is interesting to compare these conclusions with other constraints. These are shown on figure 6, again assuming h = 0.7 ± 10%. Estimates of the deuterium-to-hydrogen ratio in QSO spectra combined with big-bang nucleosynthesis theory predict Ω_b h^2 = 0.020 ± 0.001 (Burles et al. 2001), which translates to the shown locus of f_b vs Ω_m h. X-ray cluster analysis yields a baryon fraction Ω_b / Ω_m = 0.127 ± 0.017 (Evrard 1997), which is within 1σ of the 2dFGRS result.
Perhaps the main point to emphasise here is that the 2dFGRS results are not greatly sensitive to the assumed tilt of the primordial spectrum. As discussed below, CMB data show that n = 1 is a very
good approximation; in any case, very substantial tilts (n | {"url":"http://ned.ipac.caltech.edu/level5/Sept03/Peacock/Peacock5_2.html","timestamp":"2014-04-17T15:28:55Z","content_type":null,"content_length":"12427","record_id":"<urn:uuid:8f493435-7f71-482b-8e43-7266370823b0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00636-ip-10-147-4-33.ec2.internal.warc.gz"} |
1.2. Support Vector Machines
1.2. Support Vector Machines¶
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
The advantages of support vector machines are:
□ Effective in high dimensional spaces.
□ Still effective in cases where number of dimensions is greater than the number of samples.
□ Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
□ Versatile: different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.
The disadvantages of support vector machines include:
□ If the number of features is much greater than the number of samples, the method is likely to give poor performances.
□ SVMs do not directly provide probability estimates, these are calculated using an expensive five-fold cross-validation (see Scores and probabilities, below).
The support vector machines in scikit-learn support both dense (numpy.ndarray and convertible to that by numpy.asarray) and sparse (any scipy.sparse) sample vectors as input. However, to use an SVM to
make predictions for sparse data, it must have been fit on such data. For optimal performance, use C-ordered numpy.ndarray (dense) or scipy.sparse.csr_matrix (sparse) with dtype=float64.
1.2.1. Classification¶
SVC, NuSVC and LinearSVC are classes capable of performing multi-class classification on a dataset.
SVC and NuSVC are similar methods, but accept slightly different sets of parameters and have different mathematical formulations (see section Mathematical formulation). On the other hand, LinearSVC
is another implementation of Support Vector Classification for the case of a linear kernel. Note that LinearSVC does not accept keyword kernel, as this is assumed to be linear. It also lacks some of
the members of SVC and NuSVC, like support_.
As other classifiers, SVC, NuSVC and LinearSVC take as input two arrays: an array X of size [n_samples, n_features] holding the training samples, and an array Y of integer values, size [n_samples],
holding the class labels for the training samples:
>>> from sklearn import svm
>>> X = [[0, 0], [1, 1]]
>>> y = [0, 1]
>>> clf = svm.SVC()
>>> clf.fit(X, y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3,
gamma=0.0, kernel='rbf', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False)
After being fitted, the model can then be used to predict new values:
>>> clf.predict([[2., 2.]])
array([1])
The SVM's decision function depends on some subset of the training data, called the support vectors. Some properties of these support vectors can be found in the members support_vectors_, support_ and n_support_:
>>> # get support vectors
>>> clf.support_vectors_
array([[ 0., 0.],
[ 1., 1.]])
>>> # get indices of support vectors
>>> clf.support_
array([0, 1]...)
>>> # get number of support vectors for each class
>>> clf.n_support_
array([1, 1]...)
1.2.1.1. Multi-class classification¶
SVC and NuSVC implement the “one-against-one” approach (Knerr et al., 1990) for multi-class classification. If n_class is the number of classes, then n_class * (n_class - 1) / 2 classifiers are
constructed and each one trains data from two classes:
>>> X = [[0], [1], [2], [3]]
>>> Y = [0, 1, 2, 3]
>>> clf = svm.SVC()
>>> clf.fit(X, Y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3,
gamma=0.0, kernel='rbf', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False)
>>> dec = clf.decision_function([[1]])
>>> dec.shape[1] # 4 classes: 4*3/2 = 6
6
On the other hand, LinearSVC implements “one-vs-the-rest” multi-class strategy, thus training n_class models. If there are only two classes, only one model is trained:
>>> lin_clf = svm.LinearSVC()
>>> lin_clf.fit(X, Y)
LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,
intercept_scaling=1, loss='l2', multi_class='ovr', penalty='l2',
random_state=None, tol=0.0001, verbose=0)
>>> dec = lin_clf.decision_function([[1]])
>>> dec.shape[1]
4
See Mathematical formulation for a complete description of the decision function.
Note that the LinearSVC also implements an alternative multi-class strategy, the so-called multi-class SVM formulated by Crammer and Singer, by using the option multi_class='crammer_singer'. This
method is consistent, which is not true for one-vs-rest classification. In practice, one-vs-rest classification is usually preferred, since the results are mostly similar but the runtime is
significantly less.
For “one-vs-rest” LinearSVC the attributes coef_ and intercept_ have the shape [n_class, n_features] and [n_class] respectively. Each row of the coefficients corresponds to one of the n_class many
“one-vs-rest” classifiers and similar for the intercepts, in the order of the “one” class.
In the case of “one-vs-one” SVC, the layout of the attributes is a little more involved. In the case of a linear kernel, the layout of coef_ and intercept_ is similar to the one described for
LinearSVC above, except that the shape of coef_ is [n_class * (n_class - 1) / 2, n_features], corresponding to as many binary classifiers. The order for classes 0 to n is “0 vs 1”, “0 vs 2”,
..., “0 vs n”, “1 vs 2”, “1 vs 3”, ..., “1 vs n”, ..., “n-1 vs n”.
The shape of dual_coef_ is [n_class-1, n_SV] with a somewhat hard to grasp layout. The columns correspond to the support vectors involved in any of the n_class * (n_class - 1) / 2 “one-vs-one”
classifiers. Each of the support vectors is used in n_class - 1 classifiers. The n_class - 1 entries in each row correspond to the dual coefficients for these classifiers.
This might be made more clear by an example:
Consider a three-class problem with class 0 having three support vectors and classes 1 and 2 having two support vectors each. dual_coef_ then has n_class - 1 = 2 rows, and its columns group, in order, the:
□ coefficients for the SVs of class 0
□ coefficients for the SVs of class 1
□ coefficients for the SVs of class 2
1.2.1.2. Scores and probabilities¶
The SVC method decision_function gives per-class scores for each sample (or a single score per sample in the binary case). When the constructor option probability is set to True, class membership
probability estimates (from the methods predict_proba and predict_log_proba) are enabled. In the binary case, the probabilities are calibrated using Platt scaling: logistic regression on the SVM’s
scores, fit by an additional cross-validation on the training data. In the multiclass case, this is extended as per Wu et al. (2004).
Needless to say, the cross-validation involved in Platt scaling is an expensive operation for large datasets. In addition, the probability estimates may be inconsistent with the scores, in the sense
that the “argmax” of the scores may not be the argmax of the probabilities. (E.g., in binary classification, a sample may be labeled by predict as belonging to a class that has probability <½
according to predict_proba.) Platt’s method is also known to have theoretical issues. If confidence scores are required, but these do not have to be probabilities, then it is advisable to set
probability=False and use decision_function instead of predict_proba.
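Platt scaling itself is just a logistic map applied to decision scores. The sketch below uses illustrative placeholder parameters A and B; in the real method they are fitted by the internal cross-validation described above:

```python
import math

def platt_probability(score, A=-1.0, B=0.0):
    # Platt's sigmoid: p = 1 / (1 + exp(A*score + B)).
    # A and B here are placeholders, not values scikit-learn would produce.
    return 1.0 / (1.0 + math.exp(A * score + B))

# With A < 0, larger decision scores map to larger probabilities.
print(platt_probability(2.0) > platt_probability(-2.0))  # True
```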
1.2.1.3. Unbalanced problems¶
In problems where it is desired to give more importance to certain classes or certain individual samples, the keywords class_weight and sample_weight can be used.
SVC (but not NuSVC) implements a keyword class_weight in the fit method. It’s a dictionary of the form {class_label : value}, where value is a floating point number > 0 that sets the parameter C of
class class_label to C * value.
SVC, NuSVC, SVR, NuSVR and OneClassSVM also implement weights for individual samples in method fit through keyword sample_weight. Similar to class_weight, these set the parameter C for the i-th
example to C * sample_weight[i].
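The combined effect of the two keywords can be sketched as simple bookkeeping. The helper below is hypothetical (it is not scikit-learn's internal code); it just applies the rules stated above, multiplying the two weights together:

```python
def effective_C(C, class_weight, sample_weight, labels):
    # Per-sample upper bound implied by the text above:
    # C * class_weight[label] * sample_weight[i], with weight 1.0 when absent.
    return [C * class_weight.get(label, 1.0) * w
            for label, w in zip(labels, sample_weight)]

print(effective_C(1.0, {1: 10.0}, [1.0, 0.5, 1.0], [0, 1, 1]))
# [1.0, 5.0, 10.0]
```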
1.2.2. Regression¶
The method of Support Vector Classification can be extended to solve regression problems. This method is called Support Vector Regression.
The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training
points that lie beyond the margin. Analogously, the model produced by Support Vector Regression depends only on a subset of the training data, because the cost function for building the model ignores
any training data close to the model prediction.
There are two flavors of Support Vector Regression: SVR and NuSVR.
As with classification classes, the fit method will take as argument vectors X, y, only that in this case y is expected to have floating point values instead of integer values:
>>> from sklearn import svm
>>> X = [[0, 0], [2, 2]]
>>> y = [0.5, 2.5]
>>> clf = svm.SVR()
>>> clf.fit(X, y)
SVR(C=1.0, cache_size=200, coef0=0.0, degree=3,
epsilon=0.1, gamma=0.0, kernel='rbf', max_iter=-1, probability=False,
random_state=None, shrinking=True, tol=0.001, verbose=False)
>>> clf.predict([[1, 1]])
array([ 1.5])
1.2.3. Density estimation, novelty detection¶
One-class SVM is used for novelty detection, that is, given a set of samples, it will detect the soft boundary of that set so as to classify new points as belonging to that set or not. The class that
implements this is called OneClassSVM.
In this case, as it is a type of unsupervised learning, the fit method will only take as input an array X, as there are no class labels.
See section Novelty and Outlier Detection for more details on this usage.
1.2.4. Complexity¶
Support Vector Machines are powerful tools, but their compute and storage requirements increase rapidly with the number of training vectors. The core of an SVM is a quadratic programming problem
(QP), separating support vectors from the rest of the training data. The QP solver used by this libsvm-based implementation scales between O(n_features * n_samples^2) and O(n_features * n_samples^3),
depending on how efficiently the libsvm cache is used in practice (dataset dependent). If the data is very sparse, n_features should be replaced by the average number of non-zero features in a sample vector.
Also note that for the linear case, the algorithm used in LinearSVC by the liblinear implementation is much more efficient than its libsvm-based SVC counterpart and can scale almost linearly to
millions of samples and/or features.
1.2.5. Tips on Practical Use¶
□ Avoiding data copy: For SVC, SVR, NuSVC and NuSVR, if the data passed to certain methods is not C-ordered contiguous, and double precision, it will be copied before calling the underlying C
implementation. You can check whether a given numpy array is C-contiguous by inspecting its flags attribute.
For LinearSVC (and LogisticRegression) any input passed as a numpy array will be copied and converted to the liblinear internal sparse data representation (double precision floats and int32
indices of non-zero components). If you want to fit a large-scale linear classifier without copying a dense numpy C-contiguous double precision array as input we suggest to use the
SGDClassifier class instead. The objective function can be configured to be almost the same as the LinearSVC model.
□ Kernel cache size: For SVC, SVR, NuSVC and NuSVR, the size of the kernel cache has a strong impact on run times for larger problems. If you have enough RAM available, it is recommended to set
cache_size to a higher value than the default of 200(MB), such as 500(MB) or 1000(MB).
□ Setting C: C is 1 by default and it’s a reasonable default choice. If you have a lot of noisy observations you should decrease it: decreasing C corresponds to more regularization.
□ Support Vector Machine algorithms are not scale invariant, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or
standardize it to have mean 0 and variance 1. Note that the same scaling must be applied to the test vector to obtain meaningful results. See section Preprocessing data for more details on
scaling and normalization.
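A minimal sketch of the "apply the same scaling to the test vector" point, using plain-Python min-max scaling to [0, 1] (the scaling parameters are learned on the training data only and then reused):

```python
def fit_minmax(values):
    # Learn the scaling parameters on the TRAINING data only.
    return min(values), max(values)

def apply_minmax(values, lo, hi):
    # Reuse the training-set lo/hi on any later data (e.g. the test set).
    return [(v - lo) / (hi - lo) for v in values]

lo, hi = fit_minmax([2.0, 4.0, 6.0])
print(apply_minmax([2.0, 4.0, 6.0], lo, hi))  # [0.0, 0.5, 1.0]
print(apply_minmax([5.0], lo, hi))            # [0.75], not re-fitted on the test point
```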
□ Parameter nu in NuSVC/OneClassSVM/NuSVR approximates the fraction of training errors and support vectors.
□ In SVC, if data for classification are unbalanced (e.g. many positive and few negative), set class_weight='auto' and/or try different penalty parameters C.
□ The underlying LinearSVC implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same
input data. If that happens, try with a smaller tol parameter.
□ Using L1 penalization as provided by LinearSVC(loss='l2', penalty='l1', dual=False) yields a sparse solution, i.e. only a subset of feature weights is different from zero and contributes to
the decision function. Increasing C yields a more complex model (more features are selected). The C value that yields a “null” model (all weights equal to zero) can be calculated using
l1_min_c.
1.2.6. Kernel functions¶
The kernel function can be any of the following:
□ linear: <x, x'>.
□ polynomial: (gamma <x, x'> + r)^d, where d is specified by keyword degree and r by coef0.
□ rbf: exp(-gamma |x - x'|^2), where gamma is specified by keyword gamma and must be greater than 0.
□ sigmoid: tanh(gamma <x, x'> + r), where r is specified by coef0.
Different kernels are specified by keyword kernel at initialization:
>>> linear_svc = svm.SVC(kernel='linear')
>>> linear_svc.kernel
'linear'
>>> rbf_svc = svm.SVC(kernel='rbf')
>>> rbf_svc.kernel
'rbf'
1.2.6.1. Custom Kernels¶
You can define your own kernels by either giving the kernel as a python function or by precomputing the Gram matrix.
Classifiers with custom kernels behave the same way as any other classifiers, except that:
□ Field support_vectors_ is now empty; only indices of support vectors are stored in support_.
□ A reference (and not a copy) of the first argument in the fit() method is stored for future reference. If that array changes between the use of fit() and predict() you will have unexpected results.
1.2.6.1.1. Using Python functions as kernels¶
You can also use your own defined kernels by passing a function to the keyword kernel in the constructor.
Your kernel must take as arguments two matrices and return a third matrix.
The following code defines a linear kernel and creates a classifier instance that will use that kernel:
>>> import numpy as np
>>> from sklearn import svm
>>> def my_kernel(x, y):
... return np.dot(x, y.T)
>>> clf = svm.SVC(kernel=my_kernel)
1.2.6.1.2. Using the Gram matrix¶
Set kernel='precomputed' and pass the Gram matrix instead of X in the fit method. At the moment, the kernel values between all training vectors and the test vectors must be provided.
>>> import numpy as np
>>> from sklearn import svm
>>> X = np.array([[0, 0], [1, 1]])
>>> y = [0, 1]
>>> clf = svm.SVC(kernel='precomputed')
>>> # linear kernel computation
>>> gram = np.dot(X, X.T)
>>> clf.fit(gram, y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3,
gamma=0.0, kernel='precomputed', max_iter=-1, probability=False,
random_state=None, shrinking=True, tol=0.001, verbose=False)
>>> # predict on training examples
>>> clf.predict(gram)
array([0, 1])
1.2.6.1.3. Parameters of the RBF Kernel¶
When training an SVM with the Radial Basis Function (RBF) kernel, two parameters must be considered: C and gamma. The parameter C, common to all SVM kernels, trades off misclassification of training
examples against simplicity of the decision surface. A low C makes the decision surface smooth, while a high C aims at classifying all training examples correctly. gamma defines how much influence a
single training example has. The larger gamma is, the closer other examples must be to be affected.
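The "closer to be affected" behaviour of gamma can be read off the RBF formula exp(-gamma * ||x - x'||^2) directly; a plain-Python sketch:

```python
import math

def rbf(x, x_prime, gamma):
    # RBF kernel value for two plain-list vectors.
    d2 = sum((a - b) ** 2 for a, b in zip(x, x_prime))
    return math.exp(-gamma * d2)

origin, far = [0.0, 0.0], [1.0, 1.0]
print(rbf(origin, origin, 10.0))                        # 1.0 at zero distance
print(rbf(origin, far, 10.0) < rbf(origin, far, 0.1))   # True: big gamma decays faster
```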
Proper choice of C and gamma is critical to the SVM’s performance. One is advised to use GridSearchCV with C and gamma spaced exponentially far apart to choose good values.
1.2.7. Mathematical formulation¶
A support vector machine constructs a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good
separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the
lower the generalization error of the classifier.
1.2.7.1. SVC¶
Given training vectors x_i in R^p, i = 1, ..., n, in two classes, and a vector y in {1, -1}^n, SVC solves the following primal problem:
min over w, b, zeta of (1/2) w^T w + C * sum_i zeta_i
subject to y_i (w^T phi(x_i) + b) >= 1 - zeta_i and zeta_i >= 0 for i = 1, ..., n.
Its dual is
min over alpha of (1/2) alpha^T Q alpha - e^T alpha
subject to y^T alpha = 0 and 0 <= alpha_i <= C for i = 1, ..., n,
where e is the vector of all ones, C > 0 is the upper bound, and Q is an n by n positive semidefinite matrix with Q_ij = y_i y_j K(x_i, x_j), where K(x_i, x_j) = phi(x_i)^T phi(x_j) is the kernel.
The decision function is:
sgn(sum_i y_i alpha_i K(x_i, x) + rho)
While SVM models derived from libsvm and liblinear use C as regularization parameter, most other estimators use alpha; the exact relation between the two depends on the estimator's objective function.
These parameters can be accessed through the members dual_coef_ which holds the product y_i alpha_i, support_vectors_ which holds the support vectors, and intercept_ which holds the independent term rho.
1.2.7.2. NuSVC¶
We introduce a new parameter nu which controls the number of support vectors and training errors: nu in (0, 1] is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors.
It can be shown that the nu-SVC formulation is a reparametrization of the C-SVC and therefore mathematically equivalent.
1.2.8. Implementation details¶
Internally, we use libsvm and liblinear to handle all computations. These libraries are wrapped using C and Cython.
For a description of the implementation and details of the algorithms used, please refer to | {"url":"http://scikit-learn.org/stable/modules/svm.html","timestamp":"2014-04-16T13:13:04Z","content_type":null,"content_length":"71216","record_id":"<urn:uuid:f98efcba-2e0a-42b7-88b6-14e0060c4d41>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Greeley Square, NY Math Tutor
Find a Greeley Square, NY Math Tutor
...As a graduate student in a biomedical science field, I'm a huge biology dork. I've taken too many graduate biology courses to count. I could talk about biology for hours.
17 Subjects: including calculus, GRE, SAT writing, Regents
Hi, I'm Dan. I grew up in Brooklyn and now I live in Manhattan with my fiancee and two cats. I am within one year of earning my Ph.D. in chemistry from NYU.
14 Subjects: including calculus, physics, prealgebra, GRE
...Francis College and College of New Rochelle. Presently, I teach mathematics at Boricua College. I have also been tutoring math on individual one-on-one basis for the past seven years, as an
independent contractor throughout New York City five boroughs.
9 Subjects: including calculus, algebra 1, algebra 2, French
...I just graduated from Eckerd College (a small liberal arts school in St. Petersburg, FL) where I received a B.A. (High Honors) in philosophy. While at Eckerd, I cultivated my passion for
reading, writing and editing; I am confident that I can help to foster a similar passion in my students.
42 Subjects: including calculus, elementary (k-6th), phonics, prealgebra
I am a retired middle school English teacher of 25 years. I have taught and tutored English, Reading, Writing, ESL, Math and Social Studies. My teacher exam (L.A.S.T.) score was 294/300 with 2
perfect essay scores.
35 Subjects: including algebra 2, SAT math, precalculus, algebra 1
Rescaling constraints, BRST methods, and refined algebraic quantisation
Martínez Pascual, Eric (2012) Rescaling constraints, BRST methods, and refined algebraic quantisation. PhD thesis, University of Nottingham.
We investigate the canonical BRST–quantisation and refined algebraic quantisation within a family of classically equivalent constrained Hamiltonian systems that are related to each other by rescaling
constraints with nonconstant functions on the configuration space. The quantum constraints are implemented by a rigging map that is motivated by a BRST version of group averaging. Two systems are
considered. In the first one we avoid topological built–in complications by considering R^4 as phase space, on which a single constraint, linear in momentum is defined and rescaled. Here, the rigging
map has a resolution finer than what can be extracted from the formally divergent contributions to the group averaging integral. Three cases emerge, depending on the asymptotics of the scaling
function: (i) quantisation is equivalent to that with identity scaling; (ii) quantisation fails, owing to nonexistence of self–adjoint extensions of the constraint operator; (iii) a quantisation
ambiguity arises from the self–adjoint extension of the constraint operator, and the resolution of this purely quantum mechanical ambiguity determines the superselection structure of the physical
Hilbert space. The second system we consider is a generalisation of the aforementioned model, two constraints linear in momenta are defined on the phase space R^6 and their rescalings are analysed.
With a suitable choice of a parametric family of scaling functions, we turn the unscaled abelian gauge algebra either into an algebra of constraints that (1) keeps the abelian property, or, (2) has a
nonunimodular behaviour with gauge invariant structure functions, or, (3) contains structure functions depending on the full configuration space. For cases (1) and (2), we show that the BRST version
of group averaging defines a proper rigging map in refined algebraic quantisation. In particular, quantisation case (2) becomes the first example known to the author where structure functions in the
algebra of constraints are successfully handled in refined algebraic quantisation. Prospects of generalising the analysis to case (3) are discussed.
Item Type: Thesis (PhD)
Supervisors: Louko, J.M.T.
Uncontrolled Keywords: Dirac quantisation, BRST, Hilbert space, Constraints
Faculties/Schools: UK Campuses > Faculty of Science > School of Mathematical Sciences
ID Code: 2433
Deposited By: Mr Eric Martinez-Pascual
Deposited On: 21 Nov 2012 11:20
Last Modified: 21 Nov 2012 11:20
At what distance from a long straight wire carrying a current of 5.0 Amps is the magnitude of the magnetic field due... - Homework Help - eNotes.com
At what distance from a long straight wire carrying a current of 5.0 Amps is the magnitude of the magnetic field due to the wire equal to the strength of earth's magnetic field of about 5.0 x 10^-5 T
The magnetic field due to a current flowing through an infinitely long wire is given by `B = (mu*I)/(2*pi*r)` where B is the magnetic field, mu is the permeability of free space and equal to `4*pi*10
^-7` T*m/A, and r is the radial distance.
The Earth's magnetic field is approximately `5*10^-5` T. If a wire carries a current of 5 A, the magnitude of the magnetic field due to the current is `5*10^-5` T at a distance r where:
`5*10^-5 = (4*pi*10^-7*5)/(2*pi*r)`
=> r = `(4*pi*10^-7*5)/(2*pi*5*10^-5)`
=> r = `(2*10^-2)`
The magnetic field around the wire is `5*10^-5` T at a distance `2*10^-2` m from the wire.
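The arithmetic above can be checked numerically with stdlib Python only:

```python
import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A
I = 5.0                    # current in the wire, A
B = 5e-5                   # Earth's field strength, T

# Rearranging B = mu0*I / (2*pi*r) for the radial distance r:
r = mu0 * I / (2 * math.pi * B)
print(r)  # ~0.02 m, i.e. 2*10^-2 m, matching the answer above
```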
The largest interior angle of a kite is 140 °. Side measures are 1.6 m and 1.2 m. Determine the shortest diagonal... - Homework Help - eNotes.com
The largest interior angle of a kite is 140 °. Side measures are 1.6 m and 1.2 m. Determine the shortest diagonal line kite.
Let ABCD be a kite in which AB = 1.2, BC = 1.6 and angle ABC = 140°.
Join A to C.
In triangle ABC, by the sine law,
sin(140°)/AC = sin(B)/1.2 = sin(A)/1.6,
where A and B denote the two remaining angles of triangle ABC. Also
A + B = 40°
B = 40° - A
sin(B) = sin(40° - A) = sin(40°)cos(A) - cos(40°)sin(A)
(0.64 cos(A) - 0.77 sin(A))/1.2 = sin(A)/1.6
16 cos(A) ≈ 38 sin(A)
tan(A) ≈ 16/38, so A ≈ 22.8°.
Join B to D. BD is perpendicular to AC, and AC bisects BD, say at M.
In triangle AMB,
BM = 1.2 × sin(22.8°) ≈ 0.47
BD = 2 × BM = 2 × 0.47 = 0.94 m
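The same computation without the intermediate rounding (0.64, 0.77, 16/38), in stdlib Python:

```python
import math

AB, BC = 1.2, 1.6
# The remaining angles of triangle ABC sum to 40 degrees; the sine-rule
# relation 1.6*sin(40deg - A) = 1.2*sin(A) solves in closed form for tan(A).
A = math.atan(BC * math.sin(math.radians(40)) /
              (AB + BC * math.cos(math.radians(40))))
BM = AB * math.sin(A)   # half of the short diagonal BD
BD = 2 * BM
print(round(math.degrees(A), 1), round(BD, 2))  # A is about 23 degrees, BD about 0.94 m
```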
Homework Help
Posted by Maria on Thursday, March 13, 2008 at 1:16am.
describe what the graph of interval (-4,10) looks like.
• algebra - drwls, Thursday, March 13, 2008 at 5:45am
That is merely the x-axis range of a graph, from -4 to 10. You do not say what the function is, or what is being "plotted" as a curve on the y vs. x graph in that range.
Your question seems incomplete.
Primary Mathematics World Contest
We have Primary Mathematics World Contest problems for the individual contest; they may be useful for your students.
1. There are four kinds of dollar-notes (or dollar-bills) of value $1, $5, $10, and $50 respectively. There is a total of nine dollar-notes, with at least one dollar-note of each kind. If the total
value of these dollar-notes is $177, how many $10 dollar-notes are there ?
2. A bus starts from town A to town B and another bus starts from town B to town A on the same road. They run with constant speed to their destinations and back home without stopping. The buses pass
by each other for the first time at 700 km (kilometers) from town A and they pass by each other for the second time on the way back at 400 km from town B. How many km is it from town A to town B?
3. A contractor requests 2 men to build brick walls. One man can build a brick wall in 9 hours, while the other man can do the same job in 10 hours. However, when the two men work together, there
will be a shortfall of a total of 10 bricks per hour, and it takes them exactly 5 hours to complete the brick wall. Find the total number of bricks used on the wall.
4. Clock A is ten seconds faster than standard time every hour. Clock B is twenty seconds slower than the standard time every hour. If we adjust the two clocks to standard time at the same time, then
within 24 hours Clock A shows 7:00 while Clock B shows 6:50. What is the standard time at that moment ?
5. How many different isosceles triangles of perimeter 25 units can be formed where each side is a whole number (integer) of units?
6. There are four kinds of dollar-notes (or dollar-bills) of value $1, $5, $10, and $50 respectively. There is a total of nine dollar-notes, with at least one dollar-note of each kind. If the total
value of these dollar-notes is $177, how many $10 dollar-notes are there?
7. Peter begins counting up from 100 by 7’s (100, 107, …) and Mary begins counting down from 1000 by 8’s (1000, 992, …) at the same time. If they count at the same rate, what number will they say at
the same time?
8. A bus starts from town A to town B and another bus starts from town B to town A on the same road. They run with constant speed to their destinations and back home without stopping. The buses pass
by each other for the first time at 700 km (kilometers) from town A and they pass by each other for the second time on the way back at 400 km from town B. How many km is it from town A to town B?
9. In the figure, MN is a straight line. The angles a, b and c satisfy the relations, b:a = 2:1 and c:b = 3:1. Find angle b.
10. A square floor is tiled with congruent square tiles. The tiles on the two diagonals of the floor are black. The rest of the tiles are white. If there are 101 black tiles, what is the total number
of white tiles?
11. In trapezoid ABCD, segments AB and CD are both perpendicular to BC. Diagonals AC and BD intersect at E. If AB = 9, BC = 12 and CD = 16, what is the area of triangle BEC?
12. Refer to the diagram below. In rectangle ABCD, F is the midpoint of AB, BC = 3BE, AD = 4HD. If the area of rectangle ABCD is 300 square-units, how many square-units is the area of the shaded
13. I, II, and III are three semi-circles of different sizes. If the ratio of the diameters of I , II and III is 3:4:5, and the area of III is 24cm2, how many cm2 is the sum of the areas of I and II?
14. Find the value of:
15. Find the fraction with the smallest denominator between 97/36 and 96/35 !
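For instance, problem 1 (repeated as problem 6) can be verified by brute force over the note counts:

```python
# Nine notes of value $1/$5/$10/$50, at least one of each, totalling $177.
solutions = [(a, b, c, d)
             for a in range(1, 10) for b in range(1, 10)
             for c in range(1, 10) for d in range(1, 10)
             if a + b + c + d == 9 and a + 5*b + 10*c + 50*d == 177]
print(solutions)  # [(2, 3, 1, 3)] -- so there is exactly one $10 note
```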
July 26, 2011 | Filed Under | {"url":"http://mathandflash.com/primary-mathematics-world-contest/","timestamp":"2014-04-21T14:43:17Z","content_type":null,"content_length":"75913","record_id":"<urn:uuid:ec75aca7-40c1-449d-9b6b-8ef49867448f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
Surprise, AZ Algebra 1 Tutor
Find a Surprise, AZ Algebra 1 Tutor
...I have a fingerprint card that shows I have passed a criminal background check. Letters of reference available too.I am certified by the Arizona Department of Education as "Highly Qualified"
to teach Middle Grades Math. I have tutored students for the AIMs test, occupational math and college level math.
24 Subjects: including algebra 1, chemistry, English, writing
Hi, my name is Keenan and I'm a professionally registered geologist with the Arizona Board of Technical Registration and earned a Bachelors and Masters in Geology at Arizona State University.
I've taught Geology as an Adjunct Faculty at Glendale, Scottsdale, and South Mountain Community Colleges an...
7 Subjects: including algebra 1, English, writing, elementary math
...I currently teach high school Special Education in the Glendale Union High School District. In my classes, I have many students with dyslexia and have great success helping them achieve their
educational, social, and career/life goals. My bachelor's degree from Chapman University is in Sociology.
40 Subjects: including algebra 1, reading, English, writing
...Math can be easy if you know how to approach and learn it. My strongest desire as a tutor is to help students conquer their fear of math and learn the joy of mastering it. Many of the students
that I tutored in college had failed subjects like college algebra two and even three times and they were desperate to pass their required math classes in order to get their degree.
15 Subjects: including algebra 1, calculus, geometry, algebra 2
I have taught at a valley high school for several years. I have taught all levels of high school math. I try to explain math in a way that the student can understand instead of using too much
math language.
10 Subjects: including algebra 1, geometry, SAT math, algebra 2
Why is addition of observables in quantum mechanics commutative?
I am no expert in the field. I hope the question is suitable for MO.
I once followed a quantum mechanics course aimed at mathematicians. Instead of the usual motivations coming from experiment at the turn of the 19th century, the following argument (more or less) was
given to show that the QM formalism is in some sense unavoidable.
When one does physics, he is interested in measuring some quantity on a given state of the universe. The quantity (say the speed of a particle) is defined experimentally by the tool used to do the
measure. We define such an instrument, with a given measure unit, an observable. So for every state and every observable we get a real number.
We can now define a sum and a product of observables. These are obtained by performing the two measures and then adding or multiplying their values. Similarly we can define scalar multiplication.
These operations are then associative, but there is no reason why they should be commutative, since performing the first measurement can (and indeed does) change the state of the universe. For some
reason I cannot understand, however, addition is assumed to be commutative. I also see no reason why multiplication should distribute over addition. We can now also consider observables with complex values,
by linearity.
At this point observables form an $\mathbb{R}$-algebra. We introduce a norm on it as follows. The norm of an observable is the sup of the absolute values of the quantities which can be measured. Every
instrument will have a limited scale, so this is a real number. By definition this is a norm. Moreover it satisfies $\|A B \| \leq \|A\| \| B \|$. We can now formally take the completion of our
algebra and obtain a Banach algebra.
Finally we define an involution * on our algebra by complex conjugation of observables. This yields a Banach *-algebra, and the third assumption which is mysterious to me is that the $C^*$ identity $\|A^* A\| = \|A\|^2$ holds.
Finally we can use the Gelfand-Naimark theorem to represent the given algebra as an algebra of operators on a Hilbert space. If this turns out to be separable, it is isomorphic to $L^2(\mathbb{R}^3)$
and we recover the classical Schrodinger formalism.
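To make the three ingredients concrete in finite dimensions, here is a quick numerical check (my own illustration, not part of the course being described): for Hermitian matrices, addition commutes, multiplication generally does not, and the $C^*$ identity holds for the operator norm.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_observable(n):
    """A random Hermitian matrix, standing in for an observable."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

A = random_observable(4)
B = random_observable(4)

# Addition of operators is commutative...
assert np.allclose(A + B, B + A)
# ...but multiplication generally is not.
assert not np.allclose(A @ B, B @ A)

# The C*-identity ||A* A|| = ||A||^2 holds for the operator (spectral) norm.
op_norm = lambda M: np.linalg.norm(M, 2)
assert np.isclose(op_norm(A.conj().T @ A), op_norm(A) ** 2)
```

Of course this only illustrates the axioms in a model where they are already known to hold; it does not answer why they should hold for physically defined observables.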
The problems
In this approach I see three deductions which seem arbitrary: addition is commutative, multiplication is distributive and the $C^*$ identity holds. Is there any kind of hand-waving which can justify
these? In particular
Why is addition of observables commutative, while multiplication is not?
quantum-mechanics intuition
Hurriedly looking over your question, don't you really want to ask why addition of observables is commutative? I mean, we know why addition of operators is commutative, but that doesn't seem to be
what you're asking. – Yemon Choi Feb 6 '10 at 11:48
Sorry, I edit it. – Andrea Ferretti Feb 6 '10 at 11:50
1 It is not hand-waving that justifies any of it. It is the agreement with experiment that justifies all of it. The "unreasonable effectiveness" of mathematics in describing the universe has been
discussed by Wigner. – Steve Huntsman Feb 6 '10 at 16:30
1 I don't think that is how addition and multiplication are defined. In fact it is definitely not that way, since we know the answer for quantum mechanics. Consider the finite dimensional case.
Observables are modeled by certain operators and the numbers you get from measurement are eigenvalues. But operators' addition is not defined as addition of their eigenvalues unless you can
simultaneously diagonalize them. – MBN Feb 6 '10 at 16:43
2 @Georges I did not mean for that to come across as condescending, or imply that I am in any way so brilliant that I find this book simple. I simply meant, that reading the book in the way I did,
it did not require the level of effort that serious reading normally does, and most of what I read of it took place walking across campus, lying in bed, or in a coffee shop. I hope I didn't give
the wrong impression. – B. Bischof Feb 6 '10 at 22:00
6 Answers
Your description of the structure of the algebra of observables isn't quite how I'm used to it being. Indeed, I believe that in the best algebraic descriptions of quantum mechanics, addition
is a formal operation, rather than a physical operation as you've described. The best reference I know for this point of view is L.D. Faddeev and O.A. Yakubovskii, 2009, Lectures on Quantum
Mechanics for Mathematics Students. I don't have my copy handy right now, so I will describe my memory of how they set up the algebra of observables.
The first thing to point out is that in the real world, there is no such thing as pure states. This has nothing to do with quantum mechanics, and everything to do with an experimenter's
inability to perfectly measure the initial set-up. For your notion of "state" to make sense physically, it must be something like "repeatable initial set-up for an experiment". Once this is
your notion of state, you are perfectly able to run your experiment 1000 times, make your measurements (each individual run may give a different answer, but you can look at the distribution),
and process them as you want.
So really an observable assigns a probability distribution on $\mathbb R$ to each state. We now demand the following axiom: the (good) functions $\mathbb R \to \mathbb R$ act on the set of
observables by composition. So if $X: \{\text{states}\} \to \{\text{probability distributions}\}$ is an observable, so is $X^2$: the probability that the observable $X^2$ assigns to an
interval $[a,b]$ is the same as the probability that $X$ assigns to the interval $[\sqrt a,\sqrt b]$. In particular, suppose you compose your observable $X$ with a step function $\Theta(x - \xi)$, where $\xi \in \mathbb R$. Then the observable $\Theta(X-\xi)$ measures whether the value of $X$ is more than $\xi$. Then you can check that the full distribution $X$ is recoverable from the knowledge of all the $\Theta(X-\xi)$. In particular, it's recoverable from the expected values of $\Theta(X-\xi)$ on each state. So to set up the algebra of observables, it's enough to know only the expectation values for each observable at each state.
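A small numerical illustration of this recovery (mine, not from F&Y): the expected value of $\Theta(X-\xi)$ is $P(X>\xi)$, so sweeping $\xi$ reconstructs the distribution of $X$ from expectation values alone. The normal distribution below is an arbitrary choice.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)
samples = rng.normal(loc=2.0, scale=1.5, size=100_000)  # repeated "measurements" of X

# E[Theta(X - xi)] = P(X > xi); sweeping xi recovers the survival function,
# hence the full distribution of X.
xs = np.linspace(-4.0, 8.0, 61)
survival = np.array([(samples > xi).mean() for xi in xs])

# Compare with the exact normal survival function.
exact = np.array([0.5 * (1.0 - erf((xi - 2.0) / (1.5 * np.sqrt(2)))) for xi in xs])
assert np.abs(survival - exact).max() < 0.01
```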
Now you should realize the following. The previous paragraph makes sense even for classical mechanics, and in fact is the correct formalism (as there are no pure states). But in quantum
mechanics, it's worse than that. A definite state is one that gives a delta distribution for each observable. Classically, we believe that a sufficiently good experimenter can approximate
definite states to whatever desired accuracy. But there is very good evidence that this fails in the quantum world: no matter what tools you use, there are absolute bounds preventing states
from approximating definite states. So the language of distributions and expectations is absolutely necessary to formalize quantum mechanics, whereas in classical mechanics you could say that
there are idealized definite states, observables are functions on definite states, and states are probability distributions on the space of definite states.
Finally, the question is how to assign algebraical operations to the collection of observables. And here I admit that I don't have a great answer. One possibility is simply to convolve
probability distributions: this gives a commutative addition, for example. Then you could define a commutative associative multiplication by taking logs, adding, and exponentiating, but my
memory is that this does not distribute over addition in general. F&Y define a commutative nonassociative multiplication by $(X,Y) = \frac12\bigl((X+Y)^2 - X^2 - Y^2\bigr)$. Oh, right. The
problem is the following: do you add, multiply, etc. the distributions, or the expectation values? For addition, adding expectation values is the same as the usual convolution of
distributions and then taking expectation. But for multiplication it is not. I don't remember what F&Y do, but I think it might at the level of expectation values.
Wait, so it is in theory possible to have pure states? I thought that QM was nondeterministic. – Harry Gindi Feb 6 '10 at 23:29
@HG: I left out a definition. Given two states $\mu, \nu$, you can mix them with proportion $p, 1-p$, where $0 \leq p \leq 1$ is some probability, by flipping a (classical) coin that lands
Tails with probability $p$ and Heads with probability $1-p$, and depending on how the coin falls, set up your experiment in state $\mu$ or $\nu$. You can call the resulting state $p\mu +
(1-p)\nu$. Then a pure state is a state $\alpha$ so that if $\alpha = p\mu + (1-p)\nu$, then either $\alpha = \mu$ or $\alpha = \nu$. Classically, pure states are the same as definite
states, but this fails in QM. – Theo Johnson-Freyd Feb 6 '10 at 23:53
1 The failure of this is the "nondeterminacy" of QM. Whether or not you can actually set up a pure state (really you cannot), you can (presumably) approximate pure states with arbitrary accuracy.
But in quantum mechanics, there are absolute bounds preventing you from approximating definite states: for any state, there is an observable that does not give a single $\delta$-distribution as output when it eats that state, i.e. it assigns a nontrivial probability distribution. – Theo Johnson-Freyd Feb 6 '10 at 23:56
5 Of course, in every theory, the mechanics is deterministic. In QM, and also in the classical world, "dynamics" or "mechanics" describes how the "value" (a probability distribution or
equivalently an expectation) of an observable on a given state evolves over time. It's reasonable to insist that the $\mathbb R\to \mathbb R$ action on observables is time-independent;
then in particular any theory set up the way I've described has linear, deterministic evolution at the level of probability distributions. – Theo Johnson-Freyd Feb 6 '10 at 23:59
2 @HG: Time evolution in QM is always deterministic. Any state, mixed or pure, evolves deterministically via the Schrodinger equation. The formalism I outlined above fits better with the
Heisenberg picture (states don't evolve), in which case the observables evolve deterministically (and linearly). The only non-deterministic part is the measurements: the pairing between
states and observables. In classical mechanics, the value of an observable on a pure state is determined. In quantum mechanics, even on a pure state the value of an observable is not
determined. – Theo Johnson-Freyd Feb 7 '10 at 17:52
We may think of a state $\omega$ as a functional on the algebra of observables $\mathcal O$ which is interpreted as giving the expected value of each observable. With this in mind, it is
natural to require $\omega$ to be linear (as well as two other usual properties, positivity and normalization).
Thus given two observables $A, B \in \mathcal O$, if we are going to have a sum $A + B$ it should be true that $\omega(A + B) = \omega(A) + \omega(B)$ for any state $\omega$. Since this
gives the values of $A + B$ on every state, it suffices to define it. Since $B + A$ has the same values on every state, $B + A$ is the same observable.
On the other hand, there is no natural way to say what $\omega(AB)$ should be, since states (like expectation values) need not be multiplicative.
So: commutativity of addition of observables reduces to commutativity of addition in $\mathbb C$, since expectations are linear, but nothing analogous applies to multiplication.
This is based on what I've read in F. Strocchi, An Introduction to the Mathematical Structure of Quantum Mechanics.
Note that you can eventually interpret states as arising out of probability distributions, leading to Theo's comments.
Personally I am still a bit hazy on why we postulate a multiplication on observables at all, when (unlike the classical case) there is not a clear physical interpretation of what such an
operation should mean. However, given the full structure of a $C^*$-algebra, one can show that the uncertainty principle (or the existence of complementary observables) requires
noncommutative multiplication, et voila, you have quantum mechanics.
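The linearity argument above can be checked concretely in finite dimensions. This is a sketch in the standard density-matrix formalism (my choice of illustration, not taken from Strocchi): states $\omega(M)=\operatorname{tr}(\rho M)$ are linear, hence insensitive to the order of addition, but not multiplicative.

```python
import numpy as np

rng = np.random.default_rng(2)

def hermitian(n):
    """A random Hermitian matrix, standing in for an observable."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

A, B = hermitian(3), hermitian(3)

# A density matrix: positive semidefinite with unit trace.
c = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = c @ c.conj().T
rho /= np.trace(rho).real

omega = lambda M: np.trace(rho @ M)  # the state as a linear functional

# States are linear on observables, regardless of commutation...
assert np.isclose(omega(A + B), omega(A) + omega(B))
assert not np.allclose(A @ B, B @ A)
# ...but they are not multiplicative.
assert not np.isclose(omega(A @ B).real, (omega(A) * omega(B)).real)
```

The design point is that $\omega(A+B)=\omega(A)+\omega(B)$ is an identity of the trace, so it holds whether or not $A$ and $B$ commute.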
I myself followed the course from Strocchi, and it is exactly his argument I'm trying to fill in. Defining a state as a linear functional seems as arbitrary as defining it as a
multiplicative linear functional. The question is: how can we derive these properties from the description of the measurement process alone? I have tried to explain the full reasoning in
my question. – Andrea Ferretti Jul 28 '10 at 10:20
Basically, a state is the current situation of the universe, an observable is some physical quantity we have some instrument to measure. They have an obvious pairing: in a given state,
apply your instrument and read the result. If you now define sum and multiplication of observables by performing the two measurements one after the other, the result may depend on the order. And
indeed it does for the multiplication. Why not for the addition? – Andrea Ferretti Jul 28 '10 at 10:23
If you say $\omega(A + B)$ is the expected value of preparing the universe in state $\omega$, then measuring $A$, then measuring $B$ in the state the universe was left in after the first
measurement, then summing the results, then indeed $\omega(B + A)$ may be different. Instead, $A + B$ represents a single measurement we can do that results (in expectation) in the sum of
$A$ and $B$. That is a more sensible requirement than the corresponding one for multiplication, since even if $A$ and $B$ are somehow linked (as they are in QM), expectation is still
linear. – Jonathan L Long Jul 29 '10 at 8:41
As for a purely physical definition of addition of observables, I don't have one, and neither does Strocchi; he even notes that the introduction of the sum leads to an extension of the
physically defined observables. The situation for multiplication seems even worse, so I am not sure how much more physical motivation can be given at the purely algebraic level. –
Jonathan L Long Jul 29 '10 at 8:47
While Theo's comments are excellent, yours (Jonathan) get to the heart of the difference between addition and multiplication: states are expectation values, expectations values add, they
don't multiply. (Obviously there is more to be said.) – Toby Bartels Jan 3 '12 at 17:54
The introduction you outlined is basically reminiscent of the old quantum mechanics; in the approach you depicted, however, it is the culmination and not the premise of the construction, and the comment was surely intended to explain the distant origin of these choices. I will now try to summarize the history.
There are basically two approaches to mathematical quantum mechanics. The first one, very complex and stratified in its development but simple in its premise, was discussed by John von Neumann in many papers after the "Foundations of Quantum Mechanics"; the second one is basically conceived as an extension of the first, and it is this second approach you are referring to: the GNS approach.
Both of them are surely derived from a long process of abstraction, beginning with the methods of classical mechanics, joined to the newest evidence from atomic and particle physics of early quantum mechanics.
Just as in classical mechanics we define functions of observable dynamical quantities, so the founders of quantum mechanics conceived it to be possible in quantum mechanics; however, we need to clarify in which sense this is possible. The explanation is not exhausted by the naive extension of the classical theory of measurement, based on real numbers, but needs a clear axiomatics, and this was furnished by John von Neumann (and in some way by Heisenberg, Dirac, and Schrödinger, who before him formulated this axiomatics).
Just as in classical mechanics there is a notion of repeatability and regularity, so there is in quantum mechanics. The true difference is in the outcome of the measurements: deterministic in classical, probabilistic in quantum mechanics. Measurement processes are thus conceived as deterministic in a statistical sense; for example, the component energies of the isotropic harmonic oscillator add exactly in mean value, but the variance is zero only if the considered states are eigenstates. Old quantum mechanics can be founded on a few axioms about measurement, and this led von Neumann, in a natural way, to linear operators acting, as a noncommutative algebra, on Hilbert spaces.
In order to respect the correspondence principle we, following the founders of quantum mechanics, need to hypothesize the existence of intrinsically deterministically evolving observables; it is the measurement process that makes the difference, because with respect to measurements these dynamical "quantities" do not appear as real numbers. This point was first realized some time after the Copenhagen interpretation was developed.
So, after Heisenberg (speaking of noncommutative numbers), Jordan (speaking of matrices), and Schrödinger (speaking of operators acting on a functional space of probability), whose three points of view were shown to be equivalent in a certain strict framework, and after Dirac, they are assumed to be algebraic elements obeying canonical commutation relations generalizing the Poisson algebra.

In brief, the Dirac point of view can be summarized as assuming a Hilbert space structure for the states, and developing step by step a theory of observables compatible with the spirit of the Copenhagen interpretation and with the correspondence principle.
Von Neumann, however, felt the need to obtain an axiomatic foundation based on more general operator algebras and on an axiomatics of measurement, unifying the theoretical framework from scratch, and in fact obtained a more general theory with respect to the theoretical "prejudices" of Heisenberg and Dirac. Von Neumann's point of view was in fact based on general representation theory in the geometrical framework of Banach algebras of operators on Hilbert space, and in particular on the theory of irreducible representations of the CCR; but from this point von Neumann's research continued in search of an intrinsic point of view based on the geometry of observables.
In time it was in fact recognized that part of the quantum theory of measurement is nothing else than a generalized probabilistic theory in a Banach algebra, and that the general setting of the Gelfand-Naimark-Segal construction rebuilds the Hilbert spaces intrinsically. The field extension of this setting is, however, very problematic, and a hierarchy of Hilbert spaces appears. In this way a circle is closed and a new loop is opened: in the GNS approach to quantum mechanics we postulate that the observables live in an abstract algebra, obeying the familiar rules for an algebra with an involution (the * operation). Via Gelfand's theorem the commutative case leads to the algebra of complex-valued continuous functions on a Hausdorff space, the spectrum of the algebra (which yields the ordinary numerical sets of coordinates of classical mechanics), and more generally to a spectral theory, culminating in the GNS construction, which associates to a given linear form a Hilbert space and a representation of the algebra.
The true achievement of this approach, however, is the net of algebras, which is far more general than the Hilbert space interpretation of quantum mechanics; this achievement is useful in relativistic field theory and leads to very far-reaching results, first partially discovered by von Neumann in some papers and then developed by Araki, Haag, and Kastler. In this full setting it is now possible to address in more precise terms the question of the cluster decomposition principle implicit in the deterministic evolutionary scheme, and the question of the repeatability principle of classical and quantum mechanics, and to understand quantitatively something about the limitations to this principle arising from the change of the state of the universe, which can be expressed, for example, in terms of a change of representation caused by the change of the linear form representing the thermokinetic state of the "universe", without any change in the postulates of quantum field theory and the derived quantum mechanics. This is perhaps the perspective of the research about the KMS theorem.
I'm not very satisfied with this summary; anyway, I think you can correct and extend it, and I hope to read and write something else more precise and delimited.
This looks very interesting and covers a lot of ground. Would you object if I made some small edits for language? – Yemon Choi Feb 7 '10 at 0:14
It looks very interesting indeed, but sadly it does not touch in any way the original question. – Andrea Ferretti Feb 8 '10 at 19:56
@ YC sure you can edit it, I'm sure there are a lot of mistakes. Thanks in advance. – Tetis - Gianmarco Bramanti Feb 9 '10 at 20:59
@AF I think the points "touching" the very interesting question you pointed out are two: the choice of an algebra structure for operators is IMHO based both on a conservative approach with
regard to the construction of quantum mechanics (in order to save a predictive theory, so that the effect of measurement on the state of the universe was at first neglected altogether) and on an empirical basis
with respect to the measurement of quantum quantities corresponding to additive classical quantities, e.g. the energies and spin components. Anyway, in old QM these were partly "theorems";
nowadays, entirely definitions. – Tetis - Gianmarco Bramanti Feb 9 '10 at 21:17
Since you raised your question I have been uncomfortable about some points, and I re-read von Neumann in order to calm myself. In von Neumann, however, the problem is neither solved in a deductive way nor justified at all; the additivity is only asserted, as customary, between commuting operators (so there the correspondence principle is granted), and then the additivity is extended to non-commuting operators.
Reflecting on the practical use of non-commuting linear combinations of operators, I realize that the arguments of linear combinations are generally elements of some Lie algebra and their powers, and I remember a point in Landau's first book on classical mechanics that I would like to quote:
Conservation laws.
"Not all integrals of motion have an equally relevant role in mechanics. Among these there are some whose invariance over time has a very deep origin, related to fundamental properties of
space and time, that is, their homogeneity and isotropy. These conserved quantities have an important general property: they are additive, that is, their value for a system
composed of several elements, whose interaction can be neglected, is equal to the sum of the values for each of the elements."
It seems as if Landau is mixing two unrelated points: isotropy and commutativity. In fact this is not the additivity we are thinking of. Anyway there is an important point: the role of symmetry (and, in mathematics, the role of the Stone-von Neumann theorem) and the role of parallel transport (and, in mathematics, the role of gauging space and time), so we can perhaps reconnect the two points raised by Landau to the additivity of observables.
Generalizing, perhaps we need to relate an observable to an infinitesimal element of a continuous symmetry group, an elementary generator of a Lie algebra, in order to justify additivity.

What do you think about this?
This question has bothered me for a long time! Although I don't have an answer, I'd like to mention an approach that looks promising at first, but turns out not to work.
First, recall that in quantum mechanics, you can think of a "state" as a way of preparing a physical system. Theo Johnson-Freyd pointed out in a comment that if you have two states $\rho$
and $\sigma$, you can construct a state that intuitively deserves to be called $\tfrac{1}{2}(\rho + \sigma)$:
Flip a fair coin. If the coin comes up heads, prepare the system in state $\rho$. If the coin comes up tails, prepare the system in state $\sigma$.
This state deserves the name $\tfrac{1}{2}(\rho + \sigma)$ because if $\rho[X]$ is the expectation value of the observable $X$ for a system prepared in state $\rho$, and $\sigma[X]$ is the
expectation value of $X$ for a system prepared in state $\sigma$, the expectation value of $X$ for a system prepared in state $\tfrac{1}{2}(\rho + \sigma)$ should be $\tfrac{1}{2}(\rho[X] +
\sigma[X])$, by the laws of classical probability.
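This coin-flip mixing can be simulated directly. The observable and states below (Pauli $X$ on a qubit, with the states $|0\rangle\langle 0|$ and $|+\rangle\langle +|$) are my own arbitrary choices for the sketch; the Monte Carlo mean matches the expectation in the mixed state $\tfrac12(\rho+\sigma)$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observable: Pauli-X on a qubit; states: |0><0| and |+><+| as density matrices.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
rho = np.diag([1.0, 0.0])
sigma = np.full((2, 2), 0.5)

evals, evecs = np.linalg.eigh(X)

def born_probs(state):
    """Probability of each eigenvalue of X in the given density-matrix state."""
    p = np.array([(evecs[:, i].conj() @ state @ evecs[:, i]).real
                  for i in range(len(evals))])
    p = np.clip(p, 0.0, None)   # guard against tiny negative rounding errors
    return p / p.sum()

def measure(state, n):
    """Sample n measurement outcomes of X on identically prepared states."""
    return rng.choice(evals, size=n, p=born_probs(state))

# Flip a fair coin each run: heads -> prepare rho, tails -> prepare sigma.
n = 200_000
coin = rng.random(n) < 0.5
outcomes = np.where(coin, measure(rho, n), measure(sigma, n))

# The sample mean matches the expectation in the mixed state (rho + sigma)/2.
expected = 0.5 * (np.trace(rho @ X) + np.trace(sigma @ X)).real
assert abs(outcomes.mean() - expected) < 0.02
```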
Now, what happens if we use the same trick to define the sum of two observables? Given two observables $X$ and $Y$, let's define $X + Y$ to be the observable:
Flip a fair coin. If the coin comes up heads, measure $X$ and double the result. If the coin comes up tails, measure $Y$ and double the result.
The laws of classical probability tell us that if $\rho[X]$ and $\rho[Y]$ are the expectation values of $X$ and $Y$ for a system prepared in state $\rho$, the expectation value of $X + Y$
for a system prepared in state $\rho$ should be $\rho[X] + \rho[Y]$, just as you would hope.
Here's where things go pear-shaped. Given an observable $Z$, it makes sense to define $Z^2$ to be the observable:
Measure $Z$ and square the result.
So what's the expectation value of $(X + Y)^2$ for a system prepared in state $\rho$? The laws of classical probability tell us that it's $\rho[X^2] + \rho[Y^2]$. In the formalism of
quantum mechanics, however, $X$ and $Y$ are operators and $\rho$ is a linear functional on the operator space, so
$\rho[(X + Y)^2] = \rho[X^2] + \rho[Y^2] + \rho[XY + YX]$.
If $\rho[XY + YX]$ is nonzero, this formula disagrees with the expectation value for $(X + Y)^2$ that follows from our definitions of $X + Y$ and $Z^2$, according to the laws of classical probability.

In practice, it's not hard to find observables $X$ and $Y$ for which $\rho[XY + YX]$ can be nonzero. For example, let $X$ and $Y$ be the x-spin and z-spin of a spin-1 particle, represented
by the operators
$X = \frac{1}{\sqrt{2}}\left[\begin{array}{ccc}0&1&0\\1&0&1\\0&1&0\end{array}\right],\qquad Y = \left[\begin{array}{ccc}1&0&0\\0&0&0\\0&0&-1\end{array}\right].$
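With these matrices one can check numerically that $\rho[XY+YX]$ is indeed nonzero; the state vector below is an arbitrary choice of mine, not from the answer.

```python
import numpy as np

X = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]) / np.sqrt(2)
Y = np.diag([1.0, 0.0, -1.0])

psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # an arbitrary normalized state
rho = lambda M: psi.conj() @ M @ psi            # expectation value in state psi

cross = rho(X @ Y + Y @ X)
assert abs(cross - 1.0 / np.sqrt(2)) < 1e-12   # the cross term is nonzero here

# So rho[(X + Y)^2] genuinely differs from rho[X^2] + rho[Y^2]:
assert np.isclose(rho((X + Y) @ (X + Y)), rho(X @ X) + rho(Y @ Y) + cross)
```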
'Given an observable $Z$, it makes sense to define $Z^2$ to be the observable Measure $Z$ and square the result' - I think this is where that argument falls over, and not just on a
quantum but even a classical level. I expect to be able to define the observable $X+Y$ readily simply because linearity is a 'plausible' thing to have, but I see no reason to believe that
I'll be able to ascribe any sort of invariant meaning to $Z^2$ in general. – Steven Stadnicki Jul 28 '10 at 17:12
@Steven Stadnicki: To me, given an observable $Z$ and a function $f \colon \mathbb{R} \to \mathbb{R}$, it seems totally natural to define $f(Z)$ as the observable you measure by
measuring $Z$ and then applying $f$ to the result. What do you mean by an "invariant meaning," and why doesn't this definition give an "invariant meaning" to $f(Z)$ for any $Z$ and $f$?
– Vectornaut Jul 28 '11 at 23:05
1 On the contrary: in quantum mechanics as it exists, measuring $Z$ and squaring the result does measure $Z^2$, but flipping a coin (then making an appropriate measurement and doubling the
result) does not measure $X + Y$. – Toby Bartels Jan 3 '12 at 17:58
If I correctly understand, you are misinterpreting the meaning of the product and sum of observables.
When you say "We can now define a sum and a product of observables. These are obtained by performing the two measures and then adding or multiplying their values."
This cannot possibly describe the usual sum A+B and product AB of operators. For the product, it is not even hermitian unless A and B commute. Agreed, A+B is hermitian, but the spectrum of A+B does not contain the result of the sum of a measurement of A followed by a measurement of B (in either order), again unless A and B commute. For a counter-example take $A=\pmatrix{1& 0\cr 0&-1}$ and $B=\pmatrix{0&1\cr 1&0}$.
I hope I correctly understood your question.
Well, the point of my question is in large part metamathematical. That is, the explanation I received was meant as a motivation to study quantum mechanics via C*-algebras, and a posteriori,
via the Gelfand-Naimark-Segal theorem, via operator theory. I admit that the explanation does not convince me completely, but I think it has some points. The reason why I asked here was
trying to fill the gaps. Instead you assume that observables are represented by operators, which is meant to be the conclusion. – Andrea Ferretti Jan 26 '12 at 22:14
| {"url":"http://mathoverflow.net/questions/14376/why-is-addition-of-observables-in-quantum-mechanics-commutative?sort=oldest","timestamp":"2014-04-19T10:14:26Z","content_type":null,"content_length":"118281","record_id":"<urn:uuid:606312d3-4805-46c3-be39-16f805471908>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00082-ip-10-147-4-33.ec2.internal.warc.gz"} |
Check my arithmetic
In the last post, I forecast next year’s September sea ice extent using not only extent data, but latitude of the ice edge. To compute that from extent, I applied a formula given by Eisenman in the
supplemental info of his publication.
A reader pointed out that there seemed to be a discrepancy between his numerical values and mine — and he’s right. I’ve double-checked the approximation formula and my program, but I can’t resolve
the discrepancy.
Therefore, I’d be grateful if some of you would reproduce the calculation of latitude based on extent data, using Eisenman’s approximation formula. The formula as given is this:
The data I used for sea ice extent is here (it’s monthly averages from NSIDC):
If you could report the range of latitude values you get from the extent data in that file, using the formula as given by Eisenman, that would confirm or deny that I've implemented his formula correctly.
P.S. The figure I used for the radius of the earth is 6371 km.
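For orientation only, here is the zeroth-order geometry of an ice cap on an ocean-covered sphere, using the same Earth radius. This is emphatically not Eisenman's approximation (his formula corrects for land, which is part of why it behaves differently at large extents); it is just the idealized relation between extent and edge latitude.

```python
import numpy as np

R = 6371.0  # km, Earth radius used in the post

def polar_cap_latitude(extent_km2):
    """Ice-edge latitude (deg N) if the ice were a symmetric cap on an
    ocean-only sphere: extent = 2*pi*R^2*(1 - sin(lat)). An idealized
    stand-in only, NOT Eisenman's land-adjusted approximation."""
    s = 1.0 - extent_km2 / (2.0 * np.pi * R**2)
    return np.degrees(np.arcsin(s))

# e.g. a September-like extent of 6 million km^2 sits near 77-78 deg N:
lat = polar_cap_latitude(6.0e6)
```

By construction the implied latitude decreases monotonically as extent grows, which is the general shape any sensible extent-to-latitude formula should share.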
15 responses to “Check my arithmetic”
1. I get max 79.855, min 60.269.
[Response: Thanks -- that's exactly what I got. That means that either we both implemented his formula correctly or we both made a mistake which ended up giving us identical results.
But those numbers don't agree with the data used in Eisenman's paper, which range from 65.382 to 80.689. There are some differences, namely that he used daily data whereas these are monthly, and
his data only go through Jan. 2010. Also, he states that the ice edge latitude was *not* computed by this formula, that it's just a convenient approximation.
But -- I used his formula to compute ice edge latitude for daily data and compared that to his stated values. Not only do they not match, there's a consistent bias between the results. The
biggest discrepancy is on the low end -- his latitude range bottoms out at a little over 65N (winter max) whereas we both got down to near 60N. Also, if I plot his values for extent vs. latitude,
the scatterplot does *not* follow the approximate formula -- it's really not that close.
I'm puzzled. Perhaps I should email Eisenman and ask his opinion. In the meantime, if others want to check -- there is still a small chance that we both made the same mistake (but at this point I
doubt it).]
□ Same numbers here.
I’m not quite happy with the 0.49 value chosen by Eisenman from a numerical perspective, I think he cut off the number too early. I am assuming that Eisenmann intended the formulas to
smoothly blend into each other. However, if you evaluate the polynomial in the denominator at 16 * 10^6 km² it has a value of 0.4969420… . Cutting it off as 0.49 for the simplified formula
introduces a jump of about 0.2° at this threshold value between the latitudes of the ice edge as calculated by the long and the short formula, respectively. The resolution of the sea ice
extent data implies a resolution of the latitudes of the ice edge of 0.01°, a factor of 20 less than the original jump. Using the rounded value of 0.497 instead reduces the jump to
insignificant 0.0016°.
So, assuming the complex formula is the better approximation and the simplified formula is supposed to be a smooth continuation I would recommend to use 0.497 (or go the extra mile and use
0.496942) instead of 0.49 in the simplified formula.
2. Yeah, after coding this I looked at the previous post and found the same discrepancies. I’d say it’s worth an email to Eisenman.
[Response: Sent.]
3. I get max 79.855, min 60.269 as well.
Had different min at first, then realised I forgot to calc the lat differently for large extents.
4. Looking into the details, the formula seems to be pretty rough. o1..o5 are the five terms in the parentheses of the given formula, o1 being the order 1 term of 1.56*eps. The contributions of these
terms are given below for extents in the range of 4.0 to 8.0e6 km^2. lat4 uses the terms up to o4 to estimate the latitude. The order 6 term would be a negative one, meaning the formula
underestimates the latitude in the range of interest. The most optimistic guess on the latitude uncertainty would be at least 1 deg.
ext: 4.0
o1: 0.1537, o2: -0.5866, o3: 0.4383, o4: -0.1289, o5: 0.0129
lat5: 79.1, lat4: 79.2
ext: 5.0
o1: 0.1922, o2: -0.9165, o3: 0.8561, o4: -0.3148, o5: 0.0394
lat5: 77.4, lat4: 77.7
ext: 6.0
o1: 0.2306, o2: -1.3198, o3: 1.4794, o4: -0.6527, o5: 0.0980
lat5: 75.5, lat4: 76.4
ext: 7.0
o1: 0.2690, o2: -1.7964, o3: 2.3492, o4: -1.2092, o5: 0.2119
lat5: 72.8, lat4: 75.2
ext: 8.0
o1: 0.3075, o2: -2.3463, o3: 3.5066, o4: -2.0628, o5: 0.4131
lat5: 67.3, lat4: 74.1
The formula is probably not really useful to get a realistic latitude of the ice edge with the accuracy you are looking for. It’s hard to tell whether it is a useful statistical measure to forecast sea ice
extent. It’s biased, but the bias vanishes with decreasing extent.
Further, the formula has problems for high ice extents:
ext: 14.0, lat5: 64.9
ext: 15.0, lat5: 62.5
ext: 16.0, lat5: 60.9
ext: 17.0, lat5: 61.3
ext: 18.0, lat5: 64.2
I’m afraid it’s not really useful for these extents. The zero-order alternative offered in the paragraph might help a little bit.
ext: 14.0, lat5: 64.9, lat0: 62.6
ext: 15.0, lat5: 62.5, lat0: 61.6
ext: 16.0, lat5: 60.9, lat0: 60.7
ext: 17.0, lat5: 61.3, lat0: 59.8
ext: 18.0, lat5: 64.2, lat0: 58.9
Switching the formula for some extents affects the bias of the latitude anomaly.
5. “Also, he states that the ice edge latitude was *not* computed by this formula, that it’s just a convenient approximation.” Perhaps the ice edge latitude values in his paper are determined by
some observational criteria, whereas that formula is simply an approximation of geography? [Response: Yes, I think that's the case.] (OTOH, if the formula assumes solid ice above some latitude
and no ice below it, I’d expect the formula to give higher latitudes than using some observational criteria, which is the opposite of what you found. Curiouser…)
6. 60N is Nunivak in the Bering Sea and Cape Farewell, the southern tip of Greenland, in the North Atlantic.
I doubt that in the 21st century.
7. FWIW, the “equivalent extent” column in Eisenman’s dataset
seems to have been computed using the following equation:
lat = arcsin(1-ε/2π)
8. Also FWIW, a reasonably good fit to Eisenman’s data can be found with:
lat = arcsin(1-ε/(2π(2.2-26.91ε+204ε^2-607ε^3+608ε^4)))
… for all extents. A fifth order equation does little to improve the fit.
9. Doesn’t the radius of the earth depend on where you are?
□ Yep, but for almost all calculations you can throw in your favorite average radius and be accurate to within a few %. The difference between the polar and equatorial radii for earth is only
20 km or .3 %….
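Just to put numbers on that claim: with the standard equatorial and polar radii (about 6378.1 km and 6356.8 km), the difference is roughly 21 km, i.e. about 0.3%. A trivial check:

```python
equatorial_km = 6378.1
polar_km = 6356.8

diff_km = equatorial_km - polar_km          # ~21.3 km
rel_percent = 100.0 * diff_km / equatorial_km  # ~0.33 %

print(round(diff_km, 1), round(rel_percent, 2))
```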
10. We are starting to get into the nitty gritty here, so has anyone taken the shading effect of Greenland into account? It casts a very long shadow and so significantly affects insolation for a
period when the sun is low in the sky.
11. Final word (maybe):
Eyeballing Eisenman's data 1978-2010 shows that it does seem to be a two-domain function, but the breakpoint occurs at about 10 million km², not 16 million. (typo?).
Using gnuplot's fitting, for extents 9.8
lat = arcsin(1-ε/(2π(1.642-2.076ε)))
12. … trying again …
Using gnuplot's fitting, for extents greater than or equal to 9.8e6, I get
lat = arcsin(1-ε/(2π(4.546-61.94ε+361.2ε²-772ε³+401ε^4)))
… and for extents less than 9.8
lat = arcsin(1-ε/(2π(1.642-2.076ε)))
13. OffTopic… continuing on gallows humor on http://thinkprogress.org/climate/2012/12/02/1259261/oklahoma-where-the-denial-comes-right-behind-the-drought/#comment-594911 … Looking at US Drought
Monitor by state, it is clear that Mississippi state is an outlier in the correlation between the amount of republican representatives in the state senate and the drought area of the respective
state. What are Mississippians doing wrong?
This entry was posted in Global Warming.
Posts about metric geometry on Area 777
This is one of those items I should have written about long ago: I first heard about it over a lunch chat with professor Guth; then I was in not one, but two different talks on it, both by Peter
Jones; and now, finally, after it appeared in this algorithms lecture by Sanjeev Arora I happen to be in, I decided to actually write the post. Anyways, it seem to live everywhere around my world,
hence it’s probably a good idea for me to look more into it.
Has everyone experienced those annoying salesmen who keep knocking on your and your neighbors' doors? One of their wonderful properties is that they won't stop before they have reached every single
household in the area. When you think about it, this is in fact not so straightforward to do; i.e. one might need to travel a long way to make sure each house is reached.
Problem: Given $N$ points in $\mathbb{R}^2$, what’s the shortest path that goes through each point?
Since this started as a computational complexity problem (although in fact I learned the analyst's version first), I will mainly focus on the CS version.
Trivial observations:
In total there are about $N!$ paths, hence the naive approach of computing all their lengths and find the minimum takes more than $N! \sim (N/e)^N$ time. (which is a long time)
This can be easily improved by a standard divide and conquer:
Let $V \subseteq \mathbb{R}^2$ be our set of points. For each subset $S \subseteq V$, for each $p_1, p_2 \in S$, let $F(S, p_1, p_2) =$ length of shortest path from $p_1$ to $p_2$ going through all
points in $S$.
Now the value of this $F$ can be computed recursively:
$\forall |S| = 2$, $F(S, p_1, p_2) = d(p_1, p_2)$;
Otherwise $F(S, p_1, p_2) =$
$\min \{ F(S \backslash \{p_2\}, p_1, q) + d(q, p_2) \ | \ q \in S, q \neq p_1, p_2 \}$
What we need is the minimum of $F(V, p_1, p_2)$ over $p_1, p_2 \in V$. There are $2^N$ subsets of $V$, and for any subset there are $\leq N$ choices for each of $p_1, p_2$. Each $F(S, p_1, p_2)$ is a minimum of
at most $N$ numbers, hence $O(N)$ time (as was mentioned in this previous post on the selection algorithm). Summing that up, the running time is $2^N\times N^2\times N \sim O(2^N)$ up to polynomial factors, a
substantial improvement over the most naive way.
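The recursion above is essentially the classical Held-Karp dynamic program. A minimal Python sketch (function names are mine) for the shortest open path through all points, checked against the naive $N!$ brute force on a tiny instance:

```python
import itertools
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def shortest_path_dp(points):
    """Held-Karp: shortest open path through all points, O(2^N * N^2) states."""
    n = len(points)
    # best[(mask, j)] = length of the shortest path covering `mask`, ending at j
    best = {(1 << j, j): 0.0 for j in range(n)}
    for mask in range(1, 1 << n):          # masks in increasing order
        for j in range(n):
            if not (mask >> j) & 1 or (mask, j) not in best:
                continue
            for k in range(n):             # extend the path by point k
                if (mask >> k) & 1:
                    continue
                key = (mask | (1 << k), k)
                cand = best[(mask, j)] + dist(points[j], points[k])
                if cand < best.get(key, float("inf")):
                    best[key] = cand
    full = (1 << n) - 1
    return min(best[(full, j)] for j in range(n))

def shortest_path_bruteforce(points):
    """Try all N! orderings; only feasible for very small N."""
    return min(
        sum(dist(p, q) for p, q in zip(perm, perm[1:]))
        for perm in itertools.permutations(points)
    )

pts = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 0)]
assert abs(shortest_path_dp(pts) - shortest_path_bruteforce(pts)) < 1e-9
```

(Note this is the open-path variant discussed in the post, not the closed tour; closing the tour only changes the final minimization.)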
Can we make it polynomial time? No. It’s well known that this problem is NP-hard, this is explained well in the wikipedia page for the problem.
Well, what can we do now? Thanks to Arora (2003), we can do an approximate version in polynomial time. I will try to point out a few interesting ideas from that paper. The process involved
reminded me of the earlier post on the nonlinear Dvoretzky problem (it's a little embarrassing that I didn't realize Sanjeev Arora was one of the co-authors of the Dvoretzky paper until I checked back on
that post today! >.< ) It turns out they have this whole program about 'softening' classic problems and producing approximate versions.
Approximate version: Given $N$ points in $\mathbb{R}^2$, $\forall \varepsilon > 0$, find a path $\gamma$ through each point such that length $l(\gamma) < (1+\varepsilon)l(\mbox{Opt})$.
Of course we shall expect the running time $T$ to be a function of $\varepsilon$ and $N$, as $\varepsilon \rightarrow 0$ it shall blow up (to at least exponential in $N$, in fact as we shall see
below, it will blow up to infinity).
The above is what I would hope is proved to be polynomial. In reality, what Arora did was one step more relaxed, namely a polynomial-time randomized approximate algorithm. i.e. Given $V$ and $\
varepsilon$, the algorithm produces a path $\gamma$ such that $E(l(\gamma)) - l(\mbox{Opt}) < \varepsilon \, l(\mbox{Opt})$. In particular, by Markov's inequality, more than half the time the route is within $(1+2\varepsilon)$ of the
optimal length.
Theorem (Arora ’03): $T(N, \varepsilon) \sim O(N^{1/\varepsilon})$ for the randomized approximate algorithm.
Later in that paper he improved the bound to $O(N \varepsilon^{C/\varepsilon}+N\log{N})$, which remains the best know bound to date.
Selected highlights of proof:
One of the great features of the approximating world is that we don't care if there are a million points that are extremely close together: we can simply merge them into one point!
More precisely, since we are allowing a multiplicative error of $\varepsilon$, and we have the trivial bound $l(\mbox{Opt}) > \mbox{diam}(V)$, the length is allowed to increase by at least $\varepsilon
\mbox{ diam}(V)$. This means that if we move each point by a distance no more than $\varepsilon \mbox{ diam}(V) / (4N)$ and produce a path $\gamma'$ connecting the new points with $l(\gamma')< (1+\
varepsilon/2)l(\mbox{Opt})$, then we can simply get our desired $\gamma$ from $\gamma'$, as shown:
i.e. the problem is "pixelated": we may bound $V$ in a square box with side length $\mbox{diam}(V)$, divide each side into $8N/\varepsilon$ equal pieces and assume all points are at the center of the
grid cell they lie in (for convenience later in the proof we will assume $8N/\varepsilon = 2^k$ is a power of $2$, and rescale the structure so that each cell has side length $1$. Now the side length of
the box is $8N/\varepsilon = 2^k$):
Now we do this so-called quadtree construction to separate the points (reminds me of Whitney's original proof of his extension theorem, or the dyadic-squares proof that open sets are countable unions of squares), i.e.
bound $V$ in a square box and keep dividing squares into four smaller ones until no cell contains more than one point.
In our case, we need to randomize the quad tree: First we bound $V$ in a box that’s 4 times as large as our grid box (i.e. of side length $2^{k+1}$), shift the larger box by a random vector $(-i/2^
k,-j/2^k)$ and then apply the quad tree construction to the larger box:
At this point you may wonder (at least I did) why do we need to pass to a larger square and randomize? From what I can see, doing this is to get
Fact: Now when we pick a grid line at random, the probability of it being an $i$th level dividing line is $2^i/2^k = 2^{i-k}$.
Keep this in mind.
Note that each site point is now uniquely defined as an intersection of no more than $k$ nesting squares, hence the total number of squares (in all levels) in this quad tree cannot exceed $N \times k
\sim N \times \log{N/\varepsilon}$.
Moving on, the idea for the next step is to perturb any path to a path that cross the sides of the square at some specified finite set of possible “crossing points”. Let $m$ be the unique number such
that $2^m \in [(\log N)/\varepsilon, 2 (\log N)/ \varepsilon ]$ (will see this is the best $m$ to choose). Divide sides of each square in our quad tree into $2^m$ equal segments:
Note: When two squares of different sizes meet, since the number of equally spaced points is a power of $2$, the portals of the larger square are also portals of the smaller one.
With some simple topology (! finally something within my comfort zone :-P) we may assume the shortest portal-respecting path crosses each portal at most twice:
In each square, we run through all possible crossing portals and evaluate the shortest possible path that passes through all sites inside the square and enters and exits at the specified nodes.
There are $(2^{4 \times 2^m})^2 \sim ($side length$)^2 \sim (N/\varepsilon)^2$ possible entering-exiting configurations, each taking polynomial time in $N$ (in fact $\sim N^{O(1/\varepsilon)}$ time)
to figure out the minimum.
Once all subsquares have all their paths evaluated, we may move to the next larger square and spend another $\log(N/\varepsilon) \times (N/\varepsilon)^2$ operations. In total we have
$N \times \log{N/\varepsilon} \times (N/\varepsilon)^2 \times N^{O(1/\varepsilon)}$
$\sim N^{O(1/\varepsilon)}$
which is indeed polynomial in $N/\varepsilon$ many operations.
The randomization comes in because the route produced by the above polynomial time algorithm is not always approximately the optimum path; it turns out that sometimes it can be a lot longer.
Expectation of the difference between our random portal-respecting minimum path $\mbox{OPT}_p$ and the actual minimum $\mbox{OPT}$ is bounded simply by the fact that the minimum path cannot cross the
grid lines more than $\mbox{OPT}$ times. At each crossing, the edge it crosses is at level $i$ with probability $2^{i-k}$. To perturb a level-$i$ crossing into a portal-respecting one requires
adding an extra length of no more than $2 \times 2^{k-i}/2^m \sim 2^{k+1-i}/(\log N / \varepsilon)$:
$\displaystyle \mathbb{E}_{a,b}(\mbox{OPT}_p - \mbox{OPT})$
$\leq \mbox{OPT} \times \sum_{i=1}^k 2^{i-k} \times 2^{k+1-i} / (\log N / \varepsilon)$
$= \mbox{OPT} \times 2 \varepsilon / \log N < \varepsilon \mbox{OPT}$
P.S. You may find images for this post being a little different from previous ones, that's because I recently got myself a new iPad and all images above are done using iDraw, still getting used to
it, so far it's quite pleasant!
Bonus: I also started to paint on iPad~
–Firestone library, Princeton. (Beautiful spring weather to sit outside and paint from life!) | {"url":"http://conan777.wordpress.com/tag/metric-geometry/","timestamp":"2014-04-19T23:27:59Z","content_type":null,"content_length":"264453","record_id":"<urn:uuid:5a0a577e-36c5-49ce-9871-3eb45640a6f9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Harrisburg, TX Algebra 2 Tutor
Find a Harrisburg, TX Algebra 2 Tutor
...As a violin and viola teacher, I have my students use ear training all the time in their lessons. As a performing musician, I have to use my ear training skills on a daily basis. I have been a
musician since I started playing violin when I was four years old.
35 Subjects: including algebra 2, reading, English, writing
...I am a retired state certified teacher in Texas both in composite high school science and mathematics. I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any
time of the day; I try to be as flexible as possible. I try as much as possible to work in the comfort of your own home at a schedule convenient to you.
35 Subjects: including algebra 2, chemistry, physics, calculus
...I am a Trinity University graduate and I have over 4 years of tutoring experience. I really enjoy it and I always receive great feedback from my clients. I consider my client's grade as if it
were my own grade, and I will do whatever it takes to make sure you get it, and at the same time make sure our sessions are easy and enjoyable.
38 Subjects: including algebra 2, English, calculus, reading
...Along with tutoring, helping children read is my greatest passion! Subjects that I tutor (both high school and college level): Algebra (I and II) Pre-Algebra Geometry Pre-Cal Calculus
Trigonometry Chemistry Biology Physics Linear Algebra ACT SAT (math, reading and writing) GMAT GRE Grammar and V...
22 Subjects: including algebra 2, chemistry, calculus, physics
...I have been professionally involved in software development since 1980 and have a reputation for excellence. My software development experience includes web-based, semantic web, and social
networking applications, database development, and systems for financial services, telecommunications, and ...
30 Subjects: including algebra 2, calculus, physics, statistics
Countryside, IL Precalculus Tutor
Find a Countryside, IL Precalculus Tutor
...I am also a proficient user of Microsoft Excel and am able to tutor all of its functionalities.I am a proficient user of Excel and skilled in both facets; Advanced Spreadsheet Formulas and VBA
(Visual Basic for Applications) programming. I provide Excel tutoring for students who seek to learn fo...
18 Subjects: including precalculus, geometry, algebra 1, ASVAB
...I love to help. Math is my specialty - including calculus, geometry, precalculus, and statistics! I have a Bachelor's of Science from California Institute of Technology (CIT), an incredibly
challenging university.
21 Subjects: including precalculus, chemistry, calculus, statistics
...If you need help with standardized testing, my GRE scores were a 680 verbal and 800 quantitative.During my masters degree I was a TA for the intro to computer science course. For three
semesters I taught C++ and Matlab to freshmen and sophomore mechanical engineering students. The course began ...
17 Subjects: including precalculus, calculus, physics, geometry
...My passion for education comes through in my teaching methods, as I believe that all students have the ability to learn a subject as long as it is presented to them in a way in which they are
able to grasp. I use both analytical as well as graphical methods or a combination of the two as needed ...
34 Subjects: including precalculus, English, reading, writing
...I have taken (and got the highest grade) medical school physiology. I am a medical doctor, who practiced for 23 years before leaving the practice of medicine in 2008. I was board certified in
family medicine.
17 Subjects: including precalculus, chemistry, statistics, reading
Papers Published
1. Hall, Kenneth C. and Verdon, Joseph M., Gust response analysis for cascades operating in nonuniform mean flows, AIAA Journal, vol. 29 no. 9 (1991), pp. 1463 - 1471 .
(last updated on 2007/04/10)
The unsteady aerodynamic response of a subsonic cascade subjected to entropic, vortical, and acoustic gusts is analyzed. Field equations for the first-order unsteady perturbation are obtained by
linearizing the time-dependent mass, momentum, and energy conservation equations about a nonlinear, isentropic, and irrotational mean or steady flow. A splitting technique is then used to
decompose the unsteady velocity into irrotational and rotational parts leading to field equations for the unsteady entropy, rotational velocity, and irrotational velocity fluctuations that are
coupled only sequentially. The entropic and rotational velocity fluctuations can be described in closed form in terms of the mean-flow drift and stream functions that can be computed numerically.
The irrotational unsteady velocity is described by an inhomogeneous linearized potential equation that contains a source term that depends on the rotational velocity field. This equation is
solved via a finite-difference technique. Results are presented to indicate the status of the numerical solution procedure and to demonstrate the impact of blade geometry and mean blade loading
on the aerodynamic response of cascades to vortical gust excitations. The analysis described herein leads to very efficient predictions of cascade unsteady aerodynamic phenomena, making it useful
for turbomachinery aeroelastic and aeroacoustic design applications.
Aerodynamics; Mathematical Techniques - Finite Difference Method; Turbomachinery - Blades; Mathematical Techniques - Perturbation Techniques
I need help to prove this in Linear Algebra
March 19th 2013, 12:17 PM #1
I need help to prove this in Linear Algebra
If the linear transformation x → Ax maps R^n onto R^n, then
the columns of A span R^n.
I proved the other direction, but in order for this proof to be complete I need to prove this one, step by step please... it is important that I understand the proof as well. Thanks in advance.
Re: I need help to prove this in Linear Algebra
This is similar to the other problem. Given x = (x[1], ..., x[n])^T, where the superscript T means transpose, i.e., this is a column vector, Ax = x[1]A[1] + ... + x[n]A[n] where A[i] is the ith
column of A and this is the sum of column vectors. The fact that x ↦ Ax is onto means that for every b ∈ ℝ^n there exist an x such that Ax = b. But this also means that the columns of A span ℝ^n.
Re: I need help to prove this in Linear Algebra
Doesn't it follow from the fact that the linear transformation is "onto"?
It means that any vector in $\mathbb{R}^{n}$ can be obtained as Ax, where x is the coordinate vector written under some basis B. That itself means that any vector in $\mathbb{R}^n$ can be written in
terms of $c_1 v_1 + ... +c_n v_n$ where $v_1,..,v_n$ are the columns of $A$ and $c_1,..,c_n$ is a coordinate vector under some basis B. Thus the columns of A span $\mathbb{R}^n$.
Last edited by jakncoke; March 19th 2013 at 01:39 PM.
MathGroup Archive: September 2013 [00082]
[Date Index] [Thread Index] [Author Index]
Re: multiintegral and table
• To: mathgroup at smc.vnet.net
• Subject: [mg131643] Re: multiintegral and table
• From: Bill Rowe <readnews at sbcglobal.net>
• Date: Sun, 15 Sep 2013 07:07:27 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• Delivered-to: l-mathgroup@wolfram.com
• Delivered-to: mathgroup-outx@smc.vnet.net
• Delivered-to: mathgroup-newsendx@smc.vnet.net
On 9/14/13 at 6:03 AM, phutauruk at gmail.com (Parada Hutauruk) wrote:
>Dear all,
>I have a function
>f_1 [x_,y_] = Integrate[2*y + x^2, {x,0,1},{y,0,x}]
Don't do this. f_1 is interpreted by Mathematica as something
named f with Head 1, and is definitely not a subscripted variable.
You cannot use an underscore in names with Mathematica.
Also, did you notice the color of the f when you typed f_2? It
wasn't blue (the default color of an undefined global name), was
it? That should be a big hint that you are doing something wrong.
Getting rid of the underscores:
f1[x_, y_] = Integrate[2*y + x^2, {x, 0, 1}, {y, 0, x}];
f2[x_, y_, Q_] = Integrate[2*y + x^2*Q, {x, 0, 1}, {y, 0, x}];
fTotal[x_, y_, Q_] = f1[x, y] + f2[x, y, Q];
dataxx = Table[{Q, fTotal[x, y, Q]}, {Q, 0, 10, 0.0025}];
will generate the expected plot.
But since both f1 and f2 evaluate to simple expressions and
assuming x, y have not been assigned values, then the following
code generates the same plot more efficiently
f1 = Integrate[2*y + x^2, {x, 0, 1}, {y, 0, x}];
f2 = Integrate[2*y + x^2*Q, {x, 0, 1}, {y, 0, x}];
Plot[f1+f2, {Q, 0, 10}]
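For readers without Mathematica: both integrals here have simple closed forms (f1 + f2 = 11/12 + Q/4, so the plotted quantity is a straight line in Q). A quick pure-Python midpoint-rule check of that closed form (my own sketch, not part of the original thread):

```python
def double_integral(f, nx=400, ny=400):
    """Midpoint rule for the integral of f(x, y) over 0<=x<=1, 0<=y<=x."""
    total = 0.0
    hx = 1.0 / nx
    for i in range(nx):
        x = (i + 0.5) * hx
        hy = x / ny            # inner interval [0, x] depends on x
        for j in range(ny):
            y = (j + 0.5) * hy
            total += f(x, y) * hx * hy
    return total

def f_total(x, y, Q):
    # integrands of f1 and f2 from the post, summed
    return (2 * y + x**2) + (2 * y + Q * x**2)

for Q in (0.0, 4.0, 10.0):
    numeric = double_integral(lambda x, y: f_total(x, y, Q))
    exact = 11.0 / 12.0 + Q / 4.0
    assert abs(numeric - exact) < 1e-3
```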
One other thing. Mathematica has several built-in things that
are named with a single uppercase letter. So, it is wise to get
in the habit of not using single uppercase letters as variables.
That avoids conflict with built-in functions and likely will
save you a lot of grief debugging your code.
Midpoint proof
How would you write a proof of this theorem: "If a segment is given, then it has exactly one midpoint"?
Please note that the numbering of the postulates (P) is based on my geometry book. Also, I'm just in 9th grade geometry, so please don't use differential equations or any math beyond basic
Euclidean geometry. This will help me better understand the concept.
This is what I did so far:
Let points A and B be the end points of a line segment, AB.
By P2-3 (For any two points on a line, and a given unit of measure, there is a unique positive number called the measure of the distance between the two points) there is a defined distance between
the points A and B on line segment AB
By the "definition of between", a point Q is between points A and B if and only if each of the following conditions hold.
1.) A, Q, B are collinear.
2.) AQ + BQ = AB
Condition 1: A,Q,B are collinear
By P1-1 ( Through any two points there is exactly one line) and P1-3 (There are at least two points on a line), points A, Q and B are on the same line. Therefore, by the definition of collinear
(which states that points are collinear if and only if they are on the same line), A,Q and B are collinear.
Condition 2: AQ + BQ = AB, Let Q be the midpoint
Since Q is the only common point of segments AQ and BQ, then AQ and BQ both intersect at Q. Therefore, the end points are A and B. Since A, Q and B are all collinear, they form a line segment AB.
Thus Q is in between points A and B in AB.
This is where I kind of get lost....
AQ + BQ = AB
Let A=2, B=8, Q=5
l 2-5 l + l 8-5 l = l 2-8 l
By the "definition of midpoint", a point Q is the midpoint of a segment AB if and only if Q is between A and B and AQ=BQ.
Does that prove Q is the midpoint? I don't think so because:
1.) I haven't shown that the arithmetic I did in an attempt to show the distances between AQ and BQ are equal to that of AB works for all cases.
2.) I'm not sure I have adequately proven that A, Q and B are on the same line.
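For what it's worth, here is how the uniqueness half is commonly sketched. This is only a sketch, and your book's postulate numbering may differ; the key ingredient is the Ruler Postulate, which puts the points of a line in one-to-one correspondence with the real numbers.

```latex
% Sketch: uniqueness of the midpoint of segment AB.
% Assumes the Ruler Postulate: points of line AB correspond to real
% coordinates, and distance is the absolute difference of coordinates.
Suppose $Q$ and $Q'$ are both midpoints of $\overline{AB}$.
By the definition of midpoint, $AQ = QB$ and (since $Q$ is between
$A$ and $B$) $AQ + QB = AB$, so $AQ = \tfrac{1}{2}AB$; likewise
$AQ' = \tfrac{1}{2}AB$. Assign $A$ the coordinate $0$ and $B$ the
coordinate $AB > 0$. Since $Q$ and $Q'$ lie between $A$ and $B$,
each has coordinate $\tfrac{1}{2}AB$, hence $Q = Q'$.
Existence: the point with coordinate $\tfrac{1}{2}AB$ exists by the
Ruler Postulate, is between $A$ and $B$, and satisfies $AQ = QB$.
```

Note this avoids the pitfall in the numeric attempt above: checking one choice of coordinates (A=2, B=8, Q=5) cannot establish the theorem for all segments.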
February 19th 2010, 10:48 PM
The line segment joining A(2,4) and B(-3,-5) is extended through each end by a distance equal to twice its original length. Find the coordinates of the new endpoints.
Please help me... And show the solution... Thanks!
February 19th 2010, 11:45 PM
Prove It
The horizontal distance of this segment is 5 units and the vertical distance is 9 units.
So double the horizontal and vertical distances and add them in both directions.
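Following that hint, a quick sketch of the arithmetic (my own code, not from the thread): the vector from B to A is (5, 9); adding twice that vector beyond A, and twice it the other way beyond B, gives the new endpoints.

```python
A = (2, 4)
B = (-3, -5)

# component-wise vector from B to A: (5, 9)
dx, dy = A[0] - B[0], A[1] - B[1]

# extend through each end by twice the segment's length
new_beyond_A = (A[0] + 2 * dx, A[1] + 2 * dy)
new_beyond_B = (B[0] - 2 * dx, B[1] - 2 * dy)

print(new_beyond_A, new_beyond_B)  # (12, 22) (-13, -23)
```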
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 612.05046
Autor: Burr, Stefan A.; Erdös, Paul; Faudree, Ralph J.; Rousseau, C.C.; Schelp, R.H.; Gould, R.J.; Jacobson, M.S.
Title: Goodness of trees for generalized books. (In English)
Source: Graphs Comb. 3, 1-6 (1987).
Review: For any graph G, let p(G) denote the cardinality of the vertex set of G, let \chi(G) denote the vertex chromatic number of G and let s(G) denote the "chromatic surplus" of G, i.e. the
smallest number of vertices in a color class under any \chi(G)-coloring of the vertices of G. For any pair of graphs F and G, r(F,G) is the least number N so that in every 2-coloring of the edges of
K[N] either there is a copy of F with all of its edges in the first color class or a copy of G with all of its edges in the second color class.
It is easy to see that for connected graphs F and G with p(G) \geq s(F):
r(F,G) \geq (\chi(F)-1)(p(G)-1)+s(F).
We say that G is F-good if equality holds. The paper is devoted to a study of those graphs F for which all large trees are F-good. The results include: all sufficiently large trees are K(1,1,m)-good.
Classif.: * 05C55 Generalized Ramsey theory
Keywords: chromatic surplus; coloring; F-good
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
the first resource for mathematics
On multi-point boundary value problems for linear ordinary differential equations with singularities.
(English) Zbl 1058.34012
The authors investigate the singular linear differential equation
$u^{(n)} = \sum_{i=1}^{n} p_i(t)\,u^{(i-1)} + q(t) \qquad (1)$
on $[a,b] \subset \mathbb{R}$, where the functions $p_i$ and $q$ can have singularities at $t = a$, $t = b$ and $t = t_0 \in (a,b)$. This means that $p_i$ and $q$ are not integrable on $[a,b]$. Equation (1) is studied with the boundary conditions
$u^{(i-1)}(t_0) = 0 \text{ for } 1 \le i \le n-1, \quad \sum_{j=1}^{n-n_1} \alpha_{1j} u^{(j-1)}(t_{1j}) + \sum_{j=1}^{n-n_2} \alpha_{2j} u^{(j-1)}(t_{2j}) = 0, \qquad (2)$
$u^{(i-1)}(a) = 0 \text{ for } 1 \le i \le n-1, \quad \sum_{j=1}^{n-n_0} \alpha_j u^{(j-1)}(t_j) = 0, \qquad (3)$
where $t_{1j}, t_{2j}, t_j$ are certain interior points in $(a,b)$. The authors introduce the Fredholm property for these problems, which means that the unique solvability of the
corresponding homogeneous problem implies the unique solvability of the nonhomogeneous problem for every $q$ which is weight-integrable on $[a,b]$. Then, for the solvability of a problem
having the Fredholm property, it suffices to show that the corresponding homogeneous problem has only the trivial solution. In this way, the authors prove the main theorems on the existence of a unique
solution of (1),(2) and of (1),(3). Examples verifying the optimality of the conditions in various corollaries are shown as well.
34B10 Nonlocal and multipoint boundary value problems for ODE
34B05 Linear boundary value problems for ODE
34B16 Singular nonlinear boundary value problems for ODE | {"url":"http://zbmath.org/?q=an:1058.34012","timestamp":"2014-04-20T20:57:32Z","content_type":null,"content_length":"26055","record_id":"<urn:uuid:110d5c92-a167-434d-8bc8-7b0d1e808472>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
Issue with float to binary
We need to convert Float to Binary and we are using the following APIs:
1) Float.floatToIntBits(<float>)
2) Integer.toBinaryString(<intbits>)
The usage is:
String <string> = Integer.toBinaryString(Float.floatToIntBits(<float>));
For most cases we get the correct Binary String. However, when the float variable has value = 8 digit integers, we run into the following problem:
Problem: floatToIntBits Returns the SAME output for different Input values.
public class TestFloatToBinary
{
    // Convert each float value to its IEEE 754 bit pattern and binary string
    public static void main(String[] args)
    {
        float f1 = 17012219;
        float f2 = 17012220;
        float f3 = 17012221;
        float f4 = 17012222;

        String sBinary1 = Integer.toBinaryString(Float.floatToIntBits(f1));
        String sBinary2 = Integer.toBinaryString(Float.floatToIntBits(f2));
        String sBinary3 = Integer.toBinaryString(Float.floatToIntBits(f3));
        String sBinary4 = Integer.toBinaryString(Float.floatToIntBits(f4));

        System.out.println("float to intbits for f1 = " + Float.floatToIntBits(f1));
        System.out.println("float to intbits for f2 = " + Float.floatToIntBits(f2));
        System.out.println("float to intbits for f3 = " + Float.floatToIntBits(f3));
        System.out.println("float to intbits for f4 = " + Float.floatToIntBits(f4));
        System.out.println("Binary for f1 = " + sBinary1);
        System.out.println("Binary for f2 = " + sBinary2);
        System.out.println("Binary for f3 = " + sBinary3);
        System.out.println("Binary for f4 = " + sBinary4);
    }
}
C:\work\krishna\java>java TestFloatToBinary
float to intbits for f1 = 1266797310
float to intbits for f2 = 1266797310
float to intbits for f3 = 1266797310
float to intbits for f4 = 1266797311
Binary for f1 = 1001011100000011100101011111110
Binary for f2 = 1001011100000011100101011111110
Binary for f3 = 1001011100000011100101011111110
Binary for f4 = 1001011100000011100101011111111
As seen above, floatToIntBits and hence Integer.toBinaryString return the SAME value for Inputs 17012219, 17012220, 17012221.
What is the reason why this happens?
Are we missing something here??
Appreciate your help!
Thank you.
Jesper Young (Java Cowboy / Saloon Keeper) replied:

You need to understand that floating-point data types such as float and double do not have infinite precision. You cannot store a number with an arbitrary number of digits and expect that it is stored exactly. The precision of a 32-bit IEEE 754 floating-point number such as float in Java is about 6 or 7 significant decimal digits, so it's no surprise that numbers differing only beyond roughly the 7th digit look the same in binary form.
IEEE Standard 754 Floating Point Numbers
IEEE floating-point standard (Wikipedia)
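The collapse is easy to reproduce outside Java too. Here is a short sketch (Python chosen purely for illustration; `struct.pack` with the `'>f'` format performs the same round-to-binary32 conversion that `Float.floatToIntBits` relies on):

```python
import struct

def float_to_int_bits(x):
    """Round x to IEEE 754 binary32 and return the raw bit pattern,
    mirroring Java's Float.floatToIntBits for ordinary (non-NaN) values."""
    return struct.unpack('>I', struct.pack('>f', x))[0]

# The first three neighbouring inputs round to the same 32-bit pattern.
bits = [float_to_int_bits(v) for v in (17012219, 17012220, 17012221, 17012222)]
print(bits)  # [1266797310, 1266797310, 1266797310, 1266797311]
```

Above 2^24 = 16777216 the spacing between consecutive binary32 values is already 2, so 17012219, 17012220 and 17012221 all round to the same representable number.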
Another reply:

Apart from the precision problem, there is something else not right here. You start with positive numbers, and end up with a String starting with a 1. If you count the bits printed, you find they only come to 32 if you use a negative number. So your toBinaryString is losing its leading zeroes.
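The dropped zeros can be restored by padding the string to 32 characters. A quick sketch (Python for illustration; in Java one can prepend '0' characters until the string is 32 long):

```python
bits = 1266797310                 # the pattern printed for f1..f3 above
raw = format(bits, 'b')           # 31 characters -- the leading zero is dropped
padded = format(bits, '032b')     # zero-padded to the full 32 bits
print(len(raw), len(padded))      # 31 32
```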
A moderator replied:

Please don't cross-post. I deleted the copy of this thread in the Advanced forum.
Another reply:

Another thing that may be important to take into account is that all NaN values will produce the same result using Double.doubleToLongBits(). In this case, to differentiate one NaN from another you might consider using Double.doubleToRawLongBits().
Another reply:

Because of the limited precision of the floating-point representation, f1, f2 and f3 have the same value. Try adding [...] to your example.
| {"url":"http://www.coderanch.com/t/404250/java/java/float-binary","timestamp":"2014-04-17T19:05:17Z","content_type":null,"content_length":"35968","record_id":"<urn:uuid:9ad48766-208b-4bbe-9ce6-5026234ebde5>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Imaginary and Complex Solutions
Consider the following problem.
We can identify that
Now, substituting this into the Quadratic Formula:
Simplifying the expression under the radical gives:
Notice that there is a negative number under the square root symbol. If you are familiar with imaginary numbers, you know that the square root of a negative number is an imaginary number, which will
cause the solutions to this example to also be imaginary or complex numbers.
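The original coefficients are not reproduced above, but the behaviour can be sketched with any quadratic whose discriminant is negative. A minimal illustration (Python's cmath; the coefficients below are invented for the example):

```python
import cmath

# Hypothetical quadratic x^2 + 2x + 5 = 0: discriminant = 4 - 20 = -16 < 0.
a, b, c = 1, 2, 5
disc = b * b - 4 * a * c
root1 = (-b + cmath.sqrt(disc)) / (2 * a)
root2 = (-b - cmath.sqrt(disc)) / (2 * a)
print(root1, root2)  # (-1+2j) (-1-2j), a complex-conjugate pair
```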
In this lesson, we chose to expose you to this situation, but not to provide details on imaginary and complex solutions. We will add a lesson on this topic after we have introduced imaginary and
complex numbers. | {"url":"http://www.algebrahelp.com/lessons/equations/quadratic/pg3.htm","timestamp":"2014-04-18T00:20:05Z","content_type":null,"content_length":"6556","record_id":"<urn:uuid:5d5f5571-3d0c-4fa9-9aae-58df6fe65a3c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to work a fractional exponent? Need non decimal answer!
December 19th 2010, 07:56 AM #1
So on a recent test I ran into this. I wasn't sure how to work the first part because I couldn't figure out the fraction for it.
The question said to
My first instinct is to solve the first, then subtract the second from it, then multiply it by PI; however, I couldn't figure out how to turn the first into a fractional answer.
Any help is appreciated.
$\displaystyle a^{\frac{5}{4}} = \sqrt[4]{a^5} = \sqrt[4]{a^4\cdot a} = \sqrt[4]{a^4}\cdot \sqrt[4]{a} = a\sqrt[4]{a}$.
If we are interested in both the positive and negative values, there would be a $\pm$ in front.
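As a quick numerical check of the identity $a^{5/4} = a\sqrt[4]{a}$ for positive bases (Python, purely for illustration):

```python
# Verify a**(5/4) == a * a**(1/4) for a few positive bases.
for a in (2.0, 7.0, 81.0):
    assert abs(a ** 1.25 - a * a ** 0.25) < 1e-9 * a ** 1.25
print("identity holds for the sampled bases")  # e.g. 81**1.25 == 81 * 3 == 243
```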
| {"url":"http://mathhelpforum.com/algebra/166580-how-work-fractional-exponent-need-non-decimal-answer.html","timestamp":"2014-04-18T01:29:08Z","content_type":null,"content_length":"37821","record_id":"<urn:uuid:d0bd19b7-6fa9-4641-ba3d-dea44dbebe7f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [asa] Ages of the Patriarchs
From: <philtill@aol.com> Date: Tue Feb 20 2007 - 23:10:34 EST
Wow, this pattern is compelling and so I believe it! But it's **certainly** not sufficient to explain everything.
I'll list several more problems at the bottom, but most importantly it fails to explain the differences in the LXX and the Samaritan.
**If the editors of those versions had no reason to change the numbers then they wouldn't have done so.**
It only makes sense if all versions were struggling to make sense of the numbers and each version represents a school of thought on how to do it. Consider Lamech's 777. It is absolutely inexplicable
why the LXX would change that beautiful number to be the meaningless 753 and why the Samaritan would change it to be the meaningless 653. That makes no sense! It makes a lot more sense to believe
that the Masoretes would change the original 753 to be 777 in order to complete the pattern! Surely this is not hard to see! And plus, consider that the LXX and Samaritan basically agree with each
other on this number (the last two digits being the same, and the first digit differing by 100 like most of the numbers in the LXX and Samaritan) whereas the MT is voted out. This 777 is not attested
in either of the other two versions. Clearly it is a number out of the blue, chosen for its own value at the center of the symmetric pattern.
There is a very simple explanation for this pattern: the Masoretic version was edited at a late date specifically to produce this pattern. All three schools of thought represent attempts to make
sense of the numbers, and the Masoretes must have chosen a symbolic approach!
Please read this carefully because this is a consistent explanation that really does explain everything:
My core belief is that Moses wrote numbers like "6 shar 5 gug and 3 as," which is a transliteration of cuneiform wedges in the sexagesimal system into the Hebrew alphabet, a spelled-out sexagesimal
system. But later periods in Jewish history forgot what these "shar" and "as" units meant and didn't realize that they were not base-10. Hence they didn't add up and so they just left the text as it
was. But when Jewish scholars re-discovered Mesopotamian sexagesimal numbers during the Captivity they realized that they **almost** added up. They thought their number problem was solved!
Unfortunately there was still one little problem, which Robert Best explains very well: they didn't realize that the Shurrupak number system was slightly different from the main Mesopotamian system.
Hence, there seemed to remain an inexplicable failure for the ages to perfectly add up (pre-birth plus post-birth should add up to total age in each case), and the error was usually in the 100's
digit. So each text tweaked the numbers to "fix" it. Some put the 100 in the pre-birth. Some put it in the post-birth. All versions had to make other small edits here and there to complete "fixing"
the problems resulting from the difference in Shurrupak's numbers. Hence none of the three sets of numbers is identical to what Moses originally recorded. If you compare all three versions and take 2
out of 3 for the digits in each case, except delete the extra 100 wherever you find it, and then convert it to sexagesimal, you will probably be closer to what Moses had written.
So apparently, in this editing process the Masoretes chose to spiritualize the text and make the numbers symbolic to resolve the conflict. They tweaked the numbers so that a beautiful pattern emerged.
The editors of the LXX on the other hand seem to have had a different guiding principle. Apparently they took the ages to be literal and then adjusted them by consistently adding 100 to the pre-birth
ages in order to stretch out the total length of the genealogy. The MT and Sam were not so consistent; they generally added 100 to the post-birth ages, but with exceptions. I suspect that the LXX was
influenced by the history recorded in Egypt, since the ages of the unbroken chain of Egyptian priests went back earlier than Adam. Herodotus tells how both he and Hesiod had been embarrassed by this,
when they discovered that their Greek mythology did not go back far enough to encompass recorded Egyptian history, and the Egyptian priests mocked them for it. This was a well-known fact in classical
times since Herodotus was read everywhere. The Greek-speaking LXX scholars, thoroughly familiar with Herodotus and working in Alexandria **Egypt** of all places, certainly were aware that their own
Genesis genealogy suffered the same problem as the Greeks' in comparison to Egyptian history; it wasn't long enough! And so in trying to make the ages add up they chose to take the long path, the
path that stretches it all out by putting the missing 100 in the pre-birth dates wherever possible. There's a speculation, anyhow.
The Masoretic and Samaritan versions generally put the missing 100 in the post-birth column, which does not stretch out the genealogy. I take that as attestation that they were generally trying to
conserve the total duration as they had received it. Hence, I tend to believe the 100 was not originally there. That is what is predicted by the difference between Shurrupak's and the main
Mesopotamian number system, anyhow. The Mesopotamian wedge equal to 60 was worth only 48 in Shurrupak, so the numbers seemed a little larger than they really were. This introduced a carrying error in
the 100's digit. I have not worked this out in detail, but Robert Best shows in detail how un-doing this error reduces the numbers to ordinary human ages both pre-birth and at death.
All three versions end up having additional problems near the flood because Lamech and Methuselah don't die in time. So they each tweaked the numbers additionally at that point. As Dick points out,
we actually have surviving manuscripts of the LXX that have it both ways. So if you compare all three versions and take 2 out of 3 attestation to be the original numeral in each digit, then you can
see how each text made one special edit to deal with Lamech and Methuselah, each according to its own guiding priniciple. The non-attested number always **happens** to be exactly what it needs to be
to fix Lamech and Methuselah.
So I think that the amazing symmetric pattern in the MT must have been the underlying principle that the Masoretes adopted while editing those pesky sexagesimal numbers that didn't add up. Unable to
make sense of the Shurrupak numbers, they might have accepted them as basically symbolic in God's wisdom, and they might have also noticed that they **almost** made a beautiful pattern. Hence with a
little tweaking here and there they got the beautiful pattern to work. Perhaps this made them feel justified in changing the numbers. They thought they had discovered what God had intended.
Perhaps they even deleted the one Patriarch Cainan after the Flood in order to make the pattern fit....?
Here are some other things this Masoretic pattern doesn't explain unless you accept it as merely late editing. It doesn't explain the particular choice of these 4 trailing digits (0, 2, 5 and 7),
which is highly improbable by chance and not sufficiently symbolic to explain why they were used. On the other hand, if these were real numbers mistranslated from a Mesopotamian sexagesimal number
system, then that can explain this digit. It has been suggested by Robert Best that in Shurrupak they counted seasons, not years. That might make sense in an agrarian economy where everything was
highly dependent on seasons. So the final digits in their ages may have consisted of between zero and three cuneiform wedges to represent the number of completed seasons before carrying the fourth
season as another wedge in the "years" column. In translating this number system, a scholar may have first converted the seasons to quarters (as Robert Best suggests), so we get 0, 1/4, 1/2 and 3/4
(= 0, 2, 5, and 7 truncated to the nearest tenth). [There was some rationale why tenths were used in Shuruppak, but I can't remember it.] Is it just coincidence that these four numbers are evenly
spaced in fourths between 0 and 10? In the sexagesimal Shurrupak explanation, this is not a coincidence!
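That even spacing is just the arithmetic of truncating quarters to tenths, as a one-line check shows (Python, for illustration):

```python
# Seasons 0..3 as quarters of a year, scaled to tenths and truncated.
digits = [int(10 * k / 4) for k in range(4)]
print(digits)  # [0, 2, 5, 7] -- the four trailing digits observed in the ages
```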
Also, this symmetric pattern in the Masoretic fails to explain the overall apparent randomness of most of the numbers taken individually. Apart from their trailing digits being only the four possible
values, the numbers are otherwise random! There is nothing symbolic about any one number considered by itself (other than 777, of course). That was what Carol Hill's article was trying to establish,
but IMO it is not possible to establish these numbers as individually symbolic. But if you are going for symbolism, then why not make every number symbolic? Make some bigger and some smaller so the
symmetric totals come out the same, but make each number more symbolic like Lamech's 777. That would have been possible. So in other words, the pattern fails to explain why it is not a better pattern.
But if this set of ages was originally recorded in units that were part of a sexagesimal system (shars and so forth), then the Masoretes would not have done such drastic editing as to change
everything. They only made the small tweaks as necessary to get things to add up, to get consistency with Methuselah and Lamech dying before the Flood, and to "justify" the small changes in the text
by spiritualizing it and making a symmetric pattern emerge.
I believe this hypothesis, which is merely a synthesis of what several other people have argued for different parts of the data, successfully explains everything.
God bless,
Phil M.
-----Original Message-----
From: dickfischer@verizon.net
To: asa@calvin.edu
Sent: Tue, 20 Feb 2007 7:02 PM
Subject: RE: [asa] Ages of the Patriarchs
Hi Iain, you wrote:
>>Johnson writes:
2. If only one age was different by even 1 year, the entire
system would collapse. This gives good grounds for
assuming the reliability of the MT figures. The LXX and
the SP have both "adjusted" the MT figures, but in
doing so have created chaos; in the LXX Methuselah
actually dies 14 years after the flood!
Are you willing to believe the LXX account which has Methuselah dying
14 years after the flood?<<
Reread the first two sentences. In order to make his math correct all the reported ages in the MT have to be correct. How does his mathematical model impact the reliability of the text? This is tail
wagging the dog. We don't have just 1 year being different; we have a whole missing patriarch in the MT!
The LXX puts the age of Methuselah at death at 969, same as the Masoretic text. There is a discrepancy between LXX manuscripts as to whether Methuselah lived 167 years or 187 years before the birth
of Lamech. If the correct figure is 187 then Methuselah didn't have to tread water. So I wouldn't hinge an entire argument on simple scribal errors which we can all agree impact both texts.
The Septuagint has its origin in Alexandria, Egypt and was translated between
300-200 BC. The oldest fragments from the MT date from the 9th century AD. So how does that square with your argument that "The LXX and the SP have both 'adjusted' the MT figures"? Any textual
corruption tarnishes the Johnny-come-lately MT which omitted Arphaxad's son, Cainan, just to prove the point.
There are enough questions about actual ages and textual ages that to build a mathematical model to prove all the ages are symbolic is totally flawed.
The argument that there are mathematical patterns in the patriarch's ages, therefore they are symbolic is akin to the argument that there is complexity in nature, therefore it was created.
Dick Fischer, Genesis Proclaimed Association
Finding Harmony in Bible, Science, and History
To unsubscribe, send a message to majordomo@calvin.edu with
"unsubscribe asa" (no quotes) as the body of the message.
Received on Tue Feb 20 23:11:06 2007 | {"url":"http://www2.asa3.org/archive/asa/200702/0356.html","timestamp":"2014-04-20T00:46:28Z","content_type":null,"content_length":"17891","record_id":"<urn:uuid:0716df36-63f9-48b5-857c-20705cfd46f0>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
Discrete Variable Example
Discrete Variables
A variable is a quantity whose value is subject to change. A variable is a characteristic that can take on different values; age, salary, occupation, name, income tax, etc. are a few examples of variables. Their values vary with the situation and from person to person.

Variables can be either discrete or continuous.

Discrete variables are variables which cannot take just any value: they are restricted to counting numbers or to a finite set of categories, such as sex (male or female) or eye color (black, brown, blue, green, grey or hazel). A discrete variable is not flexible enough to take values in between its allowed ones. A continuous variable, on the other hand, can take any value in a range. | {"url":"http://www.mathcaptain.com/statistics/discrete-variables.html","timestamp":"2014-04-21T14:41:39Z","content_type":null,"content_length":"53650","record_id":"<urn:uuid:252dfcd3-1e05-42c7-9c27-2e1d900da74c>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
[simplify-rtx] Fix 16-bit -> 64-bit multiply and accumulate
On 03/05/11 10:07, Bernd Schmidt wrote:
> I tried to fix it with the patch below, which unfortunately doesn't work
> since during combine we don't see the SIGN_EXTEND operations inside the
> MULT, but two shift operations instead. Maybe you can complete it from here?
I've tried to make this patch go various ways, and I always find a test
case that doesn't quite work. I think we're approaching it from the
wrong angle.
The problem is that widening multiplies are required to be defined as
(mult (extend ..) (extend ..)), but when the combiner tries to build a
widening multiply pattern from a regular multiply it naturally ends up
as (extend (mult .. ..)). The result is that the patch Bernd posted made
existing widening multiplies wider, but failed to convert regular
multiplies to widening ones.
I've created this new, simpler patch that converts
(extend (mult a b))
(mult (extend a) (extend b))
regardless of what 'a' and 'b' might be. (These are then simplified and
superfluous extends removed, of course.)
I find that this patch fixes all the testcases I have, and permitted me
to add support for ARM smlalbt/smlaltb/smlaltt also (I'll post that in a
separate patch).
It does assume that the outer sign_extend/zero_extend indicates the
inner extend types though, so I'm not sure if there's a problem there?
On Tue, 24 May 2011, Andrew Stubbs wrote:
> I've created this new, simpler patch that converts
> (extend (mult a b))
> into
> (mult (extend a) (extend b))
> regardless of what 'a' and 'b' might be. (These are then simplified and
> superfluous extends removed, of course.)
Are there some missing conditions here? The two aren't equivalent in
general - (extend:SI (mult:HI a b)) multiplies the HImode values in HImode
(with modulo arithmetic on overflow) before extending the possibly wrapped
result to SImode. You'd need a and b themselves to be extended from
narrower modes in such a way that if you interpret the extended values in
the signedness of the outer extension, the result of the multiplication is
exactly representable in the mode of the multiplication. (For example, if
both values are extended from QImode, and all extensions have the same
signedness, that would be OK. There are cases that are OK where not all
extensions have the same signedness, e.g. (sign_extend:DI (mult:SI a b))
where a and b are zero-extended from HImode or QImode, at least one from
QImode, though there the outer extension is equivalent to a zero-extension.)
On 24/05/11 20:35, Joseph S. Myers wrote:
> On Tue, 24 May 2011, Andrew Stubbs wrote:
>> I've created this new, simpler patch that converts
>> (extend (mult a b))
>> into
>> (mult (extend a) (extend b))
>> regardless of what 'a' and 'b' might be. (These are then simplified and
>> superfluous extends removed, of course.)
> Are there some missing conditions here? The two aren't equivalent in
> general - (extend:SI (mult:HI a b)) multiplies the HImode values in HImode
> (with modulo arithmetic on overflow) before extending the possibly wrapped
> result to SImode. You'd need a and b themselves to be extended from
> narrower modes in such a way that if you interpret the extended values in
> the signedness of the outer extension, the result of the multiplication is
> exactly representable in the mode of the multiplication. (For example, if
> both values are extended from QImode, and all extensions have the same
> signedness, that would be OK. There are cases that are OK where not all
> extensions have the same signedness, e.g. (sign_extend:DI (mult:SI a b))
> where a and b are zero-extended from HImode or QImode, at least one from
> QImode, though there the outer extension is equivalent to a
> zero-extension.)
So, you're saying that promoting a regular multiply to a widening
multiply isn't a valid transformation anyway? I suppose that does make
sense. I knew something was too easy.
OK, I'll go try again. :)
On Wed, 25 May 2011, Andrew Stubbs wrote:
> So, you're saying that promoting a regular multiply to a widening multiply
> isn't a valid transformation anyway? I suppose that does make sense. I knew
In general, yes. RTL always has modulo semantics (except for division and
remainder by -1); all optimizations based on undefinedness of overflow (in
the absence of -fwrapv) happen at tree/GIMPLE level, where signed and
unsigned types are still distinct. (So you could promote a regular
multiply of signed types at GIMPLE level in the absence of
-fwrapv/-ftrapv, but not at RTL level and not for unsigned types at GIMPLE level.)
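The non-equivalence Joseph describes is easy to see numerically. A small sketch (Python, for illustration), modelling HImode as 16-bit two's-complement arithmetic:

```python
def sign_extend(v, bits):
    """Interpret the low `bits` bits of v as a two's-complement signed value."""
    v &= (1 << bits) - 1
    return v - (1 << bits) if v & (1 << (bits - 1)) else v

a = b = 300                                          # both operands fit in HImode
narrow = sign_extend((a * b) & 0xFFFF, 16)           # (sign_extend:SI (mult:HI a b))
widening = sign_extend(a, 16) * sign_extend(b, 16)   # (mult:SI (sign_extend a) (sign_extend b))
print(narrow, widening)  # 24464 90000 -- the HImode product wrapped before extending
```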
2011-05-24 Bernd Schmidt <bernds@codesourcery.com>
Andrew Stubbs <ams@codesourcery.com>
* simplify-rtx.c (simplify_unary_operation_1): Create a new
canonical form for widening multiplies.
* doc/md.texi (Canonicalization of Instructions): Document widening
multiply canonicalization.
* gcc.target/arm/mla-2.c: New test.
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -5840,6 +5840,11 @@ Equality comparisons of a group of bits (usually a single bit) with zero
will be written using @code{zero_extract} rather than the equivalent
@code{and} or @code{sign_extract} operations.
+@cindex @code{mult}, canonicalization of
+@code{(sign_extend:@var{m1} (mult:@var{m2} @var{x} @var{y}))} is converted
+to @code{(mult:@var{m1} (sign_extend:@var{m1} @var{x}) (sign_extend:@var{m1} @var{y}))}, and likewise for @code{zero_extend}.
@end itemize
Further canonicalization rules are defined in the function
--- a/gcc/simplify-rtx.c
+++ b/gcc/simplify-rtx.c
@@ -1000,6 +1000,21 @@ simplify_unary_operation_1 (enum rtx_code code, enum machine_mode mode, rtx op)
&& GET_CODE (XEXP (XEXP (op, 0), 1)) == LABEL_REF)
return XEXP (op, 0);
+ /* Convert (sign_extend (mult ..)) to a canonical widening
+ multiplication (mult (sign_extend ..) (sign_extend ..)). */
+ if (GET_CODE (op) == MULT && GET_MODE (op) < mode)
+ {
+ rtx lhs = XEXP (op, 0);
+ rtx rhs = XEXP (op, 1);
+ enum machine_mode lhs_mode = GET_MODE (lhs);
+ enum machine_mode rhs_mode = GET_MODE (rhs);
+ return simplify_gen_binary (MULT, mode,
+ simplify_gen_unary (SIGN_EXTEND, mode,
+ lhs, lhs_mode),
+ simplify_gen_unary (SIGN_EXTEND, mode,
+ rhs, rhs_mode));
+ }
/* Check for a sign extension of a subreg of a promoted
variable, where the promotion is sign-extended, and the
target mode is the same as the variable's promotion. */
@@ -1071,6 +1086,21 @@ simplify_unary_operation_1 (enum rtx_code code, enum machine_mode mode, rtx op)
&& GET_MODE_SIZE (mode) <= GET_MODE_SIZE (GET_MODE (XEXP (op, 0))))
return rtl_hooks.gen_lowpart_no_emit (mode, op);
+ /* Convert (zero_extend (mult ..)) to a canonical widening
+ multiplication (mult (zero_extend ..) (zero_extend ..)). */
+ if (GET_CODE (op) == MULT && GET_MODE (op) < mode)
+ {
+ rtx lhs = XEXP (op, 0);
+ rtx rhs = XEXP (op, 1);
+ enum machine_mode lhs_mode = GET_MODE (lhs);
+ enum machine_mode rhs_mode = GET_MODE (rhs);
+ return simplify_gen_binary (MULT, mode,
+ simplify_gen_unary (ZERO_EXTEND, mode,
+ lhs, lhs_mode),
+ simplify_gen_unary (ZERO_EXTEND, mode,
+ rhs, rhs_mode));
+ }
/* (zero_extend:M (zero_extend:N <X>)) is (zero_extend:M <X>). */
if (GET_CODE (op) == ZERO_EXTEND)
return simplify_gen_unary (ZERO_EXTEND, mode, XEXP (op, 0),
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mla-2.c
@@ -0,0 +1,9 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=armv7-a" } */
+
+long long foolong (long long x, short *a, short *b)
+{
+  return x + *a * *b;
+}
+
+/* { dg-final { scan-assembler "smlalbb" } } */ | {"url":"http://patchwork.ozlabs.org/patch/97180/","timestamp":"2014-04-20T11:18:51Z","content_type":null,"content_length":"17171","record_id":"<urn:uuid:9ec2bd9a-b529-422b-a997-3ba20dad4fc7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
***Help Request*** ***Medal &/or Fan Awarded to best helper*** ***Attachment Below*** | {"url":"http://openstudy.com/updates/50d8c66fe4b0d6c1d542611c","timestamp":"2014-04-21T02:06:48Z","content_type":null,"content_length":"40709","record_id":"<urn:uuid:a66084bb-b7d2-467d-8ce0-d88d0a28a1f6>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Parameter Estimation Using Metaheuristics in Systems Biology: A Comprehensive Review
January/February 2012 (vol. 9 no. 1)
pp. 185-202
ASCII Text
Jianyong Sun, J. M. Garibaldi, C. Hodgman, "Parameter Estimation Using Metaheuristics in Systems Biology: A Comprehensive Review," IEEE/ACM Transactions on Computational Biology and Bioinformatics,
vol. 9, no. 1, pp. 185-202, January/February, 2012.
BibTeX
@article{ 10.1109/TCBB.2011.63,
author = { Jianyong Sun and J. M. Garibaldi and C. Hodgman},
title = {Parameter Estimation Using Metaheuristics in Systems Biology: A Comprehensive Review},
journal ={IEEE/ACM Transactions on Computational Biology and Bioinformatics},
volume = {9},
number = {1},
issn = {1545-5963},
year = {2012},
pages = {185-202},
doi = {http://doi.ieeecomputersociety.org/10.1109/TCBB.2011.63},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
RefWorks / ProCite / RefMan / EndNote
TY - JOUR
JO - IEEE/ACM Transactions on Computational Biology and Bioinformatics
TI - Parameter Estimation Using Metaheuristics in Systems Biology: A Comprehensive Review
IS - 1
SN - 1545-5963
SP - 185
EP - 202
A1 - Jianyong Sun,
A1 - J. M. Garibaldi,
A1 - C. Hodgman,
PY - 2012
KW - reliability
KW - biology computing
KW - genetic algorithms
KW - learning (artificial intelligence)
KW - parameter estimation
KW - physiological models
KW - optimal design
KW - optimization problems
KW - systems biology
KW - parameter estimation problem
KW - metaheuristic optimizer
KW - machine learning
KW - model parameters
KW - Biological system modeling
KW - Mathematical model
KW - Computational modeling
KW - Systems biology
KW - Optimization
KW - Biochemistry
KW - Parameter estimation
KW - evolutionary algorithms.
KW - Systems biology
KW - parameter estimation problem
KW - model calibration
KW - heuristic
KW - metaheuristic
VL - 9
JA - IEEE/ACM Transactions on Computational Biology and Bioinformatics
ER -
Abstract:
This paper gives a comprehensive review of the application of metaheuristics to optimization problems in systems biology, focusing mainly on the parameter estimation problem (also called the inverse problem or model calibration). It is intended both for the systems biologist who wishes to learn more about the various optimization techniques available and for the metaheuristics researcher interested in applying such techniques to problems in systems biology. First, the parameter estimation problems arising in different areas of systems biology are described from the point of view of machine learning. Brief descriptions of the various metaheuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying metaheuristics to systems biology modeling are then addressed, including the reliability and identifiability of model parameters and the optimal design of experiments. Finally, we highlight some possible future research directions in this field.
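The parameter estimation problem the abstract describes amounts to minimizing a mismatch (typically a sum of squared errors) between model predictions and measured data, with a metaheuristic searching the parameter space. As a toy illustration only (not taken from the paper), the sketch below applies one classic metaheuristic, differential evolution, to a hypothetical exponential-decay model fitted to synthetic noiseless data; all names and parameter values are illustrative assumptions.

```python
import math
import random

random.seed(0)  # make the stochastic search reproducible

# Hypothetical toy model y = a * exp(-b * t); real systems-biology models
# covered by the review are typically ODE systems with many parameters.
TRUE_A, TRUE_B = 2.0, 0.5
times = [0.5 * i for i in range(20)]
data = [TRUE_A * math.exp(-TRUE_B * t) for t in times]

def sse(params):
    """Sum of squared errors between model output and the data."""
    a, b = params
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in zip(times, data))

def differential_evolution(obj, bounds, pop_size=30, gens=200, f=0.7, cr=0.9):
    """Minimal DE/rand/1/bin: mutate with scaled difference vectors,
    binomial crossover, greedy one-to-one selection."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantee one mutated component
            trial = []
            for j in range(dim):
                if random.random() < cr or j == j_rand:
                    v = pop[r1][j] + f * (pop[r2][j] - pop[r3][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to the search box
            c = obj(trial)
            if c <= costs[i]:  # keep the trial only if it is no worse
                pop[i], costs[i] = trial, c
    best = min(range(pop_size), key=lambda i: costs[i])
    return pop[best], costs[best]

best, cost = differential_evolution(sse, [(0.0, 5.0), (0.0, 2.0)])
```

With noiseless data the search recovers parameters close to the true (2.0, 0.5); the identifiability and reliability issues the review emphasizes arise precisely when noise, sparse sampling, or model structure make such a fit non-unique.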
Index Terms:
reliability, biology computing, genetic algorithms, learning (artificial intelligence), parameter estimation, physiological models, optimal design, optimization, systems biology, metaheuristics, machine learning, model calibration, biological system modeling, mathematical modeling, computational modeling, biochemistry, evolutionary algorithms.
Jianyong Sun, J. M. Garibaldi, C. Hodgman, "Parameter Estimation Using Metaheuristics in Systems Biology: A Comprehensive Review," IEEE/ACM Transactions on Computational Biology and Bioinformatics,
vol. 9, no. 1, pp. 185-202, Jan.-Feb. 2012, doi:10.1109/TCBB.2011.63
Simulation of damage evolution in discontinuously reinforced metal matrix composites: a phase-field model
Publication: Journal Article
Year: 2009
Authors: Biner SB, Hu SY
Journal: International Journal of Fracture
Volume: 158
Pages: 99-105
Date: Aug
ISSN: 0376-9429
Accession: ISI:000269078900002
Keywords: cracks, damage, ductility, growth, metal matrix composites, microelasticity theory, microstructures, phase-field model, simulation, solids, voids
Abstract: In this study, a phase-field model is introduced to model the damage evolution due to particle cracking in reinforced composites, in which matrix deformation is described by an elastic-plastic constitutive law exhibiting linear hardening behavior. In order to establish the viability of the algorithm, simulations are carried out for crack extension from a square hole in an isotropic elastic solid under a complex loading path, and for composites having the same volume fraction of reinforcements with two different particle sizes. The observed cracking patterns and the development of the stress-strain curves agree with experimental observations and previous numerical studies. The algorithm offers significant advantages in describing the microstructure and topological changes associated with damage evolution, in comparison to conventional simulation algorithms, due to the absence of formal meshing.
URL: <Go to ISI>://000269078900002
DOI: 10.1007/S10704-009-9351-6
The value of A is about 78.07 degrees.
The 2003 Math Awareness Month poster design is just one example of the connection between mathematics and art. Of course there are numerous other connections, including those inspired by Escher in
the recent book M. C. Escher's Legacy [Sc2]. My article [Du3] and the electronic file on the CD-ROM that accompanies that book contain many examples of computer-generated hyperbolic tessellations
inspired by Escher's art. For more on Escher's work, see the Official M. C. Escher Web site http://www.mcescher.com/ [Es1].
[Ab1] Abas, S. Jan, Web site: http://www.bangor.ac.uk/~mas009/part.htm
[Bo1] Bool, F.H., Kist, J.R., Locher, J.L., and Wierda, F., editors, M. C. Escher, His Life and Complete Graphic Work, Harry N. Abrams, Inc., New York, 1982. ISBN 0-8109-0858-1
[Co1] Coxeter, H. S. M., “Crystal symmetry and its generalizations,” Royal Society of Canada(3), 51 (1957), 1-13.
[Co2] Coxeter, H. S. M., “The non-Euclidean symmetry of Escher's Picture `Circle Limit III',” Leonardo, 12 (1979), 19-25, 32.
[Co3] Coxeter, H. S. M., “The Trigonometry of Escher's Woodcut 'Circle Limit III',” The Mathematical Intelligencer, 18 no. 4 (1996) 42-46. Updated and corrected version appears in [Sc2] below.
[Co4] Coxeter, H. S. M., “Angels and devils,” in The Mathematical Gardner, David A. Klarner, editor, Wadsworth International, 1981 (out of print). ISBN 0-534-98015-5
Republished as: Mathematical Recreations: A Collection in Honor of Martin Gardner, David A. Klarner, editor, Dover Publishers, 1998. ISBN 0-486-40089-1
[De1] Deraux, Martin, Interactive tessellation web site: http://www.math.utah.edu/~deraux/tessel/
[Du1] Dunham, D., “Hyperbolic symmetry,” Computers and Mathematics with Applications, Part B 12 (1986), no. 1-2, 139-153.
[Du2] Dunham, D., “Transformation of Hyperbolic Escher Patterns,” Visual Mathematics (an electronic journal), 1, No. 1, March, 1999.
[Du3] Dunham, D., “Families of Escher Patterns,” in [Sc2] below, pp. 286-296.
[Es1] Official M. C. Escher Web site, published by the M.C. Escher Foundation and Cordon Art B.V. http://www.mcescher.com/
[Fe1] Ferguson, Helaman, Web site: http://www.helasculpt.com/gallery/index.html
[Go1] Goodman-Strauss, Chaim, “Compass and straightedge in the Poincaré disk,” Amer. Math. Monthly, 108 (2001), no. 1, 38-49.
[Gr1] Greenberg, Marvin, Euclidean and Non-Euclidean Geometries, 3rd Edition, W. H. Freeman and Co., 1993. ISBN 0-7167-2446-4
[Ha1] Hatch, Don, Hyperbolic Planar Tesselations web site.
[He1] Henderson, David W., and Daina Taimina, Experiencing Geometry: In Euclidean, Spherical and Hyperbolic Spaces, 2nd Ed., Prentice Hall, 2000. ISBN 0130309532 Web link: http://
[Jo1] Joyce, David, Hyperbolic tessellations web site: http://aleph0.clarku.edu/~djoyce/poincare/poincare.html
[Ka1] Kaplan, Craig S., “Computer generated Islamic star patterns,” Bridges 2000, Mathematical Connections in Art, Music and Science. Winfield, Kansas, USA, 28-30 July 2000. ISBN 0-9665201-2-2 Web
link: Abstract and PDF
[Ma1] Magnus, Wilhelm, Noneuclidean Tesselations and Their Groups, Academic Press, 1974. ISBN 0-12-465450-9
[Sc1] Schattschneider, Doris, Visions of Symmetry: Notebooks, Periodic Drawings, and Related Work of M. C. Escher, W. H. Freeman, New York, 1990. ISBN 0-7167-2126-0
[Sc2] Schattschneider, Doris, and Michele Emmer, editors, M. C. Escher's Legacy: A Centennial Celebration, Springer Verlag, 2003. ISBN 3-540-42458-X
Page URL: http://www.d.umn.edu/~ddunham/mam/essay1.html
Page Author: Doug Dunham
Last Modified: Monday, 03-Feb-2003 20:22:37 CST
Modeling Airflow Using Subject-Specific 4DCT-Based Deformable Volumetric Lung Models.
MedLine PMID: 23365554 Owner: NLM Status: PubMed-not-MEDLINE
Abstract: Lung radiotherapy benefits greatly when the tumor motion caused by breathing can be modeled. The aim of this paper is to present the importance of using anisotropic and
subject-specific tissue elasticity for simulating the airflow inside the lungs. A computational-fluid-dynamics (CFD) based approach is presented to simulate airflow inside a
subject-specific deformable lung for modeling lung tumor motion and the motion of the surrounding tissues during radiotherapy. A flow-structure interaction technique is employed that
simultaneously models airflow and lung deformation. The lung is modeled as a poroelastic medium with subject-specific anisotropic poroelastic properties on a geometry reconstructed
from four-dimensional computed tomography (4DCT) scan datasets of humans with lung cancer. The results include the 3D anisotropic lung deformation for a known airflow pattern inside
the lungs. The effects of anisotropy are also presented on both the spatiotemporal volumetric lung displacement and the regional lung hysteresis.
Authors: Olusegun J Ilegbusi; Zhiliang Li; Behnaz Seyfi; Yugang Min; Sanford Meeks; Patrick Kupelian; Anand P Santhanam
Publication Type: Journal Article; Date: 2012-12-20
Journal Title: International Journal of Biomedical Imaging; Volume: 2012; ISSN: 1687-4196; ISO Abbreviation: Int J Biomed Imaging; Publication Date: 2012
Date Detail: Created: 2013-01-31; Completed: 2013-02-01; Revised: 2013-04-18
Medline NLM Unique ID: 101250756; Medline TA: Int J Biomed Imaging; Country: United States
Other Details: Languages: eng; Pagination: 350853
Affiliation: Department of Mechanical Materials and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA.
Full Text
Journal Information:
Journal ID (nlm-ta): Int J Biomed Imaging
Journal ID (iso-abbrev): Int J Biomed Imaging
Journal ID (publisher-id): IJBI
ISSN: 1687-4188 (print); 1687-4196 (electronic)
Publisher: Hindawi Publishing Corporation
Article Information:
Copyright © 2012 Olusegun J. Ilegbusi et al. (open access)
Received: 16 June 2012; Revised: 26 September 2012; Accepted: 4 October 2012
Print publication date: 2012; Electronic publication date: 20 December 2012
Volume: 2012; E-location ID: 350853
PubMed ID: 23365554; PMC ID: 3539421
DOI: 10.1155/2012/350853
Modeling Airflow Using Subject-Specific 4DCT-Based Deformable Volumetric Lung Models
Olusegun J. Ilegbusi^1
Zhiliang Li^1
Behnaz Seyfi^1
Yugang Min^2
Sanford Meeks^3
Patrick Kupelian^2
Anand P. Santhanam^2*
^1Department of Mechanical Materials and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
^2Department of Radiation Oncology, University of California, Los Angeles, CA 90230, USA
^3Department of Radiation Oncology, M.D. Anderson Cancer Center Orlando, Orlando, FL 32806, USA
Correspondence: *Anand P. Santhanam: asanthanam@mednet.ucla.edu
Academic Editor: Ayman El-Baz
1. Introduction
Lung radiotherapy aims at delivering therapeutic ionizing radiation on lung tumor in the form of external beams from different angles while minimizing exposure to surrounding healthy tissues. Errors
in the lung tumor localization during therapy may lead to an undertreatment of the tumor and an overexposure of ionizing radiation to the surrounding lung tissues [^1]. Lung tumor localization errors
occur as the lung deforms during breathing, thereby compromising the accuracy of the radiation therapy [^2]. Clinical approach to address these localization errors typically involves increasing tumor
margins in radiation treatment plan and avoiding voluntary breathing variations (e.g., sneezing and coughing) [^3]. A precise estimation of lung tumor position can be facilitated by a fluid structure
interaction model, where the airflow inside the lungs is modeled using computational fluid dynamics (CFD) techniques, and the structure is modeled as a subject-specific anisotropic poroelastic
medium. Such an estimation of the lung tumor position will not only lead to improved adaptive radiotherapy and treatment outcomes but also to improved image acquisition guidance.
CFD of airflow inside lungs during respiration is a challenging task due to the complexity of lung geometry, structural heterogeneity and material anisotropy, and other boundary constraints [^4].
Specifically, the human lung is heterogeneous and anisotropic, with a wide range of elastic property values [^5]. This situation is further exacerbated by the presence of tumors, which significantly
increase the local elastic modulus because of their stiffness [^6]. Several methods have been used to simulate flow and deformation in the lung, ranging from fractal theory to macroscopic models [^7].
Some methods allow simulation over several branching levels of the tracheobronchial tree down to the alveolar level, but are computationally intensive and impracticable for near real-time
application. For instance, Yang et al. [^7] documented the computation time for airflow studies in 11 airway branches to be on the order of a year. Kunz et al. [^8] and Radhakrishnan and Kassinos [^9] demonstrated the
computational complexity of fluid flow solution by using a parallel CFD solver for studying the convective and diffusive particle depositions inside the lung. The airway branching was modeled up to
11 branches. The rest of the lung space was modeled as a homogenous space. Fluid structure interaction (FSI) between the airflow and the lung parenchymal region was first investigated in [^10] by
modeling the alveolar region as macro-air-sacs with isotropic elastic properties. Coupling of anisotropic elastic properties for lung substructures with CFD studies has not been previously reported.
This paper describes a methodology for coupling the anisotropic elasticity with CFD analysis to effectively predict the volumetric lung displacement at different breathing phases and, by so doing,
track tumor motion. The fluid structure interaction depends critically on the anisotropic nature of the subject-specific lung tissue elasticity. To incorporate the anisotropic tissue elasticity, a
multizone-based geometric representation is employed. While multizone representations have been previously investigated for arterial blood flows [^12], it has not been used for investigating airflow
inside the lungs. The usage of such a geometric representation avoided errors in airflow analyses caused by airway segmentation errors and reduced the computation time as compared to the timings
previously reported in [^7]. The airflow modeling studies also demonstrated the influence of anisotropic elasticity estimated from 4DCT on the resulting airflow inside lungs and the volumetric
deformation. The combined usage of anisotropic elasticity and a multizone geometry representation forms the key contribution of this paper.
2. Formulation
The present study considers the lung as an anisotropic poroelastic medium. The spatially varying Young's modulus (YM) data is adopted from those derived based on optical flow registration of human
data in a previous study [^13]. The mathematical model involves simultaneous solution of the equations governing fluid flow of air through the airway and the structural deformation of the lobe. A
gauge pressure acquired from the patient using spirometry is imposed at the trachea, which in turn drives air into the lobes, and the pressure of the air inside the lobe in turn results in lung
deformation during the breathing process. This flow-structure interaction (FSI) approach enables the prediction of the spatial velocity distribution and lung displacement over several breathing
cycles. In order to investigate the impact of nonlinear elastic property, the predicted deformation with and without allowance for spatial variation in the YM is compared. The FSI equations were
solved by means of ADINA commercial computational code [^14].
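The simultaneous flow-structure solution can be pictured as a staggered iteration within each time step: the pore-pressure and deformation solves are alternated until the shared fields stop changing. The sketch below is schematic only (the paper uses the ADINA solver for the actual coupled solve); the two contraction-mapping "solvers" at the bottom are hypothetical stand-ins for the real field solves.

```python
import numpy as np

def fsi_timestep(p, u, fluid_solve, solid_solve, tol=1e-8, max_iter=100):
    """Staggered fluid-structure iteration for one time step (schematic).

    p, u        : current pressure and deformation fields (NumPy arrays)
    fluid_solve : returns an updated pressure field given (p, u)
    solid_solve : returns an updated deformation field given (u, p)
    """
    for _ in range(max_iter):
        p_new = fluid_solve(p, u)        # pressure solve with frozen tissue
        u_new = solid_solve(u, p_new)    # deformation solve with new pressure
        residual = max(np.max(np.abs(p_new - p)), np.max(np.abs(u_new - u)))
        p, u = p_new, u_new
        if residual < tol:               # interface fields have converged
            break
    return p, u

# Toy contraction-mapping "solvers" standing in for the real field solves.
fluid = lambda p, u: 0.5 * p + 0.1 * u
solid = lambda u, p: 0.5 * u + 0.1 * p
p_end, u_end = fsi_timestep(np.array([1.0]), np.array([1.0]), fluid, solid)
```

With the toy solvers the iteration converges to the (trivial) coupled fixed point; in the real problem each call would be a full finite-element solve.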
2.1. Estimating Tissue Elastic Properties from 4DCT Images
The 4DCT scans used in this study were acquired at the M.D. Anderson Cancer Research Center, Orlando, from in vivo experiments on human adult patients at different times of the breathing cycle. 4DCT
datasets for a human subject at 10% tidal volume intervals were acquired using a strain-gauge Siemens Biograph 64-slice CT. The 3D volumetric lung and the airways were segmented using Pinnacle MBS and
OSIRIX software.
The 4DCT data registration algorithm was used to estimate the motion of each 3D voxel at the end-expiration 3D volume data by searching for and locating a corresponding voxel in another 3D volume at
a different breathing phase. An optical flow-based motion estimation, which is based on local Taylor series approximation and is further described in the optical flow literature, was used for the
registration [^15]. One of the limitations of the optical flow method as applied to estimating the 3D organ motion was the low sensitivity to variations in regional motion. In order to improve the
accuracy of optical flow algorithm implementation, we used a multilevel, multiresolution optical flow method [^16], which computed optical flow between two 3D volumes at lower resolution, propagated
the result to the higher resolution volume, and subsequently to the original resolution volume data. In this approach, the organ anatomy was separated into four parts: (1) lung outline, (2) large
capillaries, (3) small capillaries, and (4) parenchymal region. At each level of anatomy optical flow, a multilevel, multiresolution optical flow registration was used for computing the 4D organ
motion of that anatomy and integrated into the next level.
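The multilevel, multiresolution scheme described above — compute the flow at low resolution, propagate the result to the next finer level, repeat — can be sketched as a pyramid loop. A minimal NumPy sketch; the single-level solver is a hypothetical one-step gradient stub standing in for the optical-flow method of [^15, ^16], so only the coarse-to-fine bookkeeping should be read as faithful.

```python
import numpy as np

def downsample(vol, factor=2):
    """Block-average a 3D volume by an integer factor (trims any remainder)."""
    s = [(d // factor) * factor for d in vol.shape]
    v = vol[:s[0], :s[1], :s[2]]
    return v.reshape(s[0] // factor, factor,
                     s[1] // factor, factor,
                     s[2] // factor, factor).mean(axis=(1, 3, 5))

def upsample_flow(flow, factor=2):
    """Nearest-neighbour upsampling of a (3, z, y, x) flow field; displacement
    magnitudes are scaled by the resolution factor."""
    out = flow
    for axis in (1, 2, 3):
        out = np.repeat(out, factor, axis=axis)
    return out * factor

def single_level_flow(fixed, moving, init):
    """Hypothetical single-level solver: one gradient step around the initial
    estimate (illustration only, not the registration method of the paper)."""
    grad = np.array(np.gradient(moving))      # (3, z, y, x) intensity gradient
    residual = moving - fixed
    denom = (grad ** 2).sum(axis=0) + 1e-6
    return init - grad * (residual / denom)

def multiresolution_flow(fixed, moving, levels=3):
    """Coarse-to-fine flow: compute at the lowest resolution, then propagate
    each estimate to the next finer level, as described in the text."""
    pyramid = [(fixed, moving)]
    for _ in range(levels - 1):
        f, m = pyramid[-1]
        pyramid.append((downsample(f), downsample(m)))
    flow = np.zeros((3,) + pyramid[-1][0].shape)  # coarsest-level init
    for level, (f, m) in enumerate(reversed(pyramid)):
        if level > 0:
            flow = upsample_flow(flow)            # propagate upward
        flow = single_level_flow(f, m, flow)
    return flow
```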
The next step was to estimate the subject-specific deformation model's kernel for the surface and the volumetric lung representations, which represents the internodal elastic interaction, and the
surface lung elasticity in terms of the YM values. The method is based on the approach discussed in [^17]. The volumetric lung deformation operator took as input the force applied on the voxels
inside the lung and computed the subsequent change in shape. We first estimated the volumetric applied force and the displacement, which were the inputs for estimating the operator. The force applied
on a lung for a given change in volume was computed using the pressure-volume curve measurement, a key pulmonary function test. This force was then spatially distributed inside the lung using the
vertical gradient of pressure. Such a distribution estimated the volumetric applied force.
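The spatial distribution of the applied force can be sketched as a weighting of the total (pressure-volume-derived) force by a vertical pressure gradient. The linear weighting below is an assumed form for illustration only; the paper does not give the exact gradient profile.

```python
import numpy as np

def distribute_force(total_force, z_coords, gradient=1.0):
    """Distribute a total applied force over voxels using a linear vertical
    weighting (weights sum to one). The linear form and the `gradient`
    parameter are illustrative assumptions."""
    span = np.ptp(z_coords) + 1e-12                 # avoid division by zero
    w = 1.0 + gradient * (z_coords - z_coords.min()) / span
    w /= w.sum()
    return total_force * w
```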
The volumetric lung displacement was estimated using the optical flow, with Euclidean distance-based interpolation of surface registration. We then estimated the surface lung deformation operator as
previously discussed in [^16]. For both the surface and volumetric lung deformation, a heterogeneous Green's function (GF) based formulation was considered. The structural and functional constants
estimated for the surface lung dynamics were specifically used for the volumetric lung dynamics. The GF for the volumetric lung was reformulated in the spectral domain using a hyper spherical
harmonic (HSH) transformation. Upon simplification, the HSH coefficients of the displacement were represented as a product of the HSH coefficients of the applied force and the deformation operator.
The formulation at this stage was mathematically ill-posed since the dimension of the HSH coefficients was higher than that of the volumetric displacement and the applied force. Local isometricity was
assumed at each volumetric point obtained using the structural and functional constants associated with each voxel. The constraints were computed from the values associated with the lung surface
points using a surface lung deformation model [^18]. This approach reduced the dimensionality of the deformation operator, making the formulation well-posed. Thus for known values of the applied
force and displacement, the HSH coefficients of the operator were estimated, and the YM value of each volumetric point was computed.
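The spectral relation at the heart of this estimation — displacement coefficients equal operator coefficients times force coefficients — means the operator follows from an element-wise (regularized) division once force and displacement are known. A toy sketch of just that product relation, ignoring the HSH machinery and the isometricity constraints:

```python
import numpy as np

def operator_coeffs(disp_c, force_c, eps=1e-8):
    """Recover deformation-operator spectral coefficients from the relation
    disp = operator * force (element-wise in the spectral domain), using a
    regularized division so vanishing force coefficients cannot blow up."""
    return disp_c * np.conj(force_c) / (np.abs(force_c) ** 2 + eps)
```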
2.2. Formulation of Mathematical Model for Computational Fluid Dynamics Simulation
The mathematical model involves solution of the coupled poroelastic flow-structure interaction equations with nonhomogeneous and anisotropic tissue properties. This coupled-field approach requires the solution of the Richards equation [^19] for the local lung pressure and velocity distributions, given by

$$\varphi \beta \frac{\partial p}{\partial t} = \nabla \cdot \left[ \frac{k}{\mu} \left( \nabla p - \rho g \nabla z \right) \right] - \frac{\partial}{\partial t} \left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} \right), \qquad (1)$$

where $\varphi$ and $k$ represent the porosity and permeability of the tissue, respectively; $\beta$ and $\mu$ represent the compressibility and viscosity of air, respectively; $p$ is the local pressure (pore pressure); $\rho$ is the air density; and $u$, $v$, and $w$ are the three components of the deflection (deformation) vector for the tissue. Note that this equation for the pore pressure has already incorporated the Darcy equation for gas flow through the tissue skeleton. The equation is coupled to the lung elastic deformation through the dilatation (final term), which is supplied by solving for the elastic deformation field, $(u, v, w)$, from the following poroelastic version of the Navier equation:

$$G \nabla^2 \mathbf{u} + \frac{G}{1 - 2\nu} \nabla (\nabla \cdot \mathbf{u}) - \nabla p + \mathbf{F} = 0, \qquad (2)$$

where $G$ and $\nu$ are the tissue shear modulus and Poisson ratio, respectively, and $\mathbf{F}$ is an external body force term that can include thermal effects as desired. Note that in (2), $G$ represents the anisotropic shear modulus; assuming orthogonal anisotropy, its value is allowed to vary between the three coordinate planes. The shear modulus is related to the YM in each direction through the standard relation $G = \mathrm{YM}/[2(1 + \nu)]$. Together, the previous two equations provide the full description of the coupled lung flow problem. Their solution is accomplished for subject-specific patient lung geometries using the ADINA computational code [^14].
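As a minimal numerical illustration of the pore-pressure equation, the following performs one explicit finite-difference step in 1D with constant coefficients, fixed end values, and the gravity term omitted. It illustrates the structure of the equation (diffusive Darcy term minus dilatation-rate source), not the solver used in the paper.

```python
import numpy as np

def pressure_step(p, dil_rate, phi, beta, k, mu, dx, dt):
    """One explicit 1D finite-difference step of
        phi*beta * dp/dt = d/dx( k/mu * dp/dx ) - d(dilatation)/dt,
    with Dirichlet (fixed) end values and gravity omitted for brevity."""
    lap = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx ** 2   # second derivative
    p_new = p.copy()
    p_new[1:-1] += dt / (phi * beta) * (k / mu * lap - dil_rate[1:-1])
    return p_new
```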
2.3. Geometry Reconstruction and Multilayer Mesh Generation
The CT scans at the end-expiration stage are first segmented and used to generate the three-dimensional (3D) geometry utilizing the Mimics computer code [^20]. The 3D meshes obtained are then
remeshed by means of the 3-matic framework [^21] for numerical computation. The resulting geometry reconstructed for the right lung is shown in Figure 1.
The lung airway is like a multilevel branching tree as shown in Figure 2 [^11]. Based on this airway structure of the lung, the permeability (K = φR^2/8, in which R is the branch radius) should
decrease significantly from the main central branches to the tip branches as the branch diameter progressively decreases. Correspondingly, the airflow velocity in the primary central branches is
significantly higher than that in the peripheral branches. A relatively fine volume numerical mesh size will therefore be required in the core region to reflect the relatively higher pressure,
velocity, and stress gradient there compared to the outer layers. Since proper representation of the grid structure is critical to numerical stability, the multibranching lung structure is
approximated within the context of the poroelastic model used here, as a multizone structure shown in Figure 3, that permits application of different grids and, if needed, different property values.
It can be seen that the multizone geometry representation is similar to the hyperspherical formulation used for representing the YM values of lung substructures. For instance, a normalized airway
branch radius can be converted to permeability values associated with each voxel. This permeability is then allowed to vary from the core (inner shell) to the peripheral layers (outer shell) using
the same hyperspherical parameterization. The sectional view of the volume mesh generated is shown in Figure 4. It should be noted that both Figures 3 and 4 represent cut-out views of the lobe in
order to visualize the multizone structure and the numerical grid employed.
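The decrease of permeability down the branching tree follows directly from the relation K = φR²/8 quoted above. In the sketch below, the diameter ratio of 2^(−1/3) per generation and the tracheal radius are common morphometric assumptions, not values taken from the paper.

```python
import numpy as np

def branch_permeability(radius, porosity):
    """Permeability of an airway level from the text's relation K = phi*R^2/8."""
    return porosity * radius ** 2 / 8.0

# Illustrative radius cascade down the branching tree. The 2**(-1/3) radius
# ratio per generation is a common morphometric assumption (not from the paper).
r0 = 9.0e-3                                   # assumed tracheal radius (m)
radii = r0 * 2.0 ** (-np.arange(12) / 3.0)
perms = branch_permeability(radii, porosity=0.2)
```

The monotonic fall of `perms` from the central branches to the tips is what motivates the finer mesh in the core zone of the multizone model.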
2.4. Input and Boundary Conditions
Phasic pressure with a period of 4 s is imposed at the inlet to the lobe as illustrated in Figure 5. The amplitude of the pressure waveform was prescribed based on spirometry studies acquired from
M.D. Anderson Cancer Center, Orlando. The property data (besides the YM) were assumed from those established in previous studies. The Poisson ratio ν in the poroelastic governing equation was assumed
to be 0.4, which falls within the range (0.25–0.47) suggested in previous studies [^22, ^23]. The density of lung was assumed to be 700 kg/m^3 [^21]. Preliminary studies indicated that the
deformation is little affected by the permeability k over the range (0.01–0.1), as expected. The anisotropic YM, adopted from a previous study based on optical flow registration of patient data,
ranged from 10 Pa to 500 Pa. The high YM values correspond to either the tumor location, where the structure is rigid, or the main trachea wall, where the tissue is thick and rigid. The average YM
for the whole lung is 178 Pa. This average value is used for the reference cases utilizing linear elastic property. Figure 6 shows a representative color-coded YM distribution on a 2D slice of the
lobe obtained from optical flow registration and used for the anisotropic elasticity calculations in this paper [^13].
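A simple way to generate the phasic inlet waveform is sketched below. The paper gives only the 4 s period and a spirometry-derived amplitude; the half-sine-squared shape (zero at the cycle ends, peak at mid-cycle, consistent with the t = 2 s pressure peak noted in the Results) is an assumed form.

```python
import numpy as np

def inlet_pressure(t, amplitude, period=4.0):
    """Gauge pressure imposed at the lobe inlet. The 4 s period is from the
    paper; the sin^2 shape and the amplitude value are assumptions."""
    return amplitude * np.sin(np.pi * t / period) ** 2
```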
3. Results
The predicted lung deformation simulated using the flow structure interaction at different breathing phases is plotted in Figures 7 and 8 for the linear and anisotropic YM cases, respectively. The
upper part of each figure is a cut-off view to illustrate the evolution of selected layers from the initial state over the specified duration. Figure 7 shows that with linear elasticity, the layers
expanded monotonically in all directions. On the other hand, Figure 8 shows that the case with anisotropic elasticity exhibits directional deformation as well as expansion.
Three nodes or landmarks on the lung surface, marked A, B, and C in Figures 7 and 8, were monitored, and their displacements along the x, y, and z coordinate directions were analyzed. Landmark A was at
the top surface of the lobe, B was at the outer surface near the rib cage and close to the midpoint along the craniocaudal axis, and C was at the interior surface of the lobe.
Figure 9 shows the x, y, z displacements of the monitored nodes A, B, and C over the first respiration cycle for both linear and anisotropic elasticity. The predicted displacements with linear and
anisotropic YM are quite distinct. The displacement profiles for the linear YM case, which are represented by the dashed lines, are generally sinusoidal with a symmetry axis at t = 2 s, corresponding
to the sinusoidal pressure waveform imposed at the inlet to the lobe. On the other hand, the displacement trajectories for the anisotropic YM case, which are represented by the solid lines in each
figure, are distorted from the sinusoidal pressure condition. The peak x displacement for node A is located at t = 2.4 s, which lags the peak inlet pressure at t = 2.0 s by 0.4 s. The peak z
displacement for node A is 0.2 s ahead of the peak inlet pressure. A similar hysteresis phenomenon is observed for the anisotropic YM case at nodes B and C. In addition, the peak displacements in the
x direction for nodes A and B in the anisotropic YM model are nearly 3 times those in the linear YM model. These results clearly show that the effect of anisotropic lung elasticity on deformation
can be significant.
In order to further examine the effect of anisotropy, the calculations were continued for additional breathing cycles. Figure 10 shows the x, y, z displacements of nodes A, B, and C for linear YM
over 6 breathing cycles. The entire displacement wave pattern becomes stable after the second breathing cycle. The result indicates that all the peak displacement values occur at the midpoint of each
cycle, that is, at t = 2 s, 6 s, and 10 s. The displacements at the end of each cycle are nearly negligible. It is worth noting that, consistent with the results presented in Figure 9, the
displacement profile observed with linear elasticity closely follows the input pressure wave pattern.
The corresponding results over 6 breathing cycles utilizing anisotropic elasticity are presented in Figure 11. The observed hysteresis time for the peak wave appears to be a fixed value for each
monitored location. For example, the predicted peak of the x displacement lags the peak inlet pressure by 0.4 s, 0.3 s, and 0.2 s for nodes A, B, and C, respectively. Note that the peak
displacements also vary from cycle to cycle. The observed hysteresis resulted from the anisotropic elasticity distribution in the lung. The hysteresis time is also found to be dependent on the
geometric location of the monitored point in the lobe.
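The hysteresis lags quoted above (e.g., 0.4 s for node A) can be extracted from simulated traces as the shift that maximizes the cross-correlation between the inlet pressure and a displacement component. A minimal sketch with a synthetic 0.4 s lag:

```python
import numpy as np

def hysteresis_lag(pressure, displacement, dt):
    """Estimate how far a displacement trace lags the inlet pressure as the
    shift maximizing their cross-correlation (positive = displacement lags)."""
    p = pressure - pressure.mean()
    d = displacement - displacement.mean()
    corr = np.correlate(d, p, mode="full")
    shift = np.argmax(corr) - (len(p) - 1)
    return shift * dt

# Synthetic check: five 4 s breathing cycles with a known 0.4 s lag.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
p = np.sin(2.0 * np.pi * t / 4.0)
d = np.sin(2.0 * np.pi * (t - 0.4) / 4.0)
lag = hysteresis_lag(p, d, dt)
```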
In order to further examine the peak variation in the anisotropic case, the calculations are extended to 12 respiration cycles, and the results are plotted in Figure 12. The results indicate that the
x displacement magnitude profile for the monitored node A reaches local peaks at the 2nd, 6th, and 10th cycles, corresponding to a period of 16 s. The y and z displacement profiles reach their local
maxima at the 4th, 8th, and 12th cycles, which also correspond to a period of 16 s. A similar periodic pattern is observed for node C, while the trend in node B is not quite so distinct, perhaps
because the latter was located near the symmetry point along the craniocaudal axis.
Figure 13 summarizes the results presented above by tracing the trajectories of monitored point A over 6 breathing cycles with both linear (Figure 13(a)) and anisotropic elasticity (Figure 13(b)).
The start and end locations of the node are indicated in each figure. Hysteresis is clearly evident in the anisotropic result as exemplified by the significant differences in the trajectories of the
monitored point over successive breathing cycles. The trajectories for the linear case on the other hand are nearly coincident. The results for nodes B and C exhibit similar trends and are not
presented here for brevity.
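One way to quantify the hysteresis visible in Figure 13 is the area enclosed by a monitored point's trajectory over a cycle: near-coincident inhale/exhale paths (the linear case) give a near-zero area, while the anisotropic case traces an open loop. A sketch using the shoelace formula:

```python
import numpy as np

def loop_area(x, y):
    """Area enclosed by a closed planar trajectory (shoelace formula).
    Near-zero area indicates coincident paths over the cycle, i.e. no
    hysteresis."""
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```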
Validation of airflow modeling is essential in verifying the usage of anisotropic YM values. Studies were conducted to verify the accuracy of the lung deformation using the CFD-based flow analysis.
Validation consisted of two parts, namely, numerical accuracy and comparison with data. Numerical accuracy was first tested by repeating the calculations for a set of grid numbers below and above the
ones chosen for the above results. The results were found to be essentially independent of grid number (and, correspondingly, grid size) beyond the ones chosen for the results. Next, clinical experts
delineated two sets of 20 landmarks on two lung models. The motion of the landmarks during the CFD simulation was documented and compared with the displacement observed in the 4DCT dataset that was
used to generate the 3D geometry. Table 1 tabulates the mean target deformation error (TDE) for the displacement obtained using the isotropic and anisotropic YM values. It can be seen that a better
accuracy was obtained for the anisotropic case than for the isotropic case. The maximum error across both validations was within 3 mm for the anisotropic case, which is inside the clinically
acceptable accuracy range. This validation shows that the airflow inside the lungs and the subsequent fluid-structure interaction can be modeled within clinically acceptable accuracy using
anisotropic YM values.
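As a sketch of how such a landmark-based metric can be computed, the mean TDE below is taken as the average Euclidean distance between simulated and 4DCT-observed landmark positions. The positions are hypothetical, and this is one reading of the metric rather than the paper's exact definition.

```python
import math

def target_deformation_error(predicted, observed):
    """Mean Euclidean distance between simulated landmark positions and
    the positions observed in the 4DCT data (all coordinates in mm)."""
    assert len(predicted) == len(observed)
    dists = [math.dist(p, o) for p, o in zip(predicted, observed)]
    return sum(dists) / len(dists)

# Hypothetical landmark positions (mm), for illustration only.
pred = [(10.0, 5.0, 2.0), (8.0, 4.0, 1.0)]
obs = [(11.0, 5.0, 2.0), (8.0, 4.0, 3.0)]
print(target_deformation_error(pred, obs))  # mean of 1.0 and 2.0 -> 1.5
```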
4. Conclusion
The effect of using subject-specific anisotropic lung elasticity on airflow-induced lung deformation during radiotherapy has been investigated. The lung was modeled as an anisotropic
poroelastic medium. The lung geometry at end-expiration was reconstructed from 4DCT datasets of patients with non-small-cell lung cancer (NSCLC). The subject-specific tissue elasticity was
obtained from the 4DCT using an inverse deformation analysis [^10]. The airflow-tissue interaction model involved solving the coupled equations governing fluid dynamics of airflow inside the lungs
and the associated structural deformation of the lung, subject to appropriate boundary conditions.
The major findings of the study may be summarized as follows.
1. The local anisotropy in the elasticity of the lung substructures has a significant impact on the airflow inside lungs.
2. Anisotropic YM of lung substructures produces a hysteresis effect on the predicted spatial lung displacement relative to the pressure waveform imposed at the inlet to the lung.
3. The hysteresis time spatially varies from one 3D location to another inside the lungs.
The above findings have profound implications for the optimization and targeting of radiation to tumors in radiotherapy. The presence of a tumor (focal or distributed) in the lung substructures may alter
the anisotropism in the lung poroelasticity and, hence, as this study has indicated, significantly affect the resulting spatiotemporal displacement and deformation of the lung. In addition, changes
in the tumor regression may lead to changes in the overall anisotropism of the tissue elasticity and subsequently the hysteresis. The net result is an evolving tumor location that is significantly
distinct in shape from one breathing signal to another. The model presented here has demonstrated the capacity to fully represent and quantify such detailed motion of any location in the lung
utilizing subject-specific tissue elasticity for lung substructures.
Computation time is a key limitation for many CFD-based analyses. The proposed geometry representation took approximately 24 hours to finish the breathing simulation, which is an improvement over
previous run times obtained with a full airway geometry. Future work will focus on using high-performance graphics processing units to accelerate the calculations so that the process can be finished
in a much shorter timeframe.
The Results section discusses validation of the fluid-structure interaction, which showed that anisotropic YM values were effective in modeling the airflow inside the lungs. Validating the
airflow inside the lungs for different breathing patterns and the subsequent interaction studies will be a key part of our future work. Such a study would involve the use of tracking landmarks in the
lung anatomy by means of 2D cine MRI imaging that can acquire the 2D lung snapshots in a given plane in real-time. In addition, using 4D gated MRI imaging, we can validate the volumetric lung
deformation and the fluid structure interaction studies taking into account physiological factors such as tumor regression and day-to-day breathing changes. Future studies would also account for the
effect of cardiac motion on lung imaging. Such cardiac motion can be acquired using 2D cine MRI imaging in addition to 4D imaging acquired for treatment purposes and modeling the lung deformation
subject to being constrained by the cardiac motion.
This work was supported by the James and Esther King Biomedical Research Grant Program, National Science Foundation (Grants no. 1200841 and no. 1200579) and the University of California, Los Angeles.
1. Keall PJ, Mageras GS, Balter JM, et al. The management of respiratory motion in radiation oncology: report of AAPM Task Group 76. Medical Physics. 2006;33(10):3874-3900.
2. Santhanam AP, Willoughby T, Shah A, Meeks S, Rolland JP, Kupelian P. Real-time simulation of 4D lung tumor radiotherapy using a breathing model. Proceedings of the 11th International Conference on Medical Image Computing and Computer Assisted Intervention, 2008, 710-771.
3. Guckenberger M, Richter A, Boda-Heggermann J, Lohr F. Motion compensation in radiotherapy. Critical Reviews in Biomedical Engineering. 2012;40(3):187-197.
4. Anderson J. Computational Fluid Dynamics—The Basics with Applications. McGraw-Hill Science.
5. Santhanam AP, Hamza-Lup FG, Rolland JP. Simulating 3-D lung dynamics using a programmable graphics processing unit. IEEE Transactions on Information Technology in Biomedicine.
6. Ioncica AM, Malos A, Crisan E, Popescu C, Saftoiu A, Ciurea T. State-of-the-art endoscopic imaging in lung cancer: should specialties collide or concur? Journal of Gastrointestinal and Liver Diseases. 2010;19(1):93-97.
7. Yang XL, Liu Y, Luo HY. Respiratory flow in obstructed airways. Journal of Biomechanics. 2006;39(15):2743-2751.
8. Kunz RF, Haworth DC, Porzio DP, Kriete A. Progress towards a medical image through CFD analysis toolkit for respiratory function assessment on a clinical time scale. Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '09), July 2009, 382-385.
9. Radhakrishnan H, Kassinos S. CFD modeling of turbulent flow and particle deposition in human lungs. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology.
10. Ding H, Jiang Y, Furmanczyk M, Prekwas A, Reinhardt LM. Simulation of human lung respiration process using 3-D CFD with macro air sac system model. Proceedings of the WMC 15th International Conference on Health Sciences Simulation, 2005.
11. Villard PF, Beuve M, Shariat B, Baudet V, Jaillet F. Simulation of lung behaviour with finite elements: influence of bio-mechanical parameters. Proceedings of the 3rd International Conference on Medical Information Visualisation—BioMedical Visualisation (MediVis '05), July 2005, 9-14.
12. Ilegbusi OJ, Velaski-Tuema E. A fluid-structure interaction index of coronary plaque rupture. Proceedings of the Biomedical Simulation, Springer, 2010, 98-107 (Lecture Notes in Computer Science).
13. Santhanam AP, Neelakkantan H, Min Y. Visualization of 3D volumetric lung dynamics for real-time external beam lung radiotherapy. Studies in Health Technology and Informatics.
14. ADINA Theory and Modeling Guide Volume 3: ADINA CFD & FSI, ADINA R&D, May 2009.
15. Vaina LM, Beardsley SA, Rushton SK. Optical Flow and Beyond. 1st edition, Springer, 2004.
16. Santhanam AP, Min Y, Rolland JP, Imielinska C, Kupelian P. 4DCT Lung Registration Methods. 1st edition, CRC Press, 2011.
17. Santhanam AP, Min Y, Mudur SP, et al. An inverse hyper-spherical harmonics-based formulation for reconstructing 3D volumetric lung deformations. Comptes Rendus Mécanique.
18. Santhanam A, Mudur S, Rolland J. An Inverse 3D Lung Deformation Analysis for Medical Visualization. Computer Animation and Social Agents, Switzerland, 2006.
19. Richards LA. Capillary conduction of liquids through porous mediums. Journal of Applied Physics. 1931;1(5):318-333.
20. http://www.materialise.com/mimics.
21. http://www.materialise.com/BiomedicalRnD/3-matic.
22. Werner R, Ehrhardt J, Schmidt R, Handels H. Modeling respiratory lung motion—a biophysical approach using finite element methods. Medical Imaging 2008: Physiology, Function, and Structure from Medical Images, vol. 6916, February 2008 (Proceedings of SPIE).
23. Chhatkuli S, Koshizuka S, Uesaka M. Dynamic tracking of lung deformation during breathing by using particle method. Modelling and Simulation in Engineering. 2009.
What is Ciphertext?
Ciphertext is also referred to as encrypted text; before encryption it is plaintext. The term "cipher" is sometimes used as an alternative term for ciphertext. In cryptography, a cipher is an
algorithm that is applied to the plaintext to produce the ciphertext. Ciphertext is also called encrypted or encoded information because it is unreadable by a human or computer without the proper
algorithm. The reverse of encryption is decryption: the process of turning ciphertext back into readable plaintext. Codetext and ciphertext are completely different; codetext is the result of a code,
not a cipher.
What are the different types of Ciphers?
The art of cryptography emerged thousands of years ago. Earlier ciphers, or algorithms, were performed manually and were entirely different from modern algorithms, which are generally executed by a
machine. There are different types of ciphers, such as the substitution cipher, transposition cipher, polyalphabetic substitution cipher, permutation cipher, private-key cryptography, and public-key
cryptography.
What is Substitution Cipher?
In a substitution cipher, units of the plaintext are replaced with ciphertext. Well-known examples of this type are the Caesar cipher and the one-time pad.
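As a quick illustration of substitution, here is a minimal Caesar cipher in Python (the shift value and message are arbitrary):

```python
def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Shift each letter by `shift` places; non-letters pass through."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def caesar_decrypt(ciphertext: str, shift: int) -> str:
    # Decryption is just encryption with the opposite shift.
    return caesar_encrypt(ciphertext, -shift)

print(caesar_encrypt("Attack at dawn", 3))  # "Dwwdfn dw gdzq"
```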
What is Transposition Cipher?
Here the ciphertext is a permutation of the plaintext. The rail fence cipher is a well-known example.
What is Polyalphabetic Substitution Cipher?
A substitution cipher using multiple substitution alphabets; the Vigenère cipher and the Enigma machine are well-known examples.
What is Permutation Cipher?
A permutation cipher is a transposition cipher in which the key to decryption is a permutation.
What is Private-key Cryptography?
Modern ciphers are much more secure than classical ciphers and can withstand a wide range of attacks. A cipher is designed so that an attacker cannot recover the key even if he knows the plaintext
and the corresponding ciphertext. Private-key cryptography is also known as a "symmetric key algorithm". Here the same key is used for encryption as well as decryption, so the sender and the
receiver must have a pre-shared key. The key is kept secret from all other parties; it is used by the sender for encryption, and the same key is used by the receiver for decryption. Examples of this
type of cipher are the DES and AES algorithms.
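To illustrate the shared-key idea, here is a toy repeating-key XOR cipher in Python. It is a sketch only and is not secure; real symmetric systems use vetted ciphers such as AES. The key and message are made up for the example.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Illustrative only -- NOT secure; real systems use ciphers like AES."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"pre-shared secret"                # known only to sender and receiver
ciphertext = xor_cipher(b"meet at noon", shared_key)
plaintext = xor_cipher(ciphertext, shared_key)   # the same key decrypts
print(plaintext)  # b'meet at noon'
```

Because XOR is its own inverse, applying the same key twice returns the original message, which is exactly the symmetric-key property described above.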
What is Public-key Cryptography?
It is also known as an asymmetric key algorithm. Here two different keys are used for encryption and decryption: a public key and a private key. The public key is published, so any sender can
perform encryption, whereas the private key is known only to the receiver, who uses it for decryption. An example of this type of cipher is the RSA algorithm.
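The RSA idea can be illustrated with textbook-sized numbers. Real RSA uses primes hundreds of digits long plus a padding scheme; the small primes below are the standard teaching example.

```python
# Textbook RSA with tiny primes -- for illustration only.
p, q = 61, 53
n = p * q                    # modulus, part of both keys
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: e * d % phi == 1

message = 65                         # the plaintext, must be < n
ciphertext = pow(message, e, n)      # anyone can encrypt with (e, n)
decrypted = pow(ciphertext, d, n)    # only the private-key holder can decrypt
print(ciphertext, decrypted)         # 2790 65
```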
What is Cryptanalysis?
Cryptanalysis is the study of recovering encrypted information without access to the secret key. It involves knowing how the system works and finding the secret key, and is also referred to as code
breaking or cracking the code. Converting ciphertext back to plaintext without the key is the central task of cryptanalysis. Depending on the available information and the type of cipher being
analyzed, cryptanalysts can follow various attack models to crack a cipher. The various attack models are:
• Ciphertext-only: The cryptanalyst can access only a few ciphertexts.
• Known-plaintext: The attacker has a preexisting set of ciphertexts to which he knows the corresponding plaintext.
• Chosen-plaintext attack: The attacker can obtain the ciphertexts corresponding to an arbitrary set of plaintexts. There are two types of chosen plaintext:
□ Batch chosen-plaintext attack: In this method the cryptanalyst chooses all plaintexts before they are encrypted.
     □ Adaptive chosen-plaintext attack: A series of interactive queries is made by the cryptanalyst, who chooses each subsequent plaintext based on the information gained from the previous encryptions.
• Chosen-ciphertext attack: Here the attacker can obtain the plaintexts that are corresponding to a set of ciphertexts of his own choosing.
• Related-key attack: This is like a chosen-plaintext attack, except that the attacker can obtain ciphertexts encrypted under two different keys. The keys themselves are unknown, but the relationship
  between them is known; for example, two keys that differ by one bit.
Homework Help
Posted by Jessica on Tuesday, July 5, 2011 at 4:23am.
This is the problem given: The side of a hill faces due south and is inclined to the horizontal at an angle alpha. A straight railway upon it is inclined at an angle beta to the horizontal. If the
bearing of the railway is gamma east of north, show that cos(gamma) = cot(alpha)tan(beta)
I drew a diagram like this, according to this question (imageshack.us/photo/my-images/30/trigonometryquestion.png/).
We have cos(gamma) = CH/AC
cot(alpha) = BH/CH
tan(beta) = CH/AH,
so cot(alpha)xtan(beta) = BH/AH
I'm stuck at proving CH/AC = BH/AH
Is there anyway around this? I'll appreciate any help from you all. Thank you very much.
Note: AC is not perpendicular to BC.
• math - MathMate, Tuesday, July 5, 2011 at 8:37am
From the diagram, I am not sure if you have the same interpretation of the question as I have.
There is a slope facing south, at α with the horizontal.
The railway is at an angle of γ east of north (on the horizontal projection).
The angle β is the angle the railway makes with the horizontal.
If you look at the situation in plan view, north towards the top of the paper, we see a line at γ towards east. Let the railway line length be L, and denote A by the south end of the line, and B
the north end of the line.
Let the elevation of A (south) be zero.
Then B is h above A, where

h = L·sin(β).

Now we will calculate h in a different way by projecting the point B onto a north-south line on the slope; call the projected point B'.
B' should also be h above point A, since the hillside rises due north at the angle α.
Calculate h by first projecting L onto the horizontal plane (giving L·cos(β)), then projecting the resulting line onto the N-S line (giving L·cos(β)·cos(γ)), and finally multiplying by tan(α):

h = L·cos(β)·cos(γ)·tan(α).

Setting the two expressions for h equal, transposing cos(γ) to the left, and cancelling the common factor L, we get:

cos(γ) = sin(β)/(cos(β)·tan(α)) = tan(β)·cot(α). QED
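The result can also be checked numerically: build the unit vector along the railway from β and the γ given by the identity, and verify that it lies in the plane of the hillside. The coordinate axes and test angles below are arbitrary choices.

```python
import math

alpha = math.radians(35.0)  # hillside inclination (arbitrary test value)
beta = math.radians(20.0)   # railway inclination; needs tan(beta) < tan(alpha)

# Claimed relation: cos(gamma) = cot(alpha) * tan(beta)
gamma = math.acos(math.tan(beta) / math.tan(alpha))

# Unit vector along the railway: horizontal bearing gamma east of north,
# raised by beta above the horizontal. Axes: x = east, y = north, z = up.
v = (math.cos(beta) * math.sin(gamma),   # east component
     math.cos(beta) * math.cos(gamma),   # north component
     math.sin(beta))                     # vertical component

# A south-facing hillside inclined at alpha is the plane z = y * tan(alpha);
# the railway lies on the hillside iff its direction satisfies that relation.
print(abs(v[2] - v[1] * math.tan(alpha)) < 1e-12)  # True
```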
How Many Students Are There At Hogwarts?
by Steve Vander Ark
There are actually about three hundred, it would seem, although there is plenty of debate on the subject. Here's the evidence from the books themselves:
• There are more or less ten students (depending on the vagaries of the Sorting) in each House per year, five boys and five girls.
• Harry's class in Gryffindor has the following students in it:
• If there are more Gryffindors of that year, isn't it strange that we haven't heard about them in six years' worth of books? Not a one has spoken up in class, has been part of any parties or
activities in the common room, or anything like that. Judging by the numbers cited above for Herbology, flying, and Potions, they must not even be in the same classes. If there are any others,
where in the world are they? We do have evidence suggesting the existence of two more girls, but if they exist they have been awfully quiet.
• Assuming the ten students per House per year numbers, that would work out to forty students per year for a total of about 280 students. This meshes with the several times that Harry has seen
"hundreds" of people in the Great Hall (in PS7 when Harry was Sorted and GF17 when he was chosen for the Tournament, for example).
• The number of teachers would also suggest a fairly small number of students. In GF12 Harry looks along the staff table in the Great Hall. He notes teachers by name until he gets to Dumbledore in
the middle of the table. He has named Flitwick, Sprout, Sinistra, Snape, and an open chair for McGonagall on that side, which means that there would be the same number on the other side, for a
total of eleven, counting Dumbledore. Add Trelawney, who we know doesn't attend these functions, and we get an even dozen teachers. Twelve teachers would not be able to teach classes to a
thousand students, unless they used Time Turners constantly, which doesn't seem likely.
• The Sorting in PA5 takes only as long as a fairly quick conversation between Harry, McGonagall, and Madam Pomfrey and a subsequent chat between Hermione and McGonagall. The duration of these
chats seems quite short. But if there are a thousand students at Hogwarts, there would be between 150 and 200 students to sort every year. Even at the rate of two per minute, that would mean the
sorting would last for over an hour. Would the rest of the students really be willing to sit for that long waiting to eat? I think it would get a bit long. And the sorting we've witnessed in the
books certainly doesn't seem to last over an hour.
• On the other hand, there are about a hundred carriages to take students to the castle from the train station in Hogsmeade (PA5). There were also 1200 seats available around the tables at the Yule
Ball (GF23). And during one Quidditch match, there were two hundred Slytherin supporters in the stadium when we know that the Ravenclaws and Hufflepuffs were backing Gryffindor (PA15). So there
are some references that support the "thousand student" concept. But by far most of the evidence suggests a much smaller number...
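The arithmetic in the bullets above is easy to check in a few lines, using the essay's own assumptions (ten students per House per year, two Sortings per minute):

```python
# Back-of-envelope checks of the two scenarios discussed above.
years, houses, per_house = 7, 4, 10
small_school = years * houses * per_house
print(small_school)  # 280 students in the "ten per House per year" scenario

# If instead there were about 1000 students:
big_school = 1000
intake = big_school // years          # roughly 142 new first-years each fall
sorting_minutes = intake / 2          # at two Sortings per minute
print(intake, sorting_minutes)        # the Sorting alone would run over an hour
```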
Jamaica Plain Prealgebra Tutor
Find a Jamaica Plain Prealgebra Tutor
...Teaching in an elementary school means that I am familiar with and teach all subjects (math, reading, writing, social studies, and science). I also taught 4th grade math during summer school,
and have tutored 3rd graders in English and math. I was a Women and Gender Studies minor in college and ...
17 Subjects: including prealgebra, reading, English, writing
My name is Gavin S. and I am currently a high school biology teacher in Watertown. I have been teaching for three years and work primarily with students with higher emotional needs. In my
experience thus far I've had to differentiate lessons frequently, scaffold, provide alternate assessments, and alternate ways of explaining material.
8 Subjects: including prealgebra, biology, algebra 1, genetics
...During college, my most relevant experience for this job was as an undergraduate teaching assistant for organic chemistry. It taught me how to be patient in explaining concepts. I also learnt
that students grasp information differently, and adjusting accordingly to fit each student was a necessary skill.
41 Subjects: including prealgebra, chemistry, reading, English
...I look forward to working with your child to build their confidence in math and learn to conquer a subject that I love. As the school year comes to an end, I am looking for students for a
summer tutoring schedule. I am available most, if not all of the summer and can tutor morning, afternoons o...
5 Subjects: including prealgebra, algebra 1, algebra 2, precalculus
...My undergraduate degree is in Chinese Language and Literature from Boston University. As of 2014, I have been studying Chinese for over six years. I have conducted interviews and ESL lessons in
Chinese, and have even translated a short story by a modern Chinese author for my thesis project.
17 Subjects: including prealgebra, reading, English, writing
Analyzing Slug Tests for Maximum Accuracy
Kansas Geological Survey, Open-file Report 2000-26
C. D. McElwee
Prepared for Presentation at
Spring AGU Meeting, Washington, DC
May 31, 2000
KGS Open-file Report 2000-26
Knowledge of the hydraulic conductivity distribution is of utmost importance in understanding the workings of an aquifer and in planning the consequences of any action taken upon that aquifer. For
example, the effects of resource development or of contaminant remediation depend critically upon the in-situ hydraulic conductivity. Slug tests have been used extensively to measure hydraulic
conductivity in the last fifty years since Hvorslev's work. Slug test responses can run the gamut from overdamped to underdamped. Until recently models were readily available for only the two
limiting cases.
A general nonlinear model based on the Navier-Stokes equation, nonlinear frictional loss, non-Darcian flow, acceleration effects, radius changes in the wellbore, and a Hvorslev model for the aquifer
has been developed (McElwee and Zenner, 1998). The nonlinear model has three parameters: beta which is related to radius changes in the water column, A which is related to the nonlinear head losses,
and K the hydraulic conductivity. An additional parameter has been added representing the initial velocity of the water column at slug initiation, VEL0. We find that the model is quite robust in its
estimates of K over varying conditions and allows a wide range of slug test data to be analyzed with a greater accuracy than traditional linear methods. The model covers the entire range of responses
from overdamped to underdamped.
To obtain maximum accuracy in analyzing slug test data one should use a model that will seamlessly simulate responses for the overdamped region through the critically damped region and on into the
underdamped region. This is particularly important when taking multilevel data sets at a site where the hydraulic conductivity changes dramatically from location to location. Multiple slug tests
should be taken at a given location to test for nonlinear effects and to determine repeatability. This will allow some statistical measure of the experimental accuracy. As the hydraulic conductivity
and the velocities in the wellbore increase, the nonlinear effects represented by the parameter A also increase. At some point the slug test response will become oscillatory as the hydraulic
conductivity increases. The parameter beta represents a correction to the water column length caused by radius variations in the wellbore and is most useful in matching the oscillation frequency.
Sensitivity analysis shows that in general beta and VEL0 have the lowest sensitivity and hydraulic conductivity usually has the highest, however, for very high K's the sensitivity to A may surpass
that due to K. The sensitivities vary with hydraulic conductivity but we find the model to be robust with regard to estimates of hydraulic conductivity. Since oscillatory slug tests involve
accelerations of the water column, we find that the pressure transducer responses are affected by these accelerations. For maximum accuracy of analysis, the model response must be corrected for
acceleration before comparison to the transducer response.
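The full nonlinear model is not reproduced in this summary, but the overdamped-to-underdamped range of responses it spans can be illustrated with a much simpler stand-in: a linear damped-oscillator equation h'' + 2*c*h' + omega^2*h = 0 for the water-column head, integrated with RK4. The damping c, frequency omega, and step sizes below are arbitrary illustrative values, not parameters from this report; low damping (high conductivity, low friction) gives an oscillatory head, and high damping gives a monotonic decay.

```python
def simulate(damping, omega=2.0, h0=1.0, dt=0.01, t_end=10.0):
    """Integrate h'' + 2*damping*h' + omega^2*h = 0 with RK4.
    This is a linearized stand-in, not the full nonlinear model.
    Returns the list of heads h(t) starting at h0."""
    h, v = h0, 0.0
    out = [h]

    def acc(h, v):
        return -2.0 * damping * v - omega ** 2 * h

    for _ in range(int(t_end / dt)):
        k1h, k1v = v, acc(h, v)
        k2h, k2v = v + 0.5 * dt * k1v, acc(h + 0.5 * dt * k1h, v + 0.5 * dt * k1v)
        k3h, k3v = v + 0.5 * dt * k2v, acc(h + 0.5 * dt * k2h, v + 0.5 * dt * k2v)
        k4h, k4v = v + dt * k3v, acc(h + dt * k3h, v + dt * k3v)
        h += dt * (k1h + 2 * k2h + 2 * k3h + k4h) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        out.append(h)
    return out

overdamped = simulate(damping=5.0)    # strong friction: monotonic decay
underdamped = simulate(damping=0.2)   # weak friction: oscillation about zero
print(min(overdamped) >= 0.0, min(underdamped) < 0.0)  # True True
```

The critically damped transition falls out of the same equation at damping = omega, which is why a single model can cover the whole range seamlessly.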
A field site, GEMS, in the Kansas River alluvium (coarse sand and gravel overlain by silt and clay) exhibits very high conductivities and nonlinear behavior for slug tests in the sand and gravel
region. It is known from extensive drilling, sampling, and a tracer test that the hydraulic conductivity varies a great deal spatially. Over 70 wells have been completed at various depths. Slug tests
have been performed in wells that are completed in the sand and gravel interval using a packer system with a piston for slug test initiation, allowing accurate determination of the initial head and
starting time for the slug test.
GEMS (Geohydrological Experiment and Monitoring Site)
Location map for the Geohydrologic Experimental and Monitoring Site (GEMS).
Well Nests at GEMS
A typical well nest is shown in the next figure. Typically there is a fully screened well and several wells with short screens completed at various depths. In some nests we may have a well completed
into the bedrock.
Typical Slug Test Arrangement
• The next figure shows a typical slug test arrangement
• h(t) is the head in the well at any time above the static value
• Z[o] is the length of water below the static level to the top of the screen
• b is the length of the screen
The Model
A general nonlinear model based on the Navier-Stokes equation, nonlinear frictional loss, non-Darcian flow, acceleration effects, radius changes in the wellbore, and a Hvorslev model for the aquifer
has been developed.
The nonlinear model has three parameters: β which is related to radius changes in the water column, A which is related to the nonlinear head losses, and K the hydraulic conductivity. An additional
parameter has been added representing the initial velocity of the water column at slug initiation, VEL0. Normally VEL0 would be zero, but it appears that sometimes at initiation the water column
obtains an initial velocity. Since in this work a piston was used for initiation, it is understandable how that might happen. We find that the model is quite robust in its estimates of K over varying
conditions and allows a wide range of slug test data to be analyzed with a greater accuracy than traditional linear methods. The model covers the entire range of responses from overdamped to
Sensitivity Analysis
Sensitivity analysis is a formalism that allows the estimation of the effect on model response due to changing one parameter (McElwee, 1987). This can be combined with a model fitting routine to
iteratively determine the set of parameters that give the best fit to experimental data. The sensitivity coefficient is defined as

U_p = ∂h/∂p,

which is a measure of how much the head (model output) changes when the parameter p is changed by a small amount. A sensitivity coefficient can be defined for each parameter, so if there are N
parameters an NxN sensitivity matrix can be defined and used in the usual least squares fitting procedure to produce the normal equations for the parameter increments Δp,

WU · Δp = U^T · Δh,

where Δh is the difference between experimental and model head, U is the matrix of sensitivity coefficients, and WU (= U^T·U) is the sensitivity matrix. The diagonal elements of the sensitivity matrix are a measure of the model sensitivity to a given
parameter. The current model has four parameters (beta, A, K, and VEL0). The diagonal elements of the sensitivity matrix for these four parameters are plotted below. Sensitivity analysis shows that
in general beta and VEL0 have the lowest sensitivity and hydraulic conductivity usually has the highest, however, for very high K's the sensitivity to A may surpass that due to K. The sensitivities
vary with hydraulic conductivity but we find the model to be robust with regard to estimates of hydraulic conductivity.
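The parameter-fitting loop described above can be sketched on a one-parameter toy problem: recovering K from a simple exponential (Hvorslev-style) response h(t) = h0·exp(-K·c·t). The constant c, the sampling times, and the K values are hypothetical; with a single parameter the matrix equation collapses to the scalar update dK = Σ(U·Δh)/Σ(U·U), with the sensitivity U approximated by finite differences.

```python
import math

# One-parameter toy: Hvorslev-style response h(t) = h0 * exp(-K * c * t),
# where c is a known shape constant. All numerical values are hypothetical.
c, h0 = 50.0, 1.0
times = [0.5 * i for i in range(1, 11)]
K_true = 1.4e-2
data = [h0 * math.exp(-K_true * c * t) for t in times]

def model(K):
    return [h0 * math.exp(-K * c * t) for t in times]

K = 1.0e-2                     # initial guess
for _ in range(20):            # Gauss-Newton iterations
    h = model(K)
    dh = [d - m for d, m in zip(data, h)]        # residual vector (the Δh)
    eps = 1e-8                                   # finite-difference step
    U = [(a - b) / eps for a, b in zip(model(K + eps), h)]  # sensitivity ∂h/∂K
    dK = sum(u * r for u, r in zip(U, dh)) / sum(u * u for u in U)
    K += dK                                      # the Δp update, scalar case
print(K)  # converges toward ~1.4e-2
```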
Multiple Tests Are Diagnostic
Multiple slug tests should be performed at a given location to test for nonlinear effects and to determine if the tests are repeatable (Butler et al., 1996). As the hydraulic conductivity and the
velocities in the wellbore increase, nonlinear effects represented by the parameter A will also increase. At some point the slug test response will become oscillatory as the hydraulic conductivity
increases. If multiple slug tests with the same initial head are not very repeatable that implies there is noise in the data or that the well is changing characteristics. Noise with a time trend will
be particularly troublesome for longer duration slug tests produced by low values of K. Wells can change effective K from one slug test to another if the well has not been properly developed.
Multiple tests will also allow some statistical measure of the experimental accuracy. One can analyze the slug tests individually and calculate the average and the standard deviation of K or any
other fitted parameter.
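A sketch of that repeat-test bookkeeping, using four hypothetical K estimates (the values are illustrative only, not data from this report):

```python
import statistics

# Hypothetical repeat-test K estimates (ft/sec), for illustration only.
K_estimates = [1.19e-5, 1.25e-5, 1.36e-5, 1.44e-5]

mean_K = statistics.mean(K_estimates)
std_K = statistics.stdev(K_estimates)         # sample standard deviation
print(mean_K, std_K, 100.0 * std_K / mean_K)  # mean, spread, relative spread (%)
```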
The following three examples are from wells at GEMS and span the range from overdamped to underdamped slug test responses. In each case some normalized slug test plots with differing initial
heads are shown to investigate nonlinear effects. Examples of repeat slug tests for a given initial head are also shown to look for noise or changing well characteristics. In two of the examples the
individual slug tests have been analyzed to allow calculation of the average and standard deviation in K.
Well 1-2
This well is completed at a depth of 37.7 feet with a screen length of 2.0 feet and is located in fairly fine-grained material directly under the semiconfining layer at GEMS. Therefore one would
expect a fairly low hydraulic conductivity and an overdamped response. The plot below is in the usual Hvorslev form. There is some spread in the responses for differing heads indicating noise,
however, there is no definite trend as there would be for either nonlinear responses or changing well characteristics. There is some upward curvature at early time indicating some storage effects
(Chirlin, 1989). The nonlinear model indicates that there is very little sensitivity to beta, A, or VEL0 and that the best fit for the suite of tests is K=1.40x10^-5 ft/sec. When the 4 tests are
analyzed separately the average K is 1.31x10^-5 ft/sec, with a standard deviation of 0.122x10^-5 ft/sec, which is a little less than 10% of the average value.
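The per-test statistics quoted here can be reproduced with a short routine. The four individual K values below are illustrative stand-ins (the report gives only the summary statistics), chosen so that their average and sample standard deviation match the quoted values:

```java
public class SlugTestStats {
    // Arithmetic mean of the fitted K values
    static double mean(double[] xs) {
        double sum = 0.0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    // Sample standard deviation (n - 1 in the denominator,
    // appropriate for a small number of repeat tests)
    static double sampleStd(double[] xs) {
        double m = mean(xs), ss = 0.0;
        for (double x : xs) ss += (x - m) * (x - m);
        return Math.sqrt(ss / (xs.length - 1));
    }

    public static void main(String[] args) {
        // Hypothetical K estimates (ft/sec) from four repeat slug tests
        double[] k = {1.20e-5, 1.25e-5, 1.35e-5, 1.44e-5};
        double m = mean(k), s = sampleStd(k);
        System.out.printf("mean K = %.3e ft/sec, std = %.3e (%.1f%% of mean)%n",
                m, s, 100.0 * s / m);
    }
}
```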
The figure below shows two repeat slug test responses for 2000 ml added to the well. The differences between the two curves represent the noise present during these tests. It appears that there were
varying time trends in the local water levels during these two tests. Since the tests were fairly lengthy, the effect is more pronounced than for shorter tests.
Well 00-6
This well is completed at a depth of 42.4 feet with a screen length of 2.5 feet and is located in medium-grained material about 7.4 feet under the semiconfining layer at GEMS. This results in a
moderate value for the hydraulic conductivity; however, the response is still an overdamped one. The plot here is also in the usual Hvorslev form. There is a definite indication of nonlinear behavior
here as the responses for differing initial heads are well separated. There is downward curvature in these responses in contrast to the previous well. The nonlinear model indicates that there is low
sensitivity to beta and VEL0 but good sensitivity to K and A. The best fit for the suite of tests is K = 2.22x10^-3 ft/sec and A = 14.19.
Repeat tests for 4000 ml added to the well are shown in the plot below. The tests basically fall on top of one another, indicating high reproducibility. These tests are over in about 10 seconds so
they are not very susceptible to long term trends in local water levels.
Well 2-7
This well is completed at a depth of 56.4 feet with a screen length of 2.6 feet and is located in fairly coarse-grained material near the center of the alluvial aquifer at GEMS. In this environment
there is a fairly high hydraulic conductivity and the response is definitely underdamped with significant oscillation. The plot is of normalized head versus time, which makes it appear that the
oscillation is greater for smaller initial heads. In fact, the negative oscillation is about the same magnitude for all responses. There is good separation of all responses indicating strong
nonlinear effects. The nonlinear model indicates that there is moderate sensitivity to beta and VEL0 and strong sensitivity to A and K. The best fit for the suite of tests analyzed together is K =
4.8x10^-3 ft/sec and A = 18.0. For 10 tests analyzed separately the average K is 0.00535 ft/sec with a standard deviation of 0.000743 ft/sec, which is a little less than 15% of the average value.
The figure below shows two repeat slug test responses for 1000 ml added to the well. The tests are nearly identical indicating high reproducibility and little evidence of local water level noise.
Dynamic Effects
The parameter beta represents a correction to the water column length caused by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude in wells with
underdamped responses. For simple radius variations in the wellbore, the theoretical value for beta can be computed. Since oscillatory slug tests involve accelerations of the water column, we find
that the pressure transducer responses are affected by these accelerations. Theoretically the correction for acceleration is given by
where h[e] is the experimentally measured head, h[m] is the theoretical model head, Z[s] is the submergence of the pressure transducer, a is the acceleration of the water column, and g is the
acceleration of gravity. For maximum accuracy of analysis, the model response must be corrected for acceleration before comparison to the transducer response. These two effects are illustrated in the
following figures.
First of all, the results of fitting the nonlinear model to the 2000 ml and 4000 ml slug tests in well 2-7 are shown below. The response is underdamped and highly oscillatory and shows strong
nonlinear effects. These two tests are matched well, indicating the model is describing the system well. The sum of squared errors for the fit is 6.87.
When the slug test responses are recalculated using a beta of -6.9 feet instead of the theoretical value of +6.9 feet (all other parameter values are held constant at their fitted values), the sum of
squared errors increases to 30.78. The resulting response for the 2000 ml test is shown below. It is clear that the beta parameter is important in matching the amplitude and frequency of the oscillatory response.
If the acceleration correction is turned off and the slug test responses are recalculated for the previously converged values of the parameters, the sum of squared errors increases to 13.18. The
model response without the acceleration correction is compared to the 4000 ml slug test data in the figure below. It is clear that the early part of the experimental record is not as well reproduced
as before. Although it is difficult to see, the fit with the experimental data is not as good in those areas where the velocity is changing most rapidly. Therefore, for maximum accuracy
in fitting the data one should make the acceleration correction.
To obtain maximum accuracy in analyzing slug test data one should use a model that will seamlessly simulate responses for the overdamped region through the critically damped region and on into the
underdamped region. This is particularly important when taking multilevel data sets at a site where the hydraulic conductivity changes dramatically from location to location. Multiple slug tests
should be taken at a given location to test for nonlinear effects and to determine repeatability. This will allow some statistical measure of the experimental accuracy. As the hydraulic conductivity
and the velocities in the wellbore increase, the nonlinear effects represented by the parameter A also increase. At some point the slug test response will become oscillatory as the hydraulic
conductivity increases. The parameter beta represents a correction to the water column length caused by radius variations in the wellbore and is most useful in matching the oscillation frequency.
Sensitivity analysis shows that in general beta and VEL0 have the lowest sensitivity and hydraulic conductivity usually has the highest; however, for very high K's the sensitivity to A may surpass
that due to K. The sensitivities vary with hydraulic conductivity but we find the model to be robust with regard to estimates of hydraulic conductivity. Since oscillatory slug tests involve
accelerations of the water column, we find that the pressure transducer responses are affected by these accelerations. For maximum accuracy of analysis, the model response must be corrected for
acceleration before comparison to the transducer response.
Butler, J.J., Jr., McElwee, C.D., and Liu, W., 1996, Improving the quality of parameter estimates from slug tests: Ground Water, v. 34, no. 3, pp. 480-490.
Chirlin, G.R., 1989, A critique of the Hvorslev method for slug test analysis: The fully penetrating well: Ground Water Monitoring Review, v. 9, no. 2, pp. 130-138.
McElwee, C.D., 1987, Sensitivity analysis of ground-water models; in, Proceedings of the 1985 NATO Advanced Study Institute on Fundamentals of Transport Phenomena in Porous Media, edited by J. Bear
and M.Y. Corapcioglu: Martinus Nijhoff, Dordrecht, Netherlands. pp. 751-817.
McElwee, C.D., and Zenner, M., 1998, A nonlinear model for analysis of slug-test data: Water Resources Research., v. 34, no. 1, pp. 55-66.
Kansas Geological Survey, Geohydrology
Placed online Nov. 20, 2007, original report dated May 2000
Comments to webadmin@kgs.ku.edu
The URL for this page is http://www.kgs.ku.edu/Hydro/Publications/2001/OFR00_26/index.html
An abacus (plurals abacuses or abaci), also called a counting frame, is a calculating tool for performing arithmetic processes. Nowadays, abaci are often constructed as a wooden frame with beads
sliding on wires, but originally they were beads or stones moved in grooves in sand or on tablets of wood, stone, or metal. The abacus was in use centuries before the adoption of the written modern
numeral system and is still widely used by merchants and clerks in China, Japan, Africa, India and elsewhere.
The user of an abacus is called an abacist; he or she slides the beads of the abacus by hand.
The first abacus was almost certainly based on a flat stone covered with sand or dust. Words and letters were drawn in the sand; eventually numbers were added and pebbles used to aid calculations.
The Babylonians used this dust abacus as early as 2400 BC. The origin of the counter abacus with strings is obscure, but India, Mesopotamia or Egypt are seen as probable points of origin. China
played an essential part in the development and evolution of the abacus.
From this, a variety of abaci were developed; the most popular were based on the bi-quinary system, using a combination of two bases (base-2 and base-5) to represent decimal numbers. But the earliest
abaci used first in Mesopotamia and later by scribes in Egypt and Greece used sexagesimal numbers represented with factors of 5, 2, 3, and 2 for each digit.
The use of the word abacus dates from before 1387, when a Middle English work borrowed the word from Latin to describe a sandboard abacus.
The Latin word came from abakos, the Greek genitive form of abax ("calculating-table"). Because abax also had the sense of "table sprinkled with sand or dust, used for drawing geometric figures",
some linguists speculate that the Greek word may be derived from a Semitic root (cf. Phoenician abak, "sand", Hebrew ābāq (pronounced "a-vak"), "dust"). The preferred plural of abacus is a subject of
disagreement, but both abacuses and abaci are in use.
Babylonian abacus
Babylonians may have used the abacus for the operations of addition and subtraction. However, this primitive device proved difficult to use for more complex calculations. Some scholars point to a
character from the Babylonian cuneiform which may have been derived from a representation of the abacus.
Egyptian abacus
The use of the abacus in ancient Egypt is mentioned by the Greek historian Herodotus, who writes that the manner of this disk's usage by the Egyptians was opposite in direction when compared with
the Greek method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, wall depictions of this instrument have not been discovered,
casting some doubt over the extent to which this instrument was used.
Greek abacus
A tablet found on the Greek island Salamis in 1846 dates back to 300 BC, making it the oldest counting board discovered so far. It is a slab of white marble 149 cm long, 75 cm wide, and 4.5 cm thick,
on which are 5 groups of markings. In the centre of the tablet is a set of 5 parallel lines equally divided by a vertical line, capped with a semi-circle at the intersection of the bottom-most
horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two
sections by a line perpendicular to them, but with the semi-circle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the
vertical line.
Roman abacus
The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles, calculi, were used. Later, and in medieval Europe, jetons were
manufactured. Marked lines indicated units, fives, tens etc. as in the Roman numeral system. This system of 'counter casting' continued into the late Roman empire and in medieval Europe, and
persisted in limited use into the nineteenth century.
In addition to the more common method using loose counters, several specimens have been found of a Roman abacus, shown here in reconstruction. It has eight long grooves containing up to five beads in
each and eight shorter grooves having either one or no beads in each.
The groove marked I indicates units, X tens, and so on up to millions. The beads in the shorter grooves denote fives—five units, five tens etc., essentially in a bi-quinary coded decimal system,
obviously related to the Roman numerals. The short grooves on the right may have been used for marking Roman ounces.
Indian abacus
First-century sources, such as the Abhidharmakosa, describe the knowledge and use of the abacus in India. Around the 5th century, Indian clerks were already finding new ways of recording the contents
of the abacus. Hindu texts used the term shunya (meaning zero) to indicate the empty column on the abacus.
Chinese abacus
The earliest mention of a suanpan is found in a First Century book of the Eastern Han Dynasty, namely Supplementary Notes on the Art of Figures written by Xu Yue. However, the exact design of this
suanpan is not known.
Usually, a suanpan is about 20 cm tall and it comes in various widths depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads
each in the bottom for both decimal and hexadecimal computation. Modern abacuses have one bead on the top deck and four beads on the bottom deck. The beads are usually rounded and made of a hardwood.
The beads are counted by moving them towards the beam: beads moved towards the beam are counted, while beads moved away from it are not. The suanpan can be reset to the
starting position instantly by a quick jerk along the horizontal axis to spin all the beads away from the horizontal beam at the centre.
Suanpans can be used for functions other than counting. Unlike the simple counting board used in elementary schools, very efficient suanpan techniques have been developed to do multiplication,
division, addition, subtraction, square root and cube root operations at high speed.
In the famous long scroll Riverside Scenes at Qingming Festival painted by Zhang Zeduan (1085-1145) during the Song Dynasty (960-1279), a suanpan is clearly seen lying beside an account book and
doctor's prescriptions on the counter of an apothecary's (Feibao).
The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, as there is some evidence of a trade relationship between the Roman Empire and China. However,
no direct connection can be demonstrated, and the similarity of the abaci may be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most
modern Japanese) has 4 plus 1 bead per decimal place, the standard suanpan has 5 plus 2, allowing less challenging arithmetic algorithms, and also allowing use with a hexadecimal numeral system.
Instead of running on wires as in the Chinese and Japanese models, the beads of Roman model run in grooves, presumably making arithmetic calculations much slower.
Another possible source of the suanpan is Chinese counting rods, which operated with a decimal system but lacked the concept of zero as a place holder. The zero was probably introduced to the Chinese
in the Tang Dynasty (618-907) when travel in the Indian Ocean and the Middle East would have provided direct contact with India and Islam allowing them to acquire the concept of zero and the decimal
point from Indian and Islamic merchants and mathematicians.
The Chinese abacus migrated from China to Korea around the year 1400. Koreans call it jupan (주판), supan (수판) or jusan (주산).
Korean and Japanese abacus
The abacus migrated from China to Korea around the year 1400 (as noted above) and later to Japan, around 1600, where the suanpan became known as the soroban (算盤, そろばん, lit. "counting
tray"). Like the suanpan, the soroban is still used in Japan today, even with the
proliferation, practicality, and affordability of pocket electronic calculators.
Native American abaci
Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture. This Mesoamerican abacus used a 5-digit base-20 system.
The quipu of the Incas was a system of knotted cords used to record numerical data, like advanced tally sticks—but not used to perform calculations. Calculations were carried out using a yupana (
quechua for "counting tool"; see figure) which was still in use after the conquest of Peru. The working principle of a yupana is unknown, but in 2001 an explanation of the mathematical basis of these
instruments was proposed. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5 and powers of 10, 20 and 40 as place values
for the different fields in the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at minimum.
Russian abacus
The Russian abacus, the schoty (счёты), usually has a single slanted deck, with ten beads on each wire (except one wire which has four beads, for quarter-ruble fractions). This wire is usually near
the user. (Older models have another 4-bead wire for quarter-kopeks, which were minted until 1916.) The Russian abacus is often used vertically, with wires from left to right in the manner of a book.
The wires are usually bowed to bulge upward in the center, in order to keep the beads pinned to either of the two sides. It is cleared when all the beads are moved to the right. During manipulation,
beads are moved to the left. For easy viewing, the middle 2 beads on each wire (the 5th and 6th bead) usually have a colour different from the other 8 beads. Likewise, the left bead of the thousands
wire (and the million wire, if present) may have a different colour.
The Russian abacus was in use in all shops and markets throughout the former Soviet Union, and its usage was taught in most schools until the 1990s. Today it is regarded as an archaism and has been
replaced by the electronic calculator, whose use has been taught in schools since the 1990s.
School abacus
Around the world, abaci have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic. In Western countries, a bead frame similar to the Russian abacus
but with straight wires and a vertical frame has been common (see image). It is still often seen as a plastic or wooden toy.
The type of abacus shown here is often used to represent numbers without the use of place value. Each bead and each wire has the same value, and used in this way the abacus can represent numbers up to 100.
The most significant educational advantage of using an abacus, rather than loose beads or counters, when practicing counting and simple addition is that it gives the student an awareness of the
groupings of 10 which are the foundation of our number system. Although adults take this base 10 structure for granted, it is actually difficult to learn. Many 6-year-olds can count to 100 by rote
with only a slight awareness of the patterns involved.
Uses by the blind
An adapted abacus, invented by Tim Cranmer and called a Cranmer abacus, is still commonly used by individuals who are blind. A piece of soft fabric or rubber is placed behind the beads so that they do
not move inadvertently. This keeps the beads in place while the users feel or manipulate them. They use an abacus to perform the mathematical functions multiplication, division, addition, subtraction,
square root and cubic root.
Although blind students have benefited from talking calculators, the abacus is still very often taught to these students in early grades, both in public schools and state schools for the blind. The
abacus teaches math skills that can never be replaced with talking calculators and is an important learning tool for blind students. Blind students also complete math assignments using a
braille-writer and Nemeth code (a type of braille code for math) but large multiplication and long division problems can be long and difficult. The abacus gives blind and visually impaired students a
tool to compute math problems that equals the speed and mathematical knowledge required by their sighted peers using pencil and paper. Many blind people find this number machine a very useful tool
throughout life.
Sacaton Algebra Tutor
Find a Sacaton Algebra Tutor
...The main reason I thought students weren’t performing well was due to a paucity of tutors. I began to tutor students in math, reading, and writing to improve their test scores. After every
student finished their math homework, I reviewed each problem on the board and taught them how to evaluate each one.
67 Subjects: including algebra 2, American history, biology, chemistry
...I have continued to follow my interest in psychology with reading books and magazines on the subject. This is not my primary field of expertise, but I am certainly competent enough to tutor for
it. I have taught middle school and high school for many years.
43 Subjects: including algebra 2, algebra 1, Spanish, reading
...God has given me a passion for teaching and I hope to serve Him by using the gifts He has given me:)I have taught Algebra I for at least 20 years using the Saxon and Abeka curricula. I have
taught Algebra II to high school students for four years and tutored during two summers to help some students receive their credits. I have used the Saxon and Abeka curricula.
19 Subjects: including algebra 1, algebra 2, English, vocabulary
...I tutored both math and science subjects individually or in a group setting. I have my Engineering graduate degree and have been working in the engineering field for over 15 years. I am also
fluent in both English and Chinese.
8 Subjects: including algebra 2, algebra 1, calculus, trigonometry
...My formal training is in engineering and systems studies which has given great insight into physical systems and scientific inquiry. Philosophy has been a passion of mine for a long time. I
have read extensively on metaphysics, symbolic logic, epistemology, ethics, philosophy of science, mathematics, language, and several other subjects within philosophy.
62 Subjects: including algebra 1, algebra 2, reading, English
How to Find the Partial Sum of an Arithmetic Sequence
When your pre-calculus teacher asks you to calculate the kth partial sum of an arithmetic sequence, you need to add the first k terms. This may take a while, especially if k is large. Fortunately,
you can use a formula instead of plugging in each of the values for n. The kth partial sum of an arithmetic series is

S[k] = k(a[1] + a[k])/2

You simply plug the lower and upper limits into the formula for a[n] to find a[1] and a[k].
Arithmetic sequences are very helpful to identify because the formula for the nth term of an arithmetic sequence is always the same:
a[n] = a[1] + (n – 1)d
where a[1] is the first term and d is the common difference.
One real-world application of an arithmetic sum involves stadium seating. Say, for example, a stadium has 35 rows of seats; there are 20 seats in the first row, 21 seats in the second row, 22 seats
in the third row, and so on. How many seats do all 35 rows contain? Follow these steps to find out:
1. Find the first term of the sequence.
The first term of this sequence (or the number of seats in the first row) is given: 20.
2. Find the kth term of the sequence.
Because the stadium has 35 rows, find a[35]. Use the formula for the nth term of an arithmetic sequence. The first term is 20, and each row has one more seat than the row before it, so d = 1.
Plug these values into the formula:

a[35] = 20 + (35 – 1)(1) = 54

Note: This solution is the number of seats in the 35th row, not the answer to how many seats the stadium contains.
3. Use the formula for the kth partial sum of an arithmetic sequence to find the sum:

S[35] = 35(20 + 54)/2 = 1,295

So all 35 rows together contain 1,295 seats.
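As a quick check of the arithmetic above, the two formulas can be coded directly (a minimal sketch; the class and method names are my own):

```java
public class StadiumSeats {
    // nth term of an arithmetic sequence: a[n] = a[1] + (n - 1)d
    static int nthTerm(int a1, int n, int d) {
        return a1 + (n - 1) * d;
    }

    // kth partial sum: S[k] = k(a[1] + a[k]) / 2
    static int partialSum(int a1, int k, int d) {
        return k * (a1 + nthTerm(a1, k, d)) / 2;
    }

    public static void main(String[] args) {
        System.out.println(nthTerm(20, 35, 1));    // seats in row 35: 54
        System.out.println(partialSum(20, 35, 1)); // total seats: 1295
    }
}
```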
Combinatorial problems related to sequences with repeated entries
Sequences of numbers have important applications in the field of Computer Science. As a result they have become increasingly regarded in Mathematics, since analysis can be instrumental in
investigating algorithms. Three concepts are discussed in this thesis, all of which are concerned with ‘words’ or ‘sequences’ of natural numbers where repeated letters are allowed:
• The number of distinct values in a sequence with geometric distribution. In Part I, a sample which is geometrically distributed is considered, with the objective of counting how many different letters occur at least once in the sample. It is concluded that the number of distinct letters grows like log n as n → ∞. This is then generalised to the question of how many letters occur at least b times in a word.
• The position of the maximum (and/or minimum) in a sequence with geometric distribution. Part II involves many variations on the central theme which addresses the question: “What is the probability that the maximum in a geometrically distributed sample occurs in the first d letters of a word of length n?” (assuming d ≤ n). Initially, d is considered fixed, but in later chapters d is allowed to grow with n. It is found that for 1 ≤ d = o(n), the results are the same as when d is fixed.
• The average depth of a key in a binary search tree formed from a sequence with repeated entries. Lastly, in Part III, random sequences are examined where repeated letters are allowed. First, the average left-going depth of the first one is found, and later the right-going path to the first r if the alphabet is {1, . . . , r} is examined. The final chapter uses a merge (or ‘shuffle’) operator to obtain the average depth of an arbitrary node, which can be expressed in terms of the left-going and right-going depths.
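The log n growth claimed in Part I is easy to check numerically. The sketch below uses the standard expectation formula E[D_n] = Σ_k (1 − (1 − p(1−p)^(k−1))^n) for the expected number of distinct letters in n geometric(p) draws (this formula is my addition, not taken from the thesis); for p = 1/2 the expectation should increase by about log2(10) ≈ 3.32 every time n grows tenfold:

```java
public class DistinctLetters {
    // Expected number of distinct letters among n geometric(p) draws:
    // E[D_n] = sum over k of (1 - (1 - p(1-p)^(k-1))^n)
    static double expectedDistinct(int n, double p) {
        double sum = 0.0;
        double pk = p; // P(a draw equals letter k) = p(1-p)^(k-1)
        for (int k = 1; k <= 500; k++) { // tail terms are negligible
            sum += 1.0 - Math.pow(1.0 - pk, n);
            pk *= (1.0 - p);
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000, 10000}) {
            System.out.printf("n = %5d  E[distinct] = %.3f%n", n, expectedDistinct(n, 0.5));
        }
    }
}
```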
Jamie Craane's Blog
Out of interest I am familiarizing myself with genetic algorithms, GA for short. My interest in GA came when I first heard about the JGAP project. As mentioned on the project's site: "JGAP
(pronounced "jay-gap") is a Genetic Algorithms and Genetic Programming component provided as a Java framework." For a newcomer I found it difficult to get a good overview of all the concepts
introduced in genetic algorithms. Before diving into JGAP, I think it is essential that these concepts are well understood. This post is an introduction to genetic algorithms (GA) with JGAP,
explained with a concrete example. In one of my next posts I will demonstrate solving a problem with genetic programming (GP).
So what is a genetic algorithm? The following definition is from John R. Koza:

The genetic algorithm is a probabilistic search algorithm that iteratively transforms a set (called a population) of mathematical objects (typically fixed-length binary character strings), each
with an associated fitness value, into a new population of offspring objects using the Darwinian principle of natural selection and using operations that are patterned after naturally occurring
genetic operations, such as crossover (sexual recombination) and mutation.
In genetic algorithms, a potential solution is called a chromosome. A chromosome consists of a fixed length of genes. A gene is a distinct component of a potential solution. During the evolution
of the genetic algorithm, multiple solutions (chromosomes) are combined (crossover) to form, potentially, better solutions. The evolution is done over a population of solutions. The population of
solutions is called a genotype and consists of a fixed length of chromosomes. During each evolution, natural selection is applied to determine which solutions (chromosomes) make it to the next
evolution. The input criterion for the selection process is the so-called fitness of a potential solution. Solutions with a better fitness value are more likely to appear in the next evolution
than solutions with a worse fitness value. The fitness value of a potential solution is determined by a user-supplied fitness function.
Although it is possible to implement the above concepts yourself, JGAP already took care of this. Because the best way to learn is by example, let me first introduce the problem domain which I am
going to solve with genetic algorithms. During the example, the concepts mentioned above are further clarified.
Consider a moving company which is specialized in moving boxes (with things in them) from one location to another. These boxes have varying volumes. The boxes are put in vans in which they are
moved from location to location. To reduce transport costs, it is crucial for the moving company to use as few vans as possible.
Problem statement:
given a number of boxes of varying volumes, what is the optimal arrangement of the boxes so that a minimal number of vans is needed? The following example shows how to solve this problem with genetic
algorithms and JGAP.
First: with the arrangement of the boxes I mean the following: consider 5 boxes with the following volumes (in cubic meters): 1,4,2,2 and 2 and vans with a capacity of 4 cubic meters. When the boxes
are put in the vans based on the initial arrangement, the distribution of the boxes in the vans is like this:
Van    Boxes         Space wasted
Van 1  Box 1         3
Van 2  Box 4         0
Van 3  Box 2, Box 2  0
Van 4  Box 2         2
Fitness value = (3 + 2) * 4 = 20 (total wasted space times the number of vans). See the section on the fitness function for an explanation of the fitness function for this particular problem.
A total of 4 vans is needed. But when the optimal number of vans is calculated, which is the total volume of the boxes divided by the volume of the vans, the result is 11 / 4 = 2.75. Because no
partial vans can be used, the optimal number of vans needed is 3. The optimal arrangement of the boxes is the following: 1,2,2,2,4. Based on this arrangement the distribution looks like this:
Van    Boxes         Space wasted
Van 1  Box 1, Box 2  1
Van 2  Box 2, Box 2  0
Van 3  Box 4         0
Fitness value = 1 * 3 = 3 (total wasted space times the number of vans).
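The packing rule implicit in these tables (each box goes into the current van if it fits, otherwise a new van is started) and the fitness calculation can be sketched as follows. Note that the fitness formula (total wasted space multiplied by the number of vans) is my reading of the two worked examples above:

```java
public class BoxFitness {
    // Next-fit packing: a box goes into the current van if it fits,
    // otherwise a new van is started. Fitness = wasted space * vans
    // (lower is better), as in the worked examples above.
    static double fitness(double[] arrangement, double vanCapacity) {
        int vans = 0;
        double used = vanCapacity; // forces a new van for the first box
        double total = 0.0;
        for (double box : arrangement) {
            if (used + box > vanCapacity) { // box does not fit: open a new van
                vans++;
                used = 0.0;
            }
            used += box;
            total += box;
        }
        double wasted = vans * vanCapacity - total;
        return wasted * vans;
    }

    public static void main(String[] args) {
        System.out.println(fitness(new double[]{1, 4, 2, 2, 2}, 4)); // 20.0
        System.out.println(fitness(new double[]{1, 2, 2, 2, 4}, 4)); // 3.0
    }
}
```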
Before implementing the actual solution, the following preparatory steps must be taken. These preparatory steps are always needed if genetic algorithms are used to solve a particular problem.
1. Define the genetic representation of the problem domain. The boxes which must be put in the vans are represented by an array of Box instances. The genetic algorithm must find the optimal
   arrangement in the array as how to put the boxes in the vans. A chromosome is a potential solution and consists of a fixed-length of genes. A potential solution in this example consists of a list
   of indexes, where each index represents a Box in the box array. To represent such an index, I use an IntegerGene. As mentioned earlier, a gene is a distinct part of the solution. In this example, a
solution (chromosome) consists of as many genes as there are boxes. The genes must be ordered by the genetic program in such a way that it represents a (near) optimal arrangement. For example: if
there are 50 boxes, a chromosome with 50 IntegerGene's is constructed, where each gene's value is initialized to an index in the box array, in this case from 0 to 49.
2. Determine the fitness function. The fitness function determines how good a potential solution is compared to other solutions. In this problem domain, a solution is fitter when fewer vans are
needed and less space is wasted.
3. Determine the parameters used for the run. For the run I use a population size of 50 and a total number of 5000 evolutions. So the genotype (the population) initially consists of 50 chromosomes
(potential solutions). These values are chosen based on some experimentation and can vary based on the specific problem.
4. Determine the termination criteria. The program ends when 5000 evolutions are reached or when the optimal number of vans needed is reached. The optimal number of vans can be calculated by
dividing the total volume of the boxes by the capacity of the vans and rounding the result up (because no partial vans can be used).
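The termination bound from step 4 can be computed in one line (a trivial sketch, using the capacity-4 example numbers from above):

```java
// Sketch of the termination criterion from step 4: the optimal number of
// vans is the total box volume divided by the van capacity, rounded up
// because no partial vans can be used.
public class OptimalVanCount {

    static int optimalVans(double totalVolume, double vanCapacity) {
        return (int) Math.ceil(totalVolume / vanCapacity);
    }

    public static void main(String[] args) {
        // The five example boxes above: total volume 11, capacity 4 -> 3 vans
        System.out.println(optimalVans(11.0, 4.0));
    }
}
```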
The Box class has a volume. In this example 125 boxes are created with varying volumes between 0.25 and 3.00 cubic meters. The boxes are stored in an array. The following code creates the boxes:
Random r = new Random(seed);
this.boxes = new Box[125];
for (int i = 0; i < 125; i++) {
    Box box = new Box(0.25 + (r.nextDouble() * 2.75));
    this.boxes[i] = box;
}
Before we configure JGAP, we must first implement a fitness function. The fitness function is the most important part of genetic algorithms, as it determines which chromosomes potentially make it into
the next evolution. The fitness function for this problem looks like this:
package nl.jamiecraane.mover;

import org.jgap.FitnessFunction;
import org.jgap.IChromosome;

/**
 * Fitness function for the Mover example. See
 * {@link #evaluate(IChromosome)} for the actual fitness function.
 */
public class MoverFitnessFunction extends FitnessFunction {
    private Box[] boxes;
    private double vanCapacity;

    public void setVanCapacity(double vanCapacity) {
        this.vanCapacity = vanCapacity;
    }

    public void setBoxes(Box[] boxes) {
        this.boxes = boxes;
    }

    /**
     * Fitness function. A lower value means less volume is wasted in the
     * vans, which is better: it indicates a more optimal distribution of
     * the boxes over the vans. The wasted volume is multiplied by the
     * number of vans needed, as more vans are more expensive.
     */
    protected double evaluate(IChromosome a_subject) {
        double wastedVolume = 0.0D;
        double sizeInVan = 0.0D;
        int numberOfVansNeeded = 1;
        for (int i = 0; i < boxes.length; i++) {
            int index = (Integer) a_subject.getGene(i).getAllele();
            if ((sizeInVan + this.boxes[index].getVolume()) <= vanCapacity) {
                sizeInVan += this.boxes[index].getVolume();
            } else {
                // This box does not fit: close the current van and record
                // the space wasted in it.
                wastedVolume += Math.abs(vanCapacity - sizeInVan);
                // Make sure we put the box which did not fit in this van in the next van
                sizeInVan = this.boxes[index].getVolume();
                numberOfVansNeeded++;
            }
        }
        // Take into account the number of vans needed. More vans produce a
        // higher (worse) fitness value.
        return wastedVolume * numberOfVansNeeded;
    }
}
The above fitness function loops through all the genes in the supplied potential solution (where each gene in the chromosome represents an index in the box array) and calculates how many vans are
needed for this arrangement of boxes to fit in the vans. The fitness value is based on the space wasted in every van when a new van is needed, called the wasted volume. The total volume wasted is
multiplied by the number of vans needed. This is done to produce a much worse fitness value when more vans are needed. In the simplified example above, the fitness value of the first solution is 20
and the fitness value of the second, optimal, solution is 3. One term deserves more explanation, and that is allele. In the above code the getAllele method on the gene is called. Allele is just
another word for the value of a gene. Because all genes are IntegerGenes, the value of each gene is of type Integer.
Next it is time to setup JGAP:
private Genotype configureJGAP() throws InvalidConfigurationException {
    Configuration gaConf = new DefaultConfiguration();
    // Here we specify a fitness evaluator where lower values mean a better fitness
    gaConf.setFitnessEvaluator(new DeltaFitnessEvaluator());
    // Only use the swapping operator. Other operators make no sense here,
    // and the size of the chromosome must remain constant
    gaConf.getGeneticOperators().clear();
    SwappingMutationOperator swapper = new SwappingMutationOperator(gaConf);
    gaConf.addGeneticOperator(swapper);
    // Keep track of the fittest individual across evolutions
    gaConf.setPreservFittestIndividual(true);
    // The number of genes is the number of boxes we have. Every gene represents one box.
    int chromeSize = this.boxes.length;
    Genotype genotype;
    // Set up the structure with which to evolve the solution of the problem.
    // An IntegerGene is used. This gene represents the index of a box in the boxes array.
    IChromosome sampleChromosome = new Chromosome(gaConf, new IntegerGene(gaConf), chromeSize);
    gaConf.setSampleChromosome(sampleChromosome);
    gaConf.setPopulationSize(50);
    // Set up the fitness function
    MoverFitnessFunction fitnessFunction = new MoverFitnessFunction();
    fitnessFunction.setBoxes(this.boxes);
    fitnessFunction.setVanCapacity(VOLUME_OF_VANS);
    gaConf.setFitnessFunction(fitnessFunction);
    // Because the IntegerGenes are initialized randomly, it is necessary to set the
    // values to the indexes. Values range from 0..boxes.length-1
    genotype = Genotype.randomInitialGenotype(gaConf);
    List chromosomes = genotype.getPopulation().getChromosomes();
    for (Object chromosome : chromosomes) {
        IChromosome chrom = (IChromosome) chromosome;
        for (int j = 0; j < chrom.size(); j++) {
            Gene gene = chrom.getGene(j);
            gene.setAllele(j);
        }
    }
    return genotype;
}
In the above code we set up the JGAP library. The provided Javadoc should be self-explanatory. A population (genotype) of 50 potential solutions (chromosomes) is created where every chromosome
consists of the same number of genes as there are boxes. Because in this example a lower fitness value is better, the DeltaFitnessEvaluator is used.
Next, it is time to evolve the population. The population is evolved at most 5000 times; when the optimal number of vans is reached earlier, the run ends. The following code demonstrates the evolution of the
problem solution:
private void evolve(Genotype a_genotype) {
    int optimalNumberOfVans = (int) Math.ceil(this.totalVolumeOfBoxes / VOLUME_OF_VANS);
    LOG.info("The optimal number of vans needed is [" + optimalNumberOfVans + "]");
    double previousFittest = a_genotype.getFittestChromosome().getFitnessValue();
    numberOfVansNeeded = Integer.MAX_VALUE;
    for (int i = 0; i < NUMBER_OF_EVOLUTIONS; i++) {
        if (i % 250 == 0) {
            LOG.info("Number of evolutions [" + i + "]");
        }
        a_genotype.evolve();
        double fittness = a_genotype.getFittestChromosome().getFitnessValue();
        int vansNeeded = this.numberOfVansNeeded(a_genotype.getFittestChromosome().getGenes()).size();
        if (fittness < previousFittest && vansNeeded < numberOfVansNeeded) {
            previousFittest = fittness;
            numberOfVansNeeded = vansNeeded;
            LOG.info("Fitness value [" + fittness + "]");
            LOG.info("The total number of vans needed is [" + vansNeeded + "]");
        }
        // No more optimal solutions possible: stop evolving
        if (numberOfVansNeeded == optimalNumberOfVans) {
            break;
        }
    }
    IChromosome fittest = a_genotype.getFittestChromosome();
    List<Van> vans = numberOfVansNeeded(fittest.getGenes());
}
Because we set the preserveFittest property on the JGAP configuration object to true, we have access to the fittest chromosome via the getFittestChromosome() method. The fittest chromosome
consists of 125 genes, the indexes of the boxes in the array, in the arrangement in which to put the boxes in the vans. The actual evolution is performed by JGAP. The fitness value determines which
chromosomes have the highest chance to make it into the next evolution. Eventually a (near) optimal solution is formed. This underlines the importance of a well-chosen fitness function, as it is used in
the selection process of the chromosomes. Below is the output of a sample run:
The total volume of the [125] boxes is [210.25989987666645] cubic metres.
The optimal number of vans needed is [49]
Number of evolutions [0]
Fitness value [4123.992085987977]
The total number of vans needed is [63]
Fitness value [3458.197333300851]
The total number of vans needed is [61]
Fitness value [3138.2899569572887]
The total number of vans needed is [60]
Fitness value [2865.5105375433063]
The total number of vans needed is [59]
Fitness value [2562.282028584251]
The total number of vans needed is [58]
Fitness value [2267.7135196251966]
The total number of vans needed is [57]
Fitness value [1981.8050106661412]
The total number of vans needed is [56]
Fitness value [1704.5565017070858]
The total number of vans needed is [55]
Fitness value [1479.769464870246]
The total number of vans needed is [54]
Number of evolutions [250]
Fitness value [1215.9278601031112]
The total number of vans needed is [53]
Fitness value [1002.6487336510297]
The total number of vans needed is [52]
Number of evolutions [500]
Fitness value [774.5352329142294]
The total number of vans needed is [51]
Number of evolutions [750]
Number of evolutions [1000]
Fitness value [535.1696373214758]
The total number of vans needed is [50]
Number of evolutions [1250]
Number of evolutions [1500]
Number of evolutions [1750]
Number of evolutions [2000]
Number of evolutions [2250]
Fitness value [307.8713731063958]
The total number of vans needed is [49]
Van [1] has contents with a total volume of [4.204196540671411] and contains the following boxes:
Box:0, volume [2.2510465109421443] cubic metres.
Box:117, volume [1.9531500297292665] cubic metres.
Van [2] has contents with a total volume of [4.185233471369987] and contains the following boxes:
Box:17, volume [1.0595047801111055] cubic metres.
Box:110, volume [0.5031165156303853] cubic metres.
Box:26, volume [2.6226121756284955] cubic metres.
Van [3] has contents with a total volume of [4.312990612147265] and contains the following boxes:
Box:91, volume [1.8897555340430203] cubic metres.
Box:6, volume [2.423235078104245] cubic metres.
...Further output omitted
As seen in the above output, the optimal number of vans is reached between 2250 and 2500 evolutions. The program also outputs the distribution of the boxes in the vans. The complete source code of
the above example can be downloaded from
. The downloadable artifact is called ga-moving-example-1.0.jar.
Genetic algorithms and genetic programming are an exciting technology, but with a lot of concepts which are hard to grasp if you are new to the field. This post introduced the concepts behind genetic
algorithms and showed how to implement a GA solution with JGAP. In one of my next posts, genetic programming is explained with a concrete example.
As with any technique, there is no silver bullet: genetic algorithms are not suited to every problem. To give you an impression, the following problems are good candidates to solve with a GA:
• Problems where it is hard to find a solution, but where it is easy, once a solution is found, to measure how good that particular solution is.
• Problems where the search space is very large, complex or poorly understood.
• Problems where a near optimal solution is acceptable.
• Problems where no mathematical analysis is available.
Although I am not an expert on the subject, feel free to ask your questions and I will try to answer them as best as possible. | {"url":"http://jcraane.blogspot.com/2009_02_01_archive.html","timestamp":"2014-04-18T23:46:15Z","content_type":null,"content_length":"93254","record_id":"<urn:uuid:ab38325b-6d0e-4a7b-a722-a6c351b9497d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00239-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-tickets] [NumPy] #601: function for computing powers of a matrix
NumPy numpy-tickets@scipy....
Tue Oct 30 09:18:37 CDT 2007
#601: function for computing powers of a matrix
Reporter: LevGivon | Owner: somebody
Type: enhancement | Status: new
Priority: normal | Milestone: 1.0.4
Component: numpy.linalg | Version: devel
Severity: normal | Resolution:
Keywords: |
Comment (by LevGivon):
'' I suppose one could modify the proposed function to check
whether the matrix is defective within an appropriate
For the sake of comparison, Matlab's matrix exponentiation mechanism
apparently uses decomposition without checking for defectiveness (and
encounters numerical issues when the matrix is defective). Octave returns
nans and infinities when non-integer exponents are specified even for
non-defective matrices.
Ticket URL: <http://scipy.org/scipy/numpy/ticket/601#comment:5>
NumPy <http://projects.scipy.org/scipy/numpy>
The fundamental package needed for scientific computing with Python.
More information about the Numpy-tickets mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-tickets/2007-October/001432.html","timestamp":"2014-04-21T03:10:46Z","content_type":null,"content_length":"4026","record_id":"<urn:uuid:76402ec1-5957-4955-822b-fa694a79b2a4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00423-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bandwidth choice for nonparametric regression.
(English) Zbl 0554.62035
A modified version of a kernel regression estimator is analyzed with respect to choice of bandwidth. The modification consists of replacing the kernel estimate by a tapered Fourier series estimate
which simplifies some technical arguments.
It is shown that the bandwidth chosen based on an unbiased estimate of mean square error is asymptotically optimal. Several other methods of bandwidth selection, including cross-validation, are
examined and shown to be asymptotically equivalent. However, some simulation results indicate that for small or moderate sample sizes the methods are quite different.
62G05 Nonparametric estimation
62J02 General nonlinear regression
62G99 Nonparametric inference
62J99 Linear statistical inference | {"url":"http://zbmath.org/?q=an:0554.62035&format=complete","timestamp":"2014-04-17T04:33:05Z","content_type":null,"content_length":"21397","record_id":"<urn:uuid:eac3389a-675e-4b3c-b2e9-bf3134512b84>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convex set proof
February 18th 2010, 07:52 AM #1
Feb 2010
I have a question about convex sets. How can I prove that a set is convex if and only if its intersection with any line is convex? I have worked out a proof, but it is not that rigorous, and I was
wondering if someone could help me with this.
Why not post what you have done? We can look at it then.
Here is a hint: Two points determine a unique line and lines are convex sets.
if S is convex:
for all x, y in S and all lambda in [0,1], lambda(x) + (1-lambda)(y) belongs to S
if x and y are also on the line L
-> lambda(x) + (1-lambda)(y) belongs to L
-> lambda(x) + (1-lambda)(y) belongs to the intersection of L and S
therefore S intersection L is convex

conversely, suppose S intersection L is convex for every line L. Take x, y in S and let L be the line through x and y.
then x, y belong to S intersection L -> lambda(x) + (1-lambda)(y) belongs to S intersection L
-> lambda(x) + (1-lambda)(y) belongs to S
-> S is convex
You certainly have all the components there.
I do not know the level of rigor required of you.
thx for your help
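The argument above can be written out cleanly with the quantifiers made explicit (my reconstruction of the same proof):

```latex
\textbf{Claim.} $S \subseteq \mathbb{R}^n$ is convex if and only if $S \cap L$ is convex for every line $L$.

($\Rightarrow$) Let $x, y \in S \cap L$ and $\lambda \in [0,1]$. Since $S$ is convex,
$\lambda x + (1-\lambda) y \in S$. Since $x, y \in L$ and a line contains every convex
combination of two of its points, $\lambda x + (1-\lambda) y \in L$.
Hence $\lambda x + (1-\lambda) y \in S \cap L$, so $S \cap L$ is convex.

($\Leftarrow$) Let $x, y \in S$ and $\lambda \in [0,1]$. Let $L$ be a line through $x$ and $y$
(unique when $x \neq y$). Then $x, y \in S \cap L$, and $S \cap L$ is convex by assumption,
so $\lambda x + (1-\lambda) y \in S \cap L \subseteq S$. Hence $S$ is convex. $\blacksquare$
```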
| {"url":"http://mathhelpforum.com/differential-geometry/129461-convex-set-proof.html","timestamp":"2014-04-18T19:19:59Z","content_type":null,"content_length":"41813","record_id":"<urn:uuid:3adb2370-0764-47e6-bfaf-14ffb0d29d83>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics 255 - Ordinary differential equations
The important differences between this and other sections:
Course notes
Here are PostScript files of handouts so far:
Course outline
First order equations
First order differential equations
Numerical methods
A somewhat better numerical method
Second order equations
Inhomogeneous second order equations
Inhomogeneous second order equations II
Homework and solutions | {"url":"http://www.math.ubc.ca/~cass/courses/m255.html","timestamp":"2014-04-20T08:20:02Z","content_type":null,"content_length":"3611","record_id":"<urn:uuid:9a647592-3d72-4204-9eee-88cc6d0680bd>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graphing Linear Equations
The graph of a linear equation in two variables is a line (that's why they call it linear).
If you know an equation is linear, you can graph it by finding any two solutions
(x₁, y₁) and (x₂, y₂),
plotting these two points, and drawing the line connecting them.
Graph the equation x + 2y = 7.
You can find two solutions, corresponding to the x-intercept and y-intercept of the graph, by setting first x = 0 and then y = 0.
When x = 0, we get:
0 + 2y = 7
y = 3.5
When y = 0, we get:
x + 2(0) = 7
x = 7
So the two points are (0, 3.5) and (7, 0).
Plot these two points and draw the line connecting them.
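As a quick consistency check (my addition), the slope through those two intercepts matches the slope-intercept form of the same equation:

```latex
m = \frac{0 - 3.5}{7 - 0} = -\frac{1}{2},
\qquad
x + 2y = 7 \;\Longrightarrow\; y = -\tfrac{1}{2}\,x + \tfrac{7}{2}
```

so the slope is −1/2 and the y-intercept is 3.5, agreeing with the two plotted points.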
If the equation is in slope-intercept form or point-slope form, you can also use the slope to help you graph.
Graph the line y = 3x + 1.
From the equation, we know that the y-intercept is 1, the point (0, 1) and the slope is 3. Graph the point (0, 1) and from there go up 3 units and to the right 1 unit and graph a second point. Draw
the line that contains both points.
Horizontal and vertical lines have extra simple equations.
Horizontal line: y = 3
Vertical line: x = –2 | {"url":"http://hotmath.com/hotmath_help/topics/graphing-linear-equations.html","timestamp":"2014-04-17T09:34:41Z","content_type":null,"content_length":"5726","record_id":"<urn:uuid:1344f1f6-acad-4bcd-90f8-6207d75fb0eb>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Jokes
We hope you enjoy our collection of favorite math jokes and algebra jokes. You may want to check out our general math jokes, calculus math jokes, geometry math jokes etc. on our Math Trivia page.
A collection of Algebra Math Jokes.
A math student is pestered by a classmate who wants to copy his homework assignment. The student hesitates, not only because he thinks it's wrong, but also because he doesn't want to be sanctioned
for aiding and abetting.
His classmate calms him down: "Nobody will be able to trace my homework to you: I'll be changing the names of all the constants and variables: a to b, x to y, and so on."
Not quite convinced, but eager to be left alone, the student hands his completed assignment to the classmate for copying.
After the deadline, the student asks: "Did you really change the names of all the variables?"
"Sure!" the classmate replies. "When you called a function f, I called it g; when you called a variable x, I renamed it to y; and when you were writing about the log of x+1, I called it the timber of y+1!"
New York (CNN). At John F. Kennedy International Airport today, a Caucasian male (later discovered to be a high school mathematics teacher) was arrested trying to board a flight while in possession
of a compass, a protractor and a graphical calculator.
According to law enforcement officials, he is believed to have ties to the Al-Gebra network. He will be charged with carrying weapons of math instruction.
| {"url":"http://www.onlinemathlearning.com/math-jokes-algebra.html","timestamp":"2014-04-20T13:36:21Z","content_type":null,"content_length":"17418","record_id":"<urn:uuid:786cd21f-2e51-458c-8456-f30ce6373fc5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Eigenfunction of a Jones Vector (System)
I am trying to find out how to solve for the eigenfunction given a system, namely the parameters of an optical system (say a polarizer) in the form of a 2 by 2 Jones matrix. I know how to derive
the eigenvalues, using the characteristic equation det(λI - A) = 0, 'A' being the system at hand and 'λ' the eigenvalue. How do you go about solving for the eigenfunction? | {"url":"http://www.physicsforums.com/showpost.php?p=4160879&postcount=1","timestamp":"2014-04-19T09:49:56Z","content_type":null,"content_length":"8760","record_id":"<urn:uuid:1590af00-fbb1-4eb8-b3aa-5836112c27a9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Dividing fractions word problems
Dividing fractions word problems arise in numerous situations. We will show you some examples.
I recommend that, before starting this lesson, you review division of fractions.
Example #1:
An Italian sausage is 8 inches long. How many pieces of sausage can be cut from the 8-inch piece of sausage if each piece is to be two-thirds of an inch?
Since you are trying to find out how many two-thirds there are in 8, it is a division of fractions problem.
So, divide 8 by 2/3
8 ÷ 2/3 = 8/1 ÷ 2/3 = 8/1 × 3/2 = 24/2 = 12
Therefore, you can make 12 pieces having a length of 2/3 inches
Example #2
How many halves are there in six-fourths?
Again, since you are trying to find out how many halves there are in six-fourths, it is a division of fractions problem.
So, divide 6/4 by 1/2
6/4 ÷ 1/2 = 6/4 × 2/1 = 12/4 = 3
Therefore, there are 3 halves in six-fourths
Example #3:
An airplane covers 50 miles in 1/5 hour. How many miles can the airplane cover in 5 hours?
This problem is a combination of division and multiplication of fractions
First, find out how many fifths (1/5) there are in 5. This is a division of fractions problem.
So, divide 5 by 1/5
5 ÷ 1/5 = 5/1 ÷ 1/5 = 5/1 × 5/1 = 25/1 = 25
Then, multiply 50 by 25 to get 1250
In 5 hours, the airplane will cover 1250 miles
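The two steps of example #3 can be written in one line (my restatement of the computation above):

```latex
5 \div \tfrac{1}{5} = \tfrac{5}{1} \times \tfrac{5}{1} = 25,
\qquad
25 \times 50 \text{ miles} = 1250 \text{ miles}
```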
What Other Visitors Have Said
pizza :}
3 friends share 4/5 of a pizza. what fraction of pizza does each person get? The amount to share is 4/5 Since the amount will be shared between 3 …
solve word problem Not rated yet
If a number is added to the numerator of 11/64 and twice the number is added to the denominator of 11/64, the resulting fraction is equivalent to 1/5. …
| {"url":"http://www.basic-mathematics.com/dividing-fractions-word-problems.html","timestamp":"2014-04-20T15:54:20Z","content_type":null,"content_length":"43193","record_id":"<urn:uuid:9e21783c-1203-4c8a-8982-d20e461fecf4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: [ncsm-members] Expecting the Best Yields Results in Massachusetts
Replies: 0
[ncsm-members] Expecting the Best Yields Results in Massachusetts
Posted: Sep 3, 2013 2:40 PM
From The New York Times, Monday, September 2,
2013. See
EDUCATION ISSUE --- This is a special issue
devoted to science and math education.
Expecting the Best Yields Results in Massachusetts
By Kenneth Chang
BRAINTREE, Mass. - Conventional wisdom and
popular perception hold that American students
are falling further and further behind in science
and math achievement. The statistics from this
state tell a different story.
If Massachusetts were a country, its eighth
graders would rank second in the world in
science, behind only Singapore, according to
Timss - the Trends in International Mathematics
and Science Study, which surveys knowledge and
skills of fourth and eighth graders around the
world. (The most recent version, in 2011, tested
more than 600,000 students in 63 nations.)
Massachusetts eighth graders also did well in
mathematics, coming in sixth, behind Korea,
Singapore, Taiwan, Hong Kong and Japan. The
United States as a whole came in 10th in science
and 9th in math, with scores that were above the
international average.
Of course, Timss is only one test, and
achievement tests are incomplete indicators of
educational prowess. But behind Massachusetts'
raw numbers are two decades of sustained efforts
to lift science and mathematics education.
Educators and officials chose a course and held
to it, even when the early results were deeply
While Massachusetts has a richer and
better-educated population than most states, it
is not uniformly wealthy. The gains reflected
improvement across the state, including poorer districts.
"I think we are a proof point of what's
possible," said Mitchell D. Chester, the state
education commissioner.
On a sunny day in May, fifth graders at Donald E.
Ross Elementary School here were gathered at an
outdoor gazebo, learning about fulcrums by using
a ruler set up like a seesaw and balancing
weights at both ends.
At South Middle School, seventh graders in a
science class worked in small groups to
brainstorm how a box of items - a plastic jar,
beaker, water, and a mix of sand, soil, clay and
pebbles - could help answer a question posed by
the teacher: How do sediments carried in water
get deposited? They devised small experiments and
wrote down their observations, and at the end of
class each group presented its findings.
None of the topics were novel, but they were
consistent in their hands-on approach, inviting
students to explore and explain. "Much more
hands-on than what we ever used to do," said
Dianne D. Rees, the district's science director.
"Hands-on as much as possible."
Braintree, a town of about 35,000 south of
Boston, is neither an inner-city area nor a
wealthy suburb. "We're sort of, we used to say, a
blue-collar area," said William Kendall, the
director of mathematics and technology for the
Braintree schools.
When Dr. Kendall arrived in 1973 as a math
teacher, the standard approach was talking at the
front of the classroom and writing on the
Some children learned well from lectures. Others
did not. "And it was O.K. those people don't get
it, because only we, the math elite, get it," Dr.
Kendall said.
Back then, one could graduate from high school
without ever taking algebra. "Then came ed
reform," Dr. Kendall said, "and now everybody had
to learn math."
Ambitious Goals
"Ed reform" was the Massachusetts Education
Reform Act of 1993, passed by a Democratic
Legislature and signed by a Republican governor,
William F. Weld.
The three core components were more money (mostly
to the urban schools), ambitious academic
standards and a high-stakes test that students
had to pass before collecting their high school
diplomas. All students were expected to learn
algebra before high school.
"It was a combination of carrots and sticks,"
said David P. Driscoll, deputy education
commissioner at the time.
Also noteworthy was what the reforms did not
include. Parents were not offered vouchers for
private schools. The state did not close poorly
performing schools, eliminate tenure for teachers
or add merit pay. The reforms did allow for some
charter schools, but not many.
Then the state, by and large, stayed the course.
The new achievement test, the Massachusetts
Comprehensive Assessment System (MCAS for short),
was given to 10th graders for the first time in
1998. (The graduation requirement of obtaining an
acceptable score on the 10th-grade MCAS did not
take effect until 2003.)
The troubled urban schools performed terribly.
In the small city of Chelsea, which borders
Boston, almost 90 percent of the students come
from low-income families and most did not speak
English as their first language. On the first
MCAS, two-thirds of Chelsea 10th graders failed
math. The science scores were nearly as dismal.
Two years later, scores in the urban districts
showed only glacial improvement. A report from
the University of Massachusetts at Boston
concluded that the reforms were not delivering on
the promises.
Critics worried that when the use of MCAS as a
graduation requirement kicked in, thousands of
students would be deprived of their diplomas and
would drop out in despair. Dr. Driscoll, who was
elevated to education commissioner in 1998, kept
the MCAS.
"People were expecting it to go away," Robert D.
Gaudet, the lead UMass researcher, recalled in a
recent interview. "He held to his guns."
Officials did make adjustments. Students who fail
the MCAS can retake it several times until they
pass, and can still graduate if they otherwise
demonstrate they have learned the material.
Test scores have risen markedly. Last year, 54
percent of Chelsea 10th graders were proficient
or advanced on the math MCAS.
On tests administered by the federal Education
Department, Massachusetts, which had been above
average, rose to No. 1 among the 50 states in
Building Blocks
Two decades after Massachusetts passed its
education reform, there is still much
disagreement over what were the crucial
components to its success.
Some think it was the added money; others note
that successful countries operate schools at much
lower costs.
Some think high-stakes testing imposed
accountability on administrators, teachers and
students; others say that it merely added stress
and that the proliferation of tests takes away
too much time from learning.
Some think the standards gave clarity on what was
expected of teachers and students; others say
there is little correlation between well-written
standards and student performance.
Officials like Dr. Driscoll say all three components were essential.
Dr. Rees, the Braintree schools' science
director, said the standards helped make sure
that teachers across the state covered the same
subjects, laying the groundwork for subsequent
"There's a logic to that, a progression," she
said. "You start learning about solids in
kindergarten. In first grade, you learn about
solids and liquids, and then in second grade, you
start to learn about solids and liquids and gases."
The MCAS has helped Braintree figure out what
works and what doesn't. Middle school students
were struggling with chemistry questions on the
eighth-grade MCAS. The district changed the order
of instruction, covering concrete science
concepts in sixth grade and moving some chemistry
topics to seventh. "And it worked," Dr. Rees
said. "They're doing better on their chemistry."
Still, Massachusetts officials admit they have more to do.
While scores have improved across the board, the
gap between the highest achievers and the lowest
- notably blacks, Hispanics and special education
students - has persisted.
Seeing Results
At East Middle School, the elixir is Kristen
Walsh, who teaches math to sixth, seventh and
eighth graders with so-called special needs, a
potpourri of learning disabilities that include
dyslexia and autism. On this day she was
introducing a lesson on variables and linear
equations with a problem involving gym memberships.
She explained the usual math concepts of
beginning algebra - the slope of a line
indicating the rate of change, the y intercept
where the line intersects the y axis. Where she
lingered was less on the math concepts than on the
words used in the word problem, repeatedly checking
that the students understood that the "start-up
fee" of one health club was the same thing as the
membership fee at another.
In essence, she was teaching how to interpret a
math problem as much as how to solve it.
Dr. Kendall says teachers now laugh when he tells
them that it was once possible to graduate from
Braintree High School without ever taking
algebra. "You can't get out of eighth grade
without knowing Algebra I now," he said. "We're
teaching it to everybody, and everybody is having success."
The first new math standards in Massachusetts, in
the 1990s, echoed the "constructivist" pedagogy
then in vogue. Students would construct their
knowledge through trial and error, resulting in a
deeper understanding.
But many parents rebelled, complaining that their
children never mastered basic skills. The state
officials in charge of the next revision wanted a
back-to-basics curriculum. But Dr. Kendall and
others argued that that old approach had already failed.
The "math wars" erupted at the turn of the
millennium, culminating in a sort of détente -
constructivism was purged, but the new
Massachusetts standards did not prescribe a new
approach. They stated what students were to
learn, but not how teachers were to teach. "What
came out of it ended up being a good document,
because it contained no pedagogy," Dr. Kendall said.
That allowed teachers like Ms. Walsh to devise and improve.
Take the multiplication table. The traditional
approach was to memorize it in order. A strict
constructivist would have children figure it out
by playing with sticks and other so-called manipulatives.
Braintree combines those approaches, with the
teachers guiding the learning in a particular direction.
"Now research shows when you're teaching
multiplication facts, you should start with the
2s, go to the 10s, go to the 5s, do the 4, the 8,
don't hit 0, because the idea of multiplying 0 by
0 is complicated, until they've got a foundation
in multiplication," Dr. Kendall said. "Do 0 and 1
in about the middle, and save 7 and 3 until the
end, because those are the really hard ones."
He added, "We're helping them construct their own
knowledge in a way that is successful."
Abby Federico, one of Ms. Walsh's special-needs
students, said her mother told her the middle
school math curriculum was much more advanced
than when she was in school. "She was like, 'I
learned this stuff in high school,' " Abby said.
Dr. Kendall said that special needs students in
Braintree used to routinely fail the math MCAS.
Now those in Ms. Walsh's class often get
"It's pretty easy in my opinion, because Ms.
Walsh usually teaches us a lot of methods to use
in math to make it seem easier," Abby said,
adding that she might even choose a career that
requires math skills.
"Math is pretty nice," she said.
A version of this article appears in print on
September 3, 2013, on page D1 of the New York
edition with the headline: One State Had a Plan
And Saw It Through.
Jerry P. Becker
Dept. of Curriculum & Instruction
Southern Illinois University
625 Wham Drive
Mail Code 4610
Carbondale, IL 62901-4610
Phone: (618) 453-4241 [O]
(618) 457-8903 [H]
Fax: (618) 453-4244
E-mail: jbecker@siu.edu | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2597332","timestamp":"2014-04-17T05:38:03Z","content_type":null,"content_length":"29503","record_id":"<urn:uuid:7f957f3b-249a-41ee-9a66-7bb9996bd8e3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
volume for a cylinder
May 29th 2010, 09:54 AM #1
volume for a cylinder
For this cylinder the radius r = 6.8 inches and the height L = 14.2 inches. Which is the BEST estimate for the volume? (Use π = 3.14)
1686 in3
1962 in3
2008 in3
2154 in3
The volume of a prism is given by $\text{ Cross Sectional Area } \times \text{ Length}$
For a cylinder the cross section is equal to that of a circle, hence the volume is given by $V = \pi r^2 l$
I get an answer of 2061.75 which doesn't seem all that close
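For anyone checking the arithmetic, here is a quick sketch of both computations; the rounding of r to 7 and L to 14 is an assumption about what "best estimate" is intended to mean here:

```python
def cylinder_volume(r, h, pi=3.14):
    # V = pi * r^2 * h (cross-sectional area times length)
    return pi * r**2 * h

exact = cylinder_volume(6.8, 14.2)  # with the given measurements
rough = cylinder_volume(7, 14)      # estimating with rounded inputs first

print(round(exact, 2))  # 2061.75
print(round(rough, 2))  # 2154.04
```

The exact value 2061.75 sits between the 2008 and 2154 choices, which suggests the question expects the inputs to be rounded before multiplying, pushing the "estimate" toward 2154.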
May 29th 2010, 10:16 AM #2 | {"url":"http://mathhelpforum.com/geometry/146878-volume-cylinder.html","timestamp":"2014-04-18T13:18:36Z","content_type":null,"content_length":"34040","record_id":"<urn:uuid:8ff4f34f-e919-4493-a413-0c2374aed552>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
Miami Beach, WA SAT Math Tutor
Find a Miami Beach, WA SAT Math Tutor
...I firmly believe in the power of a well-worded, properly constructed sentence, and believe that this is an area where I can provide a lot of clarity for you. My background is in English
Literature and Rhetoric from UC Berkeley, and I am currently a certified high school English teacher in Miami-...
24 Subjects: including SAT math, English, reading, writing
...In some ways, getting ready for ACT Reading is similar to getting ready for ACT Math: the only way to do better is by practicing more. Yet the difference is that the person already has the
prerequisite for ACT Reading: common sense. Therefore, I work with each person to make him or her understand that he or she already has what it takes to pass, or excel in, the test.
20 Subjects: including SAT math, reading, English, ESL/ESOL
...I also hold a yoga teacher's certification. Here is the cancellation or no-show policy that you need to take into consideration if you decide to hire my services: If the cancellation is the
same day of tutoring there will be a 30 min. charge. If a student reschedules for a later day or time within the same week, there will be a 20 minute charge.
16 Subjects: including SAT math, Spanish, chemistry, biology
...The daycare involved foster children, adopted children, special needs children, ADHD diagnosed children, and the typical 4-13 year old. I am adept at switching my teaching method to tailor it
to the student's learning style. I helped them with memory games, learning how to read, the basics of math and science, etc.
11 Subjects: including SAT math, geometry, algebra 1, algebra 2
...Reading. Sports Program.). The summer is a great time to get caught up to or get ahead of grade level. My name is Paul M.
30 Subjects: including SAT math, calculus, geometry, ASVAB
Virginia Gardens, FL SAT math Tutors | {"url":"http://www.purplemath.com/Miami_Beach_WA_SAT_Math_tutors.php","timestamp":"2014-04-20T06:42:04Z","content_type":null,"content_length":"24424","record_id":"<urn:uuid:eccc19b7-88e5-4792-b51f-326c4ac60827>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
Oleg Prokopyev's Homepage
Refereed Journal Articles:
□ A. Khojandi, L.M. Maillart, O.A. Prokopyev, "Optimal Planning of Life-Depleting Maintenance Activities" IIE Transactions, accepted for publication, 2013.
□ G. Degirmenci, J.P. Kharoufeh, O.A. Prokopyev, "Maximizing the Lifetime of Query-Based Wireless Sensor Networks," ACM Transactions on Sensor Networks, accepted for publication, 2013.
□ O. Shylo, O.A. Prokopyev, A.J. Schaefer, "Stochastic Operating Room Scheduling for High Volume Specialties under Block Booking," INFORMS Journal on Computing, accepted for publication, 2012.
□ S. Karademir, N. Kong, O.A. Prokopyev, "On Greedy Approximation Algorithms for a Class of Two-Stage Stochastic Assignment Problems," Optimization Methods and Software, accepted for publication,
□ O. Mostovyi, O.A. Prokopyev, O.V. Shylo, "On Maximum Speedup Ratio of Restart Algorithms Portfolios," INFORMS Journal on Computing, Vol. 25/2 (2013), pp. 222-229.
□ O. Ursulenko, S. Butenko, O.A. Prokopyev, "A Global Optimization Algorithm for Solving the Minimum Multiple Ratio Spanning Tree Problem," Journal of Global Optimization, Vol. 56/3 (2013), pp.
□ F.M. Pajouh, B. Balasundaram, O.A. Prokopyev, "On Characterization of Maximal Independent Sets via Quadratic Optimization," Journal of Heuristics, Vol. 19/4 (2013), pp. 629-644.
□ A.C. Trapp, O.A. Prokopyev, A.J. Schaefer, "On a Level-Set Characterization of the Integer Programming Value Function and its Application to Stochastic Programming." Operations Research, Vol. 61/
2 (2013), pp. 498-511.
□ O.Y. Ozaltin, O.A. Prokopyev, A.J. Schaefer, "Two-Stage Quadratic Integer Programs with Stochastic Right-Hand Sides," Mathematical Programming, Vol. 133/1 (2012), pp. 121-158. // Test instances
□ O.Y. Ozaltin, O.A. Prokopyev, A.J. Schaefer, M.S. Roberts, "Optimizing the Societal Benefits of the Annual Influenza Vaccine: A Stochastic Programming Approach," Operations Research, Vol. 59/5
(2011), pp. 1131-1143.
□ J. Rajgopal, Z. Wang, A.J. Schaefer, O.A. Prokopyev, "Integrated Design and Operation of Remnant Inventory Supply Chains under Uncertainty," European Journal of Operational Research, Vol. 214/2
(2011), pp 358-364.
□ A.C. Trapp, F. Zink, O.A. Prokopyev, L. Schaefer, "Thermoacoustic Heat Engine Modeling and Optimization," Applied Thermal Engineering, Vol. 31/14-15 (2011), pp. 2518-2528.
□ S. Karademir, O.A. Prokopyev, "A Short Note on Solvability of Systems of Interval Linear Equations," Linear and Multilinear Algebra, Vol. 59/6 (2011), pp. 707-710.
□ O.V. Shylo, O.A. Prokopyev, J. Rajgopal, "On Algorithm Portfolios and Restart Strategies," Operations Research Letters, Vol. 39/1 (2011), pp. 49-52.
□ M. Baz, B. Hunsaker, O.A. Prokopyev, "How Much Do We Pay for Using Default Parameters?," Computational Optimization and Applications, Vol. 48/1 (2011), pp. 91-108.
□ O.Y. Ozaltin, O.A. Prokopyev, A.J. Schaefer, "The Bilevel Knapsack Problem with Stochastic Right-Hand Sides," Operations Research Letters, Vol. 38 (2010), pp. 328-333.
□ A. Trapp, O.A. Prokopyev, "Solving Order-Preserving Submatrix Problem via Integer Programming," INFORMS Journal on Computing, Vol. 22 (2010), pp. 387-400.
□ A. Trapp, O.A. Prokopyev, S. Busygin, "Finding Checkerboard Pattern via Fractional 0--1 Programming," Journal of Combinatorial Optimization, Vol. 20 (2010), pp. 1-26.
□ N. Alpay, A. Trapp, O.A. Prokopyev, C. Camacho, "Optimization of minimum set of protein-DNA interactions: a quasi exact solution with minimum over-fitting," Bioinformatics, Vol. 26 (2010), pp.
□ O.A. Prokopyev, "On Equivalent Reformulations for Absolute Value Equations," Computational Optimization and Applications, 44 (2009), pp. 363-372.
□ O.A. Prokopyev, S. Butenko, A. Trapp, "Checking Solvability of Systems of Interval Linear Equations and Inequalities via Mixed Integer Programming," European Journal of Operational Research, Vol.
199 (2009), pp. 117-121.
□ J. Rajgopal, Z. Wang, A. Schaefer, O.A. Prokopyev, "Effective Management Policies for Remnant Inventory Supply Chains," IIE Transactions, Vol. 41 (2009), pp. 437-447.
□ O. Seref, O.E. Kundakcioglu, O.A. Prokopyev, P.M. Pardalos, "Selective Support Vector Machines," Journal of Combinatorial Optimization, Vol. 17 (2009), pp. 3-20.
□ O.A. Prokopyev, N. Kong, D.L. Martinez-Torres, "The Equitable Dispersion Problem," European Journal of Operational Research, Vol. 197 (2009), pp. 59-67.
□ O. Shylo, O.A. Prokopyev, V. Shylo, "Solving Weighted MAX-SAT via GES," Operations Research Letters, Vol. 36 (2008), pp. 434-438.
□ S. Busygin, O.A. Prokopyev, P.M. Pardalos, "Biclustering in Data Mining," Computers and Operations Research, Vol. 35/9 (2008), pp. 2964-2987.
□ P.M. Pardalos, O.A. Prokopyev, O. Shylo, V. Shylo, "Global Equilibrium Search Applied to the Unconstrained Binary Quadratic Optimization Problem," Optimization Methods and Software, Vol. 23/1
(2008), pp. 129-140.
□ C.A.S. Oliveira, O.A. Prokopyev, P.M. Pardalos, M.G.C. Resende, "Streaming Cache Placement Problems: Complexity and Algorithms," International Journal of Computational Science and Engineering,
Vol. 3 (2007), pp. 173-183.
□ S. Busygin, O.A. Prokopyev, P.M. Pardalos, "An Optimization Based Approach for Data Classification", Optimization Methods and Software, Vol. 22/1 (2007), pp. 3-9.
□ M. Min, O.A. Prokopyev, P.M. Pardalos, "Optimal Solutions to Minimum Total Energy Broadcasting Problem in Wireless Ad Hoc Networks", Journal of Combinatorial Optimization, Vol. 11/1 (2006), pp.
□ H.-Z. Huang, P. Pardalos, O.A. Prokopyev, "Lower Bound Improvement and Forcing Rule for Quadratic Binary Programming", Computational Optimization and Applications, Vol. 33 (2006), pp. 187-208.
□ W. Chaovalitwongse, O.A. Prokopyev, P.M. Pardalos, "Electroencephalogram (EEG) Time Series Classification: Applications in Epilepsy", Annals of Operations Research, Vol. 148 (2006), pp. 227-250.
□ H. K. Fung, S. Rao, C. A. Floudas, O.A. Prokopyev, P.M. Pardalos, F. Rendl, "Computational Comparison Studies of Quadratic Assignment Like Formulations for the In Silico Sequence Selection
Problem in De Novo Protein Design," Journal of Combinatorial Optimization, Vol. 10/1 (2005), pp. 41-60.
□ O.A. Prokopyev, H.-Z. Huang, P.M. Pardalos, "On Complexity of Unconstrained Hyperbolic 0-1 Programming Problems", Operations Research Letters, Vol. 33/3 (2005), pp. 312-318.
□ O.A. Prokopyev, C. Meneses, C.A.S. Oliveira, P.M. Pardalos, "On Multiple-Ratio Hyperbolic 0--1 Programming Problems," Pacific Journal of Optimization, Vol. 1/2 (2005), pp. 327-345.
□ S. Busygin, O.A. Prokopyev, P.M. Pardalos, "Feature Selection for Consistent Biclustering via Fractional 0-1 Programming", Journal of Combinatorial Optimization, Vol. 10/1 (2005), pp. 7-21.
□ W. Chaovalitwongse, P.M. Pardalos, O.A. Prokopyev, "A New Linearization Technique for Multi-Quadratic 0-1 Programming Problems," Operations Research Letters, Vol. 32/6 (2004), pp. 517-522.
□ P.M. Pardalos, W. Chaovalitwongse, L.D. Iasemidis, J.C. Sackellares, D.-S. Shiau, P.R. Carney, O.A. Prokopyev, V.A. Yatsenko, "Seizure Warning Algorithm Based on Optimization and Nonlinear
Dynamics," Mathematical Programming, Vol. 101/2 (2004), pp. 365-385.
□ O.A. Prokopyev, P.M. Pardalos, "On Approximability of Boolean Formula Minimization," Journal of Combinatorial Optimization, Vol. 8/2 (2004), pp. 129-135.
□ O.A. Prokopyev, P.M. Pardalos, "Minimum $\epsilon$-equivalent Circuit Size Problem," Journal of Combinatorial Optimization, Vol. 8/4 (2004), pp. 495-502. | {"url":"http://www.pitt.edu/~droleg/pub.html","timestamp":"2014-04-21T04:52:42Z","content_type":null,"content_length":"12496","record_id":"<urn:uuid:50ab9428-e510-4272-8074-d02b0fcc64b0>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
Assessment: The Soft Spots
May 29th, 2007 by Dan Meyer
Jonathan, Jackie, and Sara have been asking sharp questions in the comments about how I assess students. They've found a lot of soft spots on an otherwise leathery-tough assessment strategy and I'd
like to address them here.
First, to bring those up to speed who don't feel like digging through the pdf manifesto (gonna get on that soon, promise):
You break your curriculum into forty skills or concepts.
You take the concepts several-at-a-time as you roll through the year.
You assess a student once with a straightforward problem; that's good for a B.
You assess her again at more depth. Ask her to go backwards and solve for the inputs. Use a word problem. Use negatives. Make her prove she's got it.
Then change her B on that concept to an A. Also, tell her she doesn't have to take that concept on future tests.
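The bookkeeping behind that protocol is simple enough to sketch. This is a hypothetical illustration; the class name and the exact letter mapping are mine, not Dan's actual gradebook:

```python
# Tracks one of the ~40 concepts for one student: first pass on a
# straightforward problem earns a B, a second deeper pass upgrades it to
# an A and retires the concept from future tests.
class ConceptRecord:
    def __init__(self):
        self.passes = 0  # 0 = not yet shown, 1 = first pass, 2 = deeper pass

    def record_pass(self):
        self.passes = min(self.passes + 1, 2)

    @property
    def grade(self):
        return {0: None, 1: "B", 2: "A"}[self.passes]

    @property
    def retired(self):
        # After the second, deeper pass the concept drops off future tests.
        return self.passes == 2

record = ConceptRecord()
record.record_pass()                 # straightforward problem -> B
record.record_pass()                 # backwards / word problem / negatives -> A
print(record.grade, record.retired)  # A True
```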
Sara's Concern: Retention
Sara: With shorter teach-practice-assess cycles, how do you know that kids are really learning a concept, not just shoving it into short-term memory for long enough to pass your mini-assessment?
My response from the comments:
Sara, it’s always encouraging and depressing at the same time when people seize on the most obvious glitch in this system. Glad you people are evaluating this thing through critical eyes.
So I’ll say here that once a student completes a concept twice and I tell her she doesn’t have to pass it again, that she can work on other concepts, there is a tendency to file that knowledge
away somewhere impermanent.
But in nearly every case, when I toss an old problem on the board and a student says, I don’t remember that, it takes the absolute minimum of prodding for her to generate full recall.
There's probably a decent discussion to be had here on the merits of retention, in general, in an age when anything can be found on the Internet and everything is kind of like riding a bicycle. At
some point in that discussion I'd mention that I had to re-teach myself several sections of Geometry before teaching it for the first time this year. Unfortunately I'm just not courageous enough to
make that whole case right now.
Jonathan & Jackie's Concern: Intellectual Simplicity
Jackie: Dan, when assessing one concept at a time, how do you assess your students’ ability to synthesize the concepts? Ability to problem solve? Ability to communicate mathematical thinking? In
short, when do the higher order skills come into play?
Jonathan: By testing in little pieces, one skill at a time, are they ever asked to put skills together, to use more than one at a time?
There's rigor and there's synthesis. As arbitrated by California's released questions I hit rigor but I rarely assess synthesis. Here's the difference. To assess "Similar Area/Volume," recently, I
asked the following question:
A large deck weighs 1600 lbs. and costs $40.00 to waterproof. A similarly shaped smaller deck weighs 200 lbs. How much will it cost to waterproof?
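For reference, a quick sketch of how that one works, assuming (as the concept name suggests) that weight scales with volume and waterproofing cost with surface area; the problem leaves those scaling assumptions implicit:

```python
# Similar solids: weight ~ scale^3, surface area (and so cost) ~ scale^2.
large_weight, small_weight = 1600, 200
large_cost = 40.00

scale = (small_weight / large_weight) ** (1 / 3)  # linear scale factor, ~0.5
small_cost = large_cost * scale ** 2              # cost scales with area

print(round(scale, 6), round(small_cost, 2))  # 0.5 10.0
```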
By California's standards, that's a rigorous assessment. Jonathan and Jackie are concerned, however, that my concepts don't talk to each other. A common synthesis question (though an admittedly
annoying example) would've read:
A large deck weighs 2x – 100 lbs. and costs $40.00 to waterproof. A similarly shaped smaller deck weighs 200 lbs and costs $10 to paint. Solve for x.
To solve the second variety, you've got to know your order of operations, your algebraic equations, and your similar area/volume.
I'm pretty sure Jackie and Jonathan understand my intent. I can't crash concepts together like that without gumming up my remediation process. Say a student botched that last question, scored a 2/4.
It would then be impossible for me to determine (from the score alone, a week down the line) whether the student understood similarity but just couldn't deal with the algebra or vice versa.
So I split the three components apart and assess them separately.
I'm not sure it'll reassure anyone but me that we do hit these synthesized problems hard during openers / classwork / project time. That gets my guilt way down. The hard question here is this: am I
willing to fail a student for an inability to synthesize concepts?
My answer is an emphatic no so I keep this game up without much guilt. If it really bothered me, though, I would toss in a concept called "Synthesis" every few concepts. It'd be a lame duck for
remediation (though math assessment across the land right now is one loudly-quacking lame duck) but it'd ding students' grades who couldn't synthesize while still leaving me the original,
unsynthesized concepts to assist in their remediation.
3 Responses to “Assessment: The Soft Spots”
1. on 29 May 2007 at 5:01 pm
I understand the reasonable concern raised in this thread that concepts and concept-based assessments are too skills based and compartmentalized. However, cleverly constructed concepts can force
students to the application level of Bloom’s taxonomy and encourage students to be clever and thoughtful. In fact, I would argue that students with weak basic skills and who would otherwise be
considered remedial rely more heavily on analysis and application of a myriad of methods for their success because brute force calculations are like kryptonite for them. They have to think
because they have difficulty with the basics and need “easy” paths to success.
For example:
Concept 12: I can solve a quadratic equation using an appropriate method.
a) x^2 + 4x = -3 b) x^2 = -4x + 1 c) 4x^2 + x = 5
Easy, right? Not if you are a struggling math student. One might suggest that simply memorizing and applying the quadratic formula would be enough for this question, thereby making it a simple
skills-based question.
Here’s the rub. For a struggling student to use the quadratic formula, they must (without making errors) work with integers flawlessly, simplify radical expressions, and reduce fractions with
radical numerators. With each additional step a weak student completes, the likelihood of an error increases.
Therefore, the focus of instruction is necessarily skills-based to some degree in that eventually the students will need to be able to work with the QF efficiently and apply brute force. However,
in remedial classes the need for students to analyze different types of quadratic equations and classify them into types most easily addressed by factoring, completing the square, and the
quadratic formula is paramount for these students to be successful. Further, these students need the proficiency to apply these skills when needed and without direct prompting to do so. Returning
to the problem (Concept 12) above illustrates this point.
FOR A) The weak student should use factoring and the zero product property to avoid the radical simplification and fraction reducing involved in solving this problem using the QF. Factoring
allows the student to solve the problem in two steps minimizing the likelihood of errors.
FOR B) The weak student should use completing the square because this seemingly simple problem KILLS students who try to use the QF at the point of radical simplification and again when they have
to reduce the resulting fraction containing a radical in the numerator. Completing the square avoids all of these pitfalls because with the even middle term it will have no denominator.
FOR C) They have no choice but to use the QF, but rather than having to muscle their way through three problems they only have to attack one.
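A quick quadratic-formula check of the three Concept 12 equations, each rewritten as ax^2 + bx + c = 0 first (taking b) as x^2 = -4x + 1), just to verify the roots:

```python
import math

def quadratic_roots(a, b, c):
    # Roots of ax^2 + bx + c = 0 via the quadratic formula
    # (assumes a non-negative discriminant, as in all three cases here).
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

print(quadratic_roots(1, 4, 3))   # a) x^2 + 4x + 3 = 0 -> (-1.0, -3.0), the factoring case
print(quadratic_roots(1, 4, -1))  # b) x^2 + 4x - 1 = 0 -> -2 +/- sqrt(5), completing the square
print(quadratic_roots(4, 1, -5))  # c) 4x^2 + x - 5 = 0 -> (1.0, -1.25)
```

The irrational roots in b) are exactly where the quadratic formula's radical simplification bites, which is the commenter's point about steering weak students toward completing the square there.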
The higher levels of Bloom’s taxonomy are necessarily well represented in this schema which also, I believe, improves student retention related to this area of study.
Lastly, a concept based assessment program builds in ever-present moments for student encouragement. You can always find SOMETHING in a test that you can use to say to a kid, “wow, you look like
you are really starting to get this,” no matter how poorly he or she is performing overall. Encouraged students are more likely to try more challenging Synthesis-type questions as part of a
lesson or as a quiz if they feel more hopeful and empowered overall.
2. on 29 May 2007 at 9:30 pm, dan
Yeah, good promotion of retention there and you certainly don’t need to convince me that this assessment strategy can be rigorous.
Reading your comment reminded me also that writing my own assessments, and forcing myself to undergo the same analytical train your comment rides, has done wonders for me as a teacher. I reckon
it’s possible to land the balance of instruction/assessment in other ways, this one just feels like an express route.
3. on 30 May 2007 at 3:34 pm
I appreciate the clarification/additional details.
So much to consider as I’m planning for next year… which is a good thing, as I’m actually engaged in thoughtful planning, so thanks! | {"url":"http://blog.mrmeyer.com/2007/assessment-the-soft-spots/","timestamp":"2014-04-21T07:26:07Z","content_type":null,"content_length":"42193","record_id":"<urn:uuid:f921fdab-f496-476a-9f84-2c2b94aab4d4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |