Rules for Inverse Functions
There are a few rules for whether a function can have an inverse, though. Even though you can be anything you want in life, a function doesn't have all the same freedoms in math. Maybe they should draft a constitution or something.
First of all, it's got to be a function in the first place. (If you need a refresher on what counts as a function, go review that first.)
Second, that function has to be one-to-one. That is, for every x-value, there's got to be a unique y-value.
This is a one-to-one function.
This is not. Notice how multiple x values can yield the same y value.
Figuring out if a function is one-to-one is as simple as drawing a straight line. No, really—give it a shot. It's called the horizontal line test. Graph the function, then draw horizontal lines through it at different heights. If any horizontal line passes through the graph more than once, it is not a one-to-one function.
This function passes the horizontal line test
Sample Problem
Which of the following is not a one-to-one function? Try drawing them if you have trouble.
a. x^2 + 4
b. -4x
c. 2^x
The answer is a. Because this function is even, or symmetric across the y-axis, the horizontal line test fails, and it is not one-to-one. Don't worry, the function won't be punished, it's just part
of a different circle of friends.
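If you'd rather let a computer do the doodling, here's a rough numerical version of the test (a Python sketch; checking sample points is only a stand-in for drawing every horizontal line, so treat it as a gut check, not a proof):

```python
import numpy as np

def looks_one_to_one(f, xs):
    """Crude horizontal line test: look for repeated outputs at sample points."""
    ys = np.round(f(xs), 10)          # round so floating-point noise doesn't fool us
    return len(set(ys)) == len(ys)

xs = np.linspace(-5, 5, 101)
print(looks_one_to_one(lambda x: x**2 + 4, xs))   # False -- fails, just like (a)
print(looks_one_to_one(lambda x: -4 * x, xs))     # True
print(looks_one_to_one(lambda x: 2.0**x, xs))     # True
```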
If a function isn't one-to-one, though, there's a simple way to make it conform: remove the parts that fail the horizontal line test. For example, take the function x^2 + 4 from the sample problem above:
This function fails the horizontal line test. No re-takes either—bummer.
All we've got to do is restrict the domain of the function so that it does pass the test with flying colors.
Here we restrict the domain to either x < 0 or x > 0, and now it's one-to-one.
Northwoods, IL Math Tutor
Find a Northwoods, IL Math Tutor
...I usually meet with students in the evening at our local library. I have also met with students in their homes and occasionally at a local coffee shop. Students will typically meet with me
twice a week for 30 minutes to an hour each session.
13 Subjects: including trigonometry, algebra 1, algebra 2, biology
...I take full advantage of lesson time. It is important to consider student motivation as practice is crucial to success. I listen to student feedback, incorporate student choice, and adjust my
lessons as needed.
24 Subjects: including trigonometry, differential equations, discrete math, dyslexia
...I teach math and physics at a college in a Western suburb. I have my PhD in physics, but since math is the language of science, I learned, practiced, and taught it throughout my career. While I teach, I emphasize understanding and grasping the subjects along with solving problems.
8 Subjects: including algebra 1, algebra 2, physics, prealgebra
...Currently I am enrolled at Northeastern Illinois University in the Education program, which has required that my background (using fingerprints) be checked twice in the last 12 months. Along
with Spanish, I enjoy teaching math to girls, especially Pre-Algebra and Algebra, because I like to show ...
17 Subjects: including statistics, algebra 1, geometry, prealgebra
...I have a knowledge of tips and tricks to simplify elementary algebra concepts. As an undergraduate math major, I volunteered with a program that tutored local middle school students in math. I
worked in small groups or individually with mostly 7th and 8th graders to help them keep up with the pace of the class.
5 Subjects: including algebra 1, prealgebra, statistics, probability
Related Northwoods, IL Tutors
Northwoods, IL Accounting Tutors
Northwoods, IL ACT Tutors
Northwoods, IL Algebra Tutors
Northwoods, IL Algebra 2 Tutors
Northwoods, IL Calculus Tutors
Northwoods, IL Geometry Tutors
Northwoods, IL Math Tutors
Northwoods, IL Prealgebra Tutors
Northwoods, IL Precalculus Tutors
Northwoods, IL SAT Tutors
Northwoods, IL SAT Math Tutors
Northwoods, IL Science Tutors
Northwoods, IL Statistics Tutors
Northwoods, IL Trigonometry Tutors
Nearby Cities With Math Tutor
Beaver Creek, IL Math Tutors
Boulder Hill, IL Math Tutors
Cloverdale, IL Math Tutors
Clyde, IL Math Tutors
Echo Lake, IL Math Tutors
Forest Lake, IL Math Tutors
Half Day, IL Math Tutors
Helmar, IL Math Tutors
La Grange Highlands, IL Math Tutors
Oak Ridge, IL Math Tutors
Prairieview, IL Math Tutors
South Suburban, IL Math Tutors
Techny Math Tutors
Trout Valley, IL Math Tutors
West Chicago Math Tutors
March 31st 2010, 08:58 AM
If I have 9 varieties of pizza to choose from, in how many ways can I choose 4 slices of pizza ?
March 31st 2010, 09:26 AM
Hello, fholik!
If I have 9 varieties of pizza to choose from,
in how many ways can I choose 4 slices of pizza?
For each of the four slices, you have 9 choices of variety.
Therefore, there are: . $9^4 \,=\,6561$ ways.
March 31st 2010, 12:57 PM
Thanks for the reply, but your method counts the same selection of slices more than once (once for each ordering). This question came from a state-sponsored math exam, and the correct response is 495. I just need to find the formula for future reference.
March 31st 2010, 01:38 PM
The question is vague. But the given answer implies a multi-set.
The number of ways to put K identical units into N different cells is $\binom{K+N-1}{K}$.
Here the slices being chosen are the identical units and the pizza varieties are the different cells.
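A quick way to sanity-check this, assuming Python 3.8+ is handy:

```python
from math import comb
from itertools import combinations_with_replacement

n, k = 9, 4                                   # 9 varieties, 4 slices
print(comb(k + n - 1, k))                     # 495, straight from the formula
print(len(list(combinations_with_replacement(range(n), k))))   # 495 again, by brute force
```

Both counts agree with the exam's answer of 495.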
April 1st 2010, 04:18 AM
Thank You, Plato. That is the formula I was looking for. I had seen it a few months ago, but had forgotten. Thanks for reminding me. I will write it down this time. Fholik
Fibonacci sequence
A mathematical sequence consists of a series of numbers, usually starting at 0 or 1, all of which are derived from preceeding ones according to formal rules. In other words, no matter where you are
in the sequence you ought to be able to deduce what the following number should be.
One of the most well-known sequences was one derived by the Italian medieval mathematician known today as Fibonacci. In his Liber Abaci, published in 1202, he explained a simple sequence, where every
number is the sum of the two numbers preceeding it. So the result is:
1 1 2 3 5 8 13 21 34 ...
The fascinating thing about it is that this sequence is actually found in nature with amazing frequency. Fibonacci himself used the cute (pre-Australian cute) example of the multiplication of rabbits
in his book. Since then, it's also been found to apply to such varied things as shell spirals, branching plants, flower petals, seeds, pine cones, leaves... It seems almost everywhere you look there
is lurking a Fibonacci sequence.
So what does all this have to do with fiber? There are easy ways to use the Fibonacci sequence in designs, both for color and texture. And the results somehow always manage to look pleasing to the
human eye, which is probably because the sequence looks so... natural! See for instance this improvised sock - doesn't it look like someone racked her brain to come up with something so
sophisticated? And you don't need to use a whole lot of the sequence for the effect to take hold visually - instead of using 3 uniform thin stripes for instance, make the 3rd twice as wide, and see
the result...
So let's think of some obvious ways to use the sequence:
• You have a very plain stockinette sweater which needs a bit of livening up. Instead of doing a few stripes, use a Fibonacci sequence of garter ridges as a pattern. If you want color, the width of
the stripes can follow the sequence. This can work both when one color does the pattern:
a...a aa bbbbb aa bbb aa bb aa b aa b a...a
or when both colors work together: bbbbbbbb aaaaaaaa bbbbb aaaaa bbb aaa bb aa b a b a
• You run out of yarn as you near the end of the sleeves. No problem, you can incorporate a few stripes in a Fibonacci sequence as you change colors, and instead of strange contrasting cuffs your
sweater will look perfectly integrated. Add a couple thin Fibonacci contrasting stripes (1 1 2 or even just 1 1) in the collar and it'll look like you did it all on purpose.
• You can use one color/texture in a Fibonacci sequence of varying width, separated by fixed contrasting bands:
aa bbbbb aa bbb aa bb aa b aa b
• Better yet, you can have two sequences going in both directions at once:
aaaaaaaa b aaaaa b aaa bb aa bbb a bbbbb a bbbbbbbb
• You can integrate the sequence into the design of Fair-Isle patterns, with progressively larger designs. Or into cables of more and more complex design.
• Here is a Fibonacci rib - a 4x4 ordinary rib is livened up by changing the pattern every Fibonacci number of rows.
aaaa bbbb aaaa bbbb
bbbb aaaa bbbb aaaa
aaaa bbbb aaaa bbbb
bbbb aaaa bbbb aaaa
aaaa bbbb aaaa bbbb
aaaa bbbb aaaa bbbb
bbbb aaaa bbbb aaaa
bbbb aaaa bbbb aaaa
aaaa bbbb aaaa bbbb
aaaa bbbb aaaa bbbb
aaaa bbbb aaaa bbbb
bbbb aaaa bbbb aaaa
bbbb aaaa bbbb aaaa
bbbb aaaa bbbb aaaa
• And this is a Fibonacci doodle - switch rows every Fibonacci number of rows.
There is nothing original about this, it's all been very well known for a long time, and once your eye gets tuned to it you'll notice uses of the sequence in many designs. But it all works so well to
give pleasing designs that there's no reason not to make use of it liberally...
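If you like to chart your stripes before casting on, here is a little sketch (Python, purely for illustration) that prints the "two sequences going in both directions at once" pattern from the list above:

```python
def fib_widths(count):
    """First `count` Fibonacci numbers, used as stripe widths in rows."""
    widths, a, b = [], 1, 1
    for _ in range(count):
        widths.append(a)
        a, b = b, a + b
    return widths

widths = fib_widths(6)                    # [1, 1, 2, 3, 5, 8]
for w_a, w_b in zip(reversed(widths), widths):
    print("a" * w_a + " " + "b" * w_b)    # one colour shrinking while the other grows
```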
First published: 01/21/03
Modified: 4/27/06
All rights reserved. © Fuzzy Galore 2002-2006.
1. Introduction
Voronoi diagrams belong to the classical problems of computational geometry. Their origin dates back to 1850, when they were considered by Dirichlet. A rigorous mathematical foundation was given by Voronoi in 1908.
Let us have points p[i], 0 < i <= n, in an n-dimensional space P. The set of all points that are closer to p[i] than to any other input point forms the Voronoi cell of p[i] (Figure 1). The union of all Voronoi cells is known as the Voronoi diagram.
Figure 1: Voronoi cell
As seen from Figure 1, the construction of a Voronoi diagram consists of repeated subdivision of the space determined by the input points into subspaces, each meeting the Voronoi criterion: every point inside the cell of p[i] is at least as close to p[i] as to any other input point.
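A brute-force sketch (Python/NumPy assumed) makes the criterion concrete: label every point of a grid with the index of its nearest site. Real Voronoi algorithms such as Fortune's sweep are far more efficient, so this is only an illustration of the definition:

```python
import numpy as np

def voronoi_labels(sites, xs, ys):
    """Assign each grid point to its nearest site -- the Voronoi criterion."""
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)          # all grid points
    d2 = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(gx.shape)                # index of the closest p[i]

sites = np.random.rand(8, 2)                                   # 8 random input points
labels = voronoi_labels(sites, np.linspace(0, 1, 200), np.linspace(0, 1, 200))
```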
In many applications, the Voronoi diagram is already the final solution. For example, the study of the behavior and sustenance of living creatures, which depend on the number of neighbors with which they compete for food and light, is exactly what the Voronoi diagram expresses. Moreover, given a Voronoi diagram, many important geometric problems, such as finding the closest neighbor, the Delaunay triangulation, the largest empty circle, or the Euclidean minimum spanning tree, can be solved in linear time.
The first reliable construction algorithm was published by Green and Sibson [GREE77], although Voronoi diagrams had been known for a long time. Their approach used an incremental method and worked in O(n^2) time. The first algorithm requiring O(n log_2 n) time was suggested by Shamos and Hoey (1975) [PREP85], using the divide-and-conquer approach; however, its implementation is complicated. In 1985, Fortune presented a more elegant approach, which is described in [BERG97, O'ROU93]. In this paper, we consider primarily the data structures needed for the construction. In 1986, Edelsbrunner and Seidel discovered a beautiful connection between Voronoi diagrams and convex hulls in one higher dimension [O'ROU93]. This method has some beautiful properties and is becoming more and more popular.
Roman Cuk - roman.cuk@uni-mb.si
On Why A Century Is Not Entirely An Arbitrary Milestone
The figure 100 is a nice round figure. It has been under scrutiny recently especially because Sachin Tendulkar reached his 100th international century. But why does it matter? Does a player who makes
95 contribute significantly less to a team than a player who makes 105? Is the difference between scoring 95 and 105 significantly dissimilar to that between scoring 85 and 95?
Milestones get established because thresholds matter. These thresholds are apparently arbitrary. It is probably fair to say that they are in fact arbitrary, in the sense that they were not established for a cricketing reason.
It turns out that 100 runs is not such a bad threshold score after all. My friend
suggested this idea to me a couple of weeks ago and did some calculations. His idea was based on the equivalence between a century by a batsman and a 5 wicket haul by a bowler. A 5 wicket haul has
some correlation to the total number of wickets required to bowl a side out. Here is how the argument goes.
Assuming that a team is made up of 4 players whose primary job is to take wickets - "bowlers" and 7 players whose primary job is to score runs - "batsmen", it follows that a 5 wicket haul is double
the number of wicket each bowler can be expected to take on average in Test Team (10/4 = 2.5 etc.). Hence, a 5 wicket haul, even though it is probably just as arbitrary as a century, as a threshold
to be recorded, is meaningful beyond its putative gestalt. It accounts for the number of times that a single bowler dismissed at least half the opposition in an innings.
How should a comparable measure be established for batsmen?
We tried this in a number of ways. Declarations create difficulties, especially in the third innings. Batting in the second innings (3rd or 4th match innings) creates difficulties in general, because
of the existence of preset targets and declarations.
As a comparable achievement to the 5 wicket haul, the following criteria are used -
The minimum run total in the 1st innings (1st or 2nd match innings) for the loss of 10 wickets, which ensures that a team is more likely to either win or draw the Test, than it is to lose it.
The minimum run total is calculated by computing the runs/wicket figure for each 1st innings (1st or 2nd match innings) total, and sorting the runs/wicket scores in ascending order. The threshold team score is that (runs/wicket x 10) figure at which the likelihood of defeat is less than 50%.
If this team total is N, then the "century" score is 2*N/7 (Since we assume that there are 7 batsmen who are primarily responsible for scoring runs, and, as in the case of the 5 wicket haul, the
threshold must be double the par score.)
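A rough sketch of how such a threshold could be computed (Python; the tuple layout and the reading of "likelihood of defeat" as the loss rate among innings at a given runs-per-wicket level or better are my assumptions, since the post does not spell out its procedure):

```python
def century_score(first_innings, wickets_per_side=10, batsmen=7):
    """first_innings: list of (runs, wickets, lost) tuples for 1st/2nd match innings."""
    rated = sorted(((runs / wkts) * wickets_per_side, lost)
                   for runs, wkts, lost in first_innings if wkts > 0)
    for i, (score, _) in enumerate(rated):
        remaining = rated[i:]                          # innings at this rate or better
        loss_rate = sum(lost for _, lost in remaining) / len(remaining)
        if loss_rate < 0.5:                            # defeat now less likely than not
            return 2 * score / batsmen                 # the "century" threshold
    return None
```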
The table below shows the development of the Threshold Team Score and the "Century" Score at 200 first innings intervals (roughly every 100 Test Match). The figures are cumulative - the figure for
the 1000 innings mark includes the first 1000 innings, the figure for the 2000 innings mark includes the first 2000 innings and so on.
The "Century" scores have ranged between 88 and 99 over these years. This has remained remarkably stable over the 130 odd years that Test cricket has been played. The high points (where the "century"
score has been 99), were when Bradman played, and in the 2000s.
Why consider only 1st innings scores?
This is because it is difficult to consider 3rd or 4th innings scores due to the existence of leads, deficits, declarations and targets. While teams often declare their 1st or 2nd innings closed,
this is a different type of declaration after a different type of innings compared to the 3rd or 4th innings. If the proposed calculation is done for 2nd innings, then a "century" score turns out to
be about 160, because a large share of 2nd innings are played with a large lead, involve 3rd innings declarations, can involve very short 3rd or 4th innings, etc.
Please note that this is an attempt to look at a century as a statistic. This does not mean that a particular score of 88 in a 1992, was as good as a particular score of 99 in 2012.
What this does show, is that the threshold of 100 is not a bad one, and the comparison of a century to a 5 wicket haul is a good one.
6 comments:
1. Such an ingenious analysis, with a very satisfying result. Could you be persuaded to pursue this line to provide scores that lead to other result probabilities (say 50% chance of winning; 90%
chance of avoiding defeat)?
Have you come across the academic work of Dr Scarf on predicting Test match outcomes? I've referenced it on Declaration Game. I'll send you a link if interested.
1. Great analysis....
Please send the link to me as well...
2. The academic work:
An analysis of strategy in the first three innings in test cricket: declaration and the follow-on. Philip Scarf and Sohail Akhtar. Salford Business School Working Paper Series. Paper no. 337/
And where I've used it: http://wp.me/p1OY5E-5J
2. Great analysis KD. Here's one more proof why it may not be such an arbitrary number:
Look at the graph and the cut-off value.....
3. In my personal view, I don't think that a century can express a player's expertise; however, consistency of performance and stamina express a player's ability.
1. You are right, any one can score a century.. but if he consistently scores them then we can say he is a class player.
Research in Differential Games
A Summer Undergraduate Research Project
David Szurley and Edward Aboufadel
The term "Differential Games" is applied to a group of problems in applied mathematics that share certain characteristics related to the modeling of conflict. In a basic differential game there are
two actors -- a pursuer and an evader -- with conflicting goals. The pursuer wishes, in some sense, to catch the evader, while the evader's mission is to prevent this capture. These "games" are
modeled mathematically by first defining state variables that represent the position (and perhaps velocity) of the participants, determining (differential) equations of motion for the rivals, and
then describing sets in the state space called target sets. (For example, a target set for a pursuer may include points in the state space where the distance between the pursuer and the evader is
small.) Each participant in the game tries to drive the state variables of the game into a particular target set by controlling key variables called, naturally, controls. The study of these games has
implications for real-life air combat and for artificial intelligence.
An archetypal example of a differential game is known as the Homicidal Chauffeur (described in Lewin, 1994, and created by Issacs). In this game, the driver of a circular car acts to knock down a
pedestrian, who, of course, does not wish to be flattened. The car can move faster than the pedestrian, but the pedestrian can maneuver better. What is the best strategy for the pursuer (the car) and
the evader (the pedestrian) to follow in order for each to achieve their conflicting goals? In this case the car is always considered to be at the origin, and the state variables are the position of the pedestrian. The solution to this problem can be applied to air combat where a slow but more maneuverable airplane is pursued by a faster but less maneuverable craft.
This problem, and others (with names such as the Lion and Man problem, the Obstacle Tag game, and the Lady in the Lake problem), all involve similar geometric constructions to create mathematical
models, but the methods of analysis vary greatly. As a result, an active group of researchers have been working on these problems since World War II, people whose prime research interests have
included game theory, astronomy, computer science, and engineering. These problems are also called "pursuit problems", and a recent search of the Math-Sci disks in our library, using the word
"pursuit", indicated over one-hundred recent research articles published during the past 3 years alone.
The differential game that initially caught our interest could be called "the ant game." Bruckstein (1993), influenced by a passage in Feynman's autobiography (1985), considered why it is that ant trails "look so straight and nice." The solution involved creating and analyzing a mathematical model of ant motion in pursuit of food. The model is constructed in the plane, with the origin being
the anthill and a point (L,0) representing the food. The first ant leaves the anthill in pursuit of the food, and he follows a somewhat random path. The second ant leaves the anthill a short time
later and along another path, following the strategy that its velocity vector is always pointing toward the first ant. (The first ant's control is the direction of its velocity vector.) The third ant
follows the second ant using the same strategy, and so on. The analysis of this system of differential equations focuses on the angle made by the velocity vectors of the nth and n+1th ants.
Bruckstein showed that as n gets larger -- in other words, as more ants leave the anthill and follow the strategy -- the angle approaches zero, hence the ants walk more and more in a straight line.
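A rough discrete version of the model can be sketched as follows (Python/NumPy; the step size, speeds, and the wiggly first path are illustrative choices, not Bruckstein's):

```python
import numpy as np

def next_ant_path(leader_path, speed, dt):
    """Follower starts at the anthill (origin) and always steps toward the
    leader's current position -- a discrete version of the pursuit strategy."""
    follower = [np.zeros(2)]
    for target in leader_path:
        direction = target - follower[-1]
        dist = np.linalg.norm(direction)
        step = min(speed * dt, dist)
        if dist > 1e-9:
            follower.append(follower[-1] + step * direction / dist)
        else:
            follower.append(follower[-1].copy())
    return np.array(follower)

t = np.linspace(0, 10, 400)
path = np.column_stack([t, np.sin(3 * t)])      # a wiggly first ant heading for (10, 0)
for _ in range(5):                              # each new ant follows the previous one
    path = next_ant_path(path, speed=1.2, dt=t[1] - t[0])[1:]
# after a few ants, `path` is visibly straighter than the original trail
```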
During the Summer of 1996, we plan to tackle three problems. The first, and simplest, is to create an effective computer demonstration of Bruckstein's ant trails analysis. The demonstration will be
an effective tool to teach others about differential games, and writing the demonstration will help us in our learning. The second challenge is to re-cast a problem recently investigated by Aboufadel
(1996) -- what strategy does a baseball player use to catch a fly ball -- in the language of differential games in order to analyze the problem from a new direction. McBeath, et. al. (1995)
demonstrated empirically that baseball players use a technique called the Linear Optical Trajectory (LOT) method. The idea is that the fielder moves in such a way as to keep constant the perceived
angle made by the image of the baseball, home plate, and the player. Aboufadel proved algebraically that if the LOT method was, in fact, the strategy that fielders really use, then a number of
qualitative statements made by McBeath et. al. about the LOT method were justified.
Our third, and most difficult, challenge is to consider the modeling of air combat between two planes as a differential game. Most of the theory of differential games has been done in two-dimensions,
because it is easier. Air combat involves three-dimensions, and much of the work that has been done on this problem has restricted the airplanes to a fixed height above the ground, effectively
returning the problem to two dimensions. There is also the issue of who is the pursuer and who is the evader when fighter pilots are trying to destroy each other. Grimm and Well (1990) have reviewed
the literature and have found a number of areas that can be explored. In particular, we wish to create a three-dimensional model of air combat between two fighter pilots, building in a difference of
speed, maneuverability, and weapon range between the two airplanes, along with the blinding effects of the position of the sun. The object will be to develop this model as a differential game, with a
mathematical analysis leading to the optimal strategies for the two participants.
For more information about this project, contact Edward Aboufadel: aboufade@gvsu.edu
Velocity/Net-Change type word problem
1. May 15th 2013, 10:07 AM #1
2. May 15th 2013, 12:59 PM #2
Re: Velocity/Net-Change type word problem
Check your signs - the formula should have Q(t) decreasing with time, not increasing. You should have:
$Q(t) = Q_0 +\frac {R_0} k (e^{-kt} -1)$
If you set t to infinity you get:
$Q(\infty) = Q_0 + \frac {R_0} k (e^{-\infty} -1) = Q_0 - \frac {R_0} k$
Set $Q(\infty) = 0$ to determine the minimum value of k in terms of $Q_0$ and $R_0$ that satisfies the condition.
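A quick symbolic check of that limit, assuming SymPy is available:

```python
import sympy as sp

t, k, Q0, R0 = sp.symbols('t k Q_0 R_0', positive=True)
Q = Q0 + R0 / k * (sp.exp(-k * t) - 1)
print(sp.limit(Q, t, sp.oo))                 # Q_0 - R_0/k
print(sp.solve(sp.Eq(Q0 - R0 / k, 0), k))    # [R_0/Q_0]  -> minimum k
```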
Posts about MJ4MF on Math Jokes 4 Mathy Folks
Posts tagged ‘MJ4MF’
The paperback version of Math Jokes 4 Mathy Folks was released on August 9, 2010. During its first three years on Amazon, it received 17 reviews, with an average rating of 4.76. Recently, however, an
unimpressed reviewer gave it just 2 stars:
This reminds me. If you’ve read MJ4MF and liked it, please post a review on Amazon. (If you disliked it, please post your review on MySpace.)
But I digress. Back to my point. The disparaging review that appeared on Amazon contained just 21 words:
jokes are not very funny – seems like they were stretching it to find enough jokes to fill a book to sell
To fully understand this review, I offer the following annotations.
jokes are not very funny
“I wouldn’t know humor if it bit me. I often travel to Branson, MO, to see Yakov Smirnoff perform live, and I think that Carrot Top’s performance on Star Search is the funniest moment ever.”
seems like they
“I’m unaware that the author is a single person,” or possibly, “I’m not familiar with rules of English grammar.”
were stretching it
“I don’t understand common English idioms. A friend pointed out that the correct phrase is just ‘were stretching’ without the ‘it.’ Oops.”
to find enough jokes
“I failed to realize that the book contains 400+ math jokes, yet a Google search for ‘math jokes’ returns 2,830,000 results. Simple percentages show how selective the author has been. I also hadn’t
visited this blog before posting my review; I now see that a significant number of jokes not in the book have appeared on this blog, so clearly the author did not exhaust the supply.”
to fill a book to sell
“The author is a money-hungry swine who would sell his grandmother’s secret recipe for Hungarian pierogi for 50 bucks.”
Sadly, this last claim is mostly true. But my grandmother’s pierogi were divine, and the recipe is worth far more than $50. Kindly submit your bid in the Comments.
But I’m not bitter. I don’t care that this review reduces my average rating by 0.15 stars or that it single-handedly drops the book to #19 when someone searches for ‘math jokes’ on Amazon and sorts
by “Avg. Customer Review.”
Instead, I prefer to remember the MJ4MF review written by Caregiver x 2, who said:
This morning I gave this book to my son, he didn’t put it down for a long time. He was laughing and flipping the pages as fast as he could. And he was on his summer break!
She is wise beyond her years, and I appreciate that she took the time to share her insightful comments with the world.
It’s college bowl season, and there is an impressive line-up of games, from the Famous Idaho Potato Bowl, to the Little Caesars Pizza Bowl, to the Buffalo Wild Wings Bowl, to the GoDaddy.com Bowl, to
Oh, for Pete’s sake.
There are no fewer than eight college football bowls that have completely abandoned any pretense of respecting tradition. The name of the bowl is isomorphic with the name of the sponsoring company.
Sure, some bowls give a nod to tradition by appending the name of the sponsor to the historical name, such as the Allstate Sugar Bowl or the Discover Orange Bowl. But even in those cases, the sponsor
is listed before the bowl itself.
What can you do? My daddy always told me, “If you can’t beat ‘em, join ‘em!”
Following his sage advice, I’d like to announce the 2013 Math Jokes 4 Mathy Folks Bowl, which already has a spiffy logo…
But the MJ4MF Bowl will be different than the others. There have to be rules. My rules.
First, the game must be played on January 3, 2013, which can be written as 1/3/13. (Nice, huh?)
Second, both teams would have to be willing to modify their nicknames — only temporarily, of course — to make them more mathy. For example,
• Arizona State Sum Devils
• East Carolina πrates
• Navy Midpoint Men
• North Texas Median Green
• Penn State Nittany Lines
• Standford Cardinality
• Tulane Sine Wave
• UCLA de Bruijn Sequences
• Western Kentucky Hilltopologists
Third, and most importantly, the yard lines on the field would need to be renumbered. Currently, they are numbered as follows:
|0 1|0 2|0 3|0 4|0 5|0 4|0 3|0 2|0 1|0 0|
That’s just dumb. For the MJ4MF Bowl, the yard lines will be numbered like this:
-5|0 -4|0 -3|0 -2|0 -1|0 0 1|0 2|0 3|0 4|0 5|0
Honestly, doesn’t that make more sense? The middle of the field would be the 0-yard line, which seems appropriate; and, now when you hear, “The Lions have the ball on the 10-yard line,” you won’t
have to wonder, “Which 10-yard line?”
Finally, teams will not have to meet the onerous NCAA bowl eligibility requirements to participate in the MJ4MF Bowl. Why does a team need six wins to be bowl eligible, anyway? That just means
they’ll demand a big pay-out, and unless a rich, eccentric math geek buys a million copies of Math Jokes 4 Mathy Folks in the next week, well, that’s just not gonna happen.
Two exciting teams are currently sought to play in the inaugural MJ4MF Bowl. Notre Dame and Alabama are required to play for the national championship, and the likes of Georgia, Kansas State, and
Nebraska have already agreed to other bowl games… but surely the Golden Eagles of Southern Miss (0-12) and the Akron Zips (1-11) are available, no?
Through some special features at Amazon Author Central, I am able to know the daily sales rank of Math Jokes 4 Mathy Folks. My sales rank at the end of each of the last three days was 44,404,
96,990, and 35,355, respectively. I thought that was interesting — three days in a row when the sales rank was a five-digit number in which one digit occurred at least three times. What’s the
likelihood of that? Stated more formally:
Assuming that the sales rank of MJ4MF is always a five-digit number, what is the probability that three consecutive days’ sales ranks will contain a digit that occurs in the sales rank at least
three times?
The sales rank of MJ4MF has never been a five-digit number in which the same digit is repeated five times. (Bummer!) The probability of that occurrence, though, is even less likely than the situation
described above — though I won’t tell you exactly how much less likely, so as not to spoil your fun!
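A brute-force estimate, assuming sales ranks are uniformly distributed over the five-digit numbers and that the three days are independent (both assumptions, of course):

```python
# Count five-digit numbers in which some digit appears at least three times.
count = sum(1 for n in range(10000, 100000)
            if max(str(n).count(d) for d in set(str(n))) >= 3)
p_one_day = count / 90000
print(p_one_day)          # chance on a single day
print(p_one_day ** 3)     # chance of it happening three days in a row
```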
April is:
• Mathematics Awareness Month
• National Humor Month
• National Poetry Month
With such a glorious coincidence^1 of human-created holidays, I feel like I have to do something big. Monumental, even. But what? I thought about preparing a major April Fools prank, such as
preparing a fake video about spaghetti growing on trees or publishing an article about how the Alabama legislature passed a law setting π = 3. But since those have already been done, I decided on
something a little different…
Announcing the MJ4MF Humorous Math Poem Contest!
That’s right! Submit your original entries of humorous math poems.
The format is entirely up to you.
• Try your hand at the highly mathematical haiku.
• Author a sonnet about your love of numbers.
• Use ALGEBRA to create an acrostic poem.
• Or, get a little seedy with a limerick about doing problem sets late at night.
The only rule, really, is that your submission must be completely original. Please don’t copy a poem from another website or transcribe one of J. A. Lindon’s gems.
Post your poem in the comments section, or send it to me privately at mj4mf@verizon.net. Next week, I’ll compile all entries into a single post and create a poll so visitors can vote for their
favorite. The author of the best poem, as selected by the readers of the MJ4MF blog, will receive an autographed copy of Math Jokes 4 Mathy Folks, as well as a special secret prize.
Good luck, and have fun!
To get the creative juices flowing, you can read a few classics below, or check out The Square Root of Three.
Pi goes on and on and on…
And e is likewise cursed.
I wonder: Which is larger
When the digits are reversed?
– J. A. Lindon
I used to think math was no fun,
‘Cause I couldn’t see how it was done.
But Euler’s my hero
For I now see why zero
Equals e^iπ + 1.
– Paul Nahin
With my hands in a fire
And my arse on some ice
I’d say that, on average,
I feel rather nice.
– an MJ4MF original (sort of)
^1 Speaking of coincidence, John Allen Paulos wrote, “…though it is unlikely that any particular sequence of events specified beforehand will occur, there is a high probability that some remarkable
sequence will be observed subsequently” (A Mathematician Reads the Newspaper, p. 50). You might also like his first book, Innumeracy.
I was delighted to open the April issue of the Mathematics Teacher journal and discover a review of Math Jokes 4 Mathy Folks. Reviewer Leah Evans had some nice things to say:
Math Jokes 4 Mathy Folks is a delightful read. The author has compiled a vast array of puns, quips, and jokes meant for people of varied ages and mathematical expertise.
I highly recommend this book as a diversion from the rigor of mathematics. It allows us to have a good laugh (or a long groan) at a joke that only “math nerds” would get, and it points out that
humor can be found in all mathematical applications, even a telescoping series.
I love the cliffhanger at the end! The reference to a “telescoping series” is — I think — in regards to this joke (p. 89):
If you’re interested, here’s a copy of the entire review. (Click on the image to view a full-size version.)
Do you know what happened one year ago today? Well, lots of things, actually…
• A major storm near Hot Springs, AR, dropped baseball-sized hail while tornadoes raged nearby. (Yikes!)
• The first legal gay marriages in Washington, DC, were performed.
• Teen idol Corey Haim, best known for his role as Sam Emerson in The Lost Boys, died of an accidental overdose.
But perhaps most importantly…
That’s right — I started sharing math jokes, random thoughts, and senseless drivel on this blog exactly 365 days ago. To those of you who read the MJ4MF blog regularly, comment occasionally, and
forward links to your friends and colleagues, I want to say one thing:
Gee, you sure have a penchant for wasting time.
But seriously, I’d like to say Thank You. When I started, I wasn’t sure I had enough material to last a year. But given the number of subscribers and all the positive comments I’ve received, I plan
to keep doing this for as long as I can. I’ve really enjoyed the past year, and I hope that I can keep you entertained.
Some trivia:
• The most popular post during the past year was Smart Quarterbacks, the Super Bowl, and SAT Scores.
• The blog received 14,432 views during its first year, and February 6 was its busiest day.
• The growth in traffic can be approximated by the function y = 33x^2 – 143x + 312, where y is the number of pageviews during the x^th month since inception.
• If you drank one low-calorie beer every night since the first post on MJ4MF, that would be one lite year.
And finally…
I said to my friend, “Did you know that only 57% of people are able to understand the mathematical content contained on the MJ4MF blog?” His response:
I belong to the other 33%.
Oish. Happy anniversary!
Summary: RESIDUAL DISTRIBUTION SCHEMES FOR CONSERVATION
SIAM J. SCI. COMPUT. © 2002 Society for Industrial and Applied Mathematics
Vol. 24, No. 3, pp. 732–769
Abstract. This paper considers a family of nonconservative numerical discretizations for conservation laws which retain the correct weak solution behavior in the limit of mesh refinement whenever sufficient-order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch [Math. Comp., 62 (1994), pp. 497–530]. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from nonconservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy a global entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck [Computational Fluid Dynamics '96, Wiley, New York, 1996, pp. 747–753] is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This quadrature renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the
NYJM Abstract - 15-8 - Mark Tomforde
Mark Tomforde
Continuity of ring *-homomorphisms between C*-algebras
Published: May 5, 2009
Keywords: C*-algebras, rings, homomorphisms, Jordan morphisms
Subject: 46L05, 16W10
The purpose of this note is to prove that if A and B are unital C*-algebras and φ : A → B is a unital *-preserving ring homomorphism, then φ is contractive; i.e., ∥ φ (a) ∥ ≦ ∥ a ∥ for all a ∈ A.
(Note that we do not assume φ is linear.) We use this result to deduce a number of corollaries as well as characterize the form of such unital *-preserving ring homomorphisms.
Author information
Department of Mathematics, University of Houston, Houston, TX 77204-3008, USA
A question about composition of trigonometric functions
November 8th 2011, 01:45 AM #1
Nov 2011
A question about composition of trigonometric functions
A little something I'm trying to understand:
sin(arcsin(x)) is always x, but
arcsin(sin(x)) is not always x
So my question is simple - why? Since each cancels the other, it would make sense that arcsin(sin(x)) would always result in x.
I'd appreciate any explanation. Thanks!
Re: A question about composition of trigonometric functions
Because adding any integer multiple of $\displaystyle 2\pi$ also satisfies the solution to the second...
Re: A question about composition of trigonometric functions
Hello, yotamoo!
A little something I'm trying to understand:
. . $\text{(a) }\sin(\arcsin x)$ is always $x.$
. . $\text{(b) }\arcsin(\sin x)$ is not always $x.$
So my question is: why?
I'll illustrate with a specific example.
(a) We have: . $\sin(\arcsin x)$
Suppose $x = \tfrac{1}{2}$
Then: . $\arcsin\left(\tfrac{1}{2}\right) \:=\:\begin{Bmatrix}\frac{\pi}{6} + 2\pi n \\ \\[-4mm] \frac{5\pi}{6} + 2\pi n \end{Bmatrix}$
Hence: . $\sin\begin{pmatrix} \frac{\pi}{6} + 2\pi n \\ \frac{5\pi}{6} + 2\pi n \end{pmatrix} \:=\:\tfrac{1}{2}$
We get back our original value of $x.$
(b) We have: . $\arcsin(\sin x)$
Suppose $x = \tfrac{5\pi}{6}$
Then: . $\sin\tfrac{5\pi}{6} \,=\,\tfrac{1}{2}$
Hence: . $\arcsin\left(\tfrac{1}{2}\right) \;=\;\begin{Bmatrix}\frac{\pi}{6} + 2\pi n \\ \frac{5\pi}{6} + 2\pi n \end{Bmatrix}$
There is an infinite number of possible values.
We do not get back our original value of $x.$
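A quick numerical check (NumPy assumed) shows the asymmetry: arcsin only returns values in $[-\tfrac{\pi}{2}, \tfrac{\pi}{2}]$, so it cannot always hand back the original $x$.

```python
import numpy as np

x = np.array([0.3, 2.5, -2.0])       # only 0.3 lies in [-pi/2, pi/2]
print(np.arcsin(np.sin(x)))          # [ 0.3     0.6416  -1.1416]  -- not always x

y = np.array([-0.9, 0.0, 0.5])       # any y in [-1, 1]
print(np.sin(np.arcsin(y)))          # [-0.9  0.   0.5]  -- always y
```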
Incomplete elliptic integral of the third kind: General characteristics
General characteristics
Domain and analyticity
Symmetries and periodicities
Mirror symmetry
Poles and essential singularities
With respect to n
With respect to z
With respect to m
Branch points
With respect to n
With respect to z
With respect to m
Branch cuts
With respect to n
With respect to z
General description
Formulas on real axis for real m, n
For max(m,n)<1
For m<1< n
For n<1< m
For 1<n< m
For 1<m< n
Formulas for vertical intervals
For m<1
For m>0
With respect to m
Preliminary Investigation of Wavefield Depth Extrapolation by Two-Way Wave Equations
International Journal of Geophysics
Volume 2012 (2012), Article ID 968090, 11 pages
Research Article
Preliminary Investigation of Wavefield Depth Extrapolation by Two-Way Wave Equations
^1The School of Electronic and Information Engineering and Institute of Wave and Information, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China
^2Modeling and Imaging Laboratory, Institute of Geophysics and Planetary Physics, University of California Santa Cruz, Santa Cruz, CA 95064, USA
^3National Engineering Laboratory for Offshore Oil Exploration, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China
Received 7 February 2012; Accepted 25 April 2012
Academic Editor: Joerg Schleicher
Copyright © 2012 Bangyu Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Most wavefield downward continuation migration approaches rely on one-way wave equations, which always move the seismic energy in one direction along depth. One-way downward continuation migrations use only the primaries for imaging and do not treat secondary reflections recorded on the surface correctly. In this paper, we investigate wavefield depth extrapolators based on the full acoustic wave equation, which can propagate wave components in opposite directions. Several two-way wavefield downward continuation propagators are numerically tested in this study. Recursive implementation of the depth extrapolator makes it necessary to eliminate the unstable wave modes, that is, the evanescent waves. For laterally varying velocity media, the distinction between the propagating and evanescent wave modes is less clear. We demonstrate that the spatially localized two-way beamlet propagator is an effective way to remove the evanescent waves while maintaining the propagating modes in laterally inhomogeneous media.
1. Introduction
Downward continuation migration calculates the wavefield at greater depth based on the existing wavefield at the shallower depth. For each frequency component, the wavefield can be downward continued
recursively from surface to target depth. These algorithms have the flexibility of migrating the seismic data sequentially in depth and frequency, which leads to substantial reduction of both
computational and memory requirements. It is particularly advantageous in a migration-velocity-analysis procedure, since one can analyze one layer for each iteration (layer stripping). It also leads
to the definition of another useful family of seismic imaging methods—survey-sinking migration [1].
Although many different downward continuation algorithms have been proposed, most of them are based on solving the one-way wave equation. The common ground for those methods is splitting the full
wave equation into two one-way wave equations that allow for downward or upward wave propagation separately. The directional splitting of the operator suppresses the up-going propagating waves, thus
making it difficult, if not impossible, to use the secondary reflections correctly for imaging [2]. The wavefield downward continuation scheme based on the full acoustic wave equation was first
introduced by Kosloff and Baysal [3]. For a background with depth-dependent velocity and zero offset source-receiver configuration, the full wave partial differential equation is changed to a
second-order ordinary differential equation, and they solved the equation by the fourth-order Runge-Kutta method. Sandberg and Beylkin [4] extended this method to laterally varying media and migrated
the prestack seismic data in the source-receiver survey-sinking style. Sandberg et al. [2] modified the algorithm in Sandberg and Beylkin [4] to take a proper account of secondary reflections
recorded at the surface for imaging.
The recursive implementation of wavefield depth extrapolators makes operator stability a critical issue. In physical reality, only solutions that oscillate harmonically with depth (propagating waves) or decay exponentially (evanescent waves) are present. In numerical algorithms, however, artificial evanescent waves that grow exponentially can also be generated, and these quickly make the extrapolated wavefield grow out of bounds. Usually, the evanescent waves are eliminated to secure operator stability. For a laterally uniform medium with depth-dependent velocity, the evanescent waves can be suppressed in the frequency-wavenumber domain by using the Fourier transform along space and a simple cutoff filter [5]. For a laterally varying velocity field, the identification of the evanescent waves is less clear. To assure numerical stability, Kosloff and Baysal suggested using a cutoff filter adjusted to the maximum velocity at each depth [3].
However, such a strategy also discarded certain propagating waves generated by the steeply dipping events, resulting in poor imaging of these structures. Sandberg and Beylkin [4] introduced spectral
projectors to remove the evanescent wave modes and leave all propagating modes intact in a variable background.
We observe that, in the laterally varying velocity field, the propagating and evanescent waves are in fact precisely defined by the velocity at each spatial point. It is difficult to identify
different wave modes in the wavenumber domain by using the global Fourier transform assuming one reference velocity at each depth. It leads us to consider the spatially localized wave propagator with
reference velocities at different locations. The beamlet propagator [6] provides an alternative approach for seismic wave propagation and imaging based on local velocity perturbations. A wavelet transform is applied along space to shuffle the wavefield between the space domain and the local space-wavenumber (beamlet) domain. Each beamlet is propagated to the next depth with a local reference velocity. Beamlet propagators have been developed successfully with the Local Cosine Basis (LCB) [7] for orthogonal decomposition and the Gabor-Daubechies Frame (GDF) [8] for nonorthogonal decomposition. The unwanted waves can be removed by an ideal low-pass filter defined by the local reference velocities, while a larger portion of the propagating waves is retained.
In this study, we first test wavefield depth extrapolation with the 4th-order Runge-Kutta method, the two-way phase-shift method, and the implicit two-way phase-shift method, which avoids splitting the full wave propagator. In the second set of tests, we see that two hyperbolas in the surface wavefield are downward continued by the two-way propagators, with each hyperbola moving in the opposite direction in the time-space seismogram panel. We derive the two-way beamlet propagator for local background velocities by using the LCB transform and test wave propagation and poststack imaging in laterally high-contrast velocity surroundings.
2. Full-Wave Downward Continuation Operator in Frequency Wavenumber Domain
2.1. Ordinary Differential Two-Way Wave Equation
In a two-dimensional acoustic medium with constant density, the full two-way wave equation reads
$$\frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial z^2} = \frac{1}{v^2}\,\frac{\partial^2 p}{\partial t^2}, \qquad (1)$$
where $x$ and $z$ are the horizontal and vertical coordinates, $p(x,z,t)$ is the pressure field at time $t$, and $v$ is the acoustic velocity. In a laterally homogeneous velocity model, equation (1) can be double transformed in $x$ and $t$ to give
$$\frac{\partial^2 P}{\partial z^2} = -\left(\frac{\omega^2}{v^2} - k_x^2\right)P, \qquad (2)$$
where $P(k_x,z,\omega)$ is the transformed wavefield, $k_x$ is the horizontal wavenumber, and $\omega$ is the temporal frequency.
For migration along the depth axis, the second-order equation (2) can be written as two coupled first-order differential equations in the depth variable $z$:
$$\frac{\partial}{\partial z}\begin{pmatrix} P \\ Q \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\left(\omega^2/v^2 - k_x^2\right) & 0 \end{pmatrix}\begin{pmatrix} P \\ Q \end{pmatrix}, \qquad (3)$$
where $Q = \partial P/\partial z$ represents the vertical pressure gradient. Equation (3) can be solved with standard numerical techniques for ordinary differential equations.
The full wave equation is of second order in depth and therefore requires two boundary conditions on the surface to initiate the depth extrapolation: the pressure wavefield and its normal derivative. Usually, only the pressure field is recorded on the surface, so the other field must be generated with certain assumptions. We estimate the normal derivative of the wavefield using the same approach as in Kosloff and Baysal [3]. Assuming a constant velocity near the surface, the gradient of the wavefield can be obtained from $P$ as
$$Q = i\,k_z\,P, \qquad (4)$$
where $k_z = \sqrt{\omega^2/v^2 - k_x^2}$ is the vertical wavenumber. Given $P$ and $Q$ at the surface, equation (3) can be downward continued recursively to any depth.
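To make the recursion concrete, a minimal sketch of one 4th-order Runge-Kutta depth step of system (3) for a single frequency might look like this (Python/NumPy; an illustration only, not the authors' code):

```python
import numpy as np

def rk4_depth_step(P, Q, kx, omega, v, dz):
    """One RK4 step of the coupled system dP/dz = Q, dQ/dz = -(omega^2/v^2 - kx^2) P."""
    a = -((omega / v) ** 2 - kx ** 2)        # -kz^2 for each horizontal wavenumber

    def rhs(p, q):
        return q, a * p

    k1p, k1q = rhs(P, Q)
    k2p, k2q = rhs(P + 0.5 * dz * k1p, Q + 0.5 * dz * k1q)
    k3p, k3q = rhs(P + 0.5 * dz * k2p, Q + 0.5 * dz * k2q)
    k4p, k4q = rhs(P + dz * k3p, Q + dz * k3q)
    P_new = P + dz / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
    Q_new = Q + dz / 6.0 * (k1q + 2 * k2q + 2 * k3q + k4q)
    return P_new, Q_new
```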
2.2. Phase-Shift Two-Way Depth Extrapolator
The solution to the second-order wave equation (2) can also be written as
$$P(z+\Delta z) = \frac{1}{2}\left(P(z) + \frac{Q(z)}{i k_z}\right)e^{\,i k_z \Delta z} + \frac{1}{2}\left(P(z) - \frac{Q(z)}{i k_z}\right)e^{-i k_z \Delta z}. \qquad (5)$$
Equation (5) is more easily understood as a two-way wave equation, and we can pick out the up- and down-going terms without difficulty. When the up- or down-going wave is downward continued separately, one of the exponential terms can be set to zero, and the wavefield gradient is then not needed to decide the wave propagation direction. Equation (5) then reduces to the phase-shift method of Stolt [9].
2.3. Implicit Two-Way Phase-Shift Depth Extrapolation
Equation (5) still explicitly splits the two-way wave equation into up- and down-going waves: the phase-shift terms $e^{\,i k_z \Delta z}$ and $e^{-i k_z \Delta z}$ propagate the up- and down-going components separately.
The exponential and trigonometric functions are related by
$$e^{\pm i k_z \Delta z} = \cos(k_z \Delta z) \pm i\,\sin(k_z \Delta z). \qquad (6)$$
Substituting the exponential terms in (5) by the trigonometric functions, we obtain
$$P(z+\Delta z) = P(z)\,\cos(k_z \Delta z) + Q(z)\,\frac{\sin(k_z \Delta z)}{k_z}. \qquad (7)$$
Equation (7) avoids splitting the wave equation into up- and down-going propagators. In practical implementation, (7) is more stable than (5): if $k_z$ approaches zero, the $k_z$ in the denominator of (5) may cause numerical instability, whereas in (7) the terms $\sin(k_z \Delta z)$ and $k_z$ form a sinc function, which eliminates the singularities caused by $k_z$.
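For a single frequency and a laterally homogeneous layer, one depth step of the trigonometric update (7) can be sketched as follows (Python/NumPy; an illustration, not the authors' implementation). The sinc form keeps the sin(kz*dz)/kz factor finite as kz approaches zero, and evanescent components are simply zeroed by a cutoff mask:

```python
import numpy as np

def twoway_phase_shift_step(P, Q, kx, omega, v, dz):
    """Advance P (pressure) and Q (dP/dz) one depth step; kx, P, Q are arrays
    over horizontal wavenumber for a single temporal frequency omega."""
    kz2 = (omega / v) ** 2 - kx ** 2
    prop = kz2 > 0.0                          # propagating-wave mask (cutoff filter)
    kz = np.sqrt(np.where(prop, kz2, 0.0))
    c = np.cos(kz * dz)
    s = np.sinc(kz * dz / np.pi) * dz         # equals sin(kz*dz)/kz, finite at kz = 0
    P_new = (c * P + s * Q) * prop
    Q_new = (-kz * np.sin(kz * dz) * P + c * Q) * prop
    return P_new, Q_new
```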
3. Two-Way Beamlet Propagators for Local Background Reference Velocities
If the velocity is a function only of depth, the migration in frequency wavenumber domain is accurate and efficient since the Fast Fourier Transform can be used. For a laterally varying medium, the
selection of reference velocities is required for the phase-shift operator. To improve the accuracy, part of the computation needs to be performed in the space domain. These approaches can be
categorized as mixed domain or dual domain methods. Beamlet migration is a one-way wave equation-based dual domain imaging method. The wave fields are spatially partitioned with local windows and
propagated with beamlet propagators (in beamlet domain) followed by local perturbation corrections (in space domain). In local phase space, the evanescent and propagating modes are defined by the
local reference velocities and can be processed separately.
3.1. Local Cosine Basis and Wavefield Decomposition
The seismic wavefield and its normal derivative can be represented by a local Fourier frame, such as the Gabor-Daubechies frame, or by a local trigonometric basis (local cosine/sine basis). Local cosine bases are localized versions of cosine bases that use overlapped bell functions to decompose signals into separable blocks with a fast algorithm [10]. Each basis element is characterized by the window position $x_n$, the nominal window length $L_n$, and the local wavenumber index $m$ (the index runs over the total number of sample points of the interval):
$$b_{n,m}(x) = B_n(x)\,\sqrt{\frac{2}{L_n}}\,\cos\!\left[\frac{\pi\,(m+\tfrac{1}{2})\,(x-x_n)}{L_n}\right], \qquad (8)$$
where $B_n(x)$ is a bell function, which is smooth and supported in the compact interval $[x_n-\epsilon_n,\,x_{n+1}+\epsilon_{n+1}]$, with $\epsilon_n$ and $\epsilon_{n+1}$ the left and right overlapping radii.
The wavefield $P$ and its derivative $Q$ at depth $z$ can be decomposed into beamlets with windows along the $x$-axis,
$$P(x,z) = \sum_{n,m} \hat{P}_{n,m}(z)\,b_{n,m}(x), \qquad Q(x,z) = \sum_{n,m} \hat{Q}_{n,m}(z)\,b_{n,m}(x), \qquad (9)$$
where $\hat{P}_{n,m} = \langle P, b_{n,m}\rangle$ and $\hat{Q}_{n,m} = \langle Q, b_{n,m}\rangle$ are the beamlet coefficients and $\langle\cdot\,,\cdot\rangle$ stands for the inner product.
Local cosine bases are real functions. In the case of a complex wavefield, the LCB decomposition is applied to both the real and imaginary parts of the wavefield [7].
3.2. LCB Beamlet Domain Two-Way Downward Propagator
For easy implementation, we derive the two-way beamlet propagator starting from (5), written in the operator form of equation (10); the phase-shift operators with different sub- and superscripts in (10) extrapolate the wavefields and their derivatives from depth $z$ to $z+\Delta z$. Next, we derive the wave propagator in the LCB beamlet domain following [7]. We define the evolved LCB beamlet obtained by background propagation using the reference velocity in each window, with the beamlet basis expressed in the wavenumber domain. For each exponential term in (10), the evanescent waves are discarded based on the judgment in equation (11): components whose local horizontal wavenumber exceeds $\omega$ divided by the local reference velocity, so that the local vertical wavenumber would become imaginary, are removed. The redecomposition of the propagated beamlets into new beamlets formulates the beamlet propagator, written as equation (12); thus we obtain the exact expression (13) for the components of the two-way beamlet background propagator. Here the propagator elements represent the coupling factors from each beamlet coefficient at the current depth to each beamlet at the new depth level. Due to the propagation of up- and down-going waves and their derivatives, the two-way beamlet propagator has eight times as many elements as the one-way propagator. In this algorithm, the computation of two-way wavefield downward extrapolation is therefore approximately 8 times more intensive than for the one-way beamlet method.
4. Numerical Experiments
4.1. Impulse Responses in Homogenous Media
The impulse responses are computed by the different depth extrapolation propagators mentioned above. We consider a domain 3.75 km deep and 12 km wide with a constant velocity of 2 km/s, and position a point Ricker wavelet source with a dominant frequency of 15 Hz inside the model. The space and depth sampling intervals are 25 m.
The impulse responses computed by the different depth propagators are shown in Figure 1: Figure 1(a) is from equation (3) solved by the 4th-order Runge-Kutta method, Figure 1(b) from the two-way phase-shift method (5), and Figure 1(c) from the implicit two-way phase-shift method (7). Figure 1(d) is from the one-way phase-shift method [9]. All the responses look much the same. From these tests, we see that the two-way downward continuation methods have the same accuracy as the phase-shift method in homogeneous media, as expected.
During migration, when k_x² > ω²/v², all the wavefields extrapolated by the two-way wave equation blow up rapidly, making it necessary to suppress these unstable modes (evanescent waves) in downward extrapolation algorithms. For a laterally uniform medium, Kosloff and Baysal [3] proposed suppressing the evanescent waves in the wavenumber domain by using a simple cutoff filter (the same approach is used here).
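For illustration, one depth step of phase-shift extrapolation with such a wavenumber-domain evanescent cutoff might look as follows. This is a hedged Python sketch: the grid, names, and single-frequency setup are our assumptions, not the implementation used in this paper.

import numpy as np

def phase_shift_step(U, dz, v, freq, dx):
    # U is one monochromatic wavefield slice, already in the kx domain.
    kx = 2 * np.pi * np.fft.fftfreq(U.size, d=dx)
    kz2 = (2 * np.pi * freq / v) ** 2 - kx ** 2
    prop = kz2 > 0                       # keep propagating modes only
    phase = np.zeros_like(U)
    phase[prop] = np.exp(1j * np.sqrt(kz2[prop]) * dz)
    return U * phase                     # evanescent modes are zeroed out

nx, dx, dz = 256, 25.0, 25.0
u_surface = np.zeros(nx, dtype=complex)
u_surface[nx // 2] = 1.0                 # point impulse at the surface
U = np.fft.fft(u_surface)
for _ in range(150):                     # continue down 150 steps of 25 m
    U = phase_shift_step(U, dz, v=2000.0, freq=15.0, dx=dx)
u_depth = np.fft.ifft(U)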
4.2. Impulse Responses in Two-Layer Velocity Models
Shown in Figure 2 are impulse responses at three time moments for the two-layer models using the implicit two-way phase-shift extrapolator. The velocity interfaces lie at the middle of the depth axis. The left column is for the −50% contrast medium (from 2 km/s to 1 km/s), and the right for the +50% contrast medium (from 2 km/s to 3 km/s).
In Figure 2, panels from top to bottom are in time order. We see that, in the presence of the vertical velocity contrast, some anticausal waves enter from the bottom of the model. In the +50% velocity contrast case, there are vertical artifacts propagating horizontally in the high-velocity region.
Figure 3 shows the seismograms at depths above and below the interface. The left column is for the −50% velocity contrast and the right column for the +50%. The top panels are seismograms before passing through the interface and the bottom panels, below the interface. We see that the hyperbola splits into two after passing the interface, one of which propagates in the opposite direction in the time-space panel.
Kosloff and Baysal described the appearance of the up-going energy as an inherent nonuniqueness in the conceptual model on which the migration was based. With the exploding-reflector model, a surface recording alone cannot determine the amount of down-going energy at zero time. To do this, a set of geophones would need to be placed beneath the structure [3]. To our understanding, the anticausal events appear after passing through the interfaces because the mathematical downward continuation algorithm does not obey the nature of the wave propagation. The reflected waves are generated when the downward continuation procedure encounters the velocity interface. Physically, the reflected wave should go upward; however, the downward continuation procedure carries it to greater depths via the two-way extrapolator. In the snapshots in Figure 2, the two-way propagators can propagate the up-going wave reflected by the interface but put it in the wrong spatial location.
In seismic imaging based on two-way depth migration, when a high velocity contrast exists the migrated image will contain low-frequency artifacts similar to those in RTM-migrated data. Techniques such as smoothing the migration velocity model [2, 4] or applying a Laplacian filter to the final stacked image can effectively dampen the artifacts.
4.3. Propagating Wavefield Components in Two Directions in Seismogram
In this section, we test the downward continuation algorithms by the two-way extrapolator with data containing both down-going and up-going waves. Figure 4(a) shows a wavefield on the surface
containing two hyperbolas, tagged with A and B. The gradient for each hyperbola is calculated separately and added together with opposite signs at the surface, shown in Figure 4(b) (Note the polarity
difference between the two gradient fields). Figures 4(c) and 4(d) are seismograms at two depths during downward extrapolation. We see hyperbolas A and B moving in opposite directions during depth extrapolation by the two-way wave equations. After some depth, hyperbola A focuses into a point, while hyperbola B continues to spread. There are no artifacts or cross-coupling between the two events. This shows that the two-way extrapolator works in the case of homogeneous or smoothly varying media. For media with sharp boundaries, we can apply some smoothing to the medium before depth extrapolation, or study further how to reduce the artifacts generated when passing the boundary.
4.4. Two-Way Beamlet Propagation and Imaging in Inhomogeneous Media
The beamlet space localization gives the possibility and flexibility of adapting to fast lateral velocity variations during wave propagation. Theoretically, when there are no lateral velocity variations, the space-localized propagators will produce the same result as methods with only one reference velocity at each depth. However, because the beamlet propagator is an integral over finely sampled horizontal wavenumbers (14), it is more stable in the vicinity of the transition from propagating to evanescent modes. As demonstrated in the snapshots of the impulse responses for the two-layer model, there are fewer horizontally propagating vertical events in the high-velocity region (shown in the right column of Figure 5).
Figure 6 shows the background reference velocity for the localized beamlet propagators for the 2D SEG/EAGE salt model. The local background velocity captures the large homogeneous regions, such as the salt body and some reflectors, to a great degree. Wave propagation in the local background velocity keeps more propagating waves with high accuracy compared with methods that select only one reference velocity for each depth.
Figure 7 shows impulse responses based on the one-way and two-way beamlet propagators in the piecewise local background velocity shown in Figure 6. Panels in the left column are for the one-way beamlet propagator and those in the right column for the two-way beamlet propagator. Panels in the same row are at the same time moments for comparison.
The snapshots demonstrate that the localized propagators can handle multipath arrivals with smooth wavefront amplitudes induced by complex velocity functions. We also see that the sharp velocity discontinuities introduce extra signals in the two-way beamlet propagation.
In the left column of Figure 8, we present snapshots of the two-way beamlet propagator in a smoothed version of the SEG velocity model. Examination of the results in Figures 7(b), 7(d), and 7(f) and Figures 8(a), 8(c), and 8(e) shows that the extra signals produced by the sharp velocity boundaries in the model are greatly suppressed. In Figures 8(b), 8(d), and 8(f), snapshots generated by the finite difference method on the velocity in Figure 6 are displayed for comparison.
Figure 9 illustrates the post-stack imaging performance of the localized propagator on the 2D SEG/EAGE synthetic data set. Figure 9(a) shows the original velocity model, with minimum and maximum velocities of 1500 m/s and 4410 m/s. Figure 9(b) shows the post-stack image of the one-way beamlet migration with local perturbation correction (for details of the local perturbation correction see reference [7]).
Figures 9(c) and 9(d) display the images based on the one-way and two-way beamlet propagators using the local reference velocities, without perturbation correction. In Figure 9(c), the boundary of the salt body and most of the reflectors are all imaged. Due to the accumulated errors from velocity perturbations, the image is unsatisfactory in the sub-salt area. In the presence of strong velocity contrasts, the anticausal artifacts generated by the two-way extrapolator significantly contaminate the final image. Kosloff and Baysal eliminated the anticausal energy from the depth section by filtering out components with negative vertical wavenumbers [3].
A blurred version of the original velocity model can also be used as the background velocity to avoid the migration artifacts. Figure 10(a) is a smoothed SEG velocity model and Figure 10(b) is the corresponding local background velocity. The image in Figure 10(c) shows that some of the artifacts in Figure 9(d) are removed. Even for the smoothed velocity, because constant reference velocities are selected in each window, the background velocity for the two-way beamlet propagators is still piecewise constant and some artifacts remain in the migrated image.
5. Discussion and Conclusions
The potential of using multiple reflections for imaging is very attractive for the depth extrapolation based on two-way wave equations. In this paper, we did some preliminary tests for wave
propagation and imaging based on the two-way depth extrapolators, which can propagate the wavefield components in opposite directions. The downward continuation scheme is equivalent to an initial
value problem, and two initial conditions are needed on the surface to start the propagation. The evanescent waves must be eliminated for stable extrapolation. In laterally inhomogeneous media, the identification and elimination of unwanted waves directly affect the wavefield accuracy and imaging quality. The beamlet propagator extrapolates the wavefield in space-localized phase space. In laterally inhomogeneous media, the evanescent and propagating modes are defined by the local reference velocity in each window, which provides an effective way to improve the accuracy of wave-mode identification. The two-way beamlet propagator is computationally more expensive than the one-way propagator. For migrations in models with sharp boundaries, measures must be taken to eliminate the anticausal events generated when the two-way depth propagators pass through the interfaces. More study is needed to develop a depth extrapolation procedure that can take full advantage of the two-way wave propagator.
The authors thank Xiao-Bi Xie at University of California Santa Cruz for many insightful discussions and Rui Yan for help on the snapshots by finite difference method. The authors would like to thank
the anonymous reviewers for their comments and suggestions. This work is financially supported by WTOPI (Wavelet Transform On Propagation and Imaging for Seismic Exploration) Project at University of
California, Santa Cruz. They also thank National Natural Science Foundation (40730424) and National Science and Technology Major Project of China (under Grants 2011ZX05023-005-009) for funding this
work. Financial support for the first author from China Scholarship Council is also acknowledged.
1. J. F. Claerbout, Imaging the Earth’s Interior, Blackwell Scientific Publications, 1985.
2. K. Sandberg, G. Beylkin, and A. Vassiliou, “Full-wave-equation depth migration using multiple reflections,” SEG Technical Program Expanded Abstracts, vol. 29, no. 1, pp. 3323–3326, 2010.
3. D. D. Kosloff and E. Baysal, “Migration with the full acoustic wave equation,” Geophysics, vol. 48, no. 6, pp. 677–687, 1983.
4. K. Sandberg and G. Beylkin, “Full-wave-equation depth extrapolation for migration,” Geophysics, vol. 74, no. 6, pp. WCA121–WCA128, 2009.
5. J. B. Chen, “On the selection of reference velocities for split-step Fourier and generalized-screen migration methods,” Geophysics, vol. 75, no. 6, pp. S249–S257, 2010.
6. R. S. Wu, Y. Z. Wang, and J. H. Gao, “Beamlet migration based on local perturbation theory,” in Proceedings of the 70th Society of Exploration Geophysics International Convention, pp. 1008–1011, 2000.
7. R. S. Wu, Y. Wang, and M. Luo, “Beamlet migration using local cosine basis,” Geophysics, vol. 73, no. 5, pp. S207–S217, 2008.
8. L. Chen, R. S. Wu, and Y. Chen, “Target-oriented beamlet migration based on Gabor-Daubechies frame decomposition,” Geophysics, vol. 71, no. 2, pp. S37–S52, 2006.
9. R. H. Stolt, “Migration by Fourier transform,” Geophysics, vol. 43, no. 1, pp. 23–48, 1978.
10. M. V. Wickerhauser, Adapted Wavelet Analysis from Theory to Software, AK Peters, 1994. | {"url":"http://www.hindawi.com/journals/ijge/2012/968090/","timestamp":"2014-04-18T18:32:54Z","content_type":null,"content_length":"268218","record_id":"<urn:uuid:d2b82009-78b5-4e12-b640-22a995c9c6dd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic Math Problem
Basic Math Problem
I need help on a question. The question says:
by eliminating Y, find the solution to the simultaneous equations x^2 + y^2 = 25 and y = x - 7.
Thanks in Advance,
Plug x-7 in for y in the first equation so $x^2+(x-7)^2=25$
$x^2+y^2 = 25$
As you've stated y=x-7 so we can sub that in for where we see 'y' in the first equation.
$x^2 + (x-7)^2 = 25 \: , \: x^2 + x^2 - 14x + 49 = 25$
$2x^2 - 14x + 24 = 0$
Thanks for that but I think this is a quadratic as there is space to put two sets of values for X and Y.
can you please work it out and tell me what the answers are so I can check to see if I'm getting the right answers
Thanks again,
You can tell us what you get for answers and we'll tell you if you are right. This forum is to help people not to do it for you
Well you can solve that equation to find x and from there find y:
$2x^2 - 14x + 24 = 0$
Divide by 2: $x^2 - 7x+12 = 0$
I can see that this factorises to $(x-4)(x-3) = 0$
therefore either x-4=0 or x-3=0 and therefore x = 4 or x=3.
We then put these back into y = x-7
y = 4-7 = -3 or y = 3-7 = -4.
Our solutions are therefore x=4, y=-3 or x=3, y=-4. There are two pairs of answers for this
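As a sanity check, the system can be verified in a few lines of sympy (a minimal sketch):

from sympy import symbols, solve, Eq

x, y = symbols('x y')
print(solve([Eq(x**2 + y**2, 25), Eq(y, x - 7)], [x, y]))
# -> [(3, -4), (4, -3)]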
thanks for that, i didn't get anything close to that. I started using the other equation; i can't tell which equation to use when i see the questions.
thanks for the help, | {"url":"http://mathhelpforum.com/algebra/89239-basic-math-problem.html","timestamp":"2014-04-21T02:26:05Z","content_type":null,"content_length":"48696","record_id":"<urn:uuid:c9cca431-8618-41cd-a0cd-a88624ee8cb7>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
distance problem
Re: distance problem
In my opinion the question is awful. I have to make so many assumptions. Now, what the heck is that drawing? Are we supposed to assume that is the xy axis without it being labelled? Am I supposed to assume the graph at the ends is touching the x axis, making A and B roots?
Did they expect you to be able to get the roots to that quartic? Not an easy job without a computer. Do they want the straight line distance between A and B or the Arc length distance?
Now assuming that is the xy axis and A and B are touching it and that squiggly mess is the graph of the function, which it is not. I solve like this:
-x^4 + 5x^3 + 4x^2 + 6x + 8 = 0 has 2 real roots; they are:
x = -1 and x = 5.89102041
So the straight line distance is 6.89102041 which is none of your choices.
The arc length distance between A and B is 327.039 also not one of your choices.
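For what it's worth, the quoted roots and the straight-line distance are easy to verify numerically (a minimal numpy sketch):

import numpy as np

coeffs = [-1, 5, 4, 6, 8]            # -x^4 + 5x^3 + 4x^2 + 6x + 8
r = np.roots(coeffs)
real = sorted(z.real for z in r if abs(z.imag) < 1e-9)
print(real)                          # ~ [-1.0, 5.89102041]
print(real[1] - real[0])             # straight-line distance ~ 6.89102041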
| {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=144896","timestamp":"2014-04-21T14:55:41Z","content_type":null,"content_length":"15606","record_id":"<urn:uuid:9020ac22-0584-4c78-8d5f-3882e59a5b8b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posted by mary on Sunday, December 28, 2008 at 12:12am.
A woman on a bridge 90.0m high sees a raft floating at a constant speed on the river below. She drops a stone from rest in an attempt to hit the raft. The stone is released when the raft has 6.00m
more to travel before passing under the bridge. The stone hits the water 2.00m in front of the raft. Find the speed of the raft.
I made a table with Y:-90m, A:9.8m/s^2, Vo: 0
So since y = Vo t + 1/2at^2
90= 0+ ½(9.8) t^2
T = 4.29
I figured the raft moves 4.00m.
4.00m/4.49s =.89 m/s
But the answer in the back of the book is .932m/s
What did I do wrong?
• Physics - drwls, Sunday, December 28, 2008 at 1:49am
What you did wrong was forget that you had (correctly) calculated the time to reach the water as 4.29 s. You then changed it to 4.49 s.
The raft moves 4 m in the time it takes the stone to fall 90 m. That time T is given by the equation
(g/2)T^2 = 90
T = sqrt(180/g) = 4.29 s
V = 4/4.29 = 0.932 m/s
• Physics - ~christina~, Sunday, December 28, 2008 at 1:50am
The problem is that you divided the distance traveled by the raft by ..4.49s.
I don't know where you got 4.49, but using the original time obtained by you with the time it takes for the rock to hit the water, would give you the answer in the book.
(time it takes for stone to hit water is same time traveled by raft)
• Physics - mary, Friday, January 2, 2009 at 10:09pm
• Physics - Precious, Wednesday, January 18, 2012 at 5:46am
A woman on a bridge 75.0m high sees a raft floating at a constant speed on the river below. She drops a stone from rest in an attempt to hit the raft. The stone is released when the raft has
7.00m more to travel before passing under the bridge. The stone hits the water 4.00m in front of the raft. Find the speed of the raft.
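A quick numeric check of both versions of the problem (a minimal Python sketch; g = 9.8 m/s^2, and the function name is ours):

import math

def raft_speed(height, lead, miss):
    # speed = (lead - miss) / fall time of a stone dropped from rest
    t = math.sqrt(2 * height / 9.8)
    return (lead - miss) / t

print(raft_speed(90.0, 6.0, 2.0))   # ~0.933 m/s (the book rounds t to 4.29 s)
print(raft_speed(75.0, 7.0, 4.0))   # ~0.767 m/s for the 75 m variant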
| {"url":"http://www.jiskha.com/display.cgi?id=1230441164","timestamp":"2014-04-18T09:51:00Z","content_type":null,"content_length":"10170","record_id":"<urn:uuid:5fcd270b-e8cd-45d6-a7c3-60d55f0eef7d>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
can anyone help with Matlab?
What problem are you facing in Matlab?
i have to write a program for this problem... determine the number of terms necessary to approximate cos x to 8 significant figures using the Maclaurin series approximation. use x = 0.3*pi \[\cos x = 1 - \frac{x^{2}}{2!} + \frac{x^{4}}{4!} - \frac{x^{6}}{6!} + \frac{x^{8}}{8!} - \cdots\] i don't know where to begin. I took Matlab 2 summers ago and the school and instructor were horrible, and i'm not just saying this. the instructor stayed on the first 2 chapters (learning the basic functions) for half the semester and barely taught us how to write our own programs. i think she's fired.
Here is matlab that should work:

% cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! - ...
% let y = x*x, so cos(x) = sum over n of (-y)^n / (2n)!
format short
x = 0.3*pi;
y = x*x;
for N = 1:6
    n = 0:N;                          % term indices 0..N
    s1 = (-y).^n ./ factorial(2*n);   % individual series terms
    mac = sum(s1);                    % partial Maclaurin sum
    cx = cos(x);
    str = sprintf('%d terms. series: %12.10f cos(x): %12.10f\n %12.10f', ...
        N, mac, cx, (cx - mac));
    disp(str);
end
Another approach: differentiate cos(t) iteratively and evaluate at t = 0: y = cos(t); loop { y = diff(y, t); subs(y, t, 0) }; find the coefficients ... use the usual method.
@phi thx, did you solve the problem to 10 sig figures?
It prints out more than 8. I'll leave it to you to count the digits
ok, how do i format the number of digits? so far i was able to find the commands "short", "long", but not exact number of digits i want
try running the program. the sprintf statement formats using the C style formatting %12.10f means use 12 places (includes decimal point), with 10 places to the right of the decimal
ok, i'll try thx
Although be careful, because I can't count! where it says it is using n terms, it is really using n+1 terms, counting the initial 1
i'm actually really confused and writing everything out on paper to see where everything fits. why is y = x*x? if s1 is supposed to equal the explicit form of the mac series, why is it (-y)^n instead of (-1)^n, and where's x^(2n)? i'm comparing to the explicit form: \[\cos x=\sum_{n=0}^{\infty} \frac{ (-1)^n }{ (2n)! } x^{2n}\]
you do know \( x^{2n} = (x^2)^n \), right? and \( (-1)^n \cdot (x^2)^n= (-x^2)^n \)
ok thx
It just makes the code a little bit simpler (or maybe just a little bit more indecipherable).
ok, and back to the number of decimal places...how would changing the %12.10f change anything since it's after %?
See above for the explanation. %12.0f would print out up to 11 digits plus the decimal point with 0 of the digits to the right of the point. It is easier to play with it.
it worked. thx again for the help
| {"url":"http://openstudy.com/updates/5036a518e4b00dda45ec6467","timestamp":"2014-04-17T06:59:41Z","content_type":null,"content_length":"67264","record_id":"<urn:uuid:c23f5e4b-35db-48ab-be4f-225eb04cbbda>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: Fwd: high frequency data with panel structure
From Kit Baum <baum@bc.edu>
To statalist@hsphsun2.harvard.edu
Subject st: Fwd: high frequency data with panel structure
Date Tue, 16 May 2006 07:48:57 -0400
For some reason this message does not appear on the digest.
Kit Baum, Boston College Economics
Begin forwarded message:
From: Kit Baum <baum@bc.edu>
Date: May 15, 2006 12:09:59 PM EDT
To: statalist@hsphsun2.harvard.edu
Subject: re: high frequency data with panel structure
Sebastien said
Thanks for your reply. Actually the individuals are banks, hence to incorporate
cash taker effects I wanted to run a panel regression to take in this specific
effect. The data are interbank transactions - some are on a secured basis
some are not. To look at the interest rate effect of different collaterals
provided & maturity chosen I thought a panel data analysis to be the best.
I already thought of changing the strucutre of the data into hours, for instance.
Unfortunately I wasn't able to break it down to a lower level than one day
in Stata.
This sort of data (e.g. trades, which may occur none, one or many times per day in a given session) is not panel data. It is repeated measures data. Think of an intensive care ward where some
patients' vital signs are monitored more frequently than others. We may have 50 BP measurements for patient A and 10 BP measurements for patient B over the same calendar day. You do want to note
that many of the measurements come from the same unit (bank, patient etc.) but you can do that with Stata's xt* commands using "iis". A command like xtreg will allow you to specify i(banknr), and
run a fixed-effect model in which you demean each bank's transactions without the necessity of evenly spaced time units from which the measurements are generated. Of course, you could also add
temporal variables to the analysis--dummies for each trading day in your sample, or for the day of week (Monday dummy, etc.)
Kit Baum, Boston College Economics
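For readers outside Stata, the demeaning idea is easy to illustrate; here is a minimal Python sketch with made-up data and column names (an illustration of the fixed-effect logic only, not a substitute for xtreg):

import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    'bank':     ['A', 'A', 'A', 'B', 'B'],
    'rate':     [3.10, 3.25, 3.05, 2.90, 2.95],
    'secured':  [1, 0, 1, 0, 1],
    'maturity': [1, 7, 1, 30, 7],
})
# demean every variable within bank, then run pooled OLS on the residuals
demeaned = df.groupby('bank').transform(lambda s: s - s.mean())
X = sm.add_constant(demeaned[['secured', 'maturity']])
print(sm.OLS(demeaned['rate'], X).fit().params)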
| {"url":"http://www.stata.com/statalist/archive/2006-05/msg00510.html","timestamp":"2014-04-18T18:53:18Z","content_type":null,"content_length":"7731","record_id":"<urn:uuid:46956a7b-abcd-4c3a-be7d-a607ef544cad>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
More Math Functions
Current spec proposal:
• Candidate Library Extensions Spec 3/28/2011
doc pdf
Older assets
At the July 2011 TC39 meeting, we reviewed the spreadsheet below and the recommendation it contained for the new Math libraries to include with ES6. The word and pdf documents linked below include
candidate spec text for this recommendation, as well as the Number and String extensions.
• Spreadsheet comparing ECMAScript Math functions to other platforms
• Candidate Library Extensions Spec 11/15/2011
doc pdf
There were a few issues raised during the July discussion - some of which are addressed in this spec text. Remaining open issues include:
• Which (or both) of Java Sign and C signbit should be included? Proposal: Sign.
• Should deg2rad/rad2deg be included? Proposal: No.
• Request for randomInt(n)
• Should gamma and erf be included?
At the May 2011 meeting TC39 agreed that it would be appropriate to add missing Math and Number functions. The actual functions to be added still need to be determined. The following inputs will be used in selecting the actual functions to include: | {"url":"http://wiki.ecmascript.org/doku.php?id=harmony:more_math_functions","timestamp":"2014-04-16T21:58:18Z","content_type":null,"content_length":"14374","record_id":"<urn:uuid:728dc883-b32f-42cc-9842-20e9f60d3e60>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
On teaching mathematics
by V.I. Arnold
This is an extended text of the address at the discussion on teaching of mathematics in Palais de Découverte in Paris on 7 March 1997.
Mathematics is a part of physics. Physics is an experimental science, a part of natural science. Mathematics is the part of physics where experiments are cheap.
The Jacobi identity (which forces the heights of a triangle to cross at one point) is an experimental fact in the same way as that the Earth is round (that is, homeomorphic to a ball). But it can be
discovered with less expense.
In the middle of the twentieth century it was attempted to divide physics and mathematics. The consequences turned out to be catastrophic. Whole generations of mathematicians grew up without knowing
half of their science and, of course, in total ignorance of any other sciences. They first began teaching their ugly scholastic pseudo-mathematics to their students, then to schoolchildren
(forgetting Hardy's warning that ugly mathematics has no permanent place under the Sun).
Since scholastic mathematics that is cut off from physics is fit neither for teaching nor for application in any other science, the result was the universal hate towards mathematicians - both on the
part of the poor schoolchildren (some of whom in the meantime became ministers) and of the users.
The ugly building, built by undereducated mathematicians who were exhausted by their inferiority complex and who were unable to make themselves familiar with physics, reminds one of the rigorous
axiomatic theory of odd numbers. Obviously, it is possible to create such a theory and make pupils admire the perfection and internal consistency of the resulting structure (in which, for example,
the sum of an odd number of terms and the product of any number of factors are defined). From this sectarian point of view, even numbers could either be declared a heresy or, with passage of time, be
introduced into the theory supplemented with a few "ideal" objects (in order to comply with the needs of physics and the real world).
Unfortunately, it was an ugly twisted construction of mathematics like the one above which predominated in the teaching of mathematics for decades. Having originated in France, this pervertedness
quickly spread to teaching of foundations of mathematics, first to university students, then to school pupils of all lines (first in France, then in other countries, including Russia).
To the question "what is 2 + 3" a French primary school pupil replied: "3 + 2, since addition is commutative". He did not know what the sum was equal to and could not even understand what he was
asked about!
Another French pupil (quite rational, in my opinion) defined mathematics as follows: "there is a square, but that still has to be proved".
Judging by my teaching experience in France, the university students' idea of mathematics (even of those taught mathematics at the École Normale Supérieure - I feel sorry most of all for these
obviously intelligent but deformed kids) is as poor as that of this pupil.
For example, these students have never seen a paraboloid and a question on the form of the surface given by the equation xy = z^2 puts the mathematicians studying at ENS into a stupor. Drawing a
curve given by parametric equations (like x = t^3 - 3t, y = t^4 - 2t^2) on a plane is a totally impossible problem for students (and, probably, even for most French professors of mathematics).
Beginning with l'Hospital's first textbook on calculus ("calculus for understanding of curved lines") and roughly until Goursat's textbook, the ability to solve such problems was considered to be
(along with the knowledge of the times table) a necessary part of the craft of every mathematician.
Mentally challenged zealots of "abstract mathematics" threw all the geometry (through which connection with physics and reality most often takes place in mathematics) out of teaching. Calculus
textbooks by Goursat, Hermite, Picard were recently dumped by the student library of the Universities Paris 6 and 7 (Jussieu) as obsolete and, therefore, harmful (they were only rescued by my intervention).
ENS students who have sat through courses on differential and algebraic geometry (read by respected mathematicians) turned out to be acquainted neither with the Riemann surface of an elliptic curve y^2
= x^3 + ax + b nor, in fact, with the topological classification of surfaces (not even mentioning elliptic integrals of first kind and the group property of an elliptic curve, that is, the Euler-Abel
addition theorem). They were only taught Hodge structures and Jacobi varieties!
How could this happen in France, which gave the world Lagrange and Laplace, Cauchy and Poincaré, Leray and Thom? It seems to me that a reasonable explanation was given by I.G. Petrovskii, who taught
me in 1966: genuine mathematicians do not gang up, but the weak need gangs in order to survive. They can unite on various grounds (it could be super-abstractness, anti-Semitism or "applied and
industrial" problems), but the essence is always a solution of the social problem - survival in conditions of more literate surroundings.
By the way, I shall remind you of a warning of L. Pasteur: there never have been and never will be any "applied sciences", there are only applications of sciences (quite useful ones!).
In those times I was treating Petrovskii's words with some doubt, but now I am being more and more convinced of how right he was. A considerable part of the super-abstract activity comes down simply
to industrialising shameless grabbing of discoveries from discoverers and then systematically assigning them to epigons-generalizers. Similarly to the fact that America does not carry Columbus's
name, mathematical results are almost never called by the names of their discoverers.
In order to avoid being misquoted, I have to note that my own achievements were for some unknown reason never expropriated in this way, although it always happened to both my teachers (Kolmogorov,
Petrovskii, Pontryagin, Rokhlin) and my pupils. Prof. M. Berry once formulated the following two principles:
The Arnold Principle. If a notion bears a personal name, then this name is not the name of the discoverer.
The Berry Principle. The Arnold Principle is applicable to itself.
Let's return, however, to teaching of mathematics in France.
When I was a first-year student at the Faculty of Mechanics and Mathematics of the Moscow State University, the lectures on calculus were read by the set-theoretic topologist L.A. Tumarkin, who
conscientiously retold the old classical calculus course of French type in the Goursat version. He told us that integrals of rational functions along an algebraic curve can be taken if the
corresponding Riemann surface is a sphere and, generally speaking, cannot be taken if its genus is higher, and that for the sphericity it is enough to have a sufficiently large number of double
points on the curve of a given degree (which forces the curve to be unicursal: it is possible to draw its real points on the projective plane with one stroke of a pen).
These facts capture the imagination so much that (even given without any proofs) they give a better and more correct idea of modern mathematics than whole volumes of the Bourbaki treatise. Indeed,
here we find out about the existence of a wonderful connection between things which seem to be completely different: on the one hand, the existence of an explicit expression for the integrals and the
topology of the corresponding Riemann surface and, on the other hand, between the number of double points and genus of the corresponding Riemann surface, which also exhibits itself in the real domain
as the unicursality.
Jacobi noted, as mathematics' most fascinating property, that in it one and the same function controls both the presentations of a whole number as a sum of four squares and the real movement of a
These discoveries of connections between heterogeneous mathematical objects can be compared with the discovery of the connection between electricity and magnetism in physics or with the discovery of
the similarity between the east coast of America and the west coast of Africa in geology.
The emotional significance of such discoveries for teaching is difficult to overestimate. It is they who teach us to search and find such wonderful phenomena of harmony of the Universe.
The de-geometrisation of mathematical education and the divorce from physics sever these ties. For example, not only students but also modern algebro-geometers on the whole do not know about the
Jacobi fact mentioned here: an elliptic integral of first kind expresses the time of motion along an elliptic phase curve in the corresponding Hamiltonian system.
Rephrasing the famous words on the electron and atom, it can be said that a hypocycloid is as inexhaustible as an ideal in a polynomial ring. But teaching ideals to students who have never seen a
hypocycloid is as ridiculous as teaching addition of fractions to children who have never cut (at least mentally) a cake or an apple into equal parts. No wonder that the children will prefer to add a
numerator to a numerator and a denominator to a denominator.
From my French friends I heard that the tendency towards super-abstract generalizations is their traditional national trait. I do not entirely disagree that this might be a question of a hereditary
disease, but I would like to underline the fact that I borrowed the cake-and-apple example from Poincaré.
The scheme of construction of a mathematical theory is exactly the same as that in any other natural science. First we consider some objects and make some observations in special cases. Then we try
and find the limits of application of our observations, look for counter-examples which would prevent unjustified extension of our observations onto a too wide range of events (example: the number of
partitions of consecutive odd numbers 1, 3, 5, 7, 9 into an odd number of natural summands gives the sequence 1, 2, 4, 8, 16, but then comes 29).
As a result we formulate the empirical discovery that we made (for example, the Fermat conjecture or Poincaré conjecture) as clearly as possible. After this there comes the difficult period of
checking how reliable the conclusions are.
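The partition example above is easy to verify by machine; a minimal sympy sketch:

from sympy.utilities.iterables import partitions

def odd_part_count(n):
    # partitions of n with an odd number of summands
    return sum(1 for p in partitions(n) if sum(p.values()) % 2 == 1)

print([odd_part_count(n) for n in (1, 3, 5, 7, 9, 11)])
# -> [1, 2, 4, 8, 16, 29]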
At this point a special technique has been developed in mathematics. This technique, when applied to the real world, is sometimes useful, but can sometimes also lead to self-deception. This technique
is called modelling. When constructing a model, the following idealisation is made: certain facts which are only known with a certain degree of probability or with a certain degree of accuracy, are
considered to be "absolutely" correct and are accepted as "axioms". The sense of this "absoluteness" lies precisely in the fact that we allow ourselves to use these "facts" according to the rules of
formal logic, in the process declaring as "theorems" all that we can derive from them.
It is obvious that in any real-life activity it is impossible to wholly rely on such deductions. The reason is at least that the parameters of the studied phenomena are never known absolutely exactly
and a small change in parameters (for example, the initial conditions of a process) can totally change the result. Say, for this reason a reliable long-term weather forecast is impossible and will
remain impossible, no matter how much we develop computers and devices which record initial conditions.
In exactly the same way a small change in axioms (of which we cannot be completely sure) is capable, generally speaking, of leading to completely different conclusions than those that are obtained
from theorems which have been deduced from the accepted axioms. The longer and fancier is the chain of deductions ("proofs"), the less reliable is the final result.
Complex models are rarely useful (unless for those writing their dissertations).
The mathematical technique of modelling consists of ignoring this trouble and speaking about your deductive model in such a way as if it coincided with reality. The fact that this path, which is
obviously incorrect from the point of view of natural science, often leads to useful results in physics is called "the inconceivable effectiveness of mathematics in natural sciences" (or "the Wigner
Here we can add a remark by I.M. Gel'fand: there exists yet another phenomenon which is comparable in its inconceivability with the inconceivable effectiveness of mathematics in physics noted by
Wigner - this is the equally inconceivable ineffectiveness of mathematics in biology.
"The subtle poison of mathematical education" (in F. Klein's words) for a physicist consists precisely in that the absolutised model separates from the reality and is no longer compared with it. Here
is a simple example: mathematics teaches us that the solution of the Malthus equation dx/dt = x is uniquely defined by the initial conditions (that is that the corresponding integral curves in the
(t,x)-plane do not intersect each other). This conclusion of the mathematical model bears little relevance to the reality. A computer experiment shows that all these integral curves have common
points on the negative t-semi-axis. Indeed, say, curves with the initial conditions x(0) = 0 and x(0) = 1 practically intersect at t = -10 and at t = -100 you cannot fit in an atom between them.
Properties of the space at such small distances are not described at all by Euclidean geometry. Application of the uniqueness theorem in this situation obviously exceeds the accuracy of the model.
This has to be respected in practical application of the model, otherwise one might find oneself faced with serious troubles.
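The numbers here are easy to check (a minimal sketch):

import math

for t in (-10, -100):
    # gap between the curves x(t) = e^t (x(0) = 1) and x(t) = 0 at time t
    print(t, math.exp(t))
# e^-10 ~ 4.5e-5; e^-100 ~ 3.7e-44 -- far below atomic scales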
I would like to note, however, that the same uniqueness theorem explains why the closing stage of mooring of a ship to the quay is carried out manually: on steering, if the velocity of approach would
have been defined as a smooth (linear) function of the distance, the process of mooring would have required an infinitely long period of time. An alternative is an impact with the quay (which is
damped by suitable non-ideally elastic bodies). By the way, this problem had to be seriously confronted on landing the first descending apparata on the Moon and Mars and also on docking with space
stations - here the uniqueness theorem is working against us.
Unfortunately, neither such examples, nor discussing the danger of fetishising theorems are to be met in modern mathematical textbooks, even in the better ones. I even got the impression that
scholastic mathematicians (who have little knowledge of physics) believe in the principal difference of the axiomatic mathematics from modelling which is common in natural science and which always
requires the subsequent control of deductions by an experiment.
Not even mentioning the relative character of initial axioms, one cannot forget about the inevitability of logical mistakes in long arguments (say, in the form of a computer breakdown caused by
cosmic rays or quantum oscillations). Every working mathematician knows that if one does not control oneself (best of all by examples), then after some ten pages half of all the signs in formulae
will be wrong and twos will find their way from denominators into numerators.
The technology of combatting such errors is the same external control by experiments or observations as in any experimental science and it should be taught from the very beginning to all juniors in
Attempts to create "pure" deductive-axiomatic mathematics have led to the rejection of the scheme used in physics (observation - model - investigation of the model - conclusions - testing by
observations) and its substitution by the scheme: definition - theorem - proof. It is impossible to understand an unmotivated definition but this does not stop the criminal
algebraists-axiomatisators. For example, they would readily define the product of natural numbers by means of the long multiplication rule. With this the commutativity of multiplication becomes
difficult to prove but it is still possible to deduce it as a theorem from the axioms. It is then possible to force poor students to learn this theorem and its proof (with the aim of raising the
standing of both the science and the persons teaching it). It is obvious that such definitions and such proofs can only harm the teaching and practical work.
It is only possible to understand the commutativity of multiplication by counting and re-counting soldiers by ranks and files or by calculating the area of a rectangle in the two ways. Any attempt to
do without this interference by physics and reality into mathematics is sectarianism and isolationism which destroy the image of mathematics as a useful human activity in the eyes of all sensible
I shall open a few more such secrets (in the interest of poor students).
The determinant of a matrix is an (oriented) volume of the parallelepiped whose edges are its columns. If the students are told this secret (which is carefully hidden in the purified algebraic
education), then the whole theory of determinants becomes a clear chapter of the theory of poly-linear forms. If determinants are defined otherwise, then any sensible person will forever hate all the
determinants, Jacobians and the implicit function theorem.
What is a group? Algebraists teach that this is supposedly a set with two operations that satisfy a load of easily-forgettable axioms. This definition provokes a natural protest: why would any
sensible person need such pairs of operations? "Oh, curse this maths" - concludes the student (who, possibly, becomes the Minister for Science in the future).
We get a totally different situation if we start off not with the group but with the concept of a transformation (a one-to-one mapping of a set onto itself) as it was historically. A collection of
transformations of a set is called a group if along with any two transformations it contains the result of their consecutive application and an inverse transformation along with every transformation.
This is all the definition there is. The so-called "axioms" are in fact just (obvious) properties of groups of transformations. What axiomatisators call "abstract groups" are just groups of
transformations of various sets considered up to isomorphisms (which are one-to-one mappings preserving the operations). As Cayley proved, there are no "more abstract" groups in the world. So why do
the algebraists keep on tormenting students with the abstract definition?
By the way, in the 1960s I taught group theory to Moscow schoolchildren. Avoiding all the axiomatics and staying as close as possible to physics, in half a year I got to the Abel theorem on the
unsolvability of a general equation of degree five in radicals (having on the way taught the pupils complex numbers, Riemann surfaces, fundamental groups and monodromy groups of algebraic functions).
This course was later published by one of the audience, V. Alekseev, as the book The Abel theorem in problems.
What is a smooth manifold? In a recent American book I read that Poincaré was not acquainted with this (introduced by himself) notion and that the "modern" definition was only given by Veblen in the
late 1920s: a manifold is a topological space which satisfies a long series of axioms.
For what sins must students try and find their way through all these twists and turns? Actually, in Poincaré's Analysis Situs there is an absolutely clear definition of a smooth manifold which is
much more useful than the "abstract" one.
A smooth k-dimensional submanifold of the Euclidean space R^N is its subset which in a neighbourhood of its every point is a graph of a smooth mapping of R^k into R^(N - k) (where R^k and R^(N - k)
are coordinate subspaces). This is a straightforward generalization of most common smooth curves on the plane (say, of the circle x^2 + y^2 = 1) or curves and surfaces in the three-dimensional space.
Between smooth manifolds smooth mappings are naturally defined. Diffeomorphisms are mappings which are smooth, together with their inverses.
An "abstract" smooth manifold is a smooth submanifold of a Euclidean space considered up to a diffeomorphism. There are no "more abstract" finite-dimensional smooth manifolds in the world (Whitney's
theorem). Why do we keep on tormenting students with the abstract definition? Would it not be better to prove them the theorem about the explicit classification of closed two-dimensional manifolds
It is this wonderful theorem (which states, for example, that any compact connected oriented surface is a sphere with a number of handles) that gives a correct impression of what modern mathematics
is and not the super-abstract generalizations of naive submanifolds of a Euclidean space which in fact do not give anything new and are presented as achievements by the axiomatisators.
The theorem of classification of surfaces is a top-class mathematical achievement, comparable with the discovery of America or X-rays. This is a genuine discovery of mathematical natural science and
it is even difficult to say whether the fact itself is more attributable to physics or to mathematics. In its significance for both the applications and the development of correct Weltanschauung it
by far surpasses such "achievements" of mathematics as the proof of Fermat's last theorem or the proof of the fact that any sufficiently large whole number can be represented as a sum of three prime
For the sake of publicity modern mathematicians sometimes present such sporting achievements as the last word in their science. Understandably this not only does not contribute to the society's
appreciation of mathematics but, on the contrary, causes a healthy distrust of the necessity of wasting energy on (rock-climbing-type) exercises with these exotic questions needed and wanted by no
The theorem of classification of surfaces should have been included in high school mathematics courses (probably, without the proof) but for some reason is not included even in university mathematics
courses (from which in France, by the way, all the geometry has been banished over the last few decades).
The return of mathematical teaching at all levels from the scholastic chatter to presenting the important domain of natural science is an especially hot problem for France. I was astonished that all
the best and most important in methodical approach mathematical books are almost unknown to students here (and, seems to me, have not been translated into French). Among these are Numbers and figures
by Rademacher and Töplitz, Geometry and the imagination by Hilbert and Cohn-Vossen, What is mathematics? by Courant and Robbins, How to solve it and Mathematics and plausible reasoning by Polya,
Development of mathematics in the 19th century by F. Klein.
I remember well what a strong impression the calculus course by Hermite (which does exist in a Russian translation!) made on me in my school years.
Riemann surfaces appeared in it, I think, in one of the first lectures (all the analysis was, of course, complex, as it should be). Asymptotics of integrals were investigated by means of path
deformations on Riemann surfaces under the motion of branching points (nowadays, we would have called this the Picard-Lefschetz theory; Picard, by the way, was Hermite's son-in-law - mathematical
abilities are often transferred by sons-in-law: the dynasty Hadamard - P. Levy - L. Schwarz - U. Frisch is yet another famous example in the Paris Academy of Sciences).
The "obsolete" course by Hermite of one hundred years ago (probably, now thrown away from student libraries of French universities) was much more modern than those most boring calculus textbooks with
which students are nowadays tormented.
If mathematicians do not come to their senses, then the consumers who preserved a need in a modern, in the best meaning of the word, mathematical theory as well as the immunity (characteristic of any
sensible person) to the useless axiomatic chatter will in the end turn down the services of the undereducated scholastics in both the schools and the universities.
A teacher of mathematics, who has not got to grips with at least some of the volumes of the course by Landau and Lifshitz, will then become a relict like the one nowadays who does not know the
difference between an open and a closed set.
V.I. Arnold
Translated by A.V. GORYUNOV
Published in: Uspekhi Mat. Nauk 53 (1998), no. 1, 229—234;
English translation: Russian Math. Surveys 53 (1998), no. 1, 229—236.
Source of this text: | {"url":"http://pauli.uni-muenster.de/~munsteg/arnold.html","timestamp":"2014-04-18T21:25:20Z","content_type":null,"content_length":"26432","record_id":"<urn:uuid:892a85d4-79c9-4e19-85bd-055c73b99ff2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fundamentals of Physics/Vectors
A vector is a quantity that represents both magnitude and direction.
Vectors are normally represented by the ordered pair ${v} = (v_x,\ v_y)$ or, when dealing with three dimensions, the tuple ${v} = (v_x,\ v_y,\ v_z)$. When written in this fashion, each component represents the quantity along a given axis.
The following formulas are important with vectors:
$v_x = \left\|\mathbf{v}\right\| \cos{\theta}$
$v_y = \left\|\mathbf{v}\right\| \sin{\theta}$
$\theta = \tan^{-1}(\frac{v_y}{v_x})\,\!$
Addition and subtraction
Addition is performed by adding the components of the vector. For example, c = a + b is seen as:
${c} = (a_x + b_x,\ a_y + b_y)$
With subtraction, invert the sign of the second vector's components.
${c} = (a_x - b_x,\ a_y - b_y)$
Multiplication (Scalar)
The components of the vector are multiplied by the scalar:
$s * {v} = (s*v_x,\ s*v_y)$
While some domains may permit division of vectors by vectors, such operations in physics are undefined. It is only possible to divide a vector by a scalar.
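The operations above translate directly into code; a minimal Python sketch (the Vec type and method names are illustrative):

import math
from dataclasses import dataclass

@dataclass
class Vec:
    x: float
    y: float

    def __add__(self, o): return Vec(self.x + o.x, self.y + o.y)
    def __sub__(self, o): return Vec(self.x - o.x, self.y - o.y)
    def scale(self, s):   return Vec(s * self.x, s * self.y)
    def norm(self):       return math.hypot(self.x, self.y)
    def angle(self):      return math.atan2(self.y, self.x)  # quadrant-aware arctan

v = Vec(3.0, 4.0)
print(v.norm(), math.degrees(v.angle()))   # 5.0, ~53.13 degrees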
| {"url":"http://en.m.wikibooks.org/wiki/Fundamentals_of_Physics/Vectors","timestamp":"2014-04-16T10:46:37Z","content_type":null,"content_length":"16967","record_id":"<urn:uuid:30dbbb9e-5004-4785-bc51-9b0ad8b6e536>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Real-World Economics Of High-Performance Drilling
Article From: 7/15/2003 Modern Machine Shop, Kirk Gordon, Larry Littleson
Two things can be said about high-performance carbide drills:
1. They drill better and faster, and last longer, than high speed or traditional carbide drills.
2. They’re much more expensive than high speed or traditional carbide drills.
Do the benefits justify the cost? Not always. And not always in a way that can be seen just by analyzing short-term costs. Evaluating the economics of a high-performance drill involves a range of
different factors.
This article examines those factors. A simple but realistic example of drilling work serves as a model. While the numbers used to analyze this job will be based on approximations and assumptions that
may not apply directly to a particular shop or application, the logic of the analysis can be applied to a wide variety of drilling work. What this analysis shows is that the cost effectiveness of a
given drill may be determined by factors that are far removed from the tool’s initial price.
The Job
The analysis will be based on the simple drilling job pictured below. Here are the specifics:
• The workpiece is made of some reasonably machinable steel, and it is run on a single machining center.
• The production rate for all of the work on this part is 15 pieces per hour, real net average.
• The piece needs four drilled holes, each 0.531 inch in diameter and 1.250 inch deep.
• The job runs two shifts for a total of 16 hours per day, 5 days per week.
• The shop runs at a flat rate of $60 per hour including machine costs, labor and overhead.
Basic Drill Performance
We begin by looking at a basic measure of drill performance: the cutting rate. The two tools compared are a typical, good quality high speed steel drill and a high-performance carbide drill. The
numbers shown for speeds and feed rates are based on manufacturer recommendations combined with sound common practice and experience.
Of course the carbide drill is faster. If cutting times are translated into cost, the results look like this:
Basic Drill Cost
Now let’s add some other important information, such as the costs of the drills and their expected cutting life.
The carbide drill cuts 2.5 times faster than the HSS drill and lasts 5 times as long, but it’s also nearly 18 times more expensive. As a result, the basic cost of drilling a hole remains high. But
one thing that’s not yet a part of our calculations is the drill’s cost and performance after its first life cycle.
Drill Maintenance
The table below shows the basic costs of resharpening the drills. It’s assumed that a standard high speed twist drill can be sharpened in house using any of a variety of relatively inexpensive tool
and cutter grinders. The shop rate of $60 per hour is used here. Average sharpening time is assumed to be 5 minutes.
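Under those assumptions, each in-house resharpening of a high speed steel drill costs roughly (5/60) hour × $60 per hour = $5.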
High-performance carbide drills cannot be sharpened using ordinary drill grinders or grinding fixtures. Part of why these drills work so well results from the innovative shapes and geometries on
their points. These complex points demand higher levels of grinding precision.
The prices below are typical of sources known to be capable of producing the high-quality points. Coating is a cost that must also be included.
Real Operating Costs
The resharpening cost numbers assume that each drill can be sharpened 10 times before it becomes unfit for continued use. If we combine the cost of buying a drill with the cost of all of its
resharpenings, and then consider the long-term overall performance of both types of tools, we ought to get a more accurate view of the true costs involved.
The table below looks at 6 months (25 weeks) of production. Using assumptions already listed, that works out to 120,000 holes.
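That figure follows directly from the assumptions above: 15 pieces per hour × 4 holes per piece × 16 hours per day × 5 days per week × 25 weeks = 120,000 holes.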
Inventory Costs
Now the difference becomes clear. Using high-performance carbide drills and a competent resharpening source can reduce the cost of drilling this part by almost 40 percent. However, the analysis is
still incomplete. We assumed that carbide drills will be sent outside for resharpening, but what do we use for drills while we are waiting for the resharpened drills to come back? The answer, of
course, is more drills. How many more will depend on the turnaround time for resharpening.
At the production rate, number of holes, and drill life we’ve been using, we will wear out roughly two drills per work day. That means we will need two additional drills for every day we wait for
regrinding. The number of drills needed to fill this pipeline can be substantial. As the chart below makes clear, controlling the delivery time for resharpening is vital.
Cost Control
One way to gain control over resharpening work, and at the same time reduce its cost, is to do the work in house. Since ordinary drill sharpening equipment won’t do the job, new equipment will be
needed. One example of the kind of equipment that may be needed is pictured on the facing page. CNC machines may be expensive, but in some cases outside regrinding will be more expensive still.
Assume the $60 shop rate we’ve been using all along. That rate includes machine payment, labor and overhead costs. Assume also a grind rate of 10 drills per hour. The table shows the cumulative
savings for the 10 resharpenings in the life of a drill.
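At that grind rate, the implied in-house cost is $60 per hour ÷ 10 drills per hour = $6 per resharpening, with labor, machine payment and overhead already included in the shop rate.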
Final Analysis
The proposed in-house program saves $12 for every resharpening, or $120 over the life of each drill. If these savings are included in the Long-Term Operating Cost Comparison (page 85), then the total
drilling cost for the job drops to $15,099. This represents a savings of more than $2,600 compared to outside resharpening, and more than $14,000 compared to the cost of using high speed steel drills.
Keep in mind, these savings apply to just one drill, one job and one 6-month period. For a typical production plant that has even a small number of drilling machines or machining centers, the
potential savings stand to be even larger.
About the authors: Kirk Gordon is president of Gordon Engineering (Audubon, Pennsylvania), a maker of CNC drill point grinders. Larry Littleson is president of AWD Associates (Clarkston, Michigan), a
Gordon Engineering dealer and also a provider of drill resharpening, reconditioning and recoating services.
{"url":"http://www.mmsonline.com/articles/the-real-world-economics-of-high-performance-drilling","timestamp":"2014-04-16T19:37:42Z","content_type":null,"content_length":"35690","record_id":"<urn:uuid:7ad18888-963f-497a-83ef-e205f1fd7827>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Consider The Circuit Shown In The Figure Below, ... | Chegg.com
Attach picture as well please.
Image text transcribed for accessibility: Consider the circuit shown in the figure below, consisting of a realistic inductor and capacitor in series with parasitic resistances placed in parallel. Compute the transfer function H(s) = V_C(s)/V_in(s). First, transform the elements in parallel with resistors into elements in series with resistors. Note that for a reactance X and resistance R in parallel, the local quality factor Q = |X|/|R|. Then, define the series combination of R_L and R_C to be R_eq. Be sure to compute the correct transfer function when you are done. Assume R_L = 100 kOhm, R_C = 10 kOhm, L = 100 mH, undamped resonant frequency 3162 rad/s, and bandwidth of 109 rad/s. Determine C, R_eq, the circuit quality factor Q_cir, and the maximum voltage gain H_m. Find the EXACT and the approximate values (use the approximation formulas) of the upper and lower half-power frequencies. Determine the percent error using the exact as the reference value. Plot the magnitude response using MATLAB or equivalent. Use the symbolic toolbox in MATLAB to compute the impulse and step responses of the transfer function for the circuit of problem 42. Turn in your COMMENTED code (you may write on your printout) and output from your MATLAB program.
Electrical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/consider-circuit-shown-figure-consisting-realistic-inductor-capacitor-series-parasitic-res-q4442006","timestamp":"2014-04-16T23:15:28Z","content_type":null,"content_length":"19672","record_id":"<urn:uuid:0b603e86-c137-49d5-9cbb-a62aa8e6636c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polyhedra Volume Calculations - A JavaScript Implementation
Calculating the volume of regular and irregular polyhedra is an important task in nearly all branches of science and technology, including engineering, mathematics, computer science, and
bioinformatics. This article provides a brief overview of the problem, and presents a JavaScript implementation that allows the volumes of arbitrary polyhedra to be calculated using a form contained
in a Web page. This JavaScript implementation allows a user to easily calculate the volumes of more interesting, unusual, and challenging polyhedra.
Many practical applications of science and technology (including engineering, mathematics, computer science, and bioinformatics) require calculating the volumes of regular and irregular polyhedra.
For example, an engineer may be required to calculate the volume of a manufactured component, a mathematician may want to calculate the volume contained inside an experimentally determined surface, a
computer scientist may require the volume of a computer generated cloud, or a bioinformatician may need to determine the volume of a molecule displayed by a three-dimensional graphics system.
A polyhedron is a three dimensional shape enclosed by a finite number of faces. Each face is formed by a polygon. The faces meet along straight-line segments called edges, and the edges meet at
points called vertices. A polyhedron with identical vertices and congruent faces is called a regular polyhedron. The Platonic solids are five convex, regular polyhedra: the four-faced regular
tetrahedron, the six-faced cube, the eight faced octahedron, the twelve-faced dodecahedron, and the twenty-faced icosahedron. Any polyhedron which is not regular is said to be irregular. A simple
example of an irregular polyhedron is a cube with one corner sliced off.
As a student of elementary mathematics, algebra, or geometry, you were probably taught to apply standard formulas to calculate the volumes of simple polyhedra such as the cube (V = s^3), the
rectangular solid (V = lwh), and the tetrahedron (V = (√2/12)s^3). When faced with the problem of finding the volume of a more complicated polyhedron that did not match one of these simple shapes,
you were probably taught to decompose that complicated polyhedron into two or more simpler objects whose volumes could then be found using standard formulas.
The following screen shot displays a wireframe image of an irregular polyhedron. The base of the polyhedron is an irregular five-sided polygon, and each of that polygon's vertices connect to one of
two other vertices near the top of the polyhedron. How would you go about finding the volume of this irregular polyhedron?
A non-intuitive approach to finding the volume of an arbitrary polyhedron is to decompose the polyhedron into a collection of pyramids that share a common peak (located inside or on the surface of
the polyhedron). Then you can accumulate the volumes of those pyramids to find the volume of the original polyhedron. In order to apply this technique, the volume of each pyramid must be found. The
volume of a general pyramid is bh/3, where b is the area of the base and h is the perpendicular distance from the base to the peak. The problem thus reduces to finding the area of each base and the
height of each pyramid. The area of each base is the area of the corresponding face, and that area may be found in terms of the face vertex coordinates.
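Stated as a single formula, the decomposition gives V = Σ (b_i × h_i)/3, where the sum runs over all faces, b_i is the area of face i, and h_i is the perpendicular distance from face i to the chosen common peak.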
In all but the simplest problems, the procedure ultimately requires identifying the vertices defining the faces of the polyhedron, selecting an appropriate common point to represent the peak of each
pyramid, using the vertices to find the area of each base, and evaluating the height of each pyramid. When this procedure is performed by hand, ample opportunities arise to unintentionally introduce
computational errors and arrive at an incorrect answer. However, if the vertex coordinates are readily available, the process is simplified because it is then much easier to calculate the area of
each face and the required height.
Three Elementary Problems
To set the stage for the JavaScript computations presented in this paper, it is helpful to consider the solutions to three elementary problems.
First, the volume of a unit cube may be found by decomposing the cube into six pyramids. Each face of the cube is the base of one of the pyramids, and the center of the cube is selected as the common
peak. The area of each base is 1.0, and the perpendicular distance from each base to the peak is 0.5. The volume of each pyramid is 0.5/3, and the volume of the cube is 6(0.5/3) or 1.0. In this
example, the pyramid base areas and perpendicular distances may found by inspection, and the coordinates of each vertex are not directly required.
Next, the volume of a rectangular solid having length l, width w, and height h measured along the x, y, and z axes may be found by decomposing that solid into 6 pyramids that share a common peak
located at the center of the solid and having coordinates (l/2, w/2, h/2). The pyramids formed by the left and right faces have areas wh and height l/2. The pyramids formed by the front and back
faces have areas lh and height w/2. The pyramids formed by the top and bottom faces have areas lw and height h/2. Accumulating the volumes of these pyramids yields the volume of the rectangular
solid: again, the pyramid base areas and perpendicular distances may be found by inspection, and the coordinates of each vertex are not directly required.
V = 2(1/3)lhw/2 + 2(1/3)lhw/2 + 2(1/3)lwh/2 = lwh
Finally, the volume of a sphere having radius 1.0 may be approximated by decomposing that sphere into a very large number of very thin pyramids that share a common peak located at the center of the
sphere. In the limiting case, the total volume of the pyramids is 1/3 times the total base area (4πr^2) times the height of each pyramid, r, or (4/3)πr^3. Calculating the exact volume of an
approximate sphere whose surface is modeled by a small number of vertical and horizontal slices requires the base area of each approximating pyramid to be determined, and that does require knowledge
of all the vertex coordinates.
Developing the JavaScript Implementation
JavaScript is a programming language primarily intended for adding interactivity to Web pages. However, JavaScript may also be used to perform mathematical computations and demonstrate essential
programming concepts. The only tools required to develop and test JavaScript programs are a simple text editor like Notepad and a Web browser like Internet Explorer. JavaScript programs, or
"scripts", are embedded in the HTML code of a Web page.
JavaScript allows a user to create and manipulate their own objects, in addition to using built-in objects available as part of the language. In that sense, JavaScript is said to be an "object-aware"
language. JavaScript objects have properties, called parameters, and JavaScript objects can execute functions, called methods.
To implement a JavaScript solution to the polyhedron volume problem, it will be necessary to develop the code necessary to process several different kinds of objects. We will need to define a
collection of vertices, use those vertices to define the polyhedron's faces, find the area of those faces, assign a common point to serve as the top of each face pyramid, and finally accumulate the
volume of all the face pyramids. To provide a user interface, a form contained in a Web page will be used.
In the following sections, JavaScript source files for classes Point, Vector, Polygon, and Pyramid are presented, and the details of each class is briefly described. Ultimately, these supporting
classes greatly simplify implementing a solution to the Polyhedron volume problem.
JavaScript Code for the Class Point
JavaScript allows a programmer to define new objects that extend the built-in Object class. An object that represents a three-dimensional point will be useful for storing vertex values. The file
Point.js contains the JavaScript code that defines a new class named Point. The class Point extends the built-in Object class by adding three properties named x, y, and z, and assigning a value,
printPoint (the name of a JavaScript function) to the existing toString property. To use that code in an HTML document, it is only necessary to insert the following line:
<script language="JavaScript" src="Point.js"></script>
// File: Point.js
// JavaScript functions for Point processing
// Point constructor
function Point(X,Y,Z)
var p = new Object();
p.x = X;
p.y = Y;
p.z = Z;
p.toString = printPoint;
return p;
// Implementation of toString() returns a String
function printPoint()
return "Point: (" + this.x + "," + this.y + "," + this.z + ")";
Testing the Class Point
The following HTML document can be used to test the class, Point. The result should be a single line of output in the browser's window:
Point: (10,20,30)
<title>Test Class Point</title>
<script language="JavaScript" src="Point.cs"></script>
<script language="JavaScript">
var a = new Point( 10, 20, 30 );
document.write( a );
Note that the toString() method is used to display Point's value. The document.write() method automatically calls an object's toString() method. By defining that method for class Point, we can create
a simply formatted string that displays a Point's coordinates. This approach is taken in the examples that follow. It provides an easy way to test each class.
JavaScript Code for the Class Vector
JavaScript code that defines a Vector object and the methods that implement vector processing, will be useful for processing vectors and finding the direction normal to a surface. The file Vector.js
contains JavaScript that defines a new class named Vector. The class Vector extends the built-in Object class by adding three properties named x, y, and z, and assigning a value printVector (the name
of a JavaScript function) to the existing toString property, and adding several new properties that define additional vector processing methods. To use that code in an HTML document it is only
necessary to insert the following line:
<script language="JavaScript" src="Vector.js"></script>
// File: Vector.js
// JavaScript functions for Vector processing
// Vector constructor
function Vector(X,Y,Z)
var p = new Object();
p.x = X;
p.y = Y;
p.z = Z;
p.toString = printVector;
p.magnitude = getMagnitude;
p.dotProduct = getDotProduct;
p.crossProduct = getCrossProduct;
p.unitVector = getUnitVector;
p.subtract = subtractVectors;
p.add = addVectors;
return p;
// Implementation of toString() returns a String
function printVector()
return "Vector: (" + this.x + "," + this.y + "," + this.z + ")";
// Vector magnitude
function getMagnitude()
{
    var sum = this.x*this.x + this.y*this.y + this.z*this.z;
    var result = 0.0;
    if( sum > 0.0 ) {
        result = Math.pow( sum, 0.5 );
    } else {
        result = 0.0;
    }
    return result;
}
// Vector dot product
function getDotProduct(B)
var result = this.x * B.x + this.y * B.y + this.z * B.z;
return result;
// Vector cross product
function getCrossProduct(B)
var c = new Vector(0,0,0);
c.x = this.y * B.z - B.y * this.z;
c.y = this.z * B.x - B.z * this.x;
c.z = this.x * B.y - B.x * this.y;
return c;
// Vector unit vector
function getUnitVector()
var mag = this.magnitude();
if( mag <= 0.0 )
alert("Error: Attempt to use mag <= 0 in getUnitVector()");
var v = new Vector( this.x, this.y, this.z );
v.x = v.x / mag;
v.y = v.y / mag;
v.z = v.z / mag;
return v;
// Subtract two vectors
function subtractVectors(B)
var u = new Vector( 0,0,0 );
u.x = this.x - B.x;
u.y = this.y - B.y;
u.z = this.z - B.z;
return u;
// Add two vectors
function addVectors(B)
var u = new Vector( 0,0,0 );
u.x = this.x + B.x;
u.y = this.y + B.y;
u.z = this.z + B.z;
return u;
Testing the Class Vector
The following HTML document can be used to test the class Vector. The result should be the following lines of output in the browser's window:
a is Vector: (10,20,30)
b is Vector: (20,30,-10)
sum c is Vector: (30,50,20)
difference d is Vector: (-10,-10,40)
magnitude of a is 37.416573867739416
dot product (a * b) is 500
cross product (a x b) is Vector: (-1100,700,-100)
unit vector for Vector: (10,20,30) is Vector:
magnitude of h is 1
<title>Test Class Vector</title>
<script language="JavaScript" src="Vector.js"></script>
<script language="JavaScript">
var a = new Vector(10,20,30);
var b = new Vector(20,30,-10);
var c = a.add(b);
var d = a.subtract(b);
document.write("a is " + a + "<br>");
document.write("b is " + b + "<br>");
document.write("sum c is " + c + "<br>");
document.write("difference d is " + d + "<br>");
var e = a.magnitude();
document.write("magnitude of a is " + e + "<br>");
var f = a.dotProduct(b);
document.write("dot product (a * b) is " + f + "<br>");
var g = a.crossProduct(b);
document.write("cross product (a x b) is " + g + "<br>");
var h = a.unitVector();
document.write("unit vector for " + a + " is " + h + "<br>");
var i = h.magnitude();
document.write("magnitude of h is " + i + "<br>");
JavaScript Code for the Class Polygon
The JavaScript code that defines a Polygon object and methods that implement finding the area and perimeter of a polygon, will also be useful. The file Polygon.js contains the JavaScript code that
defines a new class named Polygon.
The class Polygon extends the built-in Object class. A Polygon is modeled by an array of vertices. The vertices are taken in counter-clockwise order around the polygon’s perimeter as you look at the
face. A value, printPolygon (the name of a JavaScript function), is assigned to the existing toString property, and methods to calculate the area, perimeter, normal vector, and transformed
coordinates of the polygon are also added. The normal vector will later be used to find the height of a pyramid, and the transformed coordinates will be used to find the polygon’s area.
To use that code in an HTML document it is only necessary to insert the following line:
<script language="JavaScript" src="Polygon.js"></script>
// File: Polygon.js
// JavaScript functions for Polygon processing
// Polygon constructor
function Polygon(a)
var t = new Object();
//t.points = a;
t.points = new Array();
for( k = 0; k < a.length; k++ )
t.points[k] = a[k];
t.toString = printPolygon;
t.area = getPolygonArea;
t.localCoordinates = getLocalCoordinates;
t.normal = getNormal;
return t;
// Implementation of toString()
function printPolygon()
return "Polygon: " + this.points ;
// Calculate a polygon's area
function getPolygonArea()
{
    // Get the polygon in its own
    // x-y coordinates (all z's should be 0)
    var polygonPrime = this.localCoordinates();
    // Apply the surveyor's formula
    var len = polygonPrime.points.length;
    var result = 0.0;
    var dx = 0.0;
    var dy = 0.0;
    for( var k = 0; k < (len-1); k++ )
    {
        dx = polygonPrime.points[k+1].x - polygonPrime.points[k].x;
        dy = polygonPrime.points[k+1].y - polygonPrime.points[k].y;
        result += polygonPrime.points[k].x * dy -
                  polygonPrime.points[k].y * dx;
    }
    dx = polygonPrime.points[0].x - polygonPrime.points[len-1].x;
    dy = polygonPrime.points[0].y - polygonPrime.points[len-1].y;
    result += polygonPrime.points[len-1].x * dy -
              polygonPrime.points[len-1].y * dx;
    return result/2.0;
}
// Calculate a polygon's normal vector
function getNormal()
// Construct a vector from points[0] to points[1]
var dx = this.points[1].x - this.points[0].x;
var dy = this.points[1].y - this.points[0].y;
var dz = this.points[1].z - this.points[0].z;
var v01 = new Vector( dx,dy,dz );
// Construct a vector from points[1] to points[2]
dx = this.points[2].x - this.points[1].x;
dy = this.points[2].y - this.points[1].y;
dz = this.points[2].z - this.points[1].z;
var v12 = new Vector( dx,dy,dz );
// Get the cross product, which returns
// a vector in the normal direction
norm = v01.crossProduct(v12);
// Make norm a unit vector
norm = norm.unitVector();
return norm;
// Convert a Polygon in (x,y,z) to a polygon in (x',y',0)
function getLocalCoordinates()
// Copy "this" Polygon
var p = new Polygon( this.points );
// Select p.points[0] as the displacement
var Rx = p.points[0].x;
var Ry = p.points[0].y;
var Rz = p.points[0].z;
// Subtract R from all the points of polygon p
for( var k = 0; k < p.points.length; k++ )
{
    p.points[k].x -= Rx;
    p.points[k].y -= Ry;
    p.points[k].z -= Rz;
}
// Select P0P1 as the x-direction
var dx = p.points[1].x-p.points[0].x;
var dy = p.points[1].y-p.points[0].y;
var dz = p.points[1].z-p.points[0].z;
var xprime = new Vector(dx,dy,dz);
// Find a unit vector in the xprime direction
var iprime = xprime.unitVector();
// Find the vector P1P2
dx = p.points[2].x-p.points[1].x;
dy = p.points[2].y-p.points[1].y;
dz = p.points[2].z-p.points[1].z;
var p1p2 = new Vector(dx,dy,dz);
// Find a vector kprime in the zprime direction
var kprime = iprime.crossProduct(p1p2);
// Make kprime a unitVector
kprime = kprime.unitVector();
// Find the vector jprime in the yprime direction
var jprime = kprime.crossProduct(iprime);
// For each point, calculate the projections on xprime, yprime, zprime
// (All zprime values should be zero)
for( var k = 0; k < p.points.length; k++ )
{
    var pprime = new Point(0,0,0);
    var pv = new Vector( p.points[k].x, p.points[k].y, p.points[k].z );
    pprime.x = iprime.dotProduct(pv);
    pprime.y = jprime.dotProduct(pv);
    pprime.z = kprime.dotProduct(pv);
    p.points[k] = pprime;
}
// Return a polygon in its own local x'y'z' coordinates
return p;
Testing the Class Polygon
The following HTML document can be used to test the class Polygon. The result should be the following lines of output in the browser's window:
poly1 is Polygon: Point: (0,0,0),
Point: (1,0,0),Point: (1,2,0),Point: (0,1,0)
area: 1.5
<title>Test Class Polygon</title>
<script language="JavaScript" src="Point.js"></script>
<script language="JavaScript" src="Vector.js"></script>
<script language="JavaScript" src="Polygon.js"></script>
<script language="JavaScript">
var p0 = new Point(0,0,0);
var p1 = new Point(1,0,0);
var p2 = new Point(1,2,0);
var p3 = new Point(0,1,0);
var points = new Array();
points[0] = p0;
points[1] = p1;
points[2] = p2;
points[3] = p3;
var poly1 = new Polygon(points);
document.write("poly1 is " + poly1 + "<br>");
var a = poly1.area();
document.write("area: " + a + "<br>");
JavaScript Code for the Class Pyramid
The JavaScript code that defines a Pyramid object and a method that implements finding the volume of a pyramid, will be useful for finding the volume of a polyhedron. The file Pyramid.js contains the
JavaScript code that defines a new class named Pyramid.
The class Pyramid extends the built-in Object class. A Pyramid is modeled by a Polygon, the base of the Pyramid, and a Point, the peak of the Pyramid. A value, printPyramid (the name of a JavaScript
function), is assigned to the existing toString property, and methods to calculate the volume and height of the Pyramid have been added.
To use that code in an HTML document, it is only necessary to insert the following line:
<script language="JavaScript" src="Pyramid.js"></script>
// File: Pyramid.js
// JavaScript functions for Pyramid processing
// Pyramid constructor
function Pyramid(poly,pnt)
var t = new Object();
t.polygon = poly;
t.point = pnt;
t.height = getPyramidHeight;
t.volume = getPyramidVolume;
t.baseArea = getPyramidBaseArea;
t.toString = printPyramid;
return t;
// Implementation of toString() returns a String
function printPyramid()
return "Pyramid: " + this.polygon + " + " + this.point;
// Calculate the Pyramid's volume
function getPyramidVolume()
// Calculate the perpendicular distance
// from the base to the top point
var d = this.height();
// Calculate the area of the base
var baseArea = this.polygon.area();
// Calculate the volume of the polygon's pyramid
var volume = d * baseArea / 3.0;
return volume;
function getPyramidHeight()
// Construct a vector from the Pyramid base to the top point
var dx = this.point.x - this.polygon.points[0].x;
var dy = this.point.y - this.polygon.points[0].y;
var dz = this.point.z - this.polygon.points[0].z;
var vt = new Vector(dx,dy,dz);
// Calculate the perpendicular
// distance from the base to the top point.
// The distance d is the projection of vt in the normal direction.
// Because a right-hand coordinate system is assumed, the value of d
// may be negative, so the absolute value is returned.
var norm = this.polygon.normal();
var d = norm.dotProduct(vt);
var result = 0.0;
if( d < 0.0 ) {
    result = Math.abs(d);
} else {
    result = d;
}
return result;
function getPyramidBaseArea()
return this.polygon.area();
Testing the Class Pyramid
The following HTML document can be used to test the class Pyramid. The result should be the following lines of output in the browser's window:
pyramid is Pyramid: Polygon: Point: (0,0,0),
Point: (1,0,0),Point: (1,2,0),
Point: (0,1,0) + Point: (0.5,0.5,1)
pyramid height is: 1
pyramid base area is: 1.5
pyramid volume: 0.5
<title>Test Class Pyramid</title>
<script language="JavaScript" src="Point.js"></script>
<script language="JavaScript" src="Vector.js"></script>
<script language="JavaScript" src="Polygon.js"></script>
<script language="JavaScript" src="Pyramid.js"></script>
<script language="JavaScript">
var p0 = new Point(0,0,0);
var p1 = new Point(1,0,0);
var p2 = new Point(1,2,0);
var p3 = new Point(0,1,0);
var points = new Array();
points[0] = p0;
points[1] = p1;
points[2] = p2;
points[3] = p3;
var poly = new Polygon(points);
var pnt = new Point(0.5,0.5,1);
var pyramid = new Pyramid(poly,pnt);
document.write("pyramid is " + pyramid + "<br>");
document.write("pyramid height is: " + pyramid.height() + "<br>" );
document.write("pyramid base area is: " + pyramid.baseArea() + "<br>" );
document.write("pyramid volume: " + pyramid.volume() + "<br>");
Putting it All Together - Implementing the Polyhedron Volume Calculator
The following HTML code contains a form that allows a user to define a polyhedron by specifying the number of faces, the vertices that define each face, and each vertex’s coordinates. The form
contains a Submit button that, when clicked, triggers the execution of the JavaScript function named calculate(). That function reads the user’s input, parses the vertex coordinate values, calculates
the pyramid volume associated with each face, accumulates the polyhedron’s total volume, and displays that result using a specified number of decimal digits. As designed, the form can handle up to 9
faces and up to 12 vertices. The form’s design can be extended to handle problems involving more faces and more vertices. The JavaScript code must then be modified to initialize, read, and process
any additional data.
For test and demonstration purposes, the form also contains an additional button labeled Load Cube. When that button is clicked, the JavaScript function named loadCube() executes and fills the form
with the face and vertex data for a unit cube. If the Calculate button is then clicked, the volume of the unit cube is calculated and displayed as 1.00000. In the calculate() method, the number of
displayed digits is controlled by applying the toFixed() method.
The form is initially loaded with sample data defining a unit cube having one corner cut off. Clicking the Reset button reloads that sample data, and if the Calculate button is then clicked, the
volume of that irregular polyhedron is displayed as 0.97197.
<title>Polyhedron Volume Calculator</title>
body {background-color:#FF9900;}
table {background-color:#FF9900;}
.content {width:800px; background-color:white; padding:10px;}
<script language="JavaScript" src="Point.js"></script>
<script language="JavaScript" src="Vector.js"></script>
<script language="JavaScript" src="Polygon.js"></script>
<script language="JavaScript" src="Pyramid.js"></script>
<script language="JavaScript">
// Button click event handler
function calculate()
{
    // Read the number of faces and vertices
    var faces = parseInt(document.getElementById("FACES").value);
    var vertices = parseInt(document.getElementById("VERTICES").value);
    // Read all the vertex coordinates
    var vertex = new Array();
    for( var k = 0; k < vertices; k++ )
    {
        var vertexStr = document.getElementById("VERTEX"+k).value;
        var items = vertexStr.split(",");
        vertex[k] = new Point( parseFloat(items[0]),
            parseFloat(items[1]), parseFloat(items[2]));
    }
    // Process all the faces
    var totalVolume = 0.0;
    for( var k = 0; k < faces; k++ )
    {
        // Read the index numbers of the vertices for this face
        var str = document.getElementById("FACEVERTICES" + k).value;
        var indexStrings = str.split(",");
        var indexNumbers = new Array();
        for( var j = 0; j < indexStrings.length; j++ )
            indexNumbers[j] = parseInt(indexStrings[j]);
        // Create an array of Points using the indexNumbers
        var verts = new Array();
        for( var j = 0; j < indexNumbers.length; j++ )
        {
            var index = indexNumbers[j];
            var x = vertex[index].x;
            var y = vertex[index].y;
            var z = vertex[index].z;
            verts[j] = new Point( x,y,z );
        }
        // Create a Polygon using the vertices
        var poly = new Polygon( verts );
        // Create a Pyramid using the Polygon
        //var pyrm = new Pyramid( poly, new Point(0.5,0.5,1.0) );
        // Create a Pyramid using the face Polygon and vertex[0]
        var pyrm = new Pyramid( poly, vertex[0] );
        // Get the Pyramid volume
        var vol = pyrm.volume();
        totalVolume += vol;
    }
    // Display the calculated volume
    document.getElementById("VOLUME").value = totalVolume.toFixed(5);
}
function loadCube()
document.getElementById("FACES").value = "6";
document.getElementById("VERTICES").value = "8";
document.getElementById("FACEVERTICES0").value = "0,1,2,3";
document.getElementById("FACEVERTICES1").value = "4,5,6,7";
document.getElementById("FACEVERTICES2").value = "0,3,5,4";
document.getElementById("FACEVERTICES3").value = "1,7,6,2";
document.getElementById("FACEVERTICES4").value = "2,6,5,3";
document.getElementById("FACEVERTICES5").value = "0,4,7,1";
document.getElementById("FACEVERTICES6").value = "";
document.getElementById("FACEVERTICES7").value = "";
document.getElementById("FACEVERTICES8").value = "";
document.getElementById("VERTEX0").value = "0,0,0";
document.getElementById("VERTEX1").value = "1,0,0";
document.getElementById("VERTEX2").value = "1,0,1";
document.getElementById("VERTEX3").value = "0,0,1";
document.getElementById("VERTEX4").value = "0,1,0";
document.getElementById("VERTEX5").value = "0,1,1";
document.getElementById("VERTEX6").value = "1,1,1";
document.getElementById("VERTEX7").value = "1,1,0";
document.getElementById("VERTEX8").value = "";
document.getElementById("VERTEX9").value = "";
document.getElementById("VERTEX10").value = "";
document.getElementById("VERTEX11").value = "";
document.getElementById("VOLUME").value = "";
<div align="center">
<div class="content">
<table border="2" cellspacing="1"
bgcolor="#FF9900" cellpadding="2">
<td colspan="4" align="center">
<h2>Polyhedron Volume Calculator</h2>
<td colspan="2">Faces:
<input type="text" name="FACES"
id="FACES" size="10"
<td colspan="2">Vertices:
<input type="text" name="VERTICES"
id="VERTICES" size="10"
<td align="center">Face</td>
<td align="center">Vertices</td>
<td align="center">Vertex</td>
<td align="center">Coordinates</td>
<td align="center">0</td>
<input type="text" name="FACEVERTICES0" id="FACEVERTICES0" size="10"
<td align="center">0</td>
<td> <input type="text" name="VERTEX0"
id="VERTEX0" size="10"
<td align="center">1</td>
<input type="text" name="FACEVERTICES1" id="FACEVERTICES1"
size="10" value="6,9,8,7"></td>
<td align="center">1</td>
<td> <input type="text" name="VERTEX1"
id="VERTEX1" size="10" value="1,0,0"></td>
<td align="center">2</td>
<input type="text" name="FACEVERTICES2"
id="FACEVERTICES2" size="10" value="0,4,9,6"></td>
<td align="center">2</td>
<input type="text" name="VERTEX2"
id="VERTEX2" size="10" value="1,0,0.5"></td>
<td align="center">3</td>
<input type="text" name="FACEVERTICES3"
id="FACEVERTICES3" size="10" value="1,7,8,5,2"></td>
<td align="center">3</td>
<input type="text" name="VERTEX3" id="VERTEX3"
size="10" value="0.5,0,1"></td>
<td align="center">4</td>
<input type="text" name="FACEVERTICES4"
id="FACEVERTICES4" size="10" value="4,3,5,8,9"></td>
<td align="center">4</td>
<input type="text" name="VERTEX4" id="VERTEX4"
size="10" value="0,0,1"></td>
<td align="center">5</td>
<input type="text" name="FACEVERTICES5"
id="FACEVERTICES5" size="10" value="0,6,7,1"></td>
<td align="center">5</td>
<input type="text" name="VERTEX5" id="VERTEX5"
size="10" value="1,0.5,1"></td>
<td align="center">6</td>
<input type="text" name="FACEVERTICES6"
id="FACEVERTICES6" size="10" value="2,5,3"></td>
<td align="center">6</td>
<input type="text" name="VERTEX6" id="VERTEX6"
size="10" value="0,1,0"></td>
<td align="center">7</td>
<input type="text" name="FACEVERTICES7"
id="FACEVERTICES7" size="10"></td>
<td align="center">7</td>
<input type="text" name="VERTEX7" id="VERTEX7"
size="10" value="1,1,0"></td>
<td align="center">8</td>
<input type="text" name="FACEVERTICES8"
id="FACEVERTICES8" size="10"></td>
<td align="center">8</td>
<input type="text" name="VERTEX8" id="VERTEX8"
size="10" value="1,1,1"></td>
<td align="center"> </td>
<td align="center">9</td>
<input type="text" name="VERTEX9" id="VERTEX9"
size="10" value="0,1,1"></td>
<td align="center"> </td>
<td align="center">10</td>
<input type="text" name="VERTEX10"
id="VERTEX10" size="10"></td>
<td align="center"> </td>
<td align="center">11</td>
<input type="text" name="VERTEX11"
id="VERTEX11" size="10"></td>
<td align="center"> </td>
<td colspan="2">
<p align="right">Volume: </td>
<td colspan="2"><input type="text"
name="VOLUME" id="VOLUME" size="20"></td>
<td colspan="2" align="center">
<input type="button" value="Calculate" onClick="calculate()"></td>
<td colspan="2" align="center">
<input type="reset" value="Reset" name="B1"></td>
<td colspan="4" align="center">
<input type="button" value="Load Cube"
onClick="loadCube()" name="B2"></td>
A Specific Example
The following screen shot displays a wire-frame image of an irregular polyhedron. Using the Polyhedron Volume Calculator, the volume of this irregular polyhedron can easily be found. The required
data is tabulated below. The volume of this unusual shape is 20.36364, to five significant digits.
│ Vertex │ Co-ordinates │ Face │ Vertices │
│ 0 │ 0, 0, 0 │ 1 │ 0, 1, 2, 3, 4 │
│ 1 │ 0, 2, 0 │ 2 │ 4, 5, 0 │
│ 2 │ 1, 4, 0 │ 3 │ 0, 5, 6, 1 │
│ 3 │ 4, 2, 0 │ 4 │ 1, 6, 2 │
│ 4 │ 2, 0, 0 │ 5 │ 3, 2, 6 │
│ 5 │ 2, 1, 4 │ 6 │ 5, 4, 3, 6 │
│ 6 │ 2, 2, 6 │ --- │ --- │
For comparison, the volume of a circular cone with a base area of 10.0 and a height of 6.0 is 1/3*10*6, or 20.0 cubic units.
The problem of finding the volume of an irregular polyhedron and the solution of that problem by decomposing the polyhedron into a collection of pyramids was discussed. Three simple problems
involving the volume of a cube, a rectangular solid, and a sphere were presented to set the stage for solving more difficult problems.
JavaScript source files for classes Point, Vector, Polygon, and Pyramid were presented. The result is a library of JavaScript source files that may easily be included in other HTML documents.
The code for an HTML document containing a form that allows a user to enter a polyhedron’s face vertices and vertex coordinates was presented. The JavaScript code within that document calculates the
volume of a polyhedron given a list of its faces, the vertices comprising each face, and the coordinates of each vertex. The form is initially loaded with sample data for calculating the volume of a
unit cube (a simple regular polyhedron). For demonstration purposes, the data for a unit cube having a corner cut off (a simple example of an irregular polyhedron) can also be loaded.
The article concluded with an example that calculated the volume of an irregular polyhedron, a wedge-like shape having an irregular polygon with five sides for a base.
More information regarding the mathematics underlying the code in this article may be found in these references:
1. Weisstein, E. W., "Polyhedron." From MathWorld--A Wolfram Web Resource.
2. Weast, R. C., Editor, 1968, CRC Standard Mathematical Tables, 16th Edition, (Cleveland: The Chemical Rubber Co.)
3. Kreyszig, E., 1967, Advanced Engineering Mathematics, 2nd Edition, (New York: John Wiley and Sons, Inc.)
4. Estrella, S. G., 2002, The Web Wizard's Guide to JavaScript, (Boston: Addison Wesley)
5. Negrino, T. and Smith, D., 1998, JavaScript for the World Wide Web, 2nd Edition, (Berkeley: Peachpit Press)
Other CodeProject articles concerning JavaScript object-oriented programming include: | {"url":"http://www.codeproject.com/Articles/13666/Polyhedra-Volume-Calculations-A-JavaScript-Impleme?fid=287917&df=90&mpp=10&sort=Position&spc=None&tid=2455035","timestamp":"2014-04-24T19:45:34Z","content_type":null,"content_length":"196884","record_id":"<urn:uuid:ba7a796b-41cc-4666-a165-d4b429a0968e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein, Introduction to Algorithms, MIT Press, 2001.
References:
1. S. Sahni, Data Structures, Algorithms and Applications in C++, 2nd Ed., Universities Press, 2005.
2. M. T. Goodrich and R. Tamassia, Data Structures and Algorithms in Java, Wiley, 2006.
3. A. V. Aho and J. E. Hopcroft, Data Structures and Algorithms, Addison-Wesley, 1983.
1. MA252: Endsem exam is on Apr 24, 2012 (The exam covers full syllabus of MA252)
2. MA253: Endsem Lab exam is on Apr 10, 2012 (Tuesday), from 2:00PM to 4:30PM
(The syllabus for the exam includes up to the last lecture of MA252.)
3. Quiz II is on April 02, 2012 (Monday), from 8:00AM.
4. Assignment submission deadline is on 04/03/2012 (Sunday) before 11:59PM. The submission is to be done by uploading the files to the folder named "usernameMA252A1" in your math server account.
The folder will be created by Mr. Majumdar. Note that after the deadline, read and write permissions for the folder will be revoked.
5. MA253: Midsem Lab exam is on Feb 14, 2012 (Tuesday), from 2:00PM to 4:30PM
(The syllabus for the exam includes up to the last lecture of MA252 and the tutorial of MA253.)
6. Quiz I is on January 17, 2012 (Tuesday), from 10:00AM to 10:50AM
Lecture Notes:
Dec 27 Lecture Notes 1 [Tests and marks distribution, Introduction, RAM Model, Estimating running time]
Dec 28 Lecture Notes 2 [The problem of sorting, Insertion sort, Big-Oh notation, Computing prefix averages]
Jan 02 Lecture Notes 3 [Asymptotic Notations, Merge sort, Recurrence equation]
Jan 03 Lecture Notes 4 [Solving recurrences, Binary search, Powering a number]
Jan 04 Lecture Notes 5 [Computing Fibonacci numbers, Matrix multiplication, Strassen's algorithm]
Jan 09 Lecture Notes 6 [Quicksort, Complexity analysis of the quicksort]
Jan 10 Lecture Notes 7 [kth Element Problem, Order Statistics, Randomized selection algorithm]
Jan 11 Lecture Notes 8 [Analysis of expected time for randomized selection, Worst-case linear-time order statistics]
Jan 16 Lecture Notes 9 [Complexity analysis of the Randomized quicksort, Decision-tree model, Lower bound for comparison sorting]
Jan 17 Quiz I
Jan 18 Lecture Notes 10 [Heap Sort]
Jan 23 Lecture Notes 11 [Complexity analysis of Heap Sort]
Jan 24 Lecture Notes 12 [Inserting elements in Heap, Priority queues, Sorting in linear time: Counting sort]
Jan 25 Lecture Notes 13 [Radix sort, Complexity analysis of radix sort]
Jan 30 Lecture Notes 14 [Bucket sort, Complexity analysis of Bucket sort]
Jan 31 Lecture Notes 15 [Binary Search Trees (BST): Operation: Search, Minimum, Maximum, Predecessor, Successor, Insert/Create]
Feb 01 Lecture Notes 16 [Deletion in BST, Binary Tree Traversal Methods]
Feb 06 Lecture Notes 17 [Expression Tree and construction of expression tree from given traversal orders]
Feb 07 Lecture Notes 18 [Hashing]
Feb 08, 13 Lecture Notes 19 [Review of Stack, Queue and Linked List]
Feb 21 Midsem
Feb 27 Lecture Notes 20 [Balanced Search Trees: AVL Tree]
Feb 28 Lecture Notes 21 [Insertion & deletion in AVL Tree]
Feb 29 Lecture Notes 22 [2-3 Tree]
Mar 05 Lecture Notes 23 [Red-black Tree]
Mar 12 Lecture Notes 24 [Graph Algorithms: Graph Search Methods, Breadth-first search (BFS)]
Mar 13 Lecture Notes 25 [Spanning Tree, Depth-First Search (DFS), Topological Sorting]
Mar 14 Lecture Notes 26 [Strongly connected components, Bi-connected components]
Mar 19 Lecture Notes 27 [Disjoint Sets]
Mar 20 Lecture Notes 28 [MST: Kruskal's Algorithm]
Mar 21 Lecture Notes 29 [MST: Prim's Algorithm]
Mar 26 Lecture Notes 30 [SSSP: Dijkstra's Algorithm]
Mar 27 Lecture Notes 31 [SSSP: Bellman-Ford Algorithm]
Mar 28 Lecture Notes 32 [Dynamic Programming]
Apr 02 Quiz II
Apr 03 Lecture Notes 33 [Longest common subsequence problem and dynamic programming solution]
Apr 04 Lecture Notes 34 [Dynamic programming solution of 0-1 Knapsack problem]
Apr 10 Lecture Notes 35 [Dynamic programming solution of Matrix Chain Order Problem]
Apr 16 Lecture Notes 36 [All-Pairs Shortest Paths Problem, Dynamic programming solution based on matrix multiplication, Floyd-Warshall Algorithm]
Apr 17 Lecture Notes 37 [Complexity Class and NP-Completeness (for interested students)]
Apr 24 Endsem | {"url":"http://www.iitg.ernet.in/psm/indexing_ma252/y12/","timestamp":"2014-04-17T00:51:44Z","content_type":null,"content_length":"12504","record_id":"<urn:uuid:846dd97c-53ec-4cc2-b6d6-31078a91b73e>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Beginner Magic Lesson: Adding Basic Lands to a Multi-Colored Deck
Essential Magic Articles
Page Beginner Magic Lesson: Adding Basic Lands to a Multi-Colored Deck by RomanianEgghead
Many of my friends have begun to play Magic recently, and learning the rules to play with my own decks is only half the battle. Once these players graduate to wanting their own unique deck to
use, they are usually calling me back confused about the basics for land count and mana curve. Although these will be mentioned in this article, I also wanted to show a way to help these
players not fear the multi-colored deck by showing a few easy tricks to balance a mana base.
The fear in making a deck with more than one color is, obviously, being stuck with the wrong lands. There is nothing more frustrating than playing a W/G deck with all white spells in your hand
and a board of only Forests. The most reliable solution to this is to spend tens or hundreds of dollars on shock, fetch, pain and other nonbasic lands to ensure a proper mana base every game.
However, there is an easier solution that works for casual or draft play very well in using a few basic steps to determine a proper
mana base using only basic lands.
1) Determine the number of lands in your deck. In a standard, 60 card deck, this will often be between 20-25 total lands. A good place to start with many decks is 23 lands. If your mana curve
is low or you have a lot of creatures/spells that help produce mana or find lands than go for 20-21. If you have a mana intensive deck or a control deck, it may help to have closer to 25 lands.
2) Count the number of mana symbols in all spells in your deck. This means for every Suntail Hawk you would count one white mana symbol, and for every Black Knight you would count two black mana symbols.
3) Determine a ratio between the counts of the different mana symbols. For example, you might count 30 symbols of one color and a smaller number of the other.
4) Have your lands for each color follow that same ratio, but do not go below 7-8 of any one basic land type. A worked illustration follows this list.
5) PLAYTEST YOUR DECK! This is the most important step. Shuffle your deck well, draw some test hands, and play a few games to make sure your land base is working well. If you find you need more
or less of one of the lands, then make the change.
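To make the ratio step concrete, here is an illustrative example (the numbers are invented for this sketch, not taken from a specific deck): suppose a white/green deck contains 30 white mana symbols and 15 green mana symbols, a 2:1 ratio. With 23 total lands, that ratio suggests roughly 15 Plains and 8 Forests, which also respects the 7-8 card minimum for each basic land type.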
I hope everyone can use this guide with some basic ratios to help their friends learn how to construct a balanced and fun deck on a budget. Many decks can work well with only basic lands in
casual games, even 3-color decks, if the land is balanced well.
{"url":"http://www.essentialmagic.com/em2/Doc.aspx?hdocid=316","timestamp":"2014-04-17T00:48:52Z","content_type":null,"content_length":"22389","record_id":"<urn:uuid:45e46e14-a3e9-4052-9b78-c29dbfdf26eb>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximation of ln8 using the power rule [Archive] - Free Math Help Forum
06-13-2011, 03:16 PM
I have been stuck on this for hours on end. I really need help quickly on this.
The source of the problem is this:
The approximation of the natural logarithm of 2: ln 2 ≈ 0.693. Based on this approximation and the power rule for logarithmic expressions, how could you approximate ln 8, without a calculator?
At first I tried to switch from ln 8 to log_e 8. I have been stuck ever since. I had asked my instructor where I'm going wrong, and their response was "This problem doesn't involve you using e. you
have to figure out how to break down 8 to have a base of 2 with an exponent. If you can do that then you will already know what ln of 2 is because its given in the problem. you will then need to take
what ln of two is and multiply it by the exponent that came down in the front due to the power rule of logs."
If I can't change the natural log to something else, how do I even do this, let alone do it using the power rule? I did cheat and use my calculator to find the ln of 8 is 2.079. Still I have no idea
how to get to that or explain it.
Please help! | {"url":"http://www.freemathhelp.com/forum/archive/index.php/t-71081.html","timestamp":"2014-04-18T18:48:19Z","content_type":null,"content_length":"4203","record_id":"<urn:uuid:c0759fdf-11b2-448c-a450-8396cb8bd1d0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00376-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Decomposition of a 10th degree equation
Replies: 4 Last Post: Mar 16, 2013 10:17 AM
Re: Decomposition of a 10th degree equation
Posted: Mar 15, 2013 3:34 PM
On Mar 15, 11:32 am, Deep <deepk...@yahoo.com> wrote:
> Consider the following equation (1) for the given conditions.
> x^10 + y^10 = z^10 (1)
> Conditions: x, z are odd integers > 0 and y is non integer but x^10, y^10, z^10 are all integers each > 0
> (1) can be decomposed as (2) and (3) where x = uv and u, v are co prime integers.
With these conditions you can take any odd 0 < x < z, and let y =
(z^10 - x^10)^(1/10), and that is the exact set of solutions, so I
can't quite see the point?
Date Subject Author
3/15/13 Decomposition of a 10th degree equation Deep Deb
3/15/13 Re: Decomposition of a 10th degree equation Pubkeybreaker
3/15/13 Re: Decomposition of a 10th degree equation Deep Deb
3/15/13 Re: Decomposition of a 10th degree equation gnasher729
3/16/13 Re: Decomposition of a 10th degree equation Deep Deb | {"url":"http://mathforum.org/kb/message.jspa?messageID=8643485","timestamp":"2014-04-19T09:42:31Z","content_type":null,"content_length":"21205","record_id":"<urn:uuid:ec92631b-a559-4dc8-b1e3-59684cd8dafa>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interval arithmetic
April 27th 2009, 06:46 AM
Interval arithmetic
I'm not a mathematician, just an amateur. I'm looking at interval arithmetic at the moment and have a stumbling block which derives from trying to do some geometry with interval arithmetic...
The problem involves using interval arithmetic and vectors....
I have a vector q which is defined as three intervals ( xi, yi, zi ) where
xi is the closed interval [x0, x1]
yi is the closed interval [y0, y1]
zi is the closed interval [z0, z1]
I also have two equations:
u = a - dot( q, b )
v = c - dot( q, d )
where the vectors b, d are constant, and the scalars a,c are constant.
Therefore u and v are also intervals, since each dot product is a scalar.
I also know that both intervals u,v have negative low and positive high values. I need a way of determining whether or not there exists a value of q ( ie. specific values of its scalar components
) which will give me a positive 'u' and a positive 'v'. I do not need to know the value, only to determine if a valid solution exists within the intervals of vector q.
I hope this is clear... cheers. | {"url":"http://mathhelpforum.com/advanced-math-topics/85984-interval-arithmetic-print.html","timestamp":"2014-04-18T07:29:24Z","content_type":null,"content_length":"4282","record_id":"<urn:uuid:8a8e9e96-9dac-42f8-a5b0-eae6eb378635>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Library: Foundation
Package: DateTime
Header: Poco/LocalDateTime.h
This class represents an instant in local time (as opposed to UTC), expressed in years, months, days, hours, minutes, seconds and milliseconds based on the Gregorian calendar.
In addition to the date and time, the class also maintains a time zone differential, which denotes the difference in seconds from UTC to local time, i.e. UTC = local time - time zone differential.
Although LocalDateTime supports relational and arithmetic operators, all date/time comparisons and date/time arithmetics should be done in UTC, using the DateTime or Timestamp class for better
performance. The relational operators normalize the dates/times involved to UTC before carrying out the comparison.
The time zone differential is based on the input date and time and current time zone. A number of constructors accept an explicit time zone differential parameter. These should not be used since
daylight savings time processing is impossible since the time zone is unknown. Each of the constructors accepting a tzd parameter has been marked as deprecated and may be removed in a future release.
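A minimal usage sketch (not part of the original reference documentation; it assumes a standard POCO installation and the headers shown):

#include "Poco/LocalDateTime.h"
#include "Poco/DateTime.h"
#include "Poco/Timespan.h"

Poco::LocalDateTime now;                 // current date and time in the local time zone
int tzd = now.tzd();                     // time zone differential in seconds
Poco::DateTime utc = now.utc();          // UTC equivalent of the local time
Poco::LocalDateTime tomorrow = now + Poco::Timespan(1, 0, 0, 0, 0); // one day later
bool earlier = now < tomorrow;           // relational operators normalize to UTC before comparing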
Member Summary
Member Functions: adjustForTzd, assign, day, dayOfWeek, dayOfYear, determineTzd, dstOffset, hour, hourAMPM, isAM, isPM, julianDay, microsecond, millisecond, minute, month, operator !=, operator +,
operator +=, operator -, operator -=, operator <, operator <=, operator =, operator ==, operator >, operator >=, second, swap, timestamp, tzd, utc, utcTime, week, year
Creates a LocalDateTime with the current date/time for the current time zone.
const DateTime & dateTime
Creates a LocalDateTime from the UTC time given in dateTime, using the time zone differential of the current time zone.
double julianDay
Creates a LocalDateTime for the given Julian day in the local time zone.
const LocalDateTime & dateTime
int tzd,
const DateTime & dateTime
Deprecated. This function is deprecated and should no longer be used.
Creates a LocalDateTime from the UTC time given in dateTime, using the given time zone differential. Adjusts dateTime for the given time zone differential.
int tzd,
double julianDay
Deprecated. This function is deprecated and should no longer be used.
Creates a LocalDateTime for the given Julian day in the time zone denoted by the time zone differential in tzd.
int tzd,
const DateTime & dateTime,
bool adjust
Deprecated. This function is deprecated and should no longer be used.
Creates a LocalDateTime from the UTC time given in dateTime, using the given time zone differential. If adjust is true, adjusts dateTime for the given time zone differential.
int year,
int month,
int day,
int hour = 0,
int minute = 0,
int second = 0,
int millisecond = 0,
int microsecond = 0
Creates a LocalDateTime for the given Gregorian local date and time.
• year is from 0 to 9999.
• month is from 1 to 12.
• day is from 1 to 31.
• hour is from 0 to 23.
• minute is from 0 to 59.
• second is from 0 to 59.
• millisecond is from 0 to 999.
• microsecond is from 0 to 999.
int tzd,
int year,
int month,
int day,
int hour,
int minute,
int second,
int millisecond,
int microsecond
Deprecated. This function is deprecated and should no longer be used.
Creates a LocalDateTime for the given Gregorian date and time in the time zone denoted by the time zone differential in tzd.
• tzd is in seconds.
• year is from 0 to 9999.
• month is from 1 to 12.
• day is from 1 to 31.
• hour is from 0 to 23.
• minute is from 0 to 59.
• second is from 0 to 59.
• millisecond is from 0 to 999.
• microsecond is from 0 to 999.
Timestamp::UtcTimeVal utcTime,
Timestamp::TimeDiff diff,
int tzd
Member Functions
LocalDateTime & assign(
int year,
int month,
int day,
int hour = 0,
int minute = 0,
int second = 0,
int millisecond = 0,
int microseconds = 0
Assigns a Gregorian local date and time.
• year is from 0 to 9999.
• month is from 1 to 12.
• day is from 1 to 31.
• hour is from 0 to 23.
• minute is from 0 to 59.
• second is from 0 to 59.
• millisecond is from 0 to 999.
• microsecond is from 0 to 999.
LocalDateTime & assign(
int tzd,
int year,
int month,
int day,
int hour,
int minute,
int second,
int millisecond,
int microseconds
Deprecated. This function is deprecated and should no longer be used.
Assigns a Gregorian local date and time in the time zone denoted by the time zone differential in tzd.
• tzd is in seconds.
• year is from 0 to 9999.
• month is from 1 to 12.
• day is from 1 to 31.
• hour is from 0 to 23.
• minute is from 0 to 59.
• second is from 0 to 59.
• millisecond is from 0 to 999.
• microsecond is from 0 to 999.
LocalDateTime & assign(
int tzd,
double julianDay
Deprecated. This function is deprecated and should no longer be used.
Assigns a Julian day in the time zone denoted by the time zone differential in tzd.
int day() const;
Returns the day witin the month (1 to 31).
int dayOfWeek() const;
Returns the weekday (0 to 6, where 0 = Sunday, 1 = Monday, ..., 6 = Saturday).
int dayOfYear() const;
Returns the number of the day in the year. January 1 is 1, February 1 is 32, etc.
int hour() const;
Returns the hour (0 to 23).
int hourAMPM() const;
Returns the hour (0 to 12).
bool isAM() const;
Returns true if hour < 12;
bool isPM() const;
Returns true if hour >= 12.
double julianDay() const;
Returns the julian day for the date.
int microsecond() const;
Returns the microsecond (0 to 999)
int millisecond() const;
Returns the millisecond (0 to 999)
int minute() const;
Returns the minute (0 to 59).
int month() const;
Returns the month (1 to 12).
bool operator != (
const LocalDateTime & dateTime
) const;
LocalDateTime operator + (
const Timespan & span
) const;
LocalDateTime & operator += (
const Timespan & span
LocalDateTime operator - (
const Timespan & span
) const;
Timespan operator - (
const LocalDateTime & dateTime
) const;
LocalDateTime & operator -= (
const Timespan & span
bool operator < (
const LocalDateTime & dateTime
) const;
bool operator <= (
const LocalDateTime & dateTime
) const;
LocalDateTime & operator = (
const LocalDateTime & dateTime
LocalDateTime & operator = (
const Timestamp & timestamp
LocalDateTime & operator = (
double julianDay
Assigns a Julian day in the local time zone.
bool operator == (
const LocalDateTime & dateTime
) const;
bool operator > (
const LocalDateTime & dateTime
) const;
bool operator >= (
const LocalDateTime & dateTime
) const;
int second() const;
Returns the second (0 to 59).
void swap(
LocalDateTime & dateTime
Timestamp timestamp() const;
Returns the date and time expressed as a Timestamp.
int tzd() const;
Returns the time zone differential.
DateTime utc() const;
Returns the UTC equivalent for the local date and time.
Timestamp::UtcTimeVal utcTime() const;
Returns the UTC equivalent for the local date and time.
int week(
int firstDayOfWeek = DateTime::MONDAY
) const;
Returns the week number within the year. firstDayOfWeek should be either SUNDAY (0) or MONDAY (1). The returned week number will be from 0 to 53. Week number 1 is the week containing January 4. This
is in accordance with ISO 8601.
The following example assumes that firstDayOfWeek is MONDAY. For 2005, which started on a Saturday, week 1 will be the week starting on Monday, January 3. January 1 and 2 will fall within week 0 (or
the last week of the previous year).
For 2007, which starts on a Monday, week 1 will be the week starting on Monday, January 1. There will be no week 0 in 2007.
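A short sketch of the two cases just described (illustrative only; week() and DateTime::MONDAY are as documented on this page):

Poco::LocalDateTime jan1_2005(2005, 1, 1);
int w2005 = jan1_2005.week(Poco::DateTime::MONDAY); // 0: January 1, 2005 falls in week 0

Poco::LocalDateTime jan1_2007(2007, 1, 1);
int w2007 = jan1_2007.week(Poco::DateTime::MONDAY); // 1: 2007 starts on a Monday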
int year() const;
void adjustForTzd();
Adjusts the _dateTime member based on the _tzd member.
void determineTzd(
bool adjust = false
Recalculates the tzd from the _dateTime member and the current time zone, using the standard C runtime functions. If adjust is true, then adjustForTzd() is called after the differential has been calculated.
std::time_t dstOffset(
int & dstOffset
) const;
Determines the DST offset for the current date/time.
Olney, MD Geometry Tutor
Find an Olney, MD Geometry Tutor
...These courses involved learning the techniques, both analytical and numerical, for solving ordinary, partial, and non-linear differential equations including Green's function techniques. I
have also taught Physics and Electrical Engineering courses for both undergraduate and graduate students. ...
16 Subjects: including geometry, calculus, physics, statistics
...I have a PhD in Applied Mathematics and have used statistics extensively in my research as an Earth Scientist at the NASA/Goddard Space Flight Center. I have computed and/or evaluated anomaly
correlations to verify numerical weather predictions and to diagnose relationships between physical obse...
39 Subjects: including geometry, chemistry, calculus, algebra 1
...Success, especially in tutoring, comes from building strong, personal relationships with each student, and getting to know one's student, and letting the student know the tutor, also builds
trust, which is essential for a healthy learning environment. I am eager to work with any student, and am ...
11 Subjects: including geometry, algebra 1, algebra 2, German
...I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated. I was born and raised in Seoul, Korea where my parents still
live. I came to the States to finish high school and college.
17 Subjects: including geometry, chemistry, precalculus, physics
...When I took the ACT and SAT in October 2008, I scored in the 99% and 95% percentiles respectively for the English portions. I took both AP English and AP Literature in high school and have
taken English classes in college. Geometry is a challenging transition point for some students.
17 Subjects: including geometry, English, reading, writing
Dissertation talk: Efficient learning algorithms with limited information
Seminar: Departmental | May 9 | 2-3 p.m. | 380 Soda Hall
Anindya De, UC Berkeley
Abstract : Since the seminal work of Valiant (1984), the Probably Approximately Correct (PAC) model has been the standard model for learning. However, for specific classes of functions, it is
conceivable that one can learn in far weaker models / meet more stringent success criteria.
a) In the first part of this work, we will look at learning algorithms for halfspaces where the algorithm only has access to approximate values of correlation of the unknown function with each of the
coordinates. Besides its applications in learning theory, this family of problems is of interest in social choice theory. The algorithms are based on exploiting the probability theoretic and Fourier
analytic properties of halfspaces.
b) In the second part, we will look at learning algorithms which, given access to positive examples of an unknown Boolean function (from a specific concept class), seek to learn the uniform distribution over the positive examples. Besides being a natural example of unsupervised learning in the context of Boolean functions, it has connections to several aspects of complexity theory. We employ tools from learning theory, sampling algorithms and convex programming.
anindya.de@berkeley.edu, 510-316-7703 | {"url":"http://events.berkeley.edu/index.php/calendar/sn/eecs.html?event_ID=66859","timestamp":"2014-04-18T08:50:07Z","content_type":null,"content_length":"40897","record_id":"<urn:uuid:36d12d88-66fb-4460-ba01-cecc6ef8b00a>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |
Derivative of a function with 2 parameters
Hello there, this is my first time posting, so if it's in the wrong category please move it where it belongs.
In short, I need to create normals for a wave, so I use the derivative of the wave and then make a line perpendicular to it. I found that I can get the perpendicular's slope from the tangent's slope $k$ using $k_{\perp} = -1/k$.
I need to find the perpendiculars using the derivative of the wave function, and that's where the problem begins. My wave function:
$F(x,t) = y*sin(G(x,t))$
$G(x,t) = 2*PI*f*(x/v-t)$
I have very little knowledge about derivatives, I'm not at university yet and most of what I know is from searching the net. I think I can differentiate the first function, but the second one with 2 parameters is a real problem. I already tried to solve it (found something about partial derivatives on Wikipedia), and since I do the computation graphically I can compare results, and what I got was not really a solution I'm sure about. So I think I got lost somewhere differentiating G...
I would be glad if someone could solve this one and show me the steps too, not just the solution, so I can learn from it.
Thanks in advance
Last edited by DarkRaven; December 29th 2010 at 05:02 AM.
This post definitely belongs in Calculus, not Differential Equations, just fyi.
Hmm. Typically, with a differentiable function of one variable, $y=f(x),$ the normal line to a point $(x_{0},f(x_{0}))$ on the curve has a slope given by $-1/f'(x_{0})$.
Now, with the problem you've presented, I have one big question: with respect to which variable, $x$ or $t,$ are you trying to find the normal line?
Well... take a look here:
Dispersion (water waves) - Wikipedia, the free encyclopedia
Below the Wave propagation heading are the two formulas I used.
What I did is just modify the angle formula.
In what I'm doing, I take a sample point at distance $x$ from the center of the wave, and test, for wave time $t$, whether the wave has already spread to that point and what phase it's in.
Which variable should I then differentiate with respect to? I think it should be time $t$ but I'm not sure about it.
I'm afraid I haven't the foggiest notion what you're trying to do. Your OP and your third post seem a bit unrelated. Do me a favor, please: write out the original problem statement from your book
or notes or whatever, word-for-word, and leave nothing out. Thank you!
I'm sorry that it's confusing...
Can't re-write it word-for-word because it's just in my head. Anyway, one more try.
In computer graphics, there are vectors - normals - of unit length that the computer uses to determine how lit an object is. Without normals, even with a light, you would see everything dark. Those
normals are perpendicular to the surface.
| <- normal
That's roughly what I need to create.
Now, how to create it:
1. My surface is defined by the formulas in the third post. The shape of this surface is a sinusoid, since it's the equation of a wave.
Back to computer things... In a computer you can't really get an ultimately smooth wave, unless you use some other method than mine. The way I create the wave is by creating points close to each other - so close that when you zoom out it's smooth - and connecting them with narrow lines. At the start they all lie on the x-axis, but by solving the equation from the third post for each point at a specified time, I get the y-axis values. This way I get the shape of the wave in the computer.
2. Differentiate to find tangents
The next step toward finding normals is differentiating this shape, which is what I have trouble with. When I think about it, maybe it's with respect to $x$, because $t$ is fixed at a specified time. Anyway, differentiating this I find the tangent at each point (remember it's not smooth, it just looks smooth, and there are discrete points which need normals).
3. Lines perpendicular to tangents
Now that we have the tangents, we can convert them into perpendiculars, $-1/\text{tangent}$, at each point of the wave.
4. Creating normals
Until now, the lines were specified by slope ($k$), such that $y = kx$, and it didn't matter where they were lying. But before we position them, we convert them to normals. Remember that normals have unit length? We can write:
$1=x^2 + y^2$
where $x$ is just some x-axis value and $y$ some y-axis value.
We also know that $y = kx$, and we solved $k$ in step 3.
so using these 2:
$x = \sqrt{1-k^2x^2}$
This can then be solved, but since I don't know $k$ yet, I can't simplify it. Calculating $y$ is then trivial.
Armed with the unit vector, we can position it at the point's position (if we were solving for px = 2, py = 1, we would then update the normal like: x = x + px, y = y + py).
And that's really everything I can tell about it without going deep into computer graphics. It's just finding the slope of the tangent, finding the slope of the perpendicular, creating the normal and positioning it.
Thanks for clarifying. Knowing the context of light shining on the wave is precisely what I needed in order to be able to tell you the information you're looking for. Context is everything!
Ok, you're definitely after the derivative with respect to the spatial variable $x$. When you compute that partial derivative, it'll still have a $t$ in it, which indicates that it's going to
change in time. That's fine. With your wave function
$\eta(x,t)=a\sin(kx-\omega t),$
the partial derivative with respect to $x$ gives you
$\eta_{x}(x,t)=k a \cos(kx-\omega t).$
Use this to compute the spatial slope of the wave function at any time and value of $x$. Then do the procedure you've outlined. Find the slope of the normal, and then use the point on the wave
function to determine the intercepts, essentially, of the equation of the normal line.
Make sense?
Yes. I'll have to convert it for my modified formula, but that's just some constants to rewrite. It's quite late here; I'll post the result of testing it later if there's time, as tomorrow won't be an ordinary day.
Well, it doesn't look right. Maybe I made some error in my code, but something just came to mind that I need to check:
is that partial derivative already the slope of the tangent, or what is it?
The following from Wikipedia shows this:
so when I write:
$y = kx$
and the $k$ is $f'$, I get:
$y_1 = ka\cos(kx - \omega t)\, x_1$
where $x_1, y_1$ are the coordinates of a point lying on the tangent.
Is that correct?
You are assuming that the equation of the tangent line goes through the origin: a very sketchy assumption, I would think. I would add an arbitrary intercept to the equation in order to allow for
the possibility that the tangent line will not go through the origin. So you'd have this:
$y_{1} = ak\cos(kx - \omega t) x_{1}+b.$
Here, $x$ is the $x$-coordinate of the point of tangency.
You'll need to solve for $b,$ which you can do by setting this equation of the tangent line equal to what you get in the original equation. That is, do this:
$\eta(x,t)=a\sin(kx-\omega t)=ak\cos(kx - \omega t) x+b,$ and solve for $b.$ It'll almost certainly depend on $t,$ but you'd expect that.
Make sense?
Yes, I think... so if we substitute (I have to learn how to write those signs in the math tags... do you have a link?)
$\alpha = kx - \omega t$
we can write:
$y_1 = x_1 k\cos(\alpha)(a+1)+\sin(\alpha)$
That should do the trick?
The best way to get good at LaTeX is to double-click code that produces output you like. On this forum, that will produce a popup box that shows you the code that produced the output you like.
I don't quite agree with your result. Using your definition of $\alpha,$ I get this:
$y_{1} = ak\cos(\alpha)\,x_{1} + b,$ where $b = a\sin(\alpha) - ak\cos(\alpha)\,x,$ so that
$y_{1} = ak\cos(\alpha)\,x_{1} + a\sin(\alpha) - ak\cos(\alpha)\,x.$
Do you see how I got this?
Yeah... made an error in the first stage when dividing by $a$... looks like today's not my best day...
I now get the same result as you, but there's one thing I noticed in your post. I thought you just forgot a subscript, but then you wrote that equation again without it:
in the third form of the equation, shouldn't there be an $x_1$ after the second $\cos(\alpha)$? Because earlier
you wrote it with the subscript... but the result of that would then be just
$y_1 = a\sin(\alpha)$ since $x_1-x_1 = 0$
I was thinking that the subscripted variables represented coordinates that are along the tangent line. This is to distinguish from the coordinates of the point of tangency, for which I used
non-subscripted variables. Does that clear that up?
A thought occurred to me: finding the intercept of the tangent line is superfluous, because you're really after the equation of the normal line. For that, all you need is the slope of the normal
line (-1/slope of tangent line, as you've said before), and the coordinates of the point of tangency. You go through the usual procedure to find the equation of the line. What do you get?
Well, that's the first idea I had, in fact. But the question now is how to calculate the slope of the tangent. That question is related to the 6th post, where I ask what I actually get by differentiating my function - because it's in fact just a real number once you substitute numbers for the constants, and nothing more. So if that result is the slope, I get:
$S$ - slope of the perpendicular (the normal with infinite length)
$S = \frac{-1}{f'}$
$S = \frac{-1}{k*a*cos(\alpha)}$
Now, something I still need to check, but it should be OK to suggest:
- since we are moving the normal into the right position anyway and we really need just its direction, we can assume it crosses the origin, and that simplifies things:
$y_1 = S * x_1$
the next equation we can use is that the normal has unit length:
$1 = y_1^2 + x_1^2$
$1 = x_1^2*(S^2+1)$
$x_1 = \sqrt{\frac{1}{S^2+1}}$
$y_1 = S * \sqrt{\frac{1}{S^2+1}}$
substituting for $S$:
$x_1 = \sqrt{\frac{1}{(\frac{-1}{k*a*cos(\alpha)})^2+1}}$
$y_1 = (\frac{-1}{k*a*cos(\alpha)}) * \sqrt{\frac{1}{(\frac{-1}{k*a*cos(\alpha)})^2+1}}$
In this post $x_1$ and $y_1$ are the final values for the x and y axes of the normal, since in computer 2D space a vector is defined by 2 values... So we get the final vector and we are ready to place it at the point we were calculating it for... In fact we don't really do anything more than calculate $x_1$ and $y_1$, because you just specify which point the normal belongs to and the computer does the rest.
I agree with your slope of the normal line of $-\dfrac{1}{ak\cos(\alpha)}.$
However, the assumption that the normal line goes through the origin is, I think, a poor one. I would allow for a nonzero intercept, and solve for it by plugging in the coordinates of the point
of tangency/normalcy thus:
$a\sin(\alpha)=-\dfrac{1}{ak\cos(\alpha)}\,x+b,$ which implies
$b=a\sin(\alpha)+\dfrac{1}{ak\cos(\alpha)}\,x.$ Therefore, the equation of the normal line is
$y_{n}=-\dfrac{1}{ak\cos(\alpha)}\,x_{n}+a\sin(\alpha)+\dfrac{1}{ak\cos(\alpha)}\,x.$
You can simplify a little bit: $y_{n}=-\dfrac{1}{ak\cos(\alpha)}\,(x_{n}-x)+a\sin(\alpha).$
Here, I've used $(x_{n},y_{n})$ as the coordinates of points along the normal line.
Make sense?
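To tie the thread together, here is a compact illustrative sketch in C++ of the final procedure (the struct and function names are made up for this example; for the original $F(x,t) = y\sin(G(x,t))$ with $G = 2\pi f(x/v - t)$, take $a = y$, $k = 2\pi f/v$ and $\omega = 2\pi f$):

#include <cmath>

struct Vec2 { double x, y; };

// Unit normal to eta(x,t) = a*sin(k*x - omega*t) at position x and time t.
// The tangent slope is the partial derivative a*k*cos(k*x - omega*t); the
// direction (-slope, 1) is perpendicular to the tangent direction (1, slope),
// so normalizing it gives the same unit normal as the x1 = sqrt(1/(S^2+1)),
// y1 = S*x1 construction above (up to sign), without ever dividing by the slope.
Vec2 waveNormal(double x, double t, double a, double k, double omega)
{
    double slope = a * k * std::cos(k * x - omega * t);
    double len = std::sqrt(slope * slope + 1.0);
    return Vec2{ -slope / len, 1.0 / len };
}

Using (-slope, 1) instead of S = -1/slope sidesteps the division by zero at the crests and troughs, where the tangent is horizontal and the normal is exactly (0, 1).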
all you math guys. I NEED help. asymptotes [Archive] - Skim Online Message Boards
You wouldn't know your asymptote from a hole in the ground.
well there's really no way to duplicate the way Vrablic says "asymptote" in text, but zach knows what I'm talking about
but why is it (-10,2) and not just -10? where did that 2 come from?
lim means limit and it is the limit as x approaches -10. That is all I remember from calc.
well, what is the "answer" I'm looking for. it doesn't give me what it wants... it just keeps rambling on about q(0) and shit that I don't understand AT ALL.
You should be alright if you listen in class and do the homework. It would have helped if you took trig in high school or college. When you get to cos and sin derivatives and shit it is a little
well the good thing is that my professor said that we aren't doing anything in trig. So sins, cos, or tans. Since its business calculus, its like geared towards business. ( go fucking figure. haha)
I took business calc it was called Quantitative Methods. It was easy as fuck. Just listen in class.
yah jesse, did you take it? because I know they offer the same course at UF.
did that make sense or help at all? its actually a pretty easy thing to explain, and its an easy concept to grasp i think, but everything gets tougher over the interwebz.
not right away, but I am going to go back and pick it apart, because I recognize all the terms, its just the first time where everything has been related in one section for me. So I am going to
attempt to piece it together tomorrow. I've been doing homework all day and can't think anymore.
all i talked about were vertical asymptotes. there's also horizontal and oblique (diagonal) asymptotes. same concept basically, the function always approaches an "invisible line", but never actually
touches it. There are just different ways to find them. haha have fun with it.
it's funny b/c he's never on here, except magically right when somebody's asking a math question. It's like Spidey sense.
Actually, it really was. I hadn't been on SOMB all day long and I decided to check it and randomly see a 5 minute old math post.
Sampling from a torus
February 19, 2014
By Joseph Rickert
One of the key ideas in topological data analysis is to consider a data set to be a sample from a manifold in some high dimensional topological space and then to use the tools of algebraic topology
to reconstruct the manifold. It turns out that the converse problem of taking a random sample from a given topological manifold also has some very useful applications in statistics. In their 2012
paper, Sampling From A Manifold, the mathematical statisticians Persi Diaconis, Susan Holmes and Mehrdad Shahshahani develop a general approach to this converse problem using fundamental ideas from
geometric measure theory. They show how this topological / geometrical approach can be used not only to construct algorithms for generating test data for topological statistics, but also for testing
the goodness of fit of sufficient statistics for exponential family distributions and for other statistical problems that may be conceptualized as sampling from a manifold.
The first example in the paper shows how to correctly sample from a torus:
M = { ((R + r cos θ) cos ψ, (R + r cos θ) sin ψ, r sin θ) }
where θ and ψ are both in [0, 2π) and R > r > 0
The authors point out that naively drawing θ and ψ from uniformly distributed random variables will lead to sampled points that are denser than they should be in regions of higher curvature such as the inside of the torus. The correct way to sample is to use the theory they develop to derive the joint density, g(θ, ψ), of θ and ψ on the manifold. The theorem that enables this is an extension of the change of variables formula from calculus. Readers who have used Jacobians to work through tricky problems involving the sums and products of random variables in a probability course will recognize what is going on.
As it turns out, g(θ, ψ) factors into:
g1(θ) = (1/2π)(1 + (r/R)cos θ)
where θ ∈ [0, 2π), and g2(ψ) is uniform on [0, 2π).
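Where that density comes from (a sketch of the standard surface-area computation, not the paper's own wording): the two tangent vectors of the parametrization are orthogonal, with lengths r and R + r cos θ, so the area element on the torus is

dA = r (R + r cos θ) dθ dψ

Dividing by the total surface area 4π²rR and integrating out ψ leaves exactly the g1(θ) above.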
To actually sample from g1(θ) the authors provide the R code for a rejection sampling algorithm.
#Rejection sampler for g1(theta) - reconstructed, the original listing was truncated
sampleG1 <- function(n, r = 0.5, R = 1) {
  xvec <- runif(n, 0, 2*pi); yvec <- runif(n, 0, (1 + r/R)/(2*pi))
  fx <- (1 + (r/R)*cos(xvec))/(2*pi)
  return(xvec[yvec<fx]) }
The following plot shows the histogram and density curve for 10,000 draws from g1(θ).
The math in Sampling from a Manifold is intense. However, the authors help where they can. Much of the paper is expository, providing an introduction to geometric measure theory, and amid the
literature review the authors are kind enough to point out more elementary references that they think are useful. This is hard work, but it is nice to see that R code has a place among all the
abstract ideas.
Positive charge Q is distributed uniformly along the positive y-axis between y = 0 and y = a. A negative point charge -q lies on the positive x-axis, a distance x from the origin (Figure 1).
A) Calculate the x-component of the electric field produced by the charge distribution Q at points on the positive x-axis.
(Express your answer in terms of the variables Q, x, y, a and appropriate constants.)
B) Calculate the y-component of the electric field produced by the charge distribution Q at points on the positive x-axis.
(Express your answer in terms of the variables Q, x, y, a and appropriate constants.)
C) Calculate the x-component of the force that the charge distribution Q exerts on q.
(Express your answer in terms of the variables Q, x, y, a, q and appropriate constants.)
D) Calculate the y-component of the force that the charge distribution Q exerts on q.
(Express your answer in terms of the variables Q, x, y, a, q and appropriate constants.)
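One way to set up part A (a sketch, not a verified solution): with linear charge density λ = Q/a, the element of charge at (0, y) contributes a field whose x-component integrates to

E_x = (1/(4πε₀)) ∫₀ᵃ λ x dy / (x² + y²)^(3/2) = Q / (4πε₀ x √(x² + a²))

The y-component uses the same integral with y in place of x in the numerator, and parts C and D follow by multiplying the field components by -q.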
Finding arc length along the curve
Calculate the arc length along the curve x = y^4/4 + 1/8y^2 from y = 1 to y = 2
You have x as a function of y instead of the other way, which might get a little confusing. The formula for arc length is: $s=\int\sqrt{1+f'(y)^2}\ dy$ You just have to get the derivative, plug in,
and do the integration. The limits are just y=1 to 2. It should be pretty straightforward. Post again in this thread if you hit a roadblock. - Hollywood | {"url":"http://mathhelpforum.com/calculus/142979-finding-arc-length-along-curve-print.html","timestamp":"2014-04-17T06:49:44Z","content_type":null,"content_length":"3927","record_id":"<urn:uuid:84e5adb4-8fd1-4aef-89ed-76d4f6d7849d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
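For a worked check, reading $1/8y^2$ as $\frac{1}{8y^2}$ (the usual form of this textbook problem): $x'(y) = y^3 - \frac{1}{4y^3}$, so $1 + x'(y)^2 = y^6 + \frac{1}{2} + \frac{1}{16y^6} = \left(y^3 + \frac{1}{4y^3}\right)^2$, a perfect square. Then $s = \int_1^2 \left(y^3 + \frac{1}{4y^3}\right) dy = \left[\frac{y^4}{4} - \frac{1}{8y^2}\right]_1^2 = \frac{123}{32}$.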
Factorization of the polynomial
Since $\sqrt{{{\left( \sqrt{2p+1}+\sqrt{2p-1} \right)}^{2}}}=\sqrt{2p+1}+\sqrt{2p-1},$ which is the denominator, you can factor the numerator as a sum of perfect cubes.
Thank you, but I already tried that to no avail. Thank you, that worked like a charm! I already came up with the idea, but was not sure what to do with $4p$ in the denominator. But since you suggested it, I took a closer look and sure enough - it worked! Here's how it goes:
1) First we need to do something with $4p$, which confused me when I tried to solve it by myself. We need to get it in the form of $(2p+1)$ and $(2p-1)$ somehow. Well, $4p$ is equal to $2p + 2p$, and we can add and subtract 1. Therefore: $4p = 2p - 1 + 2p + 1$. Here we go: $\frac{\sqrt{(2p+1)^3}+\sqrt{(2p-1)^3}}{\sqrt{2p - 1 + 2p + 1 +2\sqrt{4p^2-1}}}$
Now, let $x = \sqrt{2p+1}$ and $y = \sqrt{2p-1}$, and express everything in terms of x and y: $\frac{x^3+y^3}{\sqrt{x^2 + y^2 + 2xy}}$
By definition $x^2 + y^2 + 2xy = (x+y)^2$, and $x^3+y^3=(x+y)(x^2-xy+y^2)$, therefore: $\frac{(x+y)(x^2-xy+y^2)}{(x+y)}$
Simplify: $x^2-xy+y^2$
And the answer is: $4p - \sqrt{4p^2-1}$
Thank you guys again, been trying to solve this tough cookie for a couple of days now. P.S. if it is not a polynomial, then what is it?
Opalg's hint is just to make things easier to work with, haha - you could have solved the problem directly, with no substitutions.
Davenport, CA Algebra 1 Tutor
Find a Davenport, CA Algebra 1 Tutor
...F has consistently found that those students who grasp “the why” can easily learn the mechanics. He is constantly reminding his students that chemistry is NOT converting a bunch of numbers
into other numbers but, rather, understanding WHY molecules do what they do when molecules interact: the f...
14 Subjects: including algebra 1, chemistry, English, geometry
...In middle school and high school, I took advanced mathematics courses at our local college (what a treat!). I took my first semester of calculus when I was in the 9th grade. I went on to
graduate with highest honors with a B.S. in Mathematics from the University of North Carolina at Chapel Hill. Because of my love of mathematics, I've always enjoyed teaching mathematics to
8 Subjects: including algebra 1, calculus, geometry, precalculus
...I plan on attending medical school, so I have taken the anatomy and physiology courses offered at Cabrillo for the allied health programs. I've been a teaching assistant in the anatomy lab as
well, where I discovered that I love teaching and helping students succeed in their academic goals. I h...
31 Subjects: including algebra 1, chemistry, reading, statistics
...I'm an Australian high school mathematics and science teacher, with seven years experience, who has recently moved to the bay area because my husband found employment here. I'm an enthusiastic
teacher, who loves helping students to succeed to the best of their ability, and loves all facets of mathematics and physics. I will teach all levels from middle school mathematics up to
11 Subjects: including algebra 1, chemistry, physics, calculus
I graduated from UCLA with a math degree and Pepperdine with an MBA degree. I have taught business psychology in a European university. I tutor middle school and high school math students.
11 Subjects: including algebra 1, calculus, geometry, statistics
Fermat’s Last Theorem
Written by Kristian on 27 April 2011
Topics: Math
Fermat's last theorem is one of the best known mathematical puzzles ever posed. It is very easy to understand, yet it eluded a proof for 350 years. Fermat stated in the margin of Arithmetica that he had the most marvellous proof of the conjecture, but it was too long to fit in the margin. It has always been known as Fermat's last theorem even though it was only a conjecture for those 350 years.
Pierre de Fermat stated that
it is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second, into two like powers. I have discovered a truly
marvellous proof of this, which this margin is too narrow to contain.
In other words, $a^n + b^n = c^n$ does not have solutions for n > 2.
For n = 2 there exist infinitely many solutions and we have been dealing with them in problem 9 of Project Euler.
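(For n = 2 these solutions are the Pythagorean triples; up to scaling and swapping a and b, they are all given by the classical parametrization a = m^2 - n^2, b = 2mn, c = m^2 + n^2 for integers m > n > 0.)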
The problem was finally solved in 1995 by Andrew Wiles after he dedicated 8 years struggling to prove the theorem. In order to prove the theorem he had to prove several other conjectures and not
least use results and methods in many branches of mathematics developed within the last 100 years. So there is no way that Fermat could have proven it the same way as Andrew Wiles did. Wikipedia has
a section on Fermat’s Last Theorem where they briefly go through the history and the content of the proof.
My story on the theorem
I was first introduced to Fermat's last theorem when I was in high school, a mere 4 years after the theorem was proven. Our math teacher (to whom I owe a lot of thanks for sparking my curiosity) wanted us to watch the movie on the subject made by Simon Singh and John Lynch. Many of my fellow students giggled at Andrew Wiles and thought he was a complete nut job, but I saw something different. I saw a man with a burning passion for solving this problem, and by the end of the movie I was so touched that I was almost crying. To me it was a real story and a treasure hunt for the truth.
This movie sparked something in me, and inspired me in many ways. I don’t claim to be good at mathematics and I am not rigorous enough to prove many things. But my passion and curiosity for math was
ignited and will burn forever after this movie. It has been aired on television in many countries and until Google video was closed it was available through that service. Today it might be available
through other means on the internet, but I haven't found a source for it. If you know of a legal source for the movie I would be very interested in hearing from you.
Simon Singh is also the author of a book on the subject called Fermat's Last Theorem. Simon Singh is a great storyteller and manages to take the reader through the story in a way that most people can follow. The book takes you all the way from Fermat's life and achievements through the long history of Fermat's theorem, which many people spent many hours trying to prove without success until Wiles finally did his brilliant work and proved it. So if you love a good story I highly recommend you read the book and watch the movie.
On a side note Simon Singh has written other great books on different subjects such as The Code Book.
But did he prove it?
The big question is: did he prove it? Not Wiles of course but Fermat - did he prove it? Most sources believe not. Wolfram has the very good argument that he later on looked for proofs of n=4 and n=5,
which would have been meaningless if he had already proven it.
Due to my personal pride I hope, and believe, that he in fact did not prove it, because otherwise he would have had an insight that eluded the rest of humanity for 350 years even though mathematics has evolved incredibly since then.
19 Comments
1. I think Fermat did come up with the proof. It was probably something like this: Fermat’s equation is simply the Pythagorean identity times some factor c to the n. Sine squared theta and cosine
squared theta are irrational via Niven’s theorem. So a and b to the n are irrational so they can’t be integers.
2. Beal Fermat and Pythagora’s Triplets http://www.coolissues.com/mathematics/BealFermatPythagorasTriplets.htm
3. Why does Wiles' proof need 100 pages? I cannot even understand the first page.
Here is a proof I could understand:
If A^n and B^n are general symmetrical twin-primes, then globally:
2C^n = A^n + B^n
But both sides must be integral. Therefore
C = ( 1/2*(A^n+B^n))^(1/n)
But the right-side is impossible since the nth root of 2 is irrational. Therefore FLT is proved globally. Q.E.D
4. Corrections:
Why does Wiles' proof need 100 pages? I cannot even understand the first page.
Here is a proof I could understand:
If A^n and B^n are general symmetrical twin-composites, then globally:
2C^n = A^n + B^n
But both sides must be integral. Therefore
C = ( 1/2*(A^n+B^n))^(1/n)
But the right-side is impossible since the nth root of 2 is irrational. Therefore FLT is proved globally. Q.E.D
5. Without knowing what you mean by general symmetrical twin-composites, I really doubt that the proof holds. In fact, as far as I can see, your assumption that you can pull out 1/2 shows an assumption you cannot make. And therefore your proof does not hold.
And if you really are correct, I think you should publish that in a peer-reviewed journal rather than here, because it would be a real feat to find such a simple proof for something that has puzzled mathematicians for 300 years.
6. Here is my short proof of Fermat's Last Theorem; please criticize if found wrong.
Aditi Journal of Computational Mathematics
Volume 1 (2013), Issue 2
A Short Proof of Fermat's Last Theorem
Huen Yeong Kong
Pages 1-4
The paper starts with a linear equation c = (a+b) where c, a and b are positive integers. Global integral equality is assured in this equation. Raising both sides of this equation to the nth power we get c^n = (a+b)^n, which still retains global integral equality. If the right-side is expanded by the Binomial Theorem, we get c^n = a^n + unexpanded intermediate binomial terms + b^n. If the intermediate binomial terms could be reduced to zero, we get Fermat's Last Theorem. But this is an impossibility since the right-side is uniformly additive with a and b as positive integers and n>2. This proves Fermat's Last Theorem using only 17th century mathematics.
7. You are right that if we get that all the intermediate terms can be removed we have Fermat’s last theorem. However, you have not shown that there isn’t a way to group all these terms into the
form a^n + b^n. And therefore your proof does not hold in general.
8. I think you have a point. I will think about it. Meanwhile if
anyone could improve on my paper, that would be welcome. Thanks
for the advice.
Huen Yeong Kong, Singapore
9. Here is my attempt at an alternative proof of FLT:
For integral equality of FLT: z^n = x^n + y^n, let x = y.
Then z must be equal to 2^(1/n)*x for integral equality.
However 2^(1/n) is irrational, therefore 2^(1/n)*x is also irrational. Therefore there is no integral equality for FLT
based on FLT assertions.
Huen Yeong Kong
10. Hi Huen
Yep that would work for the special case where x = y, now you just need to show it for the infinity of cases where x != y.
I am sorry to sound rude, but I think you should stop these attempts. You won't find a proof for FLT like this. You should rather use your energy and skills on solving some other problems. I can't
guide you in the right direction here, you would need a mathematician for that.
11. Yes, you are right. Should not spend too much energy on this topic now. But I did learn a lesson on Fermat’s strategy in setting up his conjecture. Start with a globally well-behaved equation,
expand it,
and drop part of it. Then present the deformed formula to the world. It took 370+ year before Wiles solved his problem. Have a good day.
Huen Yeong Kong
12. Yes, that is a very good point. Start from something you know for certain and then work towards what you want to prove. That is always a good approach.
Not the only approach, but a good one.
13. Elementary number theory is simple. School leavers and amateur number theorist like me could comprehend statements of FLT and Goldbach’s Conjecture almost instantly. With modern number theory,
things are not so simple. This group has grown inward cutting themselves off from school leavers and amateur number theorists. I am 82 years old. If I am younger I would like to be an activist to
revive interest in elementary number theory. Even Wolfram gives only a scant one line remark on elementary number theory. There are still plenty of interesting ideas coming out from elementary
number theory.
Huen Yeong Kong
14. I believe Fermat had a proof when he he made his famous statement that he had a marvelous proof. But he might found a mistake in it later.I believe his proof is based on a parametric solution of
the equation of the theorem x^n+y^n=z^n.Still I could not derive what he had in his mind but the one ‘A simple and short analytical proof of Fermat’s last theorem’ which one can read on the
internet(Published in CMNSEM)is close to it,to my mind.Within a short period of time I hope to derive a much shorter one which I believe the one that Fermat had in his mind.
15. "In other words
a^n + b^n = c^n
does not have solutions for n > 2.
For n = 2 there exist finitely many solutions"
I think you mean *infinitely* here.
16. Yes I do, thanks for noting.
17. Thanks, I wish to share the following regarding FLT. Consider any two positive integers; I have taken 9 & 10, only a random choice.
9^2+10^2 = 181 -> (181)^(1/2) is an irrational number between 13 & 14,
9^3+10^3 = 1729 -> (1729)^(1/3) is an irrational number between 12 & 13,
9^4+10^4 = 16561 -> (16561)^(1/4) is an irrational number between 11 & 12,
9^5+10^5 = 159049 -> (159049)^(1/5) is an irrational number between 10 & 11.
Now what about the nth root of 9^n+10^n for n>5? The sum 9^n+10^n converges to 10^n and the nth root lies between 10 & 11; all are irrational numbers.
FLT for 9 & 10 verified.
18. 1. There is another explanation of a simple proof of Fermat’s last theorem as follows:
X^p + Y^p ?= Z^p (X,Y,Z are integers, p: any prime >2) (1)
2. Let's divide (1) by (Z-X)^p, we shall get:
(X/(Z-X))^p +(Y/(Z-X))^p ?= (Z/(Z-X))^p (2)
3. That means we shall have:
X’^p + Y’^p ?= Z’^p and Z’ = X’+1 , with X’ =(X/(Z-X)), Y’ =(Y/(Z-X)), Z’ =(Z/(Z-X)) (3)
4. From (3), we shall have these equivalent forms (4) and (5):
Y’^p ?= pX’^(p-1) + …+pX’ +1 (4)
Y’^p ?= p(-Z’)^(p-1) + …+p(-Z’) +1 (5)
5. Similarly, let’s divide (1) by (Z-Y)^p, we shall get:
(X/(Z-Y))^p +(Y/(Z-Y))^p ?= (Z/(Z-Y))^p (6)
That means we shall have these equivalent forms (7), (8) and (9):
X”^p + Y”^p ?= Z”^p and Z” = Y”+1 , with X” =(X/(Z-Y)), Y” =(Y/(Z-Y)), Z” =(Z/(Z-Y)) (7)
From (7), we shall have:
X”^p ?= pY”^(p-1) + …+pY” +1 (8)
X”^p ?= p(-Z”)^(p-1) + …+p(-Z”) +1 (9)
Since p is a prime that is greater than 2, p is an odd number. Then, in (4), for any X’ we should have only one Y’ (that corresponds with X’) as a solution of (1), (3), (4), (5), if X’ could
generate any solution of Fermat’s last theorem in (4).
By the equivalence between X’^p + Y’^p ?= Z’^p (3) and X”^p + Y”^p ?= Z”^p (7), we can deduce a result, that for any X” in (8), we should have only one Y” (that corresponds with X’’ ) as a
solution of (1),(7),(8),(9), if X” could generate any solution of Fermat’s last theorem.
X” cannot generate any solution of Fermat’s last theorem, because we have illogical mathematical deductions, for examples, as follows:
i)In (8), (9), if an X”1 could generate any solution of Fermat’s last theorem, there had to be at least two values Y”1 and Y”2 or at most (p-1) values Y”1, Y”2,…, Y”(p-1),
that were solutions generated by X”, of Fermat’s last theorem. (Please note the even number (p-1) of pY”^(p-1) in (8)). But we already have a condition stated above, that for any X” we should
have only one Y” (that corresponds with X”) as a solution of (1),(7),(8),(9), if X” could generate any solution of Fermat’s last theorem.
Fermat’s last theorem is simply proved!
ii)With X”^p + Y”^p ?= Z”^p, if an X”1 could generate any solution of Fermat’s last theorem, there had to be correspondingly one Y” and one Z” that were solutions generated by X”, of Fermat’s
last theorem. But let’s look at (8) and (9), we must have Y” = -Z”. This is impossible by further logical reasoning such as, for example:
We should have : X”^p + Y”^p ?= Z”^p , then X”^p ?= 2Z”^p or (X”/Z”)^p ?= 2. The equal sign, in (X”/Z”)^p ?= 2, is impossible.
Fermat’s last theorem is simply again proved, with the connection to the concept of (X”/Z”)^p ?= 2. Is it interesting?
19. I must admit that I don’t follow you all the way through, but just the fact that you assume that p is a prime means that you have not proven Fermat’s last theorem. You might have proven something
interesting, but FLT it is not.
Statistical significance
All results obtained by statistical methods suffer from the disadvantage that they might have been caused by pure statistical accident. The level of statistical significance is determined by the
probability that this has not, in fact, happened. P is an estimate of the probability that the result has occurred by statistical accident. Therefore a large value of P represents a small level of
statistical significance and vice versa.
In experiments where we are obliged to resort to statistics it is therefore proper procedure to define a level of significance at which a correlation will be deemed to have been proven, though the
choice is often actually made after the event. It is important to realise that, however small the value of P, there is always a finite chance that the result is a pure accident. A typical level at
which the threshold of P is set would be 0.01, which means there is a one percent chance that the result was accidental. The significance of such a result would then be indicated by the statement P<0.01.
Unfortunately it has become customary in some branches of science, particularly epidemiology, to operate with much lower levels of significance. A level frequently quoted is P<0.05. This means that
there is a one in twenty chance that the whole thing was accidental (In one notorious case the threshold was even raised to 0.1 in order to obtain the “required” result). This is particularly
worrying in areas that are newsworthy or politically correct, since it is likely that more than twenty similar experiments are being conducted worldwide, so it is almost certain that there will be
one positive result, whether the correlation is genuine or not. Because negative results are almost never published (publication bias) this means that an unknown but possibly large number of false
claims are sustained as verities.
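The arithmetic behind that worry is worth spelling out. Assuming the twenty similar experiments are independent, the chance that at least one of them crosses the 0.05 threshold by accident alone is 1 - (1 - 0.05)^20 ≈ 0.64, i.e. nearly two-to-one odds that a spurious "significant" finding appears somewhere.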
It is difficult to generalise, but on the whole P<0.01 would normally be considered significant and P<0.001 highly significant.
The provenance of the P<0.05 criterion goes back to the great pioneer of significance testing, R A Fisher, who is deemed to have given it his imprimatur. He did not in fact do this and late in his
life stated that he had just used this level in his calculations as a “mathematical convenience”. Furthermore, he also stated that “without randomisation there is no significance”.
Many leading scientists and mathematicians today believe that the emphasis on significance testing is grossly overdone. P<0.05 had become an end in itself and the determinant of a successful outcome
to an experiment, much to the detriment of the fundamental objective of science, which is to understand.
An alternative way of putting it is to quote a confidence interval, e.g. (relative risk: 0.56; 95% confidence interval: 0.32–0.97), which means that there is a one in forty chance that the relative
risk is 0.97, or to all intents and purposes unity. | {"url":"http://www.numberwatch.co.uk/significance.htm","timestamp":"2014-04-20T11:03:43Z","content_type":null,"content_length":"4351","record_id":"<urn:uuid:e80fe3c2-b398-4e9b-bf27-99f65206f712>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kahler forms on Cohen Macaulay spaces
Can anyone answer the two following questions:
1. For an $n$-dimensional Cohen-Macaulay complex space $X$, is it true that the sheaf of top degree holomorphic forms $\Omega^{n}_{X}$ has no torsion?
2. For a Cohen-Macaulay morphism $f:X\rightarrow S$ of reduced complex spaces, is it true that $\Omega^{n}_{X/S}$ has no torsion on $X$?
I think that these two questions have negative answers; but I don't know how to prove it.
In fact, if it is true then the "fundamental class morphism" would be injective!
Thank you.
ag.algebraic-geometry cv.complex-variables
The answer to (1) is definitely "no" since there is always a nowhere-dense analytic set on whose complement the space $X$ is Cohen-Macaulay. One gets similarly many counterexamples to (2)
beginning with any flat analytic map $f$. – BCnrd Jul 26 '10 at 17:05
Dear Mohamed: please try to not post duplicate questions. As it stands, one of yours got -1 and the other +1 (who knows what's up with that). – BCnrd Jul 26 '10 at 17:07
@BCnrd: I don't understand the relevance of the existence of an open dense Cohen-Macaulay subset. In any case it is easy to give an example: Consider the union of the coordinate axes in the plane, $xy=0$. Then we have the only relation $x\,dy+y\,dx=0$, so that $x\,dy$ is a non-zero torsion element (killed by $x$ and $y$). – Torsten Ekedahl Jul 26 '10 at 17:46
@Torsten: I was just pointing out that it should be ubiquitous that the requested condition fails, such as any situation where the sheaf is not torsion-free over the complement of a nowhere-dense
analytic set (I was thinking of $X$ which is not generically reduced). – BCnrd Jul 26 '10 at 18:53
@BCnrd: Got you. A non-reduced counterexample is of course not cheating yet it somehow feels like it.... – Torsten Ekedahl Jul 26 '10 at 18:57
1 Answer
Thanks to Brian and Ekedahl.
Yes, in any case the answer is "NO". After asking the question, I did some easy computations with the Whitney umbrella $\lbrace{(x,y,z)\in {\Bbb C}^{3}: x^{2}-zy^{2}=0}\rbrace$, which convince me that 1) is not true.
For the second question, see Kunz-Waldi, Contemp. Math. 79, §5; the kernel of the relative fundamental class is almost never empty...
P.S: Excuse me, I don't know how to add some comment to the question.
If one takes a reduced irreducible curve (hence Cohen-Macaulay), most of the time (and easy to check), the module of 1-forms will have torsion. In fact, a well known conjecture of
Berger posits that if this module is torsion free then the curve is smooth. The conjecture is open to the best of my knowledge. – Mohan Jul 26 '10 at 19:26
Hi Mohan, nice to see you here ! – Chandan Singh Dalawat Jul 27 '10 at 3:03
kaddar, you should accept this answer by clicking the checkmark to the left, if you're satisfied by this. That way the question will be marked as answered in the system. – j.c. Sep 21
'10 at 3:19
Statistical investigation of a sample survey for obtaining farm facts. Iowa Agriculture Experiment Station Research Bulletin
Results 1 - 10 of 12
- DOC. MATH. J. DMV, 1998
"... Following the theoretical studies of J.B. Robinson and H.W. Kuhn in the late 1940s and the early 1950s, G.B. Dantzig, R. Fulkerson, and S.M. Johnson demonstrated in 1954 that large instances of
the TSP could be solved by linear programming. Their approach remains the only known tool for solving TS ..."
Cited by 164 (7 self)
Add to MetaCart
Following the theoretical studies of J.B. Robinson and H.W. Kuhn in the late 1940s and the early 1950s, G.B. Dantzig, R. Fulkerson, and S.M. Johnson demonstrated in 1954 that large instances of the
TSP could be solved by linear programming. Their approach remains the only known tool for solving TSP instances with more than several hundred cities; over the years, it has evolved further through
the work of M. Grötschel, S. Hong, M. Jünger, P. Miliotis, D. Naddef, M. Padberg
"... Introduction As a coherent mathematical discipline, combinatorial optimization is relatively young. When studying the history of the field, one observes a number of independent lines of
research, separately considering problems like optimum assignment, shortest spanning tree, transportation, and the ..."
Cited by 9 (0 self)
Add to MetaCart
Introduction As a coherent mathematical discipline, combinatorial optimization is relatively young. When studying the history of the field, one observes a number of independent lines of research,
separately considering problems like optimum assignment, shortest spanning tree, transportation, and the traveling salesman problem. Only in the 1950's, when the unifying tool of linear and integer
programming became available and the area of operations research got intensive attention, these problems were put into one framework, and relations between them were laid. Indeed, linear programming
forms the hinge in the history of combinatorial optimization. Its initial conception by Kantorovich and Koopmans was motivated by combinatorial applications, in particular in transportation and
transshipment. After the formulation of linear programming as generic problem, and the development in 1947 by Dantzig of the simplex method as a tool, one has tried to attack about all combinatorial
, 2003
"... Multi-phase surveys are often conducted in forest inventory, with the goal of estimating forested area and tree characteristics over large regions. This article describes how design-based
estimation of such quantities, based on information gathered during ground visits of sampled plots, can be ma ..."
Cited by 5 (0 self)
Add to MetaCart
Multi-phase surveys are often conducted in forest inventory, with the goal of estimating forested area and tree characteristics over large regions. This article describes how design-based estimation
of such quantities, based on information gathered during ground visits of sampled plots, can be made more precise by incorporating auxiliary information available from remote sensing. The
relationship between the ground visit measurements and the remote sensing variables is modelled using generalized additive models. Nonparametric estimators for these models are discussed and applied
to forest data collected in the mountains of northern Utah in the United States. Model-assisted estimators that utilize the nonparametric regression fits are proposed for these data. The design
, 2006
"... This course provides a solid grounding in modern survey-sampling theory and methods. The lesson plan includes the first 9 chapters of the assigned text (probability sampling, stratification,
allocation, multi-stage sampling, ratio and regression estimation, domain estimation, variance estimation in ..."
Add to MetaCart
This course provides a solid grounding in modern survey-sampling theory and methods. The lesson plan includes the first 9 chapters of the assigned text (probability sampling, stratification,
allocation, multi-stage sampling, ratio and regression estimation, domain estimation, variance estimation in complex surveys, and methods for handling nonresponse) and Chapter 12 (two-phase sampling,
small-domain estimation, multiple frames, capture/recapture). This is augmented by the instructor’s comments on the text plus supplemental notes on topics like Wilson confidence intervals for small
proportions, unequal probability sampling, and the large-sample properties of common estimation strategies. A final lesson covers the multiple-regression estimator.
, 1966
"... A multi-stage sampling design, particularly intended for large scale sample surveys on successive (or repeated) occasions is developed. The sampling design is general in the sense that the
probabilities of selecting units (for the preliminary first-stage sa~le) are arbitrary. Each of these first-sta ..."
Add to MetaCart
A multi-stage sampling design, particularly intended for large scale sample surveys on successive (or repeated) occasions is developed. The sampling design is general in the sense that the probabilities of selecting units (for the preliminary first-stage sample) are arbitrary. Each of these first-stage units is drawn with replacement. The technique of partial replacement of first-stage sampling units is based on the order of occurrence of these units. The partial replacement technique is developed to meet two basic objectives: (i) To spread the burden of reporting among respondents, which may be expected to help in maintaining a high rate of response. (ii) To enable the sampler to take advantage of the sampling design in the reduction of sampling variance of several estimators proposed. Several ways of utilizing the past as well as the present information from the sampling design to estimate the total, and the change in total of a population characteristic of interest, are presented. The nature of the gain in efficiency from using the four different forms of estimators in estimating the total, and the change in total, is explored. The comparisons of efficiency among the estimators, wherever possible, are given under certain assumptions similar to the assumption of Second Order or Weak Sense Stationarity used in conventional time series analysis. The estimation theory is covered in detail for two-stage sampling on two successive occasions. The extension to higher stage sampling on more than two successive occasions is sufficiently indicated. In all, the reduction in the variance of an estimator, whenever achieved, is in the total variance, namely the between first-stage units variance plus the within first-stage units variance, and so on if there are more than two stages of sampling.
(2008)
"... One can think of a rotation design as a compromise between a complete sample overlap and taking independent samples. Each extreme has advantages and disadvantages. By using a rotation design,
one hopes to realize some of the variance reduction of the complete sample overlap, while reducing its exces ..."
Add to MetaCart
One can think of a rotation design as a compromise between a complete sample overlap and taking independent samples. Each extreme has advantages and disadvantages. By using a rotation design, one
hopes to realize some of the variance reduction of the complete sample overlap, while reducing its excess burden. In this paper, we start by motivating the use of a rotation design and composite
estimation to improve the estimator of current level of a parameter, θt, then look at compositing to improve the estimator of change, θt − θt−1. Some consideration is then given to doing both:
estimating level and change simultaneously. Finally, we briefly discuss other practical issues that influence the choice of designs and estimators, including generalizing the estimators, panel
conditioning, cost, the mode of data collection, and respondent burden.
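For reference, a simple composite estimator of current level has the standard recursive form (general form only, with 0 ≤ K < 1; the paper's estimators may differ):

$$\hat{\theta}_t^{\,c} = (1-K)\,\hat{\theta}_t + K\left(\hat{\theta}_{t-1}^{\,c} + \hat{\Delta}_t\right),$$

where $\hat{\Delta}_t$ estimates the change $\theta_t - \theta_{t-1}$ from the overlapping portion of the rotating sample.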
"... Several researchers have attempted to develop a general law to predict a general relationship between variance within cluster S and size of the cluster M for purposes like 2 w determination of
optimum cluster size etc. In the present study a non-linear model has been S and M which has shown improvem ..."
Add to MetaCart
Several researchers have attempted to develop a general law to predict a general relationship between the within-cluster variance S_w^2 and the size of the cluster M, for purposes like determination of optimum cluster size, etc. In the present study a non-linear model has been suggested for describing the relationship between S_w^2 and M, which has shown improvement over existing models, and the results have also been verified with the help of an example.
"... pour l’obtention du grade de ..."
"... In this article, we attempt the problem of estimation of the population ratio of mean in mail surveys. This problem is conducted for current occasion in the context of sampling on two occasions
when there is nonresponse (i) on both occasions, (ii) only on the first occasion and (iii) only on the sec ..."
Add to MetaCart
In this article, we attempt the problem of estimation of the population ratio of mean in mail surveys. This problem is conducted for the current occasion in the context of sampling on two occasions when there is nonresponse (i) on both occasions, (ii) only on the first occasion and (iii) only on the second occasion. We obtain the gain in efficiency of all the estimators over the direct estimate using no information gathered on the first occasion. We derive the sample sizes and the saving in cost for all the estimators which have the same precision as the direct estimate using no information gathered on the first occasion. An empirical study that allows us to investigate the performance of the proposed strategy is carried out.
Calculus I
April 17th 2007, 04:09 PM
Calculus I
There are a few Calc problems that have me stumped, can anyone help me out!?
1. Consider the region R bounded by the curve y = sin x and the x-axis on the interval [0, pi/2]. What is the volume of the solid generated when R is rotated about the x-axis?
What is the volume of the solid generated when the region R above is rotated about the line y = 2?
2. Consider the region S bounded by the y-axis, the curve y = cos x and the curve y = sin x. What is the area of S?
What is the volume of the solid generated by rotating S about the x-axis?
3. Consider the region Q bounded by the x-axis, the y-axis and the curve y = sqrt(16 - x^2). What is the volume of the solid generated by rotating the region Q about the y-axis?
April 17th 2007, 04:40 PM
Here's the first part of question 1, one attachment is the diagram and one is the solution
April 17th 2007, 04:54 PM
April 17th 2007, 05:20 PM
Here's the solution to the first part of question 2, again, one attachment is a diagram the other is the solution
I assume you are talking about the region in the first quadrant; the region in the second and third quadrants would be too complicated for what I think you are doing. You should specify which you mean.
April 17th 2007, 05:40 PM
April 17th 2007, 06:10 PM
Here's question 3. i attached both the diagram and solution
Again I assumed you're talking about the region in the 1st quadrant, but in this case that doesn't matter. The graph is symmetric about the y-axis, so if I assumed it was the 2nd quad, we'd get the same answer.
EDIT: I made a mistake for the diagram. it shows that we are rotating about the x-axis when in fact we are rotating about the y-axis. you can change it
April 17th 2007, 07:16 PM
more questions....
Ha. Thanks for pointing out where it is, I know it wasn't there earlier, or perhaps my vision is worse than previously suspected...
Well actually, on the subject of understanding, if you don't mind:
For part 1a, I followed you for the setup and up until the part where I think you used the Pythagorean identity to make sin^2(x) into cos(x) + 1...
I think that it is supposed to be 1 - cos^2(x), and I am not sure how you pulled the pi/2 out of the integrals and got rid of the squared and made it 2x. :confused:
April 17th 2007, 07:25 PM
Ha. Thanks for pointing out where it is, I know it wasn't there earlier, or perhaps my vision is worse than previously suspected...
Well actually, on the subject of understanding, if you don't mind:
For part 1a, I followed you for the setup and up until the part where I think you used the Pythagorean identity to make sin^2(x) into cos(x) + 1...
I think that it is supposed to be 1 - cos^2(x), and I am not sure how you pulled the pi/2 out of the integrals and got rid of the squared and made it 2x. :confused:
Well, you were correct in that I had the wrong thing. However, I did not use the Pythagorean identity, I used the half-angle identity.
You thought I used this: sin^2(x) = 1 - cos^2(x)
In fact, I meant to use this: sin^2(x) = (1 - cos(2x))/2
That's where I got the pi/2 from. I pulled out the 1/2 in front.
Changing sin^2(x) to 1 - cos^2(x) would not help, since integrating the squared trig function was the problem in the first place; I had to get rid of the square, so I used the identity above.
however, the plus sign should be a minus sign, make the necessary corrections
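Carrying that corrected identity through, the full computation for part 1a works out as:

$$V = \pi\int_0^{\pi/2}\sin^2 x\,dx = \frac{\pi}{2}\int_0^{\pi/2}\bigl(1-\cos 2x\bigr)\,dx = \frac{\pi}{2}\left[x-\frac{\sin 2x}{2}\right]_0^{\pi/2} = \frac{\pi}{2}\cdot\frac{\pi}{2} = \frac{\pi^2}{4}$$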
April 17th 2007, 07:33 PM
This was my first attempt
April 17th 2007, 07:37 PM
This was my first attempt
First off, the square is not on the x, it is on the sin(x), so write sin^2(x) or (sin(x))^2.
Secondly, you cannot integrate squared trig functions like that; the integral of sin^2(x) IS NOT -cos^2(x). You can't just forget the square and continue as normal, you must get rid of it.
Also, if your method were right, you would end up with +pi.
April 17th 2007, 08:04 PM
ah. yeah that makes sense...
thanks once again. | {"url":"http://mathhelpforum.com/calculus/13853-calculus-i-print.html","timestamp":"2014-04-18T03:54:37Z","content_type":null,"content_length":"13957","record_id":"<urn:uuid:39315690-8ad7-46c2-b89d-592b2c7f778c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00002-ip-10-147-4-33.ec2.internal.warc.gz"} |
Don’t Store That in a Float
I promised in my last post to show an example of the importance of knowing how much precision a float has at a particular value. Here goes.
As a general rule this type of data that should never be stored in a float:
Elapsed game time should never be stored in a float. Use a double instead. I’ll explain why below.
As an extra bonus, because switching to double is not always the best solution, this post demonstrates the dangers of unstable algorithms, and how to use the guarantees of floating-point math to
improve them.
How long has this been going on?
A lot of games have some sort of GetTime() function that returns how long the game has been running. Often these return a floating-point number because it allows for convenient use of seconds as the
units, while allowing sub-second precision.
GetTime() is typically implemented with some sort of high frequency timer such as QueryPerformanceCounter. This allows time resolution of a microsecond or better. However it’s worth looking at what
happens to this resolution if the time is returned as a float, or stored in a float. We can do that using one of the TestFloatPrecision functions from the last post – just call them from the watch
window of the debugger. In the screen shot below I tested the precision available at one minute, one hour, one day, and one week:
It’s important to understand what this data means. The number ‘60’, like all integers up to 16777216, can be exactly represented in a float. The watch window shows that the next value after 60 that
can be represented by a float is about 60.0000038. Therefore, if we use a float to store “60 seconds” then the next time that we can represent is 3.8 microseconds past 60 seconds. If we try to store
a value in-between then it will be rounded up or down.
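For reference, here is a plausible sketch of the float version of that test; it mirrors the Double_t version shown later in this post (the exact code from the earlier post may differ slightly):

#include <cassert>
#include <cstdint>

union Float_t
{
    int32_t i;
    float f;
    struct
    {
        uint32_t mantissa : 23;
        uint32_t exponent : 8;
        uint32_t sign : 1;
    } parts;
};

float TestFloatPrecisionAwayFromZero(float input)
{
    union Float_t num;
    num.f = input;
    // Incrementing infinity or a NaN would be bad!
    assert(num.parts.exponent < 255);
    // Increment the integer representation of our value
    num.i += 1;
    // Subtract the initial value to find our precision
    float delta = num.f - input;
    return delta;
}

Calling TestFloatPrecisionAwayFromZero(60.0f) from the watch window returns roughly 3.8e-6, matching the precision figures described above.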
How long did it take?
One of the most common things to do with time values is to subtract them. For instance, we might have code like this:
double GetTime();
float TimeSomethingBadly()
{
    float fStart = GetTime();           // double narrowed to float - precision lost here

    // ... do the work being timed ...

    float elapsed = GetTime() - fStart;
    return elapsed;
}
The implication of the precision calculations above is that if ‘fStart’ is around 60, then ‘elapsed’ will be a multiple of 3.8 microseconds (two to the negative eighteenth seconds). That is the most
precision you can get. If less than 3.8 microseconds has elapsed then ‘elapsed’ will either be rounded down to zero, or rounded up to 3.8 microseconds.
Therefore, if our game timer starts at zero and we store time in a float then after a minute the best precision we can get from our timer is 3.8 microseconds. After our game has been running for an
hour our best precision drops to 0.24 milliseconds. After our game has been running for a day our precision drops to 7.8 milliseconds, and after a week our precision drops to 62.5 milliseconds.
This is why storing time in a float is dangerous. If you use float-time to try calculating your frame rate after running for a day then the only answers above 30 fps that are possible are infinity,
128, 64, 42.6, or 32 (since the possible frame lengths are 0, 7.8, 15.6, 23.4, or 31.2 milliseconds). And it only gets worse if you run longer.
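The arithmetic behind those numbers: one day (86,400 s) lies in [2^16, 2^17), so adjacent floats there are 2^(16-23) = 2^-7 s = 7.8125 ms apart, and any measured frame length must be a multiple of that spacing:

$$\Delta t = k \cdot 2^{-7}\,\mathrm{s},\quad k = 0,1,2,3,4 \;\Longrightarrow\; \mathrm{fps} = \frac{1}{\Delta t} \in \{\infty,\ 128,\ 64,\ 42.7,\ 32\}$$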
As another example consider this code:
double GetTime();
void ThinkBadly()
{
    float startTime = (float)GetTime();

    // Do AI stuff here

    float elapsedTime = GetTime() - startTime;
    assert(elapsedTime < 0.005); // warn if the AI takes more than 5 ms
}
The purpose of this code is to warn the developers whenever the AI code takes inordinately long. However when the game has been running for a day (actually the problem reaches this level after 65,536
seconds) GetTime() will always be returning a multiple of 0.0078 s, and ‘elapsedTime’ will always be a multiple of that duration. In most cases ‘elapsedTime’ will be equal to zero, but every now and
then, no matter how fast the AI code executes, the time will tick over to the next representation during the AI calculations and ‘elapsedTime’ will be 0.0078 s instead of zero. The assert will then
trigger even though the AI code is actually still under budget.
It’s a catastrophe for base-ten also
The general term for what is happening with these time calculations is catastrophic cancellation. In all of these examples above there are two time values that are accurate to about seven digits.
However they are so close to each other that when they are subtracted the result has, in the worst case, zero significant digits.
We can see the same thing happening with decimal numbers. A float has roughly seven decimal digits of precision so the decimal equivalent would be getting a time value of 60.00000 and having the next
possible time value be 60.00001. Given a seven-digit decimal float we can’t get more than a tenth of a microsecond precision when dealing with time around 60 seconds. When we subtract 60.00000 from
60.00001 then six of the seven digits cancel out and we end up with just one accurate digit. For times less than a tenth of a microsecond we have a complete catastrophe – all seven digits cancel out
and we get zero digits of precision, just like with a binary float.
Double down
The solution to all of this is simple. GetTime() must return a double, and its result must always be stored in a double. The cancellation still occurs, but it is no longer catastrophic. A double has
enough bits in the mantissa that even if your game runs for several millennia your double-precision timers will still have sub-microsecond precision. You can verify this by using the double-precision
variation of TestFloatPrecisionAwayFromZero():
#include <cassert>
#include <cstdint>

union Double_t
{
    int64_t i;
    double f;
    struct
    {
        uint64_t mantissa : 52;
        uint64_t exponent : 11;
        uint64_t sign : 1;
    } parts;
};

double TestDoublePrecisionAwayFromZero(double input)
{
    union Double_t num;
    num.f = input;
    // Incrementing infinity or a NaN would be bad!
    assert(num.parts.exponent < 2047);
    // Increment the integer representation of our value
    num.i += 1;
    // Subtract the initial value to find our precision
    double delta = num.f - input;
    return delta;
}
You can see in the screenshot below that if you store time in doubles then after your game has been running for a week you will have sub-nanosecond precision, and after three millennia you will still
have sub-millisecond precision.
Clearly a double is overkill for storing time, but since a float is underkill a double is the right choice.
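A quick sanity check on that claim: three millennia is about 9.5 x 10^10 seconds, which lies in [2^36, 2^37), so adjacent doubles there are

$$2^{36-52} = 2^{-16}\,\mathrm{s} \approx 15\ \mu\mathrm{s}$$

apart, comfortably sub-millisecond, as described above.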
Aside: my initial calculation of the precision remaining after three millennia was wrong because the calculation of the number of seconds was done with integer math, and it overflowed and gave a
completely worthless answer. Which proves that integer math can be just as tricky as floating-point math.
Changing your units doesn’t help
All along I am assuming that you are storing your time in seconds. However your choice of units doesn’t significantly affect the results. If you decide that your time units are milliseconds, or days,
then the precision available after your game has been running for a day will be about the same. It is the ratio between the elapsed time and the time being measured that matters.
Or use integers
Tom Forsyth points out that the same issues happen with world coordinates and that switching to integer types can give you greater worst-case precision, as well as consistent precision. The Windows
GetTickCount() and GetTickCount64() functions use this technique, using milliseconds as the units. This alternative to using a double for time is quite reasonable, especially if you encapsulate it
well. A uint32_t with milliseconds as units will overflow every 50 days or so but you can avoid that by using a uint64_t. However despite Tom’s threats to invoke his OffendOMatic rule for all who use
doubles, I still prefer doubles for game time because of the combination of convenient units (seconds) and more than sufficient precision.
While Tom and I appear to disagree over whether you should use double in situations like this, we agree that ‘float’ won’t work.
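As a sketch of what such encapsulation might look like (my own illustration, not code from Tom or from any particular engine), a thin wrapper can keep integer milliseconds internally while still handing out convenient double seconds:

#include <cstdint>

// Hypothetical game-time wrapper: integer milliseconds inside,
// double seconds at the interface.
class GameTime
{
public:
    explicit GameTime(uint64_t milliseconds = 0) : ms_(milliseconds) {}

    // ms_ * 0.001 keeps sub-millisecond accuracy for any realistic run time
    double Seconds() const { return ms_ * 0.001; }

    GameTime operator-(GameTime rhs) const { return GameTime(ms_ - rhs.ms_); }

private:
    uint64_t ms_;  // uint64_t milliseconds: no wraparound for roughly 584 million years
};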
Note that while GetTickCount() and GetTickCount64() are millisecond precision they are often actually less accurate than you would expect. Unless you have changed the Windows timer frequency with
timeBeginPeriod() the GetTickCount functions will only return a new value every 10-20 milliseconds (insert pithy comment about precision versus accuracy here).
Time deltas fit in a float
It is important to understand that the limited precision of a float is only a problem if you do an unstable calculation, such as catastrophic cancellation cancelling out most of the digits. The code
below, on the other hand, is fine:
double GetTime();
float TimeSomethingWell()
{
    double dStart = GetTime();          // Store time in a double

    // ... do the work being timed ...

    float elapsed = GetTime() - dStart; // Store *result* in a float
    return elapsed;
}
In TimeSomethingWell() we store the result of the subtraction in a float – after the catastrophic cancellation. Therefore our elapsed time value will have tons of precision.
Similarly, if you are using floats in your animation system to represent short times, such as the location of key-frames in a 60 second animation, then floats are fine. However when you add these to
the current time you need to store the result of the addition in a double.
Forrest Smith made a pretty table showing how the precision of a float changes as the magnitude increases, and I mangled it to suit my needs. Here it is for time:
┃ Float Value │ Time Value │ Float Precision │ Time Precision ┃
┃ 1 │ 1 second │ 1.19E-07 │ 119 nanoseconds ┃
┃ 10 │ 10 seconds │ 9.54E-07 │ .954 microseconds ┃
┃ 100 │ ~1.5 minutes │ 7.63E-06 │ 7.63 microseconds ┃
┃ 1,000 │ ~16 minutes │ 6.10E-05 │ 61.0 microseconds ┃
┃ 10,000 │ ~3 hours │ 0.000977 │ .976 milliseconds ┃
┃ 100,000 │ ~1 day │ 0.00781 │ 7.81 milliseconds ┃
┃ 1,000,000 │ ~11 days │ 0.0625 │ 62.5 milliseconds ┃
┃ 10,000,000 │ ~4 months │ 1 │ 1 second ┃
┃ 100,000,000 │ ~3 years │ 8 │ 8 seconds ┃
┃ 1,000,000,000 │ ~32 years │ 64 │ 64 seconds ┃
And here is the table showing how the precision of a float diminishes when you use it to measure large distances, with meters being the units in this case:
┃ Float Value │ Length Value │ Float Precision │ Length Precision │ Precision Size ┃
┃ 1 │ 1 meter │ 1.19E-07 │ 119 nanometers │ virus ┃
┃ 10 │ 10 meters │ 9.54E-07 │ .954 micrometers │ e. coli bacteria ┃
┃ 100 │ 100 meters │ 7.63E-06 │ 7.63 micrometers │ red blood cell ┃
┃ 1,000 │ 1 kilometer │ 6.10E-05 │ 61.0 micrometers │ human hair width ┃
┃ 10,000 │ 10 kilometers │ 0.000977 │ .976 millimeters │ toenail thickness ┃
┃ 100,000 │ 100 kilometers │ 0.00781 │ 7.81 millimeters │ size of an ant ┃
┃ 1,000,000 │ .16x earth radius │ 0.0625 │ 62.5 millimeters │ credit card width ┃
┃ 10,000,000 │ 1.6x earth radius │ 1 │ 1 meter │ uh… a meter ┃
┃ 100,000,000 │ .14x sun radius │ 8 │ 8 meters │ 4 Chewbaccas ┃
┃ 1,000,000,000 │ 1.4x sun radius │ 64 │ 64 meters │ half a football field ┃
Stable algorithms also matter
Some time ago I investigated some asserts in a particle animation system. Values were going out of range after less than an hour of gameplay and I traced this back to an out-of-range ‘t’ value being
passed to the Lerp function, which expected it to always be from 0.0 to 1.0. Clamping was one obvious solution but I first investigated why ’t’ was going out of range.
One problem with the code was that the three parameters were all floats, so over long periods of time it would inevitably have insufficient precision. However we were getting instability much earlier
than expected and it felt like switching to double immediately might just mask an underlying problem.
The parameters to the function, all time values in seconds, corresponded to the end of an animation segment, the length of that segment, and the current time, which was always between the start of
the segment (segmentEnd-segmentLength) and ‘segmentEnd’. Because the start time of the segment was not passed in this code calculated it, and then did a straightforward calculation to get ‘t’:
float CalcTBad(float segmentEnd, float segmentLength, float time)
{
    float segmentStart = segmentEnd - segmentLength; // rounds when segmentEnd >> segmentLength
    float t = (time - segmentStart) / segmentLength;
    return t;
}
Straightforward, but unstable. Because ‘segmentLength’ is presumed to be quite small compared to ‘segmentEnd’, there is some rounding during the first subtraction and the difference between
‘segmentStart’ and ‘segmentEnd’ will be a bit larger or smaller than ‘segmentLength’. The resulting difference will always be a multiple of the current precision, so it will degrade over time, but
even very early in the game the result will not be perfect. Because the value for ‘segmentStart’ is slightly wrong the value of “time – segmentStart” will be slightly wrong, and occasionally ‘t’ will
be outside of the 0.0 to 1.0 range.
This will happen even if you use doubles. The errors will be smaller, but ‘t’ can still go slightly outside the 0.0 to 1.0 range. As the game goes on ‘t’ will range farther outside of the correct
range, but from just a few minutes into the game the results will show signs of instability.
The natural tendency is to say “floating-point math is flaky, clamp the results and move on”, but we can do better, as shown here:
float CalcTGood(float segmentEnd, float segmentLength, float time)
{
    float howLongAgo = segmentEnd - time;             // exact, given the assumptions below
    float t = (segmentLength - howLongAgo) / segmentLength;
    return t;
}
Mathematically this calculation is identical to CalcTBad, but from a stability point of view it is greatly improved.
If we assume that ‘time’ and ‘segmentEnd’ are large compared to ‘segmentLength’, then we can reasonably assume that ‘segmentEnd’ is less than twice as large as time. And, it turns out that if two
floats are that close then their difference will fit exactly into a float. Always. So the calculation of ‘howLongAgo’ is exact. Ponder that for a moment – given a few reasonable assumptions we have
exact results for one of our floating-point math operations.
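The guarantee being relied on here is usually stated as Sterbenz's lemma:

$$\text{if } \tfrac{y}{2} \le x \le 2y, \text{ then } x - y \text{ is exactly representable, so } \mathrm{fl}(x - y) = x - y.$$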
With ‘howLongAgo’ being exact, if ‘time’ is within its prescribed range then ‘howLongAgo’ will be between zero and ‘segmentLength’, and so will ‘segmentLength’ minus ‘howLongAgo’. IEEE floating-point
math guarantees correct rounding so when we divide by ‘segmentLength’ we are guaranteed that ‘t’ will be from 0.0 to 1.0. No clamping needed, even with floats.
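Here is a quick harness (my own sketch, not from the original particle code) that exercises both versions at time == segmentEnd, where the true answer is exactly t = 1.0, and counts how often each one strays outside [0, 1]:

#include <cstdio>

float CalcTBad(float segmentEnd, float segmentLength, float time)
{
    float segmentStart = segmentEnd - segmentLength;
    return (time - segmentStart) / segmentLength;
}

float CalcTGood(float segmentEnd, float segmentLength, float time)
{
    float howLongAgo = segmentEnd - time;
    return (segmentLength - howLongAgo) / segmentLength;
}

int main()
{
    int badCount = 0, goodCount = 0;
    const float segmentLength = 0.31f;            // arbitrary short segment
    for (float end = 3000.0f; end < 6600.0f; end += 1.7f)
    {
        float tBad  = CalcTBad(end, segmentLength, end);
        float tGood = CalcTGood(end, segmentLength, end);
        if (tBad  < 0.0f || tBad  > 1.0f) ++badCount;
        if (tGood < 0.0f || tGood > 1.0f) ++goodCount;
    }
    printf("out of range - CalcTBad: %d, CalcTGood: %d\n", badCount, goodCount);
    return 0;
}

With time equal to segmentEnd, CalcTGood computes howLongAgo as exactly zero and t as exactly 1.0 every time, while CalcTBad's rounded segmentStart lands t slightly above or below 1.0; on a typical IEEE implementation roughly half of those land above 1.0 and get counted.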
This real example demonstrates a few things:
• Any time you add or subtract floats of widely varying magnitudes you need to watch for loss of precision
• Sometimes using ‘double’ instead of ‘float’ is the correct solution, but often a more stable algorithm is more important
• CalcT should probably use double (to give sufficient precision after many hours of gameplay)
Your compiler is trying to tell you something…
With Visual C++ on the default warning level you will get warning C4244 when you assign a double to a float:
warning C4244: ‘initializing’ : conversion from ‘double’ to ‘float’, possible loss of data
Possible loss of data is not necessarily a problem, but it can be. Suppressing warnings, with #pragma warning or with a cast, is something that should be done thoughtfully, after understanding the
issue. Otherwise the compiler might say “I told you so” when your game fails after a twenty-four hour soak test.
Does it matter?
For some game types this problem may be irrelevant. Many games finish in less than an hour and a float that holds 3,600 (seconds) still has sub-millisecond accuracy, which is enough for most
purposes. This means that for those game types you should be fine storing time in a float, as long as you reset the zero-point of GetTime() at the beginning of each game, and as long as the clock
stops running when the game is paused.
For other game types – probably the majority of games – you need to do your time calculations using a double or uint64_t. I've seen problems on multiple games that failed to follow this rule. The
problems are particularly tedious to track down and fix because they may take many hours to show up.
Store your time values in a double, and then you don’t need to worry, at least not as much, as long as you avoid unstable algorithms.
Next time…
In the next post I think it might finally be time to start jumping into the delicate subject of how to compare floating-point numbers, with the many subtleties involved. Previous articles in this
series, and other posts, can be found here. | {"url":"http://www.altdevblogaday.com/2012/02/05/dont-store-that-in-a-float/","timestamp":"2014-04-18T13:06:10Z","content_type":null,"content_length":"48509","record_id":"<urn:uuid:d1ee4527-1173-459c-b87e-34daf2dc824f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00367-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graphing Polynomial Functions with Repeated Factors - Problem 3
I want to graph one really hard example of a polynomial that’s got repeated factors. Here I’ve got a 5th degree polynomial which is called a quintic. First thing I want to do is identify the x
intercepts. The one that comes from these factors is going to be (2, 0) and the one that comes from this factor is (-2, 0). But the behavior of the intercepts is going to be different because we have
different powers here. We’ll see that in a second.
The end behavior comes from the leading term. I have to multiply to figure out what the leading term is. I have -1/3, x², x³, -1/3 x to the 5th. Now a 5th degree polynomial has a behavior kind of
like a 3rd degree where the ends go in opposite directions. Normally the right end goes up and the left end goes down but with this negative, it’s going to be the reverse of that. The left end will
go up and the right will go down. We’ll use that in a second.
Let’s figure out what happens between the 2 intercepts. I can plot those now. We have (-2, 0) and (2, 0). I’ll plot a few points. Remember at -2, this graph is going to behave like a cubic, so it’s
going to do something like this and at 2, it’s going to behave like a quadratic. Now since it's below the x axis, it’s probably going to bounce off like that.
Let’s plot some points to be sure though. -1/3 (x minus 2)², (x plus 2)³. How about, let’s plot zero for sure. I want to know that the y intercept is. So when I plug in zero I get -1/3, -2², 2³. This
is going to give me 4, 8 so I have 32, -32 over 3. This is almost -11. So, why don’t I make my scale, let’s make this 4, 8, 10, sorry 12, this is 10, 11 it’s going to be right around here. Let me
just write that down, 12, -12.
Let’s plot -1, -1/3, we get -1 plus -2, -3 squared, -1 plus 2, 1 cubed so we get 9, divide by 3, -3. So at -1 we have -3. One short of this, right up there. How about positive 1? 1 minus 2, -1
squared, 1 plus 2, 3, cubed -1² is 1, 3³, 27 divide by 3 is 9, -9. Okay, so this is 8, so one past 8 is about, well this is 10, so it’s about here. I think I’ve got enough to draw a decent graph
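For reference, the three plotted points worked out symbolically:

$$f(0) = -\tfrac{1}{3}(-2)^2(2)^3 = -\tfrac{32}{3} \approx -10.7, \quad f(-1) = -\tfrac{1}{3}(-3)^2(1)^3 = -3, \quad f(1) = -\tfrac{1}{3}(-1)^2(3)^3 = -9$$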
Remember cubic behavior here at -2. Let’s draw this end going like that and this one going something like this. Then as we come in to this guy; remember we’re going to have quadratic behavior. We’re
going to hit the x axis and bounce off. So this is a little tricky. I’m going to start from here maybe. There, that’s not too bad.
Now remember until we actually learn calculus we can’t always find the exact location of turning points like the one that happens in here somewhere but a qualitative graph like this is going to be
good enough in pre-calculus. Make sure you’ve got your intercepts and the correct behavior at the intercepts, when it’s easy to do get the y intercept as well. Make sure you’ve got correct end
behavior. And that’s enough to get a pretty good graph of a polynomial function with repeated factors.
Calculating groove width- calladine algorithm
Hi,
I am interested in the programming part of the refined groove width calculation. I read this forum and found out about the Calladine paper (1998). Their algorithm says you fit a curve to three points. What formula was used? (There is more than one way of fitting a curve in 3D space.) It would be nice if you could explain how to go about it, considering an example of, say, three vectors V1(x1,y1,z1), V2(x2,y2,z2) and V3(x3,y3,z3). It would also be nice to know of any other algorithms or work that talk about groove width calculation and are accepted by the nucleic acid community, and their advantages and disadvantages, especially with respect to irregular, deformed helices.
Topic: Late homework and extra time policies
Re: Late homework and extra time policies
Posted: Mar 17, 2008 7:06 PM
I was not going to respond because what works for one teacher may not work for others.
Here is what I do. Tests count for 75% of their grade and 25% is homework.
For tests: I make up a review test, they do it, we go over it the next class day, and then they get the test the following day. If a student does not finish the test, they stay in the room, finish it, then go on to their next class. Most teachers do not object to me taking a few minutes from their class. I end up doing the same thing.
For homework: I type up partial notes and during class we do the discussion and finish the notes. The homework pages and problem numbers are typed on the paper. Odd-numbered problems have answers in the back, so I expect students to do the steps, show their work, and check the answers in the book. Instant feedback. I expect homework to be correct before it is handed in, so students may see me for extra help to arrive at the book answers. Students have more time to do/fix any mistakes. This sometimes gets to be difficult with the paperwork. I encourage students to come into my room and work with me and with their friends to learn the concepts and be familiar with the algebra.
Some students like this way of learning math and others still try to copy others' work, but it soon becomes obvious who is not doing their part.
Of Hammondsport
----- Original Message -----
From: Nannette O'Grady
To: nysmathab@mathforum.org
Sent: Monday, March 17, 2008 3:58 PM
Subject: RE: Late homework and extra time policies
What's to stop a student who starts a test earlier in the day from saying they need more time, going to look up how to solve the problems they left blank and then coming back later to complete the
From: owner-nysmathab@mathforum.org [mailto:owner-nysmathab@mathforum.org] On Behalf Of LLal98@aol.com
Sent: Saturday, March 15, 2008 2:13 PM
To: nysmathab@mathforum.org
Subject: Re: Late homework and extra time policies
I feel the same way. I do accept late homework, although I give it a "check minus". I also allow a few extra minutes on tests - the kids can come during their lunch or after school. I offer this to
everyone, not just the ones who ask. They have plenty of time for the Regents, so I'd rather they get in the habit of working carefully and not rushing.
Brent's Cycle Detection Algorithm (The Teleporting Turtle)
turtle = top
rabbit = top

steps_taken = 0
step_limit = 2

forever:
    if rabbit == end:
        return 'No Loop Found'
    rabbit = rabbit.next

    steps_taken += 1

    if rabbit == turtle:
        return 'Loop found'

    if steps_taken == step_limit:
        steps_taken = 0
        step_limit *= 2
        // teleport the turtle
        turtle = rabbit
How do you determine if your singly-linked list has a cycle? In 1980, Brent invented an algorithm that not only worked in linear time, but required less stepping than Floyd's Tortoise and the Hare algorithm (however it is slightly more complex). Although stepping through a 'regular' linked list is computationally easy, these algorithms are also used for factorization and pseudorandom number generators, where linked lists are implicit and finding the next member is computationally difficult.
Brent's algorithm features a moving rabbit and a stationary, then teleporting, turtle. Both turtle and rabbit start at the top of the list. The rabbit takes one step per iteration. If it is then at
the same position as the stationary turtle, there is obviously a loop. If it reaches the end of the list, there is no loop.
Of course, this by itself will take infinite time if there is a loop. So every once in a while, we teleport the turtle to the rabbit's position, and let the rabbit continue moving. We start out
waiting just 2 steps before teleportation, and we double that each time we move the turtle.
Why move the turtle at all? Well, the loop might not include the entire list; if a rabbit gets stuck in a loop further down, without the turtle, it will go forever. Why take twice as long each time?
Eventually, the length of time between teleportations will become longer than the size of the loop, and the turtle will be there waiting for the rabbit when it gets back.
Note that like Floyd's Tortoise and Hare algorithm, this one runs in O(N). However you're doing less stepping than with Floyd's (in fact the upper bound for steps is the number you would do with
Floyd's algorithm). According to Brent's research, his algorithm is 24-36% faster on average for implicit linked list algorithms. | {"url":"http://www.siafoo.net/algorithm/11","timestamp":"2014-04-20T08:35:22Z","content_type":null,"content_length":"26486","record_id":"<urn:uuid:0a117015-0599-490d-87a9-ccbb031eb671>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
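A runnable version of the pseudocode above might look like this (a minimal C++ sketch over an explicit singly-linked list; the node type is illustrative):

struct Node
{
    int value;
    Node* next;
};

// Returns true if the list starting at head contains a cycle.
bool HasCycleBrent(Node* head)
{
    Node* turtle = head;          // stationary until teleported
    Node* rabbit = head;
    int stepsTaken = 0;
    int stepLimit = 2;

    while (rabbit != nullptr)     // reaching the end means no loop
    {
        rabbit = rabbit->next;    // the rabbit takes one step per iteration
        ++stepsTaken;
        if (rabbit == turtle)
            return true;          // rabbit caught the turtle: loop found
        if (stepsTaken == stepLimit)
        {
            stepsTaken = 0;
            stepLimit *= 2;       // wait twice as long before the next teleport
            turtle = rabbit;      // teleport the turtle to the rabbit
        }
    }
    return false;
}

The step counting matches the pseudocode: the rabbit advances once per iteration, and whenever stepsTaken reaches stepLimit the limit doubles and the turtle teleports to the rabbit's position.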
The earth has a radius of approximately 6400 km. How many radians per second is the earth rotating at the equator?
The earth rotates by 360 degrees or 2pi radians in 24 hours.
1 hour = 60 minutes.
1 minute = 60 seconds.
Therefore 24 hours = 24*60 minutes.
24*60 minutes = 24*60*60 seconds.
Therefore it takes 24*60*60 seconds for the earth to rotate 2pi radians. So to obtain the angular speed of the earth in radians per second, we divide 2pi radians by 24*60*60 seconds.
Therefore the earth rotates by 2pi/(24*60*60) radians per second, or 0.00007272205217 radians in one second.
Since the radius of the earth is given, any object on the surface of the earth at the equator moves a distance of (2pi/(24*60*60))*6400 = 0.4654 km per second, or about 465.42 meters per second.
To answer this question, you first have to calculate how long it takes for a point at the equator to complete a full rotation. The answer, of course, is 24 hours. So, the earth turns through 2pi
radians in 24 hours.
24 hours = 24 hours * 60 min / hour * 60 sec / min
24 hours = 86,400 seconds
2 pi radians / 86,400 seconds = 7.27 x 10^-5 rad / sec
Note that the radius of the earth does not enter into the calculation because the question is one of angular velocity, not m/s.
To get the velocity of the point, remember that the distance over an angle on the circumference of a circle is the angle x the radius.
So the velocity of the point is:
7.27 x 10^-5 x 6400 km = 465 m/s = 1040 miles per hour
No offense, but to answer the question of how many radians per second the earth is rotating at the equator, the earth's radius of 6400km and also the reference to "equator" need not be given, since
rad/s is an angular speed.
Perhaps you have intention to ask "what is the speed of a particular point on Earth at the equator in km/h or m/s?". I will dwell on this at the last part of this answer.
First of all,
Speed of rotation of earth 1 cycle in 24 hours
= 2 * pi * radian / 24 hrs
= 2 * pi * radian / (24 * 3600 sec)
= 2.3148 * pi * 10^(-5) rad/sec
= 7.2722 * 10^(-5) rad/sec
[This works out to about 0.004167 deg/sec or 15 deg/hour]
Now for the speed of a point on the circumference of the Earth at the equator. Since, by definition, 1 radian will be the angle subtended by an arc which has a length of the radius, the speed of a
point on earth on the equator would therefore be given by:
speed (of a point on equator)
= [ 7.2722 * 10^(-5) rad/sec ] * 6400 km
= 0.4654 km/sec
= 465.4 m/sec .... [ Wow! awesome, isn't it? That's more than 1 round of the stadium in 1 second! And that's faster than the speed of sound, which is about 343 m/sec]
= 1675.5 km/h .... [ That's more than 10 times the speed of a car at full speed driving on the expressway! ]
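Putting the two results side by side:

$$\omega = \frac{2\pi}{86400\ \mathrm{s}} \approx 7.27\times10^{-5}\ \mathrm{rad/s}, \qquad v = \omega R \approx 7.27\times10^{-5} \times 6.4\times10^{6}\ \mathrm{m} \approx 465\ \mathrm{m/s}$$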
Radian is a unit of measure of angle. The measure of angle of one complete rotation of earth in terms of radians is 2pi radians. In terms of degrees it is equal to 360 degrees.
Please note that the rotation of earth in terms of radians is the same, irrespective of the point on earth where the measurement is made.
The earth makes one complete rotation around its axis every day (24 hours). This means:
Rotation of earth in radians per day = 2pi
We know that:
Number of seconds in a day = 24*60*60 = 86400
pi = 3.14159
Rotation of earth in radians per second = (2*3.14159)/86400
= 0.000072722 radians/s
Magazine article by Ken Muranaka
Futures (Cedar Falls, IA), Vol. 29, No. 6
Article excerpt
How can you forecast a market turn? Try taking the participants' psychological temperature.
Sentiment indicators are a measure of the emotions and expectations of investors. In Japan, technical analysts use the psychological index (psychological line or PI), a stochastic oscillator, to
predict market direction. It's a simple measure to compute. Yet, if viewed as a statistical model, the PI can be a powerful tool to analyze swings in speculative markets.
The PI formula is simply:
(x / 12) * 100 = PI%
where x is the number of days that the market closed higher compared to the previous day in the past 12 consecutive observations. The PI always ranges between 0% and 100%. The PI usually is plotted
like the stochastic oscillator and the relative strength index (RSI). If the PI is at or above 75%, the market is considered overbought, generating a sell signal. A PI reading at or below 25% depicts
an oversold market and generates a buy signal.
The PI is not a primary analytical method used by Japanese traders, but rather is one more tool that a trader can add to his current approach. Used in conjunction with other oscillators such as the
RSI and moving averages, the PI can capture short-term moves that may not be detected by moving averages and thus can improve the overall predictive power of the trader's analysis.
PI as a statistical model
The PI as a random variable will follow a binomial distribution. The 75% line means that the market closed higher nine out of 12 times, whereas the 25% line means that the
market closed higher only three out of 12 times.
It is easy to compute the PI values with a spreadsheet such as Excel. Here, the Excel's IF function is used to assign a value of 1 if the closing price is higher than that of the previous day.
Otherwise, a value of 0 is assigned. Next, the sum of the 12 consecutive observations is divided by 12. The PI can be expressed by the binomial distribution:
C(n, x) p^x (1 - p)^(n - x), for x = 0, 1, 2, ..., n,
where n = 12. The PI is defined by the probability of "X ups in n observations."
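The same computation the article describes in Excel could be sketched in code like this (hypothetical helper, not from the article; it assumes at least 13 closing prices are available):

#include <vector>

// Psychological Index over the most recent 12 day-over-day changes:
// the percentage of those days that closed higher than the day before.
double PsychologicalIndex(const std::vector<double>& closes)
{
    const int n = 12;
    const int last = static_cast<int>(closes.size());
    int ups = 0;
    for (int i = last - n; i < last; ++i)   // requires closes.size() >= 13
        if (closes[i] > closes[i - 1])
            ++ups;
    return 100.0 * ups / n;
}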
If p = 0.5, then the market goes up or down with equal probability. For the PI with n = 12 and p = 0.5, the index reaches 75% or above with a probability of 7.3%, which is the sum of the probabilities for x = 9, x = 10, x = 11 and x = 12. The index falls to 25% or below with this same probability, which is the sum of the probabilities for x = 3, x = 2, x = 1 and x = 0. These
probabilities can be computed directly from the binomial formula or a table found in any statistical textbook.
Can the 25% and 75% psychological lines really determine a turning point in the market? Because the PI is associated more often with stock markets than with commodities, we chose the Nikkei 225's
data to test the predictive power of this indicator along with the short-term five- and 12-day moving averages.
Testing the model
Swings in the cash values of the Nikkei 225 were modeled by the PI as a binomial random variable. Two cases were considered -- bull and bear markets. The prices were obtained from
the newspaper Nihon Keizai Shimbun. For a bear market, 201 PIs were computed using the closing prices between Aug. 26, 1997, and June 23, 1998. All the calculations were done with Excel 97.
"Probabilities" (above) is the observed probability distribution showing a fairly good fit with the theoretical binomial distribution having the parameters p = 0.5 and n = 12, but the observed
probability distribution is slightly skewed. By trial and error, the value of the probability p to minimize the sum of squares (observed - theoretical)^2 was found to be about 0.48, suggesting
that the Nikkei 225 index may not appreciate or depreciate with equal probability, and that the market is more likely to decline by 2% as compared to the previous day. However, the theoretical
probabilities for the PI ≤ 25% and PI ≥ 75% are 6. …
Logistic Recursion Relation
February 24th 2013, 03:15 PM
Logistic Recursion Relation
A certain population is believed to satisfy the logistic recursion relation P(n+1) = A*P(n)^2 + B*P(n).
The data for this population can be found in the posted data below. Find the constants A and B that fit the data to the update function f as well as possible in a least squares sense, and use the values of A and B to predict the equilibrium values of P(n).
I found A to be -0.0085 and B to be 2.7.
I need help finding Peq. thank you.
Here is the posted data
n Pn
1 18.4835
2 47.0015119
3 108.126374
4 192.565052
5 204.734597
6 196.495243
7 202.348922
8 198.308856
9 201.159491
10 199.176929
11 200.570391
February 24th 2013, 05:37 PM
Re: Logistic Recursion Relation
Hey danielwow.
Are you trying to find the fixed point where Pn = A*(Pn)^2 + B*Pn?
February 24th 2013, 05:44 PM
Re: Logistic Recursion Relation
No, I am trying to find P_equilibrium. I found zero was one of them but there are multiple Peq.
also thanks for replying | {"url":"http://mathhelpforum.com/calculus/213734-logistic-recursion-relation-print.html","timestamp":"2014-04-16T07:35:14Z","content_type":null,"content_length":"7632","record_id":"<urn:uuid:54a377b0-a6cf-41dd-b0c8-29f363cb5065>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00086-ip-10-147-4-33.ec2.internal.warc.gz"} |
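For reference, setting P(n+1) = P(n) = Peq in the quadratic update above gives the fixed points directly:

$$P_{eq} = A P_{eq}^2 + B P_{eq} \;\Longrightarrow\; P_{eq}\bigl(A P_{eq} + B - 1\bigr) = 0 \;\Longrightarrow\; P_{eq} = 0 \ \text{or}\ P_{eq} = \frac{1-B}{A} = \frac{1-2.7}{-0.0085} = 200,$$

which matches the posted data settling toward 200.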
Wire Lengths
In your post, the comments about short grounding cables have much merit.
However, the use of TUNED lengths of cables between equipment appears to be VERY IMPORTANT. Dr Bare specifies 1/2 wave or 18 ft cables in his construction plans.
It appears that frequency-tuned lengths of cables are required to avoid suppression of harmonics of the fundamental frequencies which are essential to the plasma effect. This process is not well understood at this time.
In my personal experience, my equipment operated "electronically" fine using short cables, but I found that I did not "feel" the ray beam effect, EXCEPT WHEN USING TUNED CABLES. I have used full wave, 3/4 wave and half wave connecting cables with success.
It is my opinion that it also may be effective to use smaller fractions (such as 1/4, 1/8, 1/16, 1/32, 1/64th wave) if properly tuned; however, I have not tried this yet (and tuning becomes much more critical as you approach a smaller fraction of a full wave). Please refer to my previous posts on cable lengths and the math involved. Also, errors in calculating the velocity factor in the wire become significant.
In summary, I strongly recommend that until further data is obtained, 1/2 wave or 18 ft cables be used to connect the CB to the linear and the linear to the antenna tuning device. These are commonly available from CB radio suppliers.
I had it backwards when I stated the length to be increased vs decreased. I have been using way-off lengths to match wavelength, since I have been using 95%, which is incorrect: looking in catalogs informs that the propagation velocity is usually around 70%, not 95%.
For a 70% cable, then, the following chart would be relevant
(rounded to the nearest half inch.)
27.125 MHz
Fraction   WL (ft)    (0.70) x WL (ft)   ft, in
1/1        36.276     25.39              25, 4.5
1/2        18.13      12.69              12, 8
1/4        9.07       6.35               6, 4
1/8        4.53       3.17               3, 2
1/16       2.27       1.59               1, 7
1/32       1.134      0.79               0, 9.5
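To rerun the arithmetic behind that chart, here is a small sketch of my own (not from the original post), using c = 299,792,458 m/s and the 0.70 velocity factor assumed above:

#include <cstdio>

int main()
{
    const double c = 299792458.0;          // speed of light, m/s
    const double freq = 27.125e6;          // CB channel frequency, Hz
    const double vf = 0.70;                // assumed cable velocity factor
    const double metersToFeet = 3.28084;

    const double fullWaveFt = c / freq * metersToFeet;  // free-space wavelength, feet
    for (int denom = 1; denom <= 32; denom *= 2)
    {
        // electrical fraction of a wave -> physical cable length
        double physicalFt = fullWaveFt * vf / denom;
        printf("1/%-2d wave: %7.3f ft free space, %7.3f ft of 70%% cable\n",
               denom, fullWaveFt / denom, physicalFt);
    }
    return 0;
}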
From: Domenic Spinale
Subject: Re: Propagation Velocity
A while back there was some talk on the List about the use of exact
sub-multiples of 1/4 wavelength transmission lines; and back then I did not
have time to get in my two cents worth.
When a lossless transmission line is terminated in its characteristic impedance, the voltage measured at all points along the line will be the same and the line is said to be flat, which means the SWR is 1:1. The opposite extreme is when the transmission line is either open or short circuited, which results in 100% of the signal that went down the line being reflected back up the line. The 27 MHz signal from the B/R system is essentially a sine wave and the reflected signal is also a sine wave. With two signals going in opposite directions on the same transmission line at the same time, they add algebraically. The summation of the two waves is called a standing wave.
The highest voltage that can occur is at the point where the peaks of the two waves are both at maximum with the same polarity, which results in twice the signal voltage. The other extreme is the point where the two waves are exactly opposite in both polarity and amplitude, and they add to zero. The SWR is the maximum voltage point divided by the minimum voltage point. When the two waves are of equal amplitude, the SWR is 2 divided by zero, which is infinity. These points of maximum voltage and minimum voltage repeat every half wave length.
A wave length can also be expressed in degrees. A full wave length would be 360 degrees; similarly a half wave length would be 180 degrees, and 1/4 wave length would be 90 degrees. Now recalling the sine function from trigonometry: it is zero at zero degrees and increases in value to its maximum positive value of one at 90 degrees, where it reverses direction, then passes through zero at 180 degrees, continues in a negative direction to its maximum negative value of -1 at 270 degrees, where it changes in a positive direction, and passes through zero at 360 degrees, which is the same as zero degrees. The purpose of this explanation is to show that the sine wave has a value of zero every 180 degrees (at 0 and 180 degrees), which is every half wave length. Maximum amplitude points occur at 90 and 270 degrees, 180 degrees apart, also a half wave length, but displaced from the zero points by 90 degrees or 1/4 wave length.
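In the usual notation, with Gamma the reflection coefficient at the load, the description above corresponds to:

$$\mathrm{SWR} = \frac{V_{\max}}{V_{\min}} = \frac{1+|\Gamma|}{1-|\Gamma|},$$

so a matched load (Gamma = 0) gives SWR = 1:1 and a full reflection (|Gamma| = 1, open or short) gives an infinite SWR.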
In conclusion it can be shown that a unique point (a zero or a maximum)
occurs every 90 degrees or 1/4 wave length.
If anyone would like to explain what is critical about 1/8, 1/16, or 1/32
of a wave length, I am all ears.
>> In conclusion it can be shown that a unique point (a zero or a maximum)
>> occurs every 90 degrees or 1/4 wave length.
>> If anyone would like to explain what is critical about 1/8, 1/16, or 1/32
>> of a wave length, I am all ears.
>Please explain the implications of this. Does it mean that the best
>size RF cables in this system (ignoring prop velocity) are 9 ft and
>27 ft (approx 1/4 and 1/4+1/2 wavelengths)?
>If so, is there no ideal size if the cables are significantly less than
>these values? If there is no ideal size, then would cables in the,
>say, 1-6 ft range be "shorter the better" (least antenna action)
>or "longer the better (since they would be closer to 9')"?
I've been pondering the answer to the above question for a very long time
because sometimes things are not what they appear to be. I currently
believe that the coax cable length is not critical and shorter is better.
When I initially read that the recommended cable length between the CB and
the linear amplifier, as well as between the linear amplifier and the
antenna tuner was 18 feet, I thought: That's a half wave length - except
someone forgot to correct for the velocity factor of the cable. The 18 foot
length of coax is not a half wave length in the signal path, so scratch
that idea.
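The numbers behind that: a free-space half wave at 27.125 MHz is about 18.1 ft, but the electrical half wave inside typical coax is shorter by the velocity factor:

$$\frac{\lambda}{2} = \frac{c}{2f} \approx \frac{3\times10^{8}}{2 \times 27.125\times10^{6}}\ \mathrm{m} \approx 5.53\ \mathrm{m} \approx 18.1\ \mathrm{ft}, \qquad 18.1\ \mathrm{ft} \times 0.66 \approx 12.0\ \mathrm{ft}$$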
Another time I thought: the load (the tube and its coupling back to the tuner) being unbalanced results in a ground current flowing through all of the ground connections; and since there is a voltage null every half wave length, and the velocity factor is closer to one for current along the braid of the coax, that explains the recommended 18 foot lengths. This scheme would hold the cases of the CB, linear amplifier, and antenna tuner close
to the same RF voltage potential. Then the recommendation is to coil the
coaxial cables. This adds series inductance in the ground current path and
defeats the 1/2 wave length grounding scheme. To correct this poor
grounding system, the recommendation is to connect all the equipment
together with short braided wire.
Thus far, I cannot find a good reason to support the idea of using 18 foot
coaxial cables. Provided that the SWR's are reasonably low there is no
magic length of coaxial cable. If the coaxial cables are short, the braided
wire ground connections should not be needed.
With respect to any imbalance in load current resulting in ground current,
it would make sense to null out the ground current at the balun with some
type of gimmick capacitor between the ground connection on the balun and
one of the balanced output wires. Nulling out ground current can solve
three problems. First, minimize RF interference. Second, minimize any
instability in the function generator caused by RF ground current. And
third, minimize any effect on the SWR when touching any of the equipment.
Other opinions are welcome.
Subject: Re: Propagation Velocity
> I currently
> believe that the coax cable length is not critical and shorter is better.
That is what I have found, empirically, as far as the cable length
itself is concerned. However, using different sized cables can
seemingly affect SWR and power since one size may remove or cause
interference due to proximity with other equipment in the setup. In
the system I am building now shorter is always better so far (this is
where the tuner and balun sit on a shelf above the amp and CB.) In my
other system, where the balun, amp, CB, and tuner are all on the same
shelf, and the amp is 2" from the balun, using a long enough amp to tuner
coax to go around the balun rather than under or over it results in
better SWR and power.
> Thus far, I cannot find a good reason to support the idea of using 18 foot
> coaxial cables.
Bare states in the manual that it helps the tube light easier. It does
not seem to help in my system.
> If the coaxial cables are short, the braided
> wire ground connections should not be needed.
I have also found this as a result of experimentation. I do have one
ground (7") between the CB and freq generator since it
helps keep 120Hz focused for some reason, but grounding all the
equipment together makes it slightly worse.
I'm using a cheb type tube (He + phanatron type electrodes) and found that
12 gauge rope laid monster speaker wire in thick vinyl jacket of 26" + spade
type connectors works best for me....
The spade connectors are 10-12 ga. crimped on the ends of the wires, and
then silver soldered.... The identical setup w/ flat 1/4" tinned grounding strap has SWRs at least .25 to .5 higher (at different frequencies)...
With the spade connectors, the whole cable(s) tip to tip is about 27"
On the cheb tube, there are screw connectors on the ends of the tube that go
to the internal electrodes...
I have not tried an argon tube yet but am saving my pennies for a quartz
bubble tube w/ Argon or a mix, as the Helium seems to work too good...
By that I mean I seem to have created a major kill off of whatever is in me
and was quite toxic for several days including a SVT reaction... and a
blowup (swelling and opening) of a lymph node in my groin.... I wasn't even
supposed to be the patient.... <grin>
It has spooked me however, and the person I was going to use it for, I won't
because of side effects for the moment.... anyways.... I don't want to
cause more pain where there is plenty to begin with.... Need to figure a way
to reduce the tumor in situ w/o more pain....
Hope this helps
GFoye wrote:
> I am not electronics oriented so this may not be good advice but:
> There are two thoughts on tube wire length.
> * One is: short as possible.
> * Other: wire length to fractional wave lengths.
> Therefore, for 1/4 wave, wire should be about 26" to 30".
> Mine seems to work best about 26 to 28 inches - both equal length.
> Any technical input here?
> GF
>[snip] I am using 14.5 feet of 9913F coax (this is the correct length, not 18 feet as described by Jim). The equation for calculating ½ wave length must take into account the velocity factor of the coax, which can range from 0.66 to 0.84 (0.80 in my case).
Rick and List:
It would appear that the above comments are based upon the
assumption that the signal path of interest is the RF power out
of the CB and going through the coax to the linear amplifier,
and/or the RF power coming out of the linear and going through
the coax to the tuner. Suppose the signal current of interest
was the result of the unbalanced current on the lines out of
the balun, and that current flows along the ground path. If
the objective was to hold the CB, the linear, and the tuner at
the same RF potential then 1/2 wave length connections between
them would make sense. When the current path is along the
outer conductor of the coax rather than the center conductor
with the outer conductor being the return path, the velocity
factor as expressed above does not apply.
I ran a quick, basic test on length of wire from balun to
tube. I believe a quarter wave would be about 28 1/2 "
length. (Don't know formula - got the figure from someone
else.) Just wanted to see what would take place with
varying wire length.
For this test I used the following: #10 solid wire,
attached to balun terminal with copper post connector
CP-4-2B Adamax Inc 495-875 purchased at Home depot. This
terminal made it very easy to remove wire, snip off a piece
and reinsert wire- tighten with a set screw.
Uniden 510XL modified per manual
palomar 225
MFJ 949E tuner with external balun connected with 24" coax
Lodestar AG2603AD
argon bubble tube
#10 wire: (two pieces)- made a coil by forming over broom
handle, slip fit for my tube at large diameter. Just insert
tube into the coils -must be sure coils are aligned properly
so there is no stress on tube. Started with 30" overall.
(Before coil, etc.). Snipped off 1/2" at a time.
(Will have to do this one more time to pin point exact
length for my set up. Think I passed it around 29 ". Also,
my tube started acting up, may have damaged it.) What I
found was: the optimum is going to be close to the
calculated figure of 28 1/2"
As length was shortened the SWR's climbed but not
significantly although at a couple points the SWR's did make
huge jumps. (This is one reason the test would have to be
done about three times to see if this condition would repeat.)
The significant change was in wattage reduction as length
was decreased.
At optimum (palomar on med) the wattage output was 200 with
reflected close to one.
As wire length was reduced wattage output dropped, lowest
about 150 and reflected about four. (Except at two
intervals where there were significant jumps: SWR 3:1,
watts 110.)
At a couple points I had to change from inductor L to K.
But, not sure what was happening there- would have to do
this again for repeatability.
I went down to 26" and saw the wattage dropping steadily.
At that point I snipped off a 6" segment down to 20".
Wattage still low but not much different than at 26".
Results: in order to obtain maximum wattage output with
lowest reflected SWR, wire length is important. (So, for
those that build the compact units, how do you get around
this? Somehow you have to fool the system or do you use
long wire?)
I'm not a person with patience so I went through this
rather crudely. But, am going to do it again, starting with
30" and snip off 1/4" at a time and record results then
utilize the optimum figure.
Note: I believe the coils work nicely and could be used
similarly as the copper sleeves. Make a coil and solder
something else to it if you prefer.
Below is a link to a web site that has some interesting information on
coax cables.
Regards, Jason
A note to the group about velocity factors. The output of an R/B is quite
wide band and is not merely fixed at 27.125 MHz. 27.125 Mhz is merely the
center frequency. The primary Lower Side Band (LSB) has harmonics that
extend about 3 MHz below the center frequency.The primary Upper side band
(USB) has harmonics that extend about 3 Mhz above the center frequency .
Most of these harmonics are 40 to 50 Db down, but they still do exist.
Calculation, and then fabrication of cable length based upon velocity
factor and a fixed 27.125 Mhz will result in poor SWR's. There are other
harmonic signals that are generated that extend out to a good 300 to 500
MHz. Again these are very weak. Some harmonics can be picked up below 10
MHz. An optimum coax cable should be able to contain all these harmonics.
The recommended 18' length of coax between the transmitter and the tuner
works just fine. Someone might try and have a 21' length made up to see
what happens to SWR. Might better contain the LSB frequencies.
Jim Bare | {"url":"http://www.electroherbalism.com/Bioelectronics/RifeBare/RifeBareTechNotes/WireLengths.htm","timestamp":"2014-04-16T13:03:51Z","content_type":null,"content_length":"35604","record_id":"<urn:uuid:ffd669e7-8b8a-4be4-9e2f-43192383817d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimating and Rounding
Date: 03/04/2002 at 15:49:48
From: Amy
Subject: Division - one answer problems
I am doing my math homework and we have to divide a two-digit number
into another number and get a one-digit quotient. For example, 192
divided by 86 = . Can you please help me?
Date: 03/04/2002 at 22:56:59
From: Doctor Peterson
Subject: Re: Division - one answer problems
Hi, Amy.
You can find some discussions of the tricks you need in our archives:
Division by Estimation
Long division, Egyptian Division, Guessing
Compatible Number Estimating
There are a couple of important things to remember: you have to
estimate (because you don't have a multiplication table that goes up
to the 86's), and when you estimate you expect not to be exact.
Therefore, you will be learning not only to make the best guess you
can, but to correct the guess WHEN (not if) it turns out to be wrong.
That's just part of the process, and doesn't mean you've made a
So let's look at your example. The first thing I usually do is to
round both numbers, generally so that there is one non-zero digit left
in the divisor and two in the dividend (though that's not always true
- you'll get used to how to make this decision with a little
practice). In this case, 86 rounds up to 90, and 192 rounds down to
190. It won't matter here, but I like to round both numbers in the
same direction, because that's more likely to give a good estimate.
So I'd round both numbers up here (giving preference to the divisor),
making it
90 ) 200
Now, we can divide both numbers by 10 and it won't change the
quotient; so ignore the zeroes on the end:
9 ) 20
Now we've got something we can do: the answer is 2, since 9 * 2 = 18.
That's our estimate; but is it the right answer for the real problem
we're doing? All we can do is check it by multiplying. We're hoping that
        2
   86 ) 192
To check that (and also to find the remainder), we multiply the
divisor by the quotient: 2 * 86 = 172. This is good: it's less than
192, but not so much less that we could fit another whole 86 into it.
That is, we can subtract to get a remainder, and the remainder is less
than the divisor:
        2
   86 ) 192
       -172
       ----
         20
So the remainder is 20.
At least one of the links I gave you goes into how to correct an
estimate if the check doesn't work out; briefly, you subtract one from
the quotient if it's too big (so that the product was too big to
subtract at all), and you add one if the quotient is too small (so
that the remainder is too big).
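If it helps to see the whole estimate-check-correct procedure in one place, here is a rough sketch in R (the rounding rule below is cruder than the one described above, but the correction loop is the same idea):
one_digit_quotient <- function(dividend, divisor) {
  # first guess: divide the rounded numbers (e.g. roughly 190 / roughly 90 -> 2)
  q <- round(dividend / 10) %/% round(divisor / 10)
  # correct the guess, as described above
  while (q * divisor > dividend)            q <- q - 1   # guess was too big
  while (dividend - q * divisor >= divisor) q <- q + 1   # guess was too small
  c(quotient = q, remainder = dividend - q * divisor)
}
one_digit_quotient(192, 86)   # quotient 2, remainder 20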
Let me know if you need more help. You might want to send a sample
problem worked out, so I can see where you might be going wrong or
getting stuck.
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/58868.html","timestamp":"2014-04-19T02:51:20Z","content_type":null,"content_length":"8081","record_id":"<urn:uuid:04b102ca-0f90-475e-8f8d-59303fe4a3b3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Fermat's Last Theorem.
Replies: 6 Last Post: Jan 25, 2012 8:23 AM
Chuck Re: Fermat's Last Theorem.
Posted: Jan 24, 2012 9:27 AM
On the bottom of page 11 you say that the two roots of your quadratic equation are
(18) k = 5/2 and
(19) k = (a^n +2)/2
I agree.
You then go on to say that (18) and (19) imply that a^n = 3.
Well you lost me on that one.
1/25/12 RE: Fermat's Last Theorem. Ben Brink | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2335870&messageID=7652424","timestamp":"2014-04-17T19:30:21Z","content_type":null,"content_length":"23052","record_id":"<urn:uuid:39c70bdd-8b3b-40af-8a4b-29c443459d7d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00543-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mother Goose Programs: Math Standards
In Principles and Standards for School Mathematics, the National Council of Teachers of Mathematics (NCTM) sets forth standards in 10 areas that cover a broad range of math skills and
understandings—five are identified as process standards and five as content standards.
The Process Standards
Learning mathematics requires action and thinking. NCTM has identified five processes that are especially critical to learning about mathematics: problem solving, reasoning and proof, communicating,
making connections, and representing.
Problem Solving
For young children, this includes...
• Using simple approaches to solving mathematical problems: asking for help, counting, trial-and-error, guessing-and-checking
Reasoning and Proof
For young children, this includes...
• Learning to explain how they solved a mathematical problem: describing the steps taken verbally, in a drawing, or with concrete objects
For young children, this includes...
• Telling others about their math-related work: using language, pictures or other symbols, or concrete objects
• Beginning to use some math language: numbers, shape names, size words, names of math materials, etc.
Making Connections
For young children, this includes...
• Using math skills in a variety of situations, not just when prompted by an adult
• Linking their own math experiences to those of other people, in real life or in books
• Recalling previous math experiences when engaged in current ones
For young children, this includes...
• Using simple pictures, graphs, diagrams, or dictated words to represent their mathematical ideas
The Content Standards
Numbers and Operations
For young children, this includes...
• Recognizing and naming some written numerals
• Having a sense of quantity: knowing that the number name “three” and the symbol “3” mean three of something
• Counting: learning the sequence of number names (1, 2, 3)
• Counting objects: learning to count an object only once, using one-to-one correspondence in counting objects and matching groups of objects
• Beginning addition: Adding two groups of concrete objects by counting the total
• Beginning subtraction: Taking away one group of concrete objects from another by taking some away and counting the remainder
• Comparing: understanding ideas such as more than, less than, and the same as and having a general idea that some numbers stand for a lot and some numbers mean a little
Geometry and Spatial Sense
For young children, this includes...
• Matching, sorting, naming, and describing shapes: circles, squares, rectangles, and triangles
• Naming and describing shapes found in everyday environments
• Combining shapes to make new shapes
• Making shape designs that have symmetry and balance
• Understanding and using words that describe where objects are located: over, under, through, above, below, beside, behind, near, far, inside, outside
Patterns, Functions and Algebra
For young children, this includes...
• Identifying, copying, and making simple patterns: sequenced or repeated organization of objects, sounds, or events
• Using patterns to predict what will come next in a sequence
• Recognizing single number patterns such as “one more”
• Noticing, describing, and explaining mathematical changes in quantity, size, temperature, or weight
For young children, this includes...
• Understanding and using words referring to quantities: big, little, tall, short, long, a lot, a little, hot, cold, heavy, light
• Understanding and using comparative words: more than, less than, bigger than, smaller than, shorter than, longer than, heavier than, colder than
• Showing an awareness of and interest in measuring: imitating the use of measuring tools and measuring with non-standards units
• Comparing objects such as Which of two sticks is longer?
• Beginning to use measurement words, such as inches, feet, miles, pounds, minutes, and hours in their language
Data Analysis, Statistics and Probability
For young children, this includes...
• Sorting objects to answer questions
• Collecting data to answer a question: keeping track of simple information gathered from a group of people or over a short length of time
• Making lists or basic graphs, with (adult) help, to organize collected data | {"url":"http://www.mothergooseprograms.org/math_standards_math.php","timestamp":"2014-04-16T14:15:21Z","content_type":null,"content_length":"16541","record_id":"<urn:uuid:eff921ae-b8dd-4eea-94f4-a7d52336506c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
Catasauqua Geometry Tutor
...Current experience in tutoring GED math preparation. Excellent math skills in algebra 1 and 2, geometry, and trigonometry from an experienced college chemistry instructor and tutor. Experienced
in preparing students for math portion of GED.
36 Subjects: including geometry, chemistry, reading, English
...I specialize in micro- and macroeconomics, from an introductory level up to an advanced level. I have master's degree work in labor economics, financial analysis and game theory. I have the
utmost confidence in my ability to relate this material in a comprehensible manner to the student.
19 Subjects: including geometry, calculus, statistics, GRE
...I have been a practicing engineer for many years and thus I am familiar with many practical applications of math concepts to real world examples. My teaching philosophy is to maintain a
student-focused and student-engaged learning environment to ensure student comprehension and student success. ...
12 Subjects: including geometry, calculus, algebra 1, algebra 2
...Create lists. 7. Format pages and insert headers and footers. 8. Insert graphics, pictures, and table of contents, and PowerPoint adds easy-to-use interactive features that make the usual
slides of boring bulleted text and charts a relic of the past.
27 Subjects: including geometry, calculus, statistics, algebra 1
...In addition to tutoring students in this age group, I have home-schooled my daughters in 3rd-8th grade mathematics so I am familiar with the material. Since I am aware of what skills will be
used in the harder maths I can help your child prepare a strong foundation for the courses that are coming up in the future. I would love to help students in elementary school build a strong foundation.
22 Subjects: including geometry, algebra 1, ASVAB, elementary (k-6th) | {"url":"http://www.purplemath.com/catasauqua_pa_geometry_tutors.php","timestamp":"2014-04-16T21:53:51Z","content_type":null,"content_length":"23994","record_id":"<urn:uuid:5f23ced3-e199-4cd5-9d5a-7c9bae30e3c7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00260-ip-10-147-4-33.ec2.internal.warc.gz"} |
den Haan
Wouter J. den Haan - LSE Macroeconomics Summer Courses
Macroeconomics Summer Courses - August 2014. #1 The essentials: Solving and estimating DSGE models. August 18-22. Prerequisites:
• Some basic knowledge about DSGE models, e.g., know what an Euler and a Bellman equation are
• Some rudimentary knowledge on programming with Matlab
Course outline 2014:
Monday - Solving
analyzing your first DSGE model
• Overview: This morning we teach you how to use Dynare to solve DSGE models. We also teach you how to incorporate Dynare programs into bigger Matlab programs so that you can, for example, loop
quickly over different parameter values. We also give you the information to calculate standard errors for business cycle statistics like the ratio of two standard deviations of HP-filtered
series. With knowledge of both the theoretical and the empirical tool, you will be able to build a theory, calculate its key properties, and confront these properties with the analogues in the
• Topics:
□ State variables
□ Policy rules (i.e. the recursive solution to DSGE models)
□ Impulse response functions
□ Perturbation analysis
□ Certainty equivalence
□ Dynare
□ Using the homotopy idea to get good initial values for the steady state (often the hardest part of running Dynare)
□ Parameter values and properties of basic neoclassical model
□ Stylized facts
□ Heteroscedastic and autocorrelation consistent (HAC) estimators and standard errors
• Applications & exercises: In the afternoon, you are asked to write a program to investigate how one can generate sufficient volatility in the unemployment rate in a simple matching model (i.e.
how to solve the Shimer puzzle).
• Other: Dynare uses perturbation analysis to solve DSGE models. This year we do not teach the underlying theory in class, but we provide you with the notes to understand it.
Tuesday - Key tools from the numerical approximation literature and projection methods
• Overview: In the morning, we teach you numerical integration and function approximation. These tools are used inside and outside economics. Within economics they are not only used to solve
models, but they are also widely used in, for example, econometrics. Numerical integration makes it possible to approximate the conditional expectation with just a few lines of code. When you are
familiar with numerical integration and function analysis, then projection methods are quite easy to understand. In contrast to the approximation technique underlying Dynare, projection methods
are global approximation methods. It is a bit more involved to program a projection method, but for some models it is a much better choice.
• Topics:
□ Numerical integration (Gaussian quadrature; a small sketch follows this day's outline)
□ Function approximation (Splines & Polynomials)
□ Projection methods
□ Endogenous grid points
□ Fixed point iteration
□ Time iteration
• Applications & exercises: In the afternoon, you are asked to solve a simple model in which the risk premium varies over the business cycle. This is difficult to do for perturbation analysis since
uncertainty affects the solution in only a limited way. In fact, one needs at least third-order perturbation to have any time-variation in the risk premium. In contrast, this is easy when using
projection methods.
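(A small illustration of the quadrature idea above, not part of the course material: the standard 3-node Gauss-Hermite rule below approximates a conditional expectation E[f(Z)] for a normally distributed Z.)
gh_nodes   <- c(-sqrt(3/2), 0, sqrt(3/2))             # standard 3-point Gauss-Hermite nodes
gh_weights <- c(sqrt(pi)/6, 2*sqrt(pi)/3, sqrt(pi)/6)
cond_expectation <- function(f, mu, sigma) {
  # E[f(Z)] with Z ~ N(mu, sigma^2), via the change of variables z = mu + sqrt(2)*sigma*x
  sum(gh_weights * f(mu + sqrt(2) * sigma * gh_nodes)) / sqrt(pi)
}
cond_expectation(exp, mu = 0, sigma = 0.1)   # close to the exact value exp(0.005)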
Wednesday - Topics
• Overview: Perturbation is cheap to implement, but less/not appropriate when nonlinearities are important, when there is substantial volatility (e.g. idiosyncratic risk), or when there are
occasionally binding constraints. Projection methods can in principle deal with all these complexities, but are more expensive to implement and very costly if there are many state variables. We
will teach you the Parameterized Expectations Algorithm (PEA) which is a projection method but is less computer-intensive than classic projection methods. We discuss the version of PEA proposed
by Den Haan and Marcet, but also the recent version of Maliar, Maliar, & Judd which contains a simple but powerful improvement. After the discussion of Value Function Iteration you will be
familiar with most of the available tools to solve DSGE models. We will discuss the advantages and disadvantages of the different methods. As part of this discussion we will talk about accuracy
tests, occasionally binding constraints, and penalty functions. There is one topic that has to be discussed in any course on solving DSGE models and that is stability and uniqueness of the
solution. We will teach you what the Blanchard-Kahn conditions have to say about this. Explaining the Blanchard-Kahn conditions automatically teaches you about sun spots or self-fulfilling
expectations, an exciting part of economics.
• Topics:
□ Parameterized Expectations Algorithm
□ Value Function Iteration
□ Accuracy tests: Euler errors, Dynamic Euler equation test, DHM statistic
□ Occasionally binding constraints and penalty functions
□ Blanchard-Kahn conditions
□ Sun spots and self-fulfilling expectations
• Applications & exercises: In the afternoon we use PEA to look at another asset pricing model and check for the possibility of sun spots.
Thursday - Kalman filter & full information methods
• Overview: In the morning, we teach you possibly the most powerful tool in economics, namely the Kalman filter with which you can estimate unobserved components. This tool also plays a key role in
the estimation of DSGE models with full information methods like Maximum Likelihood and Bayesian estimation. Full information methods estimate the model using the complete specification including
specifications of the distribution of all the exogenous shocks. We will teach you how to calculate the likelihood of a model given the data. More importantly, we discuss some tricky issues you
will run into in practice, namely the singularity problems which will restrict the number of observables you can use.
• Topics:
□ Kalman filter (a bare-bones sketch follows this day's outline)
□ State space form
□ Maximum Likelihood
□ Avoiding the singularity problem
• Applications & exercises: Students are asked to estimate a time series model and extract the trend component of TFP.
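(Only as an illustration of the recursions listed above, not course code: a bare-bones scalar Kalman filter for a simple state-space model.)
# State:       x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
# Observation: y_t = x_t + v_t,          v_t ~ N(0, r)
kalman_filter <- function(y, a, q, r, x0 = 0, P0 = 1) {
  xf <- numeric(length(y)); Pf <- numeric(length(y))
  x <- x0; P <- P0
  for (t in seq_along(y)) {
    xp <- a * x;  Pp <- a^2 * P + q                   # prediction step
    K  <- Pp / (Pp + r)                               # Kalman gain
    x  <- xp + K * (y[t] - xp);  P <- (1 - K) * Pp    # update step
    xf[t] <- x;  Pf[t] <- P
  }
  list(filtered_mean = xf, filtered_var = Pf)
}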
Friday - Bayesian estimation
• Overview: With the tools learned on Thursday, Bayesian estimation is actually not that difficult, since Bayesian estimation is like Maximum Likelihood with a bit more information (the prior). One
challenge of Bayesian estimation is the evaluation of the posterior. To be able to do this, we teach you Markov Chain Monte Carlo (MCMC) techniques. We also discuss the advantages and
disadvantages of this method relative to its alternatives, such as Maximum Likelihood, Generalized Method of Moments (GMM), and Simulated Method of Moments (SMM).
• Topics:
□ Bayesian estimation
□ MCMC
□ Metropolis Hastings
□ Maximum Likelihood, GMM, SMM
• Applications & exercises: Students are asked to estimate a DSGE model, extract the unobserved components, and discuss which shock is most important for which recession. | {"url":"http://www.wouterdenhaan.com/summercourse_essentials.html","timestamp":"2014-04-20T05:52:19Z","content_type":null,"content_length":"11443","record_id":"<urn:uuid:7ca5a2b5-0e55-4bd6-906c-8bf2f47540a6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00372-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math question! Help please! Semester test tomorrowww!!
There it is :(
The answer for part B, the one I don't get is 3050, but I don't know how to get there.
Who told you the answer for B is 3050? That's not what I came up with.
Even though that's a REALLY big number, I realize I forgot to calculate the internal diameter. Hold on.
Okay. I think I've got the answer. Can you tell me how you started this problem?
The textbook says that..
Textbook's wrong. Over THREE THOUSAND washers can be carved from one square meter of metal?
Yes, so I did 3/0.0004= 7500 and thats the number of washers for question a
How did you get .0004?
Oh my gosh, never mind I get it!!
What do you get?
You know the formula for the area of a circle is (3.14 * r^2) ? That's why I have no idea where you got the .0004
So it is 3050 :)
What's 3050?
Here's what I did: I figured out the diameter of the entire washer. The washer's EXTERNAL DIAMETER is 2 cm. That means that the radius is 1 cm. And that means that the area is 3.14 cm squared,
which is the same as .0314 meters squared.
I found the AREA of the entire washer, sorry. That's .0314 meters squared.
But the washer has a hole. So now we have to find the area of the hole. The DIAMETER of the hole (INTERNAL diameter) is 1 cm. That means that the AREA of the hole is.... (3.14 * .5 squared),
which is .785 cm squared.
If you subtract that from 3.14 cm squared, you get 2.36 cm squared, which is the same as .0236 meters squared.
Can you tell me how you got .0004?
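(Side note on the arithmetic being discussed, as a small R sketch. The full problem statement is not visible in this thread, so the 3 below is assumed to mean 3 square metres of sheet metal, as in the 3/0.0004 calculation above.)
outer_r <- 1    # cm, from the 2 cm external diameter
inner_r <- 0.5  # cm, from the 1 cm internal diameter
square  <- (2 * outer_r)^2               # 2 cm x 2 cm bounding square = 4 cm^2 = 0.0004 m^2
metal   <- pi * (outer_r^2 - inner_r^2)  # metal in one washer, about 2.36 cm^2
3 / (square / 1e4)   # 7500 washers from 3 m^2 if each washer uses a full square
3 / (metal  / 1e4)   # about 12732, an area-only upper bound that ignores cutting layout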
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50c4adb9e4b0c673d53ddb5a","timestamp":"2014-04-20T13:48:13Z","content_type":null,"content_length":"90565","record_id":"<urn:uuid:50f2d435-8ebb-446d-b24a-2ba5ece0d78d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
Poisson approximated by Normal for large numbers?
November 18th 2008, 09:12 AM #1
Oct 2008
Poisson approximated by Normal for large numbers?
Just a quick question:
Am I correct in thinking that X~Poi(36) may be approximated using X~N(36,36)? What would be a sufficiently large number to do this?
November 18th 2008, 12:38 PM #2
Grand Panjandrum
It depends on what you are trying to do. For rough and ready calculations 10 will often do, 20 is better. Google for "error normal approximation poisson".
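(A quick way to see how good the approximation is, sketched in R, e.g. for P(X <= 30):)
lambda <- 36
ppois(30, lambda)                              # exact P(X <= 30), roughly 0.18
pnorm(30.5, mean = lambda, sd = sqrt(lambda))  # normal approximation with continuity correction, also roughly 0.18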
Nov 2005 | {"url":"http://mathhelpforum.com/statistics/60265-poisson-approximated-normal-large-numbers.html","timestamp":"2014-04-17T19:53:47Z","content_type":null,"content_length":"33313","record_id":"<urn:uuid:6039faac-340f-41e6-9051-c46bb31456d1>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fourier-Motzkin elimination with the editrules package
August 26, 2011
By mark
Last week I talked about our editrules package[1,2] at the useR!2011 conference. In the coming time I plan to write a short series of blogs about the functionality of editrules. Below I describe the
eliminate and isFeasible functions. But first: a bit of theory.
Most people with some linear algebra background will remember the Gaussian elimination procedure for manipulating linear systems of equations of the form Ax=b, with A a real m x n matrix, x an
unknown n-vector and b an m-vector. Gaussian elimination is based on manipulating the rows of augmented matrix [A|b] with three basic operations: permutation of rows, addition of rows, and
multiplication of rows by a non-zero constant. The solution set {x} is invariant under these operations, and by repeatedly applying them, it is possible to transform the system to reduced row echelon
form, for example.
It is (perhaps) a bit less known that similar manipulations are possible for systems of inequalities. Consider a system of inequalities, written as Ax<b. Now, addition of rows and permuting rows of [
A|b] is still possible. However, multiplying a row with a negative constant gives a problem, since this implies changing the comparison operator < of that row to >. Rows with different comparison
operators cannot be added. So, to manipulate a system of inequalities, we restrict our basic operations to permutation of rows, addition of rows and multiplication with a positive constant. These
operations allow us to eliminate a variable (say x[1]) from a system of inequalities in the following way
1. Permute the rows of [A|b] such that the top m[1] rows have positive coefficients in the first column, the middle m[2] rows have negative coefficients and the bottom m[3] rows have 0 coefficient
in the first column.
2. Create a new (m[1]m[2] + m[3]) x (n+1) augmented matrix in the following way.
1. Copy the bottom m[3] rows to the new system.
2. Combine each row in 1…m[1] with each row in m[1]+1…m[1]+m[2] by addition, multiplying the row with the negative coefficient with a positive constant such that the resulting rows have the first
coefficient equal to zero. Copy all these rows to the new system.
The new system is equivalent to the original one, in the sense that it has a solution if and only if the original system has a solution (A consequence of Farkas' lemma).
At first glance, it would seem that the second step involves a double loop over the sets of rows with positive and negative coefficients. However, using the indexing and recycling properties of R,
explicit loops can be avoided completely. Here is a simple piece of code demonstrating this. We will define an augmented matrix and eliminate the 4th variable.
# create an augmented matrix.
Ab <- matrix(sample(-9:10),nrow=4,dimnames=list(NULL,c("x1","x2","x3","x4","b")))
### We will eliminate x4 from this matrix.
# Normalize, so all nonzero coefficients of x4 become -1 or 1 (second statement uses recycling).
Izero <- Ab[,"x4"] == 0
Ab[!Izero] <- Ab[!Izero, ]/abs(Ab[!Izero,"x4"])
x1 x2 x3 x4 b
[1,] -0.4 -0.6000000 0.900000 1 0.2
[2,] -2.0 4.0000000 -9.000000 0 7.0
[3,] 0.2 1.0000000 -1.400000 -1 1.6
[4,] 2.0 -0.3333333 -2.666667 -1 1.0
# get positive and negative rows:
Ipos <- which(Ab[,"x4"] > 0)
Ineg <- which(Ab[,"x4"] < 0)
# Here's the real trick: use smart indexing to derive new rows:
I1 <- rep(Ipos, times=length(Ineg))
I2 <- rep(Ineg, each=length(Ipos))
# avoid looping!
Ab2 <- Ab[I1,] + Ab[I2,]
Ab2 <- rbind(Ab2, Ab[Izero,])
# and here's the result
x1 x2 x3 x4 b
[1,] -0.2 0.4000000 -0.500000 0 1.8
[2,] 1.6 -0.9333333 -1.766667 0 1.2
[3,] -2.0 4.0000000 -9.000000 0 7.0
Created by Pretty R at inside-R.org
The code above contains the basic methods applied in the eliminate function of the editrules package. The eliminate function has some important extras however: it can recognize numerical near zero's
as zero, and it handles mixed systems of equalities and inequalities (operators may be any of <, ≤, ==, >, ≥). For example, let's put some requirements on simple records, consisting of variables
cost, turnover and profit:
> library(editrules)
E <- editmatrix(c(
+ "cost + turnover == profit",
+ "profit < 0.6*turnover"))
Edit matrix:
cost profit turnover Ops CONSTANT
e1 1 -1 1.0 == 0
e2 0 1 -0.6 < 0
Edit rules:
e1 : cost + turnover == profit
e2 : profit < 0.6*turnover
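> # (the call producing the next printout was lost in extraction; presumably it was:)
> eliminate(E, "profit")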
Edit matrix:
cost profit turnover Ops CONSTANT
e1 1 0 0.4 < 0
Edit rules:
e1 : cost + 0.4*turnover < 0
Created by Pretty R at inside-R.org
So, by eliminating the variable "profit", we've derived an expression relating cost and turnover. (In a previous post, I already showed how the rules can be read from text.)
As you might expect, the number of new rules can grow exponentially when multiple variables are eliminated one by one. In fact, if the original system has m rows, the system obtained after
eliminating a single variable has maximally m^2/4 rows. It can be shown however, that after eliminating k variables, all rows derived from more than k+1 original rows are redundant[3]. The eliminate
function uses this rule to eliminate most of the redundant rows. To use this facility, you have to overwrite the original editmatrix each time a variable is eliminated, to retain the derivation
history. The reduction can be quite dramatic, as the following example shows:
# Create a random 10 x 11 system of linear inequalities Ax<b.
E <- as.editmatrix(A=matrix(rnorm(100),nrow=10),b=rnorm(10),ops=rep("<",10))
# variables to eliminate
elim <- c("x1","x2","x3")
# eliminate some variables, removing history
F <- E
for ( v in elim){
+ F <- eliminate(F,v)
+ # remove some internals, usually hidden for user
+ attr(F,"H") <- NULL
+ attr(F,"h") <- NULL
# nr of rows, NOT retaining history, after eliminating 3 variables
[1] 1276
# eliminate some variables, retaining history:
G <- E
for ( v in elim ) G <- eliminate(G,v)
# nr of rows, retaining history, after eliminating 3 variables
[1] 29
Created by Pretty R at inside-R.org
So we've managed to avoid 1247 redundant rows out of 1276 in total.
Ok, so we've eliminated a variable. What's the use of all this? Well, first of all, if and only if our system happens to be infeasible, it can be shown that by FM-elimination, a contradictory row
equivalent to 0 < -1 will always pop up when eliminating variables one by one. In fact, the function isFeasible of the editrules package will do that check for you:
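> # (snippet missing here; reconstructed from the surrounding text)
> isFeasible(E)
[1] TRUE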
Created by Pretty R at inside-R.org
So in this case there is at least one vector (cost, profit, turnover) obeying the rules in E. These functions may be of use when you work with hundreds of variables and hundreds of restrictions
(perhaps growing in time) and you want to know if it all makes sense.
The elimination procedure is an important subroutine of the error localization functionality of the editrules package, about which I will blog another time.
[1] Edwin de Jonge and Mark van der Loo (2011). Manipulation of linear edits and error localization with the editrules package, Discussion paper 11XXX Statistics Netherlands, The Hague/Heerlen (in
press). See also the package vignette
[2] The editrules package is designed to easily define, manipulate and check rules that your data has to obey. The current version (1.0-2) offers functionality for handling numerical equalities and
inequalities. In the coming time we will work to get the prototype version for categorical publication-ready.
[3] See e.g. Williams (1986), Fourier's method of linear programming and its dual, The American Mathematical Monthly 93, 681.
, or | {"url":"http://www.r-bloggers.com/fourier-motzkin-elimination-with-the-editrules-package-2/","timestamp":"2014-04-20T01:18:14Z","content_type":null,"content_length":"60986","record_id":"<urn:uuid:6b7b7305-5612-4ac2-9647-907d1fa6ceb6>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00485-ip-10-147-4-33.ec2.internal.warc.gz"} |
What day of week is this???
What day of week is this???
It is absolutely amazing how many times this situation occurs in DP. Next to the DOLLAR SIGN, dates are the most important datum in any business function. If you have ROBOT it has a day of week function. If you can find it, there is an API for it, but I have never looked... The one I use is in the 1934 edition of Webster's New Collegiate Dictionary; by the way it works through 2299 and is under perpetual calendar. It goes as follows:
1. Take the last two figures of the year (e.g. 97) and add the integer (yy/7) (no decimal fraction);
2. for the current century add 0; for the next century, add 6;
3. for the months add: 0 for April or July; 1 for Jan or Oct; 2 for May; 3 for Aug; 4 for Feb, Mar or Nov; 5 for June; 6 for Sept or Dec; if a leap year, use instead 0 for Jan and 3 for Feb;
4. add the day of month;
5. divide the result by 7 and the remainder will represent the DOW: with 0 = Sat, 1 = Sun, etc.
I have programmed this algorithm in assembler, ALGOL, COBOL and RPG, and I know of a version in Auto-coder, C and PASCAL. Incidentally, January 1st, 2000 will be Saturday; check it out. This means Monday, Jan 3rd, 2000 will be a holiday, so we will not have to worry about 2-digit year stuff until Tuesday!
bob hamilton, texas business systems; 736 pinehurst; richardson, tex; phone 817-898-6770 off; 972-705-9214 home... No One Ever Needed a Smaller, Slower PC
What day of week is this???
Hi Kendall, Date-handling capabilities are not limited to V3R2 but to RPG/IV. - so if you have the ILE RPG compiler, calculating the duration between two dates and in terms the day of week is
straight forward. About the OPNQRYF-approach; I noticed that Ted Holt published the technique in MC Tech Talk January 1996. Please let me know if you need further information (I've got an RPG/
400-inline (6/8-digits date) that works very well too). Best regards, Carsten
What day of week is this???
Bob: I tried to get the day of the week using your formula for March 26, 1997 which is Wednesday. 97/7 = 13 without decimals. 97 + 13 + 0 + 4(march) + 26 = 140 140 mod 7 = 0 = Saturday. Did I miss
something? Regards, Ira (http://www2.cybernex.net/~irash/). On Sunday, March 23, 1997, 09:19 AM, bob hamilton wrote: 1. take the last two figures of the year (e.g. 97) and add the integer(yy/7)
(no decimal fraction); 2. for the current century add 0; for the next century, add 6. 3 for the months add: 0 for april or july; 1 for jan or oct; 2 for May; 3 for aug; 4 for feb mar or nov; 5 for
june; 6 for sept or dec; if a leap year use instead 0 for jan & 3 for feb. 4. add the day of month; 5 divide the result by 7 and the remainder will represent the DOW: with 0 = sat; 1 = sun etc.
What day of week is this???
the DOW formula is in ALGOL for ccYYMMDD case( MOD( YY + mod(YY, 4) + ( case cc = 19 then 0; cc = 20 then 1
What day of week is this???
I don't get Bob's formula either Today is Thursday Nov 6 1997. 97/7 = 13 97+13=110 110+0=110 (for current century) . 110+4=114 (for november) . 114+6=120 (for day in month) . 120/7 = 17 rem 1. 1=SUN,
not Thursday.... Bob?.Chris Ringer....
What day of week is this???
Bob E-mailed me and explained that in the first step, you should divide by 4 not 7. Thanks Bob. Chris Ringer...
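For what it's worth, here is the rule written out in R as a rough sketch, using the divide-by-4 correction above; it only covers 1900-2099, with 0 = Sat, 1 = Sun, ..., 6 = Fri:
dow_websters <- function(year, month, day) {
  month_code <- c(1, 4, 4, 0, 2, 5, 0, 3, 6, 1, 4, 6)          # Jan..Dec
  leap <- (year %% 4 == 0 & year %% 100 != 0) | (year %% 400 == 0)
  code <- month_code[month]
  if (leap && month == 1) code <- 0
  if (leap && month == 2) code <- 3
  yy <- year %% 100
  century_adj <- if (year >= 2000) 6 else 0
  (yy + yy %/% 4 + century_adj + code + day) %% 7
}
dow_websters(1997, 11, 6)   # 5 = Thursday, the date checked above
dow_websters(2000,  1, 1)   # 0 = Saturday, as Bob predicted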
What day of week is this???
On Tuesday, November 25, 1997, 12:24 PM, Chris Ringer wrote: Bob E-mailed me and explained that in the first step, you should divide by 4 not 7. Thanks Bob. Chris Ringer... from the formula can you
tell why this year's calendar is identical to that for 1941??? Bob Hamilton TEXAS BUSINESS SYSTEMS 736 Pinehurst Richardson, Texas 75080 | {"url":"http://www.mcpressonline.com/forum/forum/x-archive-threads-started-before-01-01-2002/programming-aa/4280-what-day-of-week-is-this?4004-What-day-of-week-is-this=","timestamp":"2014-04-20T05:50:07Z","content_type":null,"content_length":"89984","record_id":"<urn:uuid:9732fcf5-0892-406e-b3c5-55dbbd580209>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
ODE system question
Consider a system of the form: dx/dt = f(x,y) , dy/dt=g(x,y), with the property that the associated ODE dy/dx = g(x,y)/f(x,y) has a unique solution to IVP y(0)=0.
Also, f(x,y) is smooth everywhere except the point (0,0), at which it has an infinite discontinuity, and g(x,y) is continuous everywhere. Does it follow that there is a solution to the system which tends
to (0,0)?
This problem may be not well-formulated, but it seems like there may be a topological argument for the existence of such a solution to the system.
ds.dynamical-systems ca.analysis-and-odes
1 Answer
I may be missing some subtle point here, but it seems to me that if you let Y(x) be your presumed solution to Y'(x)=g(x,Y)/f(x,Y) and then let x(t) solve dx/dt=f(x,Y(x)) and put y(t)=
Y(x(t)), you have your answer. The only way this could fail to tend to (0,0) for some value of t is if f(x,Y(x))=0 for arbitrarily small x, in which case I would question the validity
of the assumed solution Y to begin with.
(Maybe I should have made this a comment rather than an answer, since the problem as I have understood it does not seem all that interesting. If you agree, feel more than free to not
award any points to it, though I'd appreciate the absence of negative points.)
Not the answer you're looking for? Browse other questions tagged ds.dynamical-systems ca.analysis-and-odes or ask your own question. | {"url":"http://mathoverflow.net/questions/2517/ode-system-question","timestamp":"2014-04-20T01:33:31Z","content_type":null,"content_length":"50921","record_id":"<urn:uuid:4ac3957f-f274-406a-9d58-2aeb466bfda9>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Digest
Summaries of Media Coverage of Math
Edited by Allyn Jackson, AMS
Contributors: Mike Breen (AMS), Claudia Clark (freelance science writer), Lisa DeKeukelaere (2004 AMS Media Fellow), Annette Emerson (AMS), Brie Finegold (University of California, Santa Barbara),
Adriana Salerno (Bates College)
2009 Math Digest
December 2009
November 2009
October 2009
September 2009
August 2009
July 2009
June 2009
May 2009
April 2009
March 2009
February 2009
January 2009
Return to Top | {"url":"http://cust-serv@ams.org/news/math-in-the-media/mathdigest-md-2009-toc","timestamp":"2014-04-19T20:39:31Z","content_type":null,"content_length":"47184","record_id":"<urn:uuid:aaae753f-65b4-4636-a90b-e82500b2d79b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Structuring Force of Natural World
Authors: Jin He
The assumption that the mass distribution of spiral galaxies is rational was suggested 11 years ago. The rationality means that on any spiral galaxy disk plane there exists a special net of
orthogonal curves. The ratio of mass density at one side of a curve (from the net) to the one at the other side is constant along the curve. Such curve is called a proportion curve. Such net of
curves is called an orthogonal net of proportion curves. I also suggested that the arms and rings are the disturbance to the rational structure. To achieve the minimal disturbance, the disturbing
waves trace the orthogonal or non-orthogonal proportion curves. I proved 6 years ago that exponential disks and dual-handle structures are rational. Recently, I have also proved that rational
structure satisfies a cubic algebraic equation. Based on these results, this paper ultimately demonstrates visually what the orthogonal net of proportion curves looks like if the superposition of a
disk and dual-handle structures is still rational. That is, based on the natural solution of the equation, the rate of variance along the 'radial' direction of the logarithmic mass density is
obtained. Its image is called the 'basket graph'. The myth of galaxy structure will possibly be resolved based on further study of 'basket graphs'.
Comments: 13 pages. In Chinese.
Download: PDF
Submission history
[v1] 23 Mar 2011
Unique-IP document downloads: 180 times
comments powered by | {"url":"http://vixra.org/abs/1103.0090","timestamp":"2014-04-19T00:07:08Z","content_type":null,"content_length":"7884","record_id":"<urn:uuid:5cf4d285-8549-4171-95f0-4be6fb6b873d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00015-ip-10-147-4-33.ec2.internal.warc.gz"} |
Space Deformation and Group Representation
Abstract All along the 20th century, pure algebraists have dug deep into the fundamental structures of mathematics. In this extremely abstract effort, they were greatly helped by the possibility of representing these structures by space deformations, which could then be understood much better. This has led to breakthroughs, including the proof of Fermat's last theorem. This article introduces
the ideas of group representations.
February 22nd, 2013 | Last Modified: March 7th, 2014
Tags: Algebra, Group Theory, Linear Algebra, Relativity Theory
Space deformations play a key role in our understanding of the Universe. Modeling them mathematically has enabled major breakthroughs in physics and chemistry. Amusingly, as they were studied, they
have also led to major advancements in mathematics too, as they correspond to great representations of more complex abstract structures known as groups. In this article, I’ll give you an introduction
to the idea of group representation which has revolutionized 20th century pure mathematics.
Space Deformation
Wait… What’s a group and what’s a group representation?
Humm… I’ll get to this. But first, let’s talk about space deformations. Look at this moray eel, and imagine it turning its head to the left.
Any turn of the eel’s head corresponds to a space deformation. The set of all space deformations we obtain this way corresponds to all rotations of space.
That’s weird to think of space moving when it seems to be still, independently from our experience…
Indeed. Yet, weirdly enough, the greatest breakthroughs of physics come from the very understanding of how space gets changed based on who and how it is observed. This is true for Galileo's inertial
system, Newton’s first law, Einstein’s special and general relativity, quantum mechanics or black hole theory. And the relevancy of studying space deformation is due to the fact that switching from a
viewpoint to another can be thought exactly like a space deformation.
Space deformations sound complicated… Are you sure that it can simplify problems?
Yes, thanks to the awesomeness of mathematics. How can mathematics make us understand space deformations in complicated spaces? Well, in many fields of mathematics which study deformation of
mathematical objects, it is crucial to find something that still characterizes deformed objects, no matter which space deformation is applied. We call this constant characteristic an invariant.
Can you provide an example?
In the case of space deformation due to the eel’s turning its head, the most crucial invariant which characterizes these deformations is the invariance of distances. The distance between two fishes
remains the same whether the eel turns its head or not! Another important invariant is the invariance of the concepts of left and right, namely if you deform space by turning your head, what was the
left for someone remains its left. Mathematically, these two invariants characterize the set of rotations. This set is so important that it has its specific notation, namely $SO(3)$. Even though they
might not be aware of it, $SO(3)$ is the group that Galileo and Newton have been using all along. And then came Einstein…
What did Einstein bring?
In 1905, Einstein imagined that space could also be contracted by the motion of the observer! This means that he postulated that distances were no longer an invariant of Nature! Instead, based on
observations such as the Michelson-Morley experiment results, he claimed that the speed of light was an invariant. This implies that, when space gets contracted, time gets dilated accordingly so that
the speed of light remains constant. To understand this, Einstein had the great idea of considering spacetime as a single entity which was deformed by motions. In this setting, the invariance of the
speed of light could be translated into the invariance of a quantity known as the spacetime interval. All spacetime deformations which preserve the spacetime interval are known as the Lorentz group,
denoted $O(1,3)$. The 1 stands for time, and the 3 for space.
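(For readers who want the formula: in one common sign convention, the spacetime interval between two events separated by Δt in time and by Δx, Δy, Δz in space is $\Delta s^2 = c^2\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2$, and the Lorentz group is exactly the set of linear transformations of spacetime that leave this quantity unchanged.)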
Aren’t there more invariants?
Yes! Just like for the rotational group, the orientation of space is conserved. As a result, we say that deformations are proper. And there’s a last important invariant, which is the invariance of
the direction of time. Indeed, as far as we know, time always goes towards the future for all observers, at least as long as we don't exceed the speed of light. This is the orthochronous property.
These three invariants fully describe all spacetime deformations due to motions in Einstein's special relativity theory. This major group of spacetime deformations is known as the restricted Lorentz
group and is denoted $SO^+(1,3)$. With what I’ve said, you might now better understand this great video from Minute Physics:
So Einstein’s special relativity can be reduced to SO^+(1,3)?
Yes! How amazing is that? You can learn more by reading my article on spacetime of special relativity.
What about general relativity? Is it a matter of space deformation?
Yes, but not the kind of space deformation we’re describing in this article! In order to describe groups, the only deformation of space we are interested in are those which transform lines into
lines. In the spacetime of special relativity, a “line” in spacetime corresponds to a motion at constant speed. So deformations in this space must transform motions at constant speed to motions
constant speed. Such deformations are called invertible linear transformations, mathematical better known as linear automorphisms. Together, these deformations form the general linear group, quite
often denoted $GL$.
The Alternating Group
Let’s now get to group representations! Maybe I should define what is meant by group first…
The concept of groups is probably one of the most abstract in mathematics. Roughly said, a group is an abstract collection of objects which combine nicely with each other.
It’s quite abstract indeed…
A great example of groups is the set of symmetries, as you can read it in this article of mine. But there are plenty of other major groups, like Lorentz’s group. Another more popular example is the
set of moves you can do to the Rubik’s cube. In this article, let’s focus on the alternating group of degree 3, denoted $A_3$.
What the hell is an alternating group?
Consider three guys with three hats, as in the following figure:
Now, the three guys can trade their hats. Each trade is called a permutation of hats, and is thus an element of the group of permutations $S_3$. The alternating group of degree 3 is the set of
permutations for which either everyone or no one trades. All 3 possible trades are displayed below, where arrows have to be read "gives his hat to".
For instance, if the cyan trade is carried out, then the blue guy gets the green hat, the purple guy gets the blue hat, and the green guy gets the purple hat.
And you’re saying that trades can be combined nicely?
Yes. Once a trade has happened, we can carry on with another trade. Assume we started with the cyan one, and that we now proceed with the orange one. We would then be combining the cyan trade with the
orange trade. The operations are displayed below:
Note that the overall operation corresponds to doing nothing. This is the left permutation of the above figure (let's call it the black trade)! This is a major feature of groups: the combination of any
two elements of a group is still an element of the group. We say that the combination is a binary operation. Another major property is that we can always go back to the initial position. We say that
any element of the group is invertible. Finally, the last property groups must have is called associativity, but I won’t dwell on this here.
OK, so whenever we combine two trades, which trade do we obtain?
Good question. The following table corresponds to a table of multiplication. The result of the combination of two trades is drawn similarly to the arrows displayed in the previous picture.
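(The table itself is an image in the original post. In plain text, writing e for the black "do nothing" trade, c for the cyan trade and o for the orange trade, it reads as follows; note that c combined with o gives e, as observed above, and the rest is forced because the group has only three elements.)
      |  e   c   o
   e  |  e   c   o
   c  |  c   o   e
   o  |  o   e   c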
Group Representation
OK… What about group representations now?
A representation of a group is the matching of elements of the group with space deformations. Indeed, what’s great about space deformations is that they form a group! Just like moves on a Rubik’s
cube or the trades we have just discussed, space deformations can be sequenced. This corresponds to deforming a deformed space. Also, space deformations can be inverted, which means transformed back
to how space was before being deformed.
OK, space deformations form a group, but groups aren’t necessarily space deformations, are they?
No, but they can always be matched with space deformations. Such a matching is a group representation.
Does the matching have to be a one-to-one match?
Not at all. The one thing that’s tremendously important is the fact that the structure of group is conserved by the matching. This means that the space deformation associated to a combination of two
elements of a group must be the combination of the two space deformations associated to the two elements. In other words, combinations of elements of the group are matched with combinations of their
associated space deformations. This mapping which preserves the group structure is called a morphism.
OK, but how do we construct a group representation?
Well, if I were smart, I could guess one. But since I’m not, let’s use a classical one, called the regular representation. It consists in defining a vector space whose basis vectors correspond to
elements of the group. In our case of $A_3$, because there are 3 elements in the group, this corresponds to defining a basis of a 3-dimensional space.
This gives a space… but not space deformations…
I’m getting there! Consider a trade of hat. When you combine them with another one, you obtain a third one. Thus, the space deformation induced by a trade consists in switching basis vectors
accordingly to the result of combining the trade with each trade associated to basis vectors. For instance, the cyan trade corresponds to a space deformation that transforms the orange basis vector
by a black basis vector, because the combination of cyan and orange is black. The following figure displays the three space deformations induced by the three elements of the group $A_3$.
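(In R, the same construction looks like this, as a small sketch with basis vectors indexed black = 1, cyan = 2, orange = 3:)
mult <- matrix(c(1, 2, 3,
                 2, 3, 1,
                 3, 1, 2), nrow = 3, byrow = TRUE)   # mult[i, j] = trade i combined with trade j
rep_matrix <- function(g) {
  M <- matrix(0, 3, 3)
  for (j in 1:3) M[mult[g, j], j] <- 1               # trade g sends basis vector j to basis vector mult[g, j]
  M
}
# the morphism property: the matrix of (cyan combined with orange) equals the product of their matrices
all(rep_matrix(mult[2, 3]) == rep_matrix(2) %*% rep_matrix(3))   # TRUE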
I have trouble conceiving these space deformations… Are they rotations of space?
Hehe… That’s a great question! The study of simplifying the understanding of a set of deformations is crucial to better understand group representations. The key ingredient to achieve this
simplification is to break down space into irreducible spaces.
Irreducible Representation
Let’s get back to Einstein’s spacetime of special relativity. His breakthrough was to imagine that space and time couldn’t be separated. In some sense, this means that spacetime cannot be reduced to
space and to time. Considerations of motions imply that spacetime is irreducible.
So if we don’t take motion into account, would spacetime be reducible?
Yes! In this case, no spacetime deformation transforms space into time, nor time into space. As a result, we can separate them. Space can be studied on its own, and space deformations correspond to
rotations. Meanwhile, time is constant.
OK… Now, what about the space we’ve just studied?
Hehe. Check the diagonal and its orthogonal plane:
The green diagonal is a stable line since it remains invariant for any of the three space deformations. Similarly, the blue plane, which is orthogonal to the green diagonal, is a stable subspace.
As a result, we can break our space into the green and blue subspaces, and study each subspace separately.
Are these subspaces irreducible?
The line surely is irreducible, as it’s the smallest subspace one can think of. We say that it is an irreducible representation of the alternating group. This representation is extremely simple: none
of the space deformations associated with the group deforms it. This representation exists for any group and is called the trivial group representation. Obviously, it’s not the most interesting one…
So let’s talk about the orthogonal plane!
Yes. We can project the basis vectors onto the blue plane. The following figure displays what’s happening on this blue plane:
These look like rotations…
Yes! That’s because they are! What this means is that the group $A_3$ can be represented by rotations of 0, 1 and 2 thirds of a turn, which is a much more visual representation than picturing the
trade of hats!
But I guess representations of more complex groups can be so much more complicated that they can’t really be visualized…
Indeed. Surprisingly though, the understanding of space deformations can be reduced to a few values, known as characters.
What are characters?
First, I need to introduce a weird but essential construction. For the sake of group representations, it’s relevant to imagine that each basis vector actually creates its own plane!
What? What are you talking about?
I know it’s a hard thing to get one’s head around, but we may consider that each basis vector creates a plane in which the vector can rotate and stretch (but not much more). This plane is called
the complex plane, and it carries the nice properties of complex numbers. If all basis vectors create their complex planes, we say that the space is a complex vector space. With this construction,
our blue stable plane becomes a four-dimensional real space, as explained in the following video from the series A Walk through Mathematics by researchers and artists from ENS Lyon, France
(check the whole series, it’s great!):
It’s a nice video… But I’m not sure it helps me visualize 4 dimensional spaces…
I know! It is hard! I’m actively searching for a better visualization of higher dimensional spaces, but the simplest description we seem to have is through the abstractness of algebra…
OK… So if we do that, the blue space is no longer irreducible?
No! It can be decomposed into two complex planes which are stable for all space deformations. On these complex planes, the space deformations still correspond to rotations of 0, 1 or 2 thirds of a
turn. But they can now be expressed by a single complex number, respectively 1, $e^{2iπ/3}$ and $e^{4iπ/3}$. This is displayed below, where the two stable complex planes are colored in pink and yellow:
Notice that, on the pink complex plane, the cyan space deformation corresponds to 1 third of a turn clockwise. This means that the cyan space deformation corresponds to multiplying the pink complex
plane by $e^{2iπ/3}$. This number is what we call the character of the cyan element of the alternating group for its irreducible representation on the pink complex plane. The character is thus a
description of the space deformation on this irreducible subspace by one single complex number. In our setting, this character fully describes the space deformation, but we may in fact consider the
character of a space of larger dimension. In this case, the character is a partial description of the space deformation by one single complex number, known as the trace.
If you are familiar with linear algebra, you probably know that the trace is the sum of the diagonal elements of any matrix representation. More interestingly, the trace is the sum of all eigenvalues
counted with multiplicity. If you don’t understand what I’m saying, don’t worry; you won’t need it to get the big picture. Just keep in mind that a character is a compact partial description of a space deformation.
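For readers who want to see this statement concretely, here is a tiny check (my own addition, reusing the cyan matrix from the sketch above) that the trace equals the sum of the eigenvalues:

```python
import numpy as np

# The cyan deformation of the regular representation, as a 3x3 permutation matrix.
cyan = np.array([[0, 0, 1],
                 [1, 0, 0],
                 [0, 1, 0]])

# Its character (trace) equals the sum of its eigenvalues:
# 1 + e^{2i*pi/3} + e^{4i*pi/3} = 0.
print(np.trace(cyan))                                  # 0
print(np.round(np.linalg.eigvals(cyan).sum(), 12))     # 0, up to rounding
```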
So I guess there’s one character for each element of the group and each representation?
Exactly! On the yellow irreducible complex plane, the cyan space deformation corresponds to 2 thirds of a turn clockwise, which means that its character is $e^{4iπ/3}$. For the orange space
deformation, it is the other way around, namely a character $e^{4iπ/3}$ on the pink plane, and $e^{2iπ/3}$ on the yellow one. Meanwhile, the black space deformation still modifies nothing, which
implies that its characters are 1 and 1.
What about the stable green real line we found earlier? Is it irreducible if it generates a complex plane?
Yes. Complex planes are always considered irreducible, as multiplying any of their vectors by complex numbers generates the whole plane. In particular, since all space deformations leave the green line
unchanged, the characters on this plane are all equal to 1. Thus, we now have computed all characters of irreducible subspaces of the regular representation.
But we haven’t computed characters of all irreducible representations, have we?
Surprisingly, yes! This is a consequence of Frobenius’ theorem, proven in 1897, which applies to finite groups. Now, this theorem involves the concept of central functions, which would take too long
for me to explain here. So let’s only focus on its amazing consequences. One of them is the fact that the regular representation contains all irreducible representations. So, yes, we have computed
all irreducible characters! These characters are represented in the following figure, called the character table, which can be considered as a map of the group.
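The character table itself is not reproduced in this text, but its entries are exactly the values quoted above, and two standard consistency properties can be verified numerically (again my own sketch, not part of the original article):

```python
import numpy as np

# Character table of A_3: rows are the irreducible representations (trivial /
# green line, pink plane, yellow plane); columns are the elements black, cyan,
# orange.  The entries are the characters quoted in the text.
w = np.exp(2j * np.pi / 3)
table = np.array([
    [1, 1,     1    ],   # trivial representation (green line)
    [1, w,     w**2 ],   # pink complex plane
    [1, w**2,  w    ],   # yellow complex plane
])

# Distinct rows are orthogonal, and each row has squared norm 3 (the group order).
print(np.round(table @ table.conj().T, 10))   # 3 times the identity matrix

# Summing the three irreducible characters recovers the character of the regular
# representation: 3 on the identity (black) and 0 on the two other elements.
print(np.round(table.sum(axis=0), 10))        # [3, 0, 0]
```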
Note that, in our case of $A_3$, all irreducible spaces are of complex dimension 1. This is due to the fact that $A_3$ is commutative (the order of combination doesn’t matter). In cases of
non-commutative groups, there are irreducible representations of higher dimension.
Frobenius’ theorem has other amazing implications! It implies that all representations are made of these irreducible representations. Even more surprising, if we are given the character of a
representation only, we can easily find out which irreducible representations make up the representation described by the character. This means that the understanding of a representation can be
equivalently reduced to its character!
So the understanding of group representation simply boils down to the research of the irreducible spaces?
For finite groups, precisely! And our work is greatly simplified by another corollary of Frobenius’ theorem. Namely, it is a simple calculation to test whether a space is irreducible based on its
character. So really, group representation simply boils down to the mere study of irreducible characters!
Let’s Conclude
To recapitulate, we have discussed space deformations and their importance to physics. Then, we showed how relevant they could be for descriptions of groups. We have then explained how the complexity
of space deformations could be simplified into the study of stable subspaces and, eventually, irreducible subspaces. Finally, we have shown how the simpler idea of characters can provide all the
information about representations.
This all sounds very complicated and abstract… What good is there in this?
As said in the introduction, group representations are essential for the understanding of some groups which are so abstract that there are barely other ways to think about them than through their
characters. In particular, the tremendous work of classifying finite groups has involved a great understanding of group representations. Basically, finite groups can all be divided into simple
groups, just like numbers can be divided into primes. Thus, we only need to understand and classify simple groups to understand and classify all groups.
Has it been done?
Yes! This classification is now summarized in a few thousand pages that nobody understands entirely. As it turns out, there are only a few classes of simple groups, namely those constructed from prime
numbers, the alternating groups of degree 5 and higher, a few other classes of groups, and 26 isolated groups, called the sporadic groups. While physicists were chasing elementary
particles, pure algebraists were searching for these sporadic groups throughout the 20th century to complete the classification. In 1982, their quest ended: Fischer and Griess finally constructed the
last and largest of these sporadic groups. This Monster Group (that’s its name!) contains about $10^{54}$ elements! And none of this could have been done if it weren’t for its representation in a
space of dimension 196,883, which greatly simplifies it. This space is its smallest irreducible representation!
Wow! So, is that it? Are algebraists done?
Obviously, no. For one thing, they now have to face infinite groups. And, in this quest, group representations still have a major role to play. In fact, Andrew Wiles’ proof of Fermat’s last theorem
done in 1994 involves the use of several complex groups which can only be manipulated through their group representations!
A mess with which I am comfortable - Statistical Modeling, Causal Inference, and Social Science
A mess with which I am comfortable
2 Comments
1. Looking at your recent postings on statistical methods, it is not surprising that you have thought about this “mess”. My impression is that you are trying to find methods to include
higher-order interactions in multilevel modeling. With my limited experience with this type of data, I am just wondering whether higher-order interactions are really the key to untangling the existing
complexity. It will be very interesting to see, through your work, why you decided to go for interactions rather than other aspects of modeling. However, though I have worked on this problem
only a little, I would rather keep my hierarchical model simple by pushing all the uncertainty in the data into the random effects for space and time. I cannot yet explain why or how I arrived at
this view, but I believe that keeping the model simple apart from the random effects and the hierarchical structure they induce would be the best way to get the most out of this method. We will see
how your thoughts develop, and how I can sharpen my own after seeing yours.
2. Hi Andrew,
Very interesting paper. I have a minor question about Table 3. You say that “ignoring the weighting or treating the weights as constant underestimates uncertainty, whereas uncertainty is
overestimated by treating the weights as inverse probabilities”, and maybe I’m backwards here but the column for “conditioning on weights” looks like higher numbers than the column for “assuming
inv-prob”. My apologies if I’m totally confused!
New York Precalculus Tutor
Find a New York Precalculus Tutor
...I am an economic historian and am currently working on a number of papers for presentation at conferences. I am equally familiar with American and global history. I have taught chemistry
privately and in college for over thirty years and have BS and MS degrees in the subject, as well as R&D experience.
50 Subjects: including precalculus, chemistry, calculus, physics
...My years of teaching were very enjoyable and worthwhile. I always had good results with my students. At this time, I wish to remain productive by doing some Physics tutoring.
7 Subjects: including precalculus, physics, geometry, algebra 1
I love math/science and love to share my enthusiasm for these subjects with my students. I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm
now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year.
11 Subjects: including precalculus, Spanish, calculus, physics
...Cancelling a lesson less than twenty-four hours before it is scheduled to take place will incur a one-hour fee. Not showing up on time is considered a cancellation. I have taught genetics to
middle/high schoolers through the New York Academy of Sciences. I have taught the basics of DNA and Mendelian genetics.
22 Subjects: including precalculus, reading, ACT Science, public speaking
...I am experienced in teaching algebra, geometry, calculus, biology, chemistry, physics, writing, public speaking and presentation, as well as all elementary subjects. I have tutored pupils
ranging from elementary age through college. My number one priority is the learning experience of my pupil.
43 Subjects: including precalculus, reading, writing, English | {"url":"http://www.purplemath.com/New_York_Precalculus_tutors.php","timestamp":"2014-04-20T07:06:16Z","content_type":null,"content_length":"23962","record_id":"<urn:uuid:8c8092ff-c930-45f8-9055-ebff91a8c971>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
1.2 – A stable Holeum
For the sake of simplicity we consider two identical black holes of mass $m$. The gravitational potential between them is given by
$V(r)=-\frac{m^{2}G}{r}=-\frac{\alpha_{g}\hbar c}{r}$ (eq. 1.25)
where $r$ is the distance between them. $\hbar$, $c$ and $G$ are Planck’s constant reduced by $2\pi$, the speed of light in vacuum and Newton’s universal gravitational constant respectively. $\alpha_
{g}$ is the gravitational analogue of the fine structure constant, given by
$\alpha_{g}=\frac{m^{2}G}{\hbar c}=\frac{m^{2}}{m_{P}^{2}}$ (eq. 1.26)
$m_{P}=\left( \frac{\hbar c}{G}\right) ^{\frac{1}{2}}$ (eq. 1.27)
is the Planck mass. The Schrödinger equation is exactly solvable for the $r^{-1}$ potential and the energy eigenvalues, formally identical with those of the hydrogen atom, are given by [2]
$E_{n}=-\frac{\mu c^{2}\alpha_{g}^{2}}{2n^{2}}$ (eq. 1.28)
where $n$ is the principal quantum number, n=1,2,…$\infty$ and $\mu=m/2$ is the reduced mass. In the following we will consider, for simplicity, only the $l=0$, $s$-states. The eigenfunction for an
$ns$ state is given by [1]
$\Psi_{ns}=A_{n}L_{n-1}^{1}(t)e^{-t/2}$ (eq. 1.29)
$t=2\chi r$ (eq. 1.30)
$\chi=\frac{\alpha_{g}^{2}}{nR}$ (eq. 1.31)
$R=\frac{2mG}{c^{2}}$ (eq. 1.32)
where $R$ is the Schwarzschild radius of the black hole. Here $L_{n}^{m}(x)$ is the associated Laguerre polynomial and
$A_{n}^{2}=\frac{4\chi^{3}}{n^{2}(n!)^{2}}$ (eq. 1.33)
The maxima of the probability density
$g(r)=r^{2}|\Psi_{ns}|^{2}$ (eq. 1.34)
give us the radii of the stable orbits. For the 1s state the radius of the orbit is given by
$r_{_{1}}=\frac{R}{\alpha_{g}^{2}}$ (eq. 1.35)
For the 2s state there are two orbits with radii
$r_{2\pm}=\frac{\left(3\pm\sqrt{5}\right)R}{\alpha_{g}^{2}}$ (eq. 1.36)
For $m\gg1$ we have [3],
$L_{m}^{\alpha}(x)\cong\pi^{-\frac{1}{2}}\left(m+\alpha\right)!x^{-\frac{\alpha}{2}-\frac{1}{4}}m^{\frac{\alpha}{2}-\frac{1}{4}}e^{\frac{x}{2}}\cos\left[2(mx)^{\frac{1}{2}}-\frac{\alpha\pi}{2}-\frac{\pi}{4}\right]$ (eq. 1.37)
Using this we can show that for $n$, $n^{^{\prime}}\gg1$, the radii of the stable orbits are given by
$r_{n}=\frac{\pi^{2}n^{^{\prime}2}R}{8\alpha_{g}^{2}}$ (eq. 1.38)
where $n^{^{\prime}}$=1,2,…,n. The maxima of the probability density are given by
$g_{\max}\cong\frac{n^{^{\prime}}\alpha_{g}^{2}}{2n^{3}R}$ (eq. 1.39)
Because of the large factor $n^{3}$ in the denominator this is appreciable only for $n^{^{\prime}}=n$. Thus we take $n^{^{\prime}}=n$ in equations (1.38) and (1.39) to get the equation
$r_{n}\cong\left(\frac{n^{2}R}{\alpha_{g}^{2}}\right)\left(\frac{\pi^{2}}{8}\right)$ (eq. 1.40)
It is interesting to note that the value given by the semiclassical Bohr theory is
$r_{n}\cong\frac{n^{2}R}{\alpha_{g}^{2}}$ (eq. 1.41)
Since $\pi^{2}/8$ in equation (1.40) is of the order of unity the two results, equations (1.40) and (1.41), are comparable. Since the area of a black hole never decreases and since the black holes in
a stable Holeum must not overlap, all bound state radii $r_{n}$ must exceed twice the black hole horizon radius in appropriate coordinates. Naïvely one might be tempted to say $r_{n}>2R$. Strictly
speaking, we are prevented from extracting such a precise inequality, however, by the fact that our analysis is only valid for values of $r\gg R$.
A true “nonoverlap” condition for the black holes can only really be extracted in the strong field regime, where our analysis is not valid. In the strong field case care must be taken, since the
position of the black hole horizon is coordinate-dependent, and in terms of an isotropic radial coordinate for which $r^{2}=x^{2}+y^{2}+z^{2}$ in terms of asymptotically Euclidean coordinates $x$,
$y$, $z$ (and which, therefore, is the radial coordinate appropriate to the weak field limit used here), the position of the horizon is actually at $r=R/4$.
However, black holes which are very close together can really only be analysed by a full general relativistic solution to the appropriate two-body problem. Coordinate distances cannot be expected to
be simply additive in this case. We nonetheless find that it is useful to extract a rough dividing line between stable and unstable Holeums. We will, therefore, define the gravitational radius of a
black hole in a purely Newtonian sense as the radius for which the escape velocity from a spherical body of mass m is equal to the velocity of light. This singles out the Schwarzschild radius,
equation (1.32), as the “Newtonian black hole radius”.
As outlined above, using $2R$ as the minimum possible separation of Newtonian "black holes", we are led to conclude that
$r_{n}>2R$ (eq. 1.42)
for all $n$ and at all times. Now we consider two cases:
$\alpha_{g}^{2}<\frac{\pi^{2}}{16}$ (eq. 1.43)
$\alpha_{g}^{2}>\frac{\pi^{2}}{16}$ (eq. 1.44)
From equations (1.40) and (1.43) we see that the condition for nonoverlap, equation (1.42), can be satisfied for all $n$, including $n=1$, the ground state. In this case, the Holeum is as stable as a
hydrogen atom. On the other hand when $\alpha_{g}$ is given by equation (1.44) the nonoverlap condition, equation (1.42) can be satisfied only if
$n^{2}>\frac{16\alpha_{g}^{2}}{\pi^{2}}>1$ (eq. 1.45)
In this case, the Holeum can exist only in excited states and the system is denied the stable ground state $n=1$. This will eventually result in the coalescence of the constituent black holes and the
destruction of the Holeum.
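To spell out the estimate behind this dichotomy (a one-line check implied by equations (1.40), (1.42) and (1.43), though not written out above): if $\alpha_{g}^{2}<\pi^{2}/16$, then
$r_{n}\cong\frac{\pi^{2}}{8}\frac{n^{2}R}{\alpha_{g}^{2}}>\frac{\pi^{2}}{8}\frac{n^{2}R}{\pi^{2}/16}=2n^{2}R\geq2R$
for every $n\geq1$, so the nonoverlap condition (1.42) holds down to the ground state. If instead $\alpha_{g}^{2}>\pi^{2}/16$, the same estimate fails for small $n$ and yields precisely the restriction (1.45).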
Now we would like to discuss the validity of equation (1.40). It is derived in the framework of Newtonian Gravity which applies for $r\gg R$. A Holeum of atomic size has a ground state radius of
about $10^{-10}$ m whereas $R<10^{-35}$ m.
Thus, the condition $r\gg R$ is eminently satisfied. Similar considerations apply to the nuclear-sized Holeum. In fact, even if we take the black hole mass as big as $m=0.1m_{P}$, we get the Holeum
$r_{n}=10^{4}n^{2}R\left(\frac{\pi^{2}}{8}\right)$ (eq. 1.46)
And this, too, eminently satisfies the NG condition, $r_{n}\gg R$. Now consider the dividing line
$\alpha_{g}^{2}=\frac{\pi^{2}}{16}$ (eq. 1.47)
between the unstable and stable Holeums. This corresponds to the mass of the black holes $m=0.8862m_{P}$. For this case the NG breaks down as expected from our discussion above. Nevertheless, we note
that the potential for $r\gg R$ is still $r^{-1}$ and in view of the BSW and the POTHA presented in the introduction, we might still hope to get reasonable order of magnitude values of the bound
state parameters in this strong field regime.
In summary, equation (1.40) is to be regarded as an asymptotic expression in the strong field case and exact elsewhere, whereas the inequalities in equations (1.42) - (1.45) are to be regarded as
probably only valid to within an order of magnitude. This caveat must be borne in mind in what follows.
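As a purely numerical illustration of these orders of magnitude (our own check, not part of the original paper): for the $m=0.1m_{P}$ example of equation (1.46) we have $\alpha_{g}=10^{-2}$, so the
ground state radius is $r_{1}=(\pi^{2}/8)R/\alpha_{g}^{2}\approx1.2\times10^{4}R$, i.e. roughly $6\times10^{3}$ times the minimum separation $2R$. The weak-field condition $r\gg R$ and the nonoverlap
condition (1.42) are thus both satisfied by a wide margin for this sub-critical mass.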
The mass of the bound state is given by
$M_{n}=2m+\frac{E_{n}}{c^{2}}$ (eq. 1.48)
Substituting equation (1.28) into equation (1.48) we obtain
$M_{n}=2m\left( 1-\frac{\alpha_{g}^{2}}{8n^{2}}\right)$ (eq. 1.49)
From equations (1.31), (1.40) and (1.49) we get
$\frac{M_{n}}{r_{n}}=\left(\frac{16\alpha_{g}^{2}}{\pi^{2}n^{2}}\right)\left(\frac{c^{2}}{2G}\right)\left(1-\frac{\alpha_{g}^{2}}{8n^{2}}\right)$ (eq. 1.50)
In view of equation (1.43), we see that for a stable Holeum
$\frac{M_{n}}{r_{n}}<\frac{c^{2}}{2G}$ (eq. 1.51)
for all n. This shows that a stable Holeum satisfying equation (1.43) is not a black hole. With the help of equation (1.26) we can rewrite the condition for a stable Holeum, equation (1.43) as
$m<\left(\frac{\pi^{\frac{1}{2}}}{2}\right) m_{P}\equiv m_{c}$ (eq. 1.52)
where $m_{c}$ will be called the cosmic limit for the formation of a stable Holeum. The numerical value of $m_{c}$, equation (1.52), is
$m_{c}=0.8862m_{P}$ (eq. 1.53)
whereas the semiclassical Bohr result, equation (1.41), gives a slightly different value $m_{c}=2^{-\frac{1}{4}}m_{P}=0.8409m_{P}$. Thus, we find that if each of the masses of two identical black
holes is less than $m_{c}$ then they will form a stable Holeum. We note that if the black holes have unequal masses $m_{1}$ and $m_{2}$ then the condition for a stable Holeum would be
$\left(m_{1}m_{2}\right)^{\frac{1}{2}}<m_{c}$ (eq. 1.54)
Equation (1.52) is both the necessary and sufficient condition for the nonoverlap of the constituent black holes of a stable Holeum embodied in equation (1.42). Not only that, it guarantees that the
Holeum will be stable. Equation (1.52) implies equation (1.42) which, in turn, implies that a Holeum occupies space just like ordinary matter. Its size cannot be reduced below $2R$. This nonoverlap
property is similar to the Pauli exclusion principle and reminds us of the following result from the second quantized field theory.
If we try to second-quantize a spinor field using a commutation rule rather than an anticommutation one, then there is no lower bound on the energy of the bound state and there will be no stable
fermions in the universe. On the other hand, if we quantize a spinor field using an anticommutation rule then there is a lower bound and the system is stable.
The anticommutation rule leads to the exclusion property. This is a spin-dependent property. In our case we have derived equation (1.40) from the maxima of the probability density which has no
classical analogue. This is a purely
quantum mechanical property except that it is a mass-dependent one. If the mass of the black holes is less than $m_{c}$, there is no overlap and the system also has the ground state $n=1$. If the
mass is greater than $m_{c}$, they overlap and the ground state $n=1$ is unavailable to them. They would annihilate.
[1] L. K. Chavda and Abhijit L. Chavda, Dark matter and stable bound states of primordial black holes, arXiv:gr-qc/0308054 (2002).
[2] Merzbacher E, Quantum Mechanics (New York: Wiley) p 190 (1961).
[3] Gradshteyn I S and Ryzhik I W, Table of Integrals, Series and Products (New York: Academic) p 1039 (1965). | {"url":"http://actaphysica.com/black-holes/a-stable-holeum/","timestamp":"2014-04-19T17:37:04Z","content_type":null,"content_length":"57859","record_id":"<urn:uuid:d0918177-a7fc-4e83-8bff-11de95fb5cc6>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Chelmsford Geometry Tutor
Find a North Chelmsford Geometry Tutor
...I specialize in biology, but am also qualified to tutor other science courses. I also offer SAT prep and writing help. My content knowledge: Science - In undergraduate, I majored in biology and
graduated as valedictorian of my class.
22 Subjects: including geometry, reading, writing, biology
...A little about me...I have a Bachelor of Science degree in Mathematics, graduating Summa Cum Laude with a GPA of 3.92. I have over 8 years experience tutoring math at colleges and on a private
basis. Not only am I a nationally certified peer tutor through the College Reading & Learning Association, but I am also licensed to teach math in the state of Massachusetts at the high school
15 Subjects: including geometry, calculus, statistics, algebra 1
...Many problems on the SAT are unlike any that students may have experienced in their Math classes at school. I help my students with "SAT Math" by a) expanding and deepening their understanding
of the ideas behind Mathematics; b) showing them how to think on their feet and apply basic Mathematica...
14 Subjects: including geometry, calculus, trigonometry, SAT math
...All of this is helped by heavy doses of encouragement; by identifying, tracking, and celebrating tangible progress towards goals; and by constant subtle and/or explicit reminders of why the
work at hand is, in fact, worth doing (which it invariably is). I have enjoyed this work a great deal and ...
26 Subjects: including geometry, English, reading, ESL/ESOL
...I was a GMAT instructor for Princeton Review and Kaplan. As someone who has had to juggle many subjects, papers, projects, tests, and deadlines in my undergraduate and graduate studies, I am
delighted to help and have helped numerous students throughout the years to improve their study effectivene...
67 Subjects: including geometry, English, calculus, reading | {"url":"http://www.purplemath.com/north_chelmsford_ma_geometry_tutors.php","timestamp":"2014-04-16T19:32:03Z","content_type":null,"content_length":"24332","record_id":"<urn:uuid:ba2866fa-357f-42cf-b4c7-455211652af5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Visit the Department of Mathematics web site for current course listings.
To apply for a graduate program visit the Faculty of Graduate Studies.
Math 418-419
Starting Fall 2013, Math 418 will be crosslisted with Math 544, and Math 419 with Math 545.
Probability: Math 544-545 (crosslisted with Math 418-419)
This pair of courses provides a thorough introduction to measure-theoretic probability. No prior knowledge of probability is assumed. Results from measure theory are stated and used without proof.
The focus is on discrete time and continuous time stochastic processes. Topics include: martingales, law of large numbers, central limit theorem, Brownian motion and special topics.
Stochastic Analysis: Math 546
This is a rigorous course on finite dimensional continuous stochastic processes, focusing on Markov processes. Topics include: stochastic integration with respect to continuous semimartingales, Itô's
formula for continuous semimartingales and applications, stochastic differential equations, Girsanov's formula, martingale problems.
Additional topics depending on the interests of the class may then be chosen from: one-dimensional diffusion theory, local time, introduction to SLE, applications to areas such as filtering,
stochastic control, genetics, mathematical finance, Stroock-Varadhan theory for finite dimensional diffusions.
Prerequisites: Math 545 or consent of the instructor. Students from other Departments interested in learning about stochastic analysis from a mathematical perspective are encouraged.
Discrete Probability: Math 548
This course covers more advanced topics in discrete probability. Some probability background is needed, including Markov chains and martingales. Measure theory may be used at some points. Topics
include spectral analysis of Markov chains and mixing times, electrical networks and random walks, random graphs (Erdos-Renyi, random regular graphs, etc.), percolation (leading up to Smirnov's
theorem on conformal invariance) and other statistical mechanics models (Ising, Potts).
Topics in Probability: Math 608
This is a topics course in probability which is offered when there is sufficient student interest. The topic of the course changes from year to year depending on the interests of students and
In Fall 2013 a pilot course on Stochastic Processes and their Applications will be taught as MATH 608D. The course is intended for graduate students in applied fields with a need for Markov Chains
and Monte Carlo methods, but without an advanced background in analysis. The course outline can be found here.
Topics in Mathematical Physics: Math 609
This topics course often studies a subject of interest to graduate students in probability. | {"url":"http://www.math.ubc.ca/Research/Probability/courses.html","timestamp":"2014-04-20T08:19:56Z","content_type":null,"content_length":"5088","record_id":"<urn:uuid:77791b56-dd30-49e4-ba17-66d0ac190b16>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus: Power Series Videos | MindBites
Series: Calculus: Power Series
About this Series
• Lessons: 5
• Total Time: 0h 51m
• Use: Watch Online & Download
• Access Period: Unlimited
• Created At: 07/29/2009
• Last Updated At: 04/11/2011
In this 5-lesson series, we'll cover Power Series, Intervals of Convergence, and the Radius of Convergence. A power series is the infinite sum of successive powers of a given variable each
accompanied by a coefficient. Power series are the generalization of Taylor and Maclaurin series. All power series converge for x = c. If a power series converges for any other values of x, then it
converges for all x-values on an interval of convergence centered at x = c.
The interval of convergence of a power series is the collection of points for which the series converges. The radius of convergence of a power series is the distance between the center and either
endpoint of the interval of convergence. We'll derive both of these concepts. Then, you'll learn to use the ratio test to find the radius and interval of convergence. We'll do this with several
series, including x^n / n! and x^n / n and x^n * (x - 1)^n and (-1)^n (x-3)^n / (n*2^n) and (-x)^n / n^(1/2). Once we find the intervals and radius of convergence for each of these, we'll learn how
to check our endpoints such that we can verify whether they are included in the interval or not.
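As a quick illustration of the kind of computation these lessons walk through (an example added here for orientation, not an excerpt from the videos): for the series x^n / n, the ratio test gives
|a_(n+1) / a_n| = |x| * n / (n + 1), which tends to |x| as n grows. The series therefore converges when |x| < 1, so the radius of convergence is 1. Checking the endpoints: at x = 1 the series becomes
the divergent harmonic series 1/n, while at x = -1 it becomes the convergent alternating harmonic series, so the interval of convergence is [-1, 1).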
For the next layer of information on Power Series, check out: the Power Series Function Representations series at: http://mindbites.com/series/200
Taught by Professor Edward Burger, this series comes from a comprehensive Calculus course. This course and others are available from Thinkwell, Inc. The full course can be found at http://
www.thinkwell.com/student/product/calculus. The full course covers limits, derivatives, implicit differentiation, integration or antidifferentiation, L'Hopital's Rule, functions and their inverses,
improper integrals, integral calculus, differential calculus, sequences, series, differential equations, parametric equations, polar coordinates, vector calculus and a variety of other AP Calculus,
College Calculus and Calculus II topics.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Lessons Included
None of the lessons in this series have been reviewed.
Below are the descriptions for each of the lessons included in the series:
• Calculus: The Definition of Power Series
Taught by Professor Edward Burger, this lesson comes from a comprehensive Calculus course. This course and others are available from Thinkwell, Inc. The full course can be found at http://
www.thinkwell.com/student/product/calculus. The full course covers limits, derivatives, implicit differentiation, integration or antidifferentiation, L'Hopital's Rule, functions and their
inverses, improper integrals, integral calculus, differential calculus, sequences, series, differential equations, parametric equations, polar coordinates, vector calculus and a variety of other
AP Calculus, College Calculus and Calculus II topics.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from
Connecticut College.
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger
has won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association
of America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, "Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas" and of the textbook "The
Heart of Mathematics: An Invitation to Effective Thinking". He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math
journals, including The "Journal of Number Theory" and "American Mathematical Monthly". His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of
numbers, and the theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
• Calculus: The Interval and Radius of Convergence
• Calculus: Finding Convergence Interval & Radius 1
• Calculus: Finding Convergence Interval & Radius 2
• Calculus: Interval and Radius of Convergence: Pt 3
Is integer factorization harder than RSA ($n=pq$) factorization?
This is a repost. I could not get a precise answer on math.SE and cstheory.SE
Let FACT denote the integer factoring problem: given $n \in \mathbb{N},$ find primes $p_i \in \mathbb{N},$ and integers $e_i \in \mathbb{N},$ such that $n = \prod_{i=0}^{k} p_{i}^{e_i}.$
Let RSA denote the special case of factoring problem where $k=2, e_i=1$ for all $i$. That is, given $n,$ find two primes $p,q,$ such that $n = pq$ or NONE if this factorization does not exist.
Obviously, RSA is an instance of FACT. Is FACT harder than RSA? Given an oracle that solves RSA in polynomial time, could we construct a polynomial time algorithm to solve FACT?
factorization computational-complexity reference-request
I closed your answer as 'no longer relevant.' The reason is that you didn't wait long enough before cross posting your question. Once you've waited a few days, flag your post for moderator
attention and it will be reopened. – François G. Dorais♦ May 25 '11 at 3:37
@François: Well, fair enough. – M.S. May 25 '11 at 3:44
I would very much want the question to be reopened, but I don't want to argue about MO policy, so I will just wait patiently. Meanwhile, to clarify the question: what would the oracle return for an
input which is not a pq number? Will it return "false" or just run into an infinite loop? The question is whether you can use this oracle for detection of semi-primeness as well as for
factorization of semi-primes. (I have no idea what the answer is in either case...) – KotelKanim May 25 '11 at 7:05
@KotelKanim If $n$ is not decomposable into a product $pq$ of two primes, then RSA oracle will terminate (in polytime) with the answer NONE (or 'false' if you wish). – M.S. May 25 '11 at 16:29
closed as no longer relevant by François G. Dorais♦ May 25 '11 at 3:35
6 Times Table Multiplication Song - Times Tables Math Songs. Sample now.
6 Times Table
Multiplication Song
"Hey, 6!"
Here's the 6 Times Table Song.
"Our son was recently diagnosed with dyslexia. We downloaded the multiplication rap songs so that he could put them on his iPod. We have found them to be very helpful and he has really enjoyed them
as well. He is now having tremendous success with his multiplication." Kristin H.
Listen to this sample of the 6 Times Table Multiplication Song.
Download all 8 songs (2 through 9 times tables) to your computer, iPod, or MP3 player.
Only $9.97
Charge to your credit card or PayPal
1 YEAR
100% Satisfaction
Download all 8 songs>>>>
I was stuck on my times tables
And was feelin' kind of low,
I decided to ask my buddy "6",
I knew that he would know.
You see "6" is pretty smart
But he likes to play this game,
If you want his advice you've got to pay the price
And figure out his name.
Everyday his name changes
And nobody really knows why,
But you times his name by the number on his shirt
And for a day that's what he goes by.
Hey "6"! What's your number?
Times 2, that makes 12,
Hey "6"! What's your number?
Times 3, 18 and well,
Hey "6"! What's your number?
Times 4, you're 24,
Hey "6"! What's your number?
Times 5, 30 for sure.
On his shirt 5 was his number
Which made me think awhile,
Then I turned and said "Hello, 30"
He answered with a smile.
Got lucky with my answer,
Must be an easier way,
I knew somehow that "6" would know,
He got asked this every day.
Hey "6"! What's your number?
Times 6, you're 36,
Hey "6"! What's your number?
Times 7, 42 sticks,
Hey "6"! What's your number?
Times 8, you're 48,
Hey "6"! What's your number?
Times 9, 54 straight.
So I told him 'bout my problem,
How I was stuck on my 6 times table,
So naturally I though of him
For help if he was able.
He gave me a pencil and paper,
Said, "Boy I'm glad you came",
"You want to know the tricks,
ask your buddy "6",
That's how I got my name".
Hey "6"! What's your number?
Times 10, 60 clicks,
Hey "6"! What's your number?
Times 11, you're 66,
Hey "6"! What's your number?
Times 12, 72 kicks,
Hey "6"! What's your number?
Times 1, you're just plain 6.
So I wrote the 6 times tables
And we rhymed all the answers,
Then he told me to think of the room
As if it were full of dancers.
We began to make a rhythm
With our hands, mouths and feet,
Then he told me to sing what I had written down,
And I'd have the 6's beat.
6 times 2, that makes 12,
6 times 3, 18 and well,
6 times 4, is 24,
6 times 5, is 30 for sure,
6 times 6, is 36,
6 times 7, is 42 sticks,
6 times 8, is 48,
6 times 9, is 54 straight,
6 times 10, is 60 clicks,
6 times 11, is 66,
6 times 12, is 72 kicks,
6 times 1, is just plain 6.
So that's all there is to it,
Just sing and rhyme and rap,
The next time you need to multiply,
You'll have it in a snap.
Hey "6", what's your number...
Hey "6", what's your number...
Hey "6", what's your number...
Hey "6", what's your number...
Hey "6", what's your number...
Hey "6"!
The End
The 6 Times Table Multiplication Song
© 1991 YORK 10 Industries
© 2004-13 Abbey World Media, Inc.
Learn to multiply with a song -
The 6 Times Table Multiplication Song "Hey, '6'!" -
an effective learning tool and FUN too.
Conducting Math Research
July 27th 2011, 12:26 PM
Conducting Math Research
I'm doing my first undergrad research project and I'd like some advice.
The problem statement is "Find the relation between the Feynman Path Integral and Distributional Modes."
I have outlined my approach as follows:
I. Research basic quantum physics and terms (plancks constant, hamiltonian, lagrangian, etc.)
II.Deep research on Feynman Path Integrals
III. Deep research on Distributional Modes
IV. What does it mean to find the relation between these two things? At what point can I say I've found a relation, or at what point can I say there is no relation? Also, there might be a
relation but it might be contrived. How do I know if it is a true relation or not? (I figured I need to answer these questions before I can actually find the relation :) )
V. Find the relation.
Is there any advice you could offer me? Any help would be appreciated.
Thanks (Cool)
August 6th 2011, 07:12 PM
Re: Conducting Math Research
Maybe I can make my question simpler:
In general, how does one conduct mathematical research where the question involves finding the relation between two separate mathematical topics?
August 11th 2011, 04:52 AM
Re: Conducting Math Research
In general, how does one conduct mathematical research where the question involves finding the relation between two separate mathematical topics?
They come up with a third math topic that contains/expands them. :P | {"url":"http://mathhelpforum.com/advanced-math-topics/185185-conducting-math-research-print.html","timestamp":"2014-04-17T19:47:33Z","content_type":null,"content_length":"5185","record_id":"<urn:uuid:6611c51b-a951-4c8a-94f5-f32a8928abd0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
Converting MATLAB Algorithms into Serialized Designs for HDL Code Generation
Simulink^® lets you integrate MATLAB^® algorithms into a Simulink model for C or HDL code generation. However, many MATLAB implementations of signal processing, communications, and image processing
algorithms require some redesign to make them suitable for HDL code generation. For example, they often use data types such as doubles, strings, and structures, and contain control flow constructs,
such as while loops and break statements, that do not map well to hardware. Apart from these constructs, MATLAB algorithms that operate on large data sets are not always written to take account of
hardware design characteristics like streaming and resource sharing. This article uses a typical software implementation of an adaptive median filter to illustrate the process of converting MATLAB
algorithms for HDL code generation.
We start with a Simulink model that takes a noisy 131x131 pixel image and applies an adaptive median filter to obtain the denoised image (Figure 1, top left).
The current version of the algorithm is implemented in MATLAB for C code generation (Figure 1, top right). The algorithm takes the whole input image, 'I', as input, operates on the data in double
precision, and returns the denoised image, 'J', as output. The core of the algorithm is implemented in three levels of nested loops operating on the entire image. The two outer loops iterate over the
rows and columns of the image. The innermost loop implements the adaptive nature of the filter by comparing the median to a threshold and deciding whether to replace the pixel or increase the
neighborhood size and recalculate the median.
The algorithm uses four neighborhood sizes: 3x3, 5x5, 7x7, and 9x9. Even though the current implementation contains constructs and paradigms typical in software implementations and is efficient for
software, in its current form it is not suitable for hardware synthesis, for the following reasons:
The algorithm operates on the entire image. Typical hardware implementations stream the data into the chip in small chunks called windows or kernels to reduce the chip I/O count; the data is
processed at a faster rate, and the chip finishes processing an entire frame of data before the next frame is available.
The algorithm uses double data types. Double data types are not efficient for hardware realization. Hardware implementations must use silicon area efficiently and avoid usage of double-precision
arithmetic, which consumes more area and power. The algorithm should use fixed-point data types as opposed to floating-point data types (double).
The algorithm uses expensive math functions. The use of operators like sin, divide, and modulo on variables leads to inefficient hardware. Naïve implementation of these functions in hardware results
in low clock frequencies. For hardware design tradeoffs, we need to use low-cost repetitive add- or subtract-based algorithms, such as CORDIC.
Software loops in the algorithm must be mapped efficiently to hardware. Since hardware execution needs to be deterministic, we cannot allow loops with dynamic bounds. Hardware is parallel, which
means that we could unroll the loop execution in hardware to increase concurrency, but this uses up more silicon area.
The algorithm contains large arrays and matrices. When mapped to hardware, large arrays and matrices consume area resources like registers and RAMs.
In the next section we will see how a restructured hardware-oriented implementation of the same adaptive median filter addresses each of these issues.
Modifying the Adaptive Filter Algorithm for Hardware
The process of converting the original adaptive median algorithm to hardware involves the following tasks:
• Serializing the input image for processing
• Separating the adaptive median filter computation for parallelization
• Updating the original image using the denoised pixel values
This article focuses on the first two tasks.
Serializing the Input Image
Most hardware algorithms do not work on the whole image but on smaller windows at each time step. As a result, the original input image must be serialized and streamed into the chip and, depending on
how much of the image needs to be available for the algorithm computation, buffered onto the on-chip memory. This hardware modeling of the algorithm must take into account the amount of memory
available to hold the image data, in addition to the number of I/O pins available on the chip to stream the data in.
In our example, serialization involves restructuring the Simulink model so that our adaptive filter design breaks the image into 9x1 columns of pixel data and feeds it as input to the filter. The
data is buffered inside the chip for 9 cycles, creating a 9x9 window to compute a new center pixel. The filter processes this window of data and streams a modified center pixel value for the 9x9
window. At the output of the filter, the modified center pixel data is applied to the original image to reconstruct a denoised image. Now that the filter is working on smaller windows of data, it
needs to run at a faster rate to finish processing the whole algorithm on the image before the next image is available at the input. We model this algorithm behavior using rate transition blocks.
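A software stand-in can make this buffering scheme concrete (a rough Python sketch for illustration only; the actual design is the Simulink model described here, and the sketch only sweeps one 9-row band of the image):

```python
from collections import deque
import numpy as np

# Streaming sketch: 9x1 pixel columns arrive one per cycle and are held in a
# 9-deep buffer; once 9 columns are present, a full 9x9 window is available.
def stream_windows(image, band_top=0):
    buffer = deque(maxlen=9)                             # on-chip column buffer
    for c in range(image.shape[1]):
        buffer.append(image[band_top:band_top + 9, c])   # one 9x1 column per cycle
        if len(buffer) == 9:                             # pipeline full after 9 cycles
            yield np.column_stack(buffer)                # 9x9 window for the filter

image = np.arange(131 * 131).reshape(131, 131)           # stand-in for the noisy image
print(next(stream_windows(image)).shape)                 # (9, 9)
```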
This sort of image buffering would require additional control signals for the streaming of data to be processed by the algorithm implemented in the hardware. In this model (Figure 2) the subsystem
"capture_column_ data" helps to sweep through the image, and in 9x1 windows, feeds the data to the main Filter subsystem ("MedianFilter_2D_ HW"). Since the 2D adaptive median filter works on a
maximum window size of 9x9, it takes 9 cycles to fill the filter pipeline at the beginning of each row of images and compute the first center pixel. This means we need additional control signals at
the output of the filter to indicate the validity of the center pixel output.
At the output of the filter, the subsystem "update_image" takes the filtered data from the "MedianFilter_2D_HW" subsystem and reconstructs the full image based on the control signals.
Parallelizing the Algorithm
The adaptive median filter bases its selection of a window size for calculating the median on local statistics. The software-oriented implementation computes these statistics for each window size
sequentially in nested loops. The hardware implementation can perform these computations in parallel.
The new filter implementation partitions the data buffer into 3x3, 5x5, 7x7, and 9x9 regions and implements separate median filters to compute the minimum, median, and maximum values for each
subregion in parallel (Figure 2, bottom right). Parallelizing the window computations lets the filter perform faster in hardware.
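The independence of the four window sizes can be sketched as follows (the function name and data are assumed, not from the article); because each region's statistics depend only on the shared 9x9 buffer, hardware can evaluate all four concurrently:

import numpy as np

def region_stats(window9x9):
    stats = {}
    for n in (3, 5, 7, 9):
        lo = (9 - n) // 2
        region = window9x9[lo:lo + n, lo:lo + n].ravel()   # centered n x n subregion
        stats[n] = (region.min(), np.median(region), region.max())
    return stats

w = np.random.randint(0, 256, size=(9, 9))
print(region_stats(w))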
Optimizing the Algorithm for Hardware
To find the minimum, median, and maximum values of the neighboring pixels, the nested loops in the software implementation iterate over all the rows from left to right and top to bottom. In the hardware-friendly implementation, min/max/median computation occurs only on the regions of interest, identified using a 1D median filter. Figure 3 (top) shows the computation of min/max/median values for a 3x3 window; as can be seen, an NxN region of pixels requires N^2 * floor(log2(N^2)/2) comparators.
To implement the algorithm on 3x3, 5x5, 7x7, and 9x9 windows, we would require a total of 4752 (9*4 + 25*12 + 49*24 + 81*40) comparators.
We can explore other area tradeoffs—for example, we can implement a 2D filtering algorithm that works on individual rows and columns of the nxn region rather than on all the pixels. This would
consume fewer resources than the 1D filter and would require 800 comparators (18 + 100 + 196 + 486) instead of 4752. However, because we know that the center pixel values are usually found in the 3x3
region, we could compromise on quality by applying the lossy 2D algorithm (get_median_2d) on other regions while applying the 1D algorithm on the 3x3 region (Figure 3, bottom).
To experiment with these tradeoffs we simply swap the call to functions on the path get_median_1d with get_ median_2d and simulate the model to compare the noise reduction differences between
different choices.
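To give a feel for the lossy tradeoff (the function names follow the article, but the bodies below are assumptions), an exact median sorts the whole region, whereas a cheaper variant can work row by row; the two usually agree but are not guaranteed to:

import numpy as np

def get_median_1d(region):
    return np.median(region.ravel())                 # exact median of all pixels

def get_median_2d(region):
    return np.median(np.median(region, axis=1))      # median of row medians: cheaper, lossy

region = np.random.randint(0, 256, size=(9, 9))
print(get_median_1d(region), get_median_2d(region))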
The output pixel of this algorithm is used to denoise the original image.
Advantages of This Approach
MATLAB and Simulink provide a concise way of representing the algorithm: The adaptive median filter is described in about 186 lines of MATLAB code. A comparable C-code implementation would require
almost 1000 lines; an HDL implementation, more than 2000 lines. Understanding the modeling tradeoffs when targeting hardware and software is key for efficient implementation of complex signal and
video processing algorithms. MATLAB and Simulink help you to explore these tradeoffs at a high level of abstraction without encoding too much hardware detail, providing an effective way to use the
MATLAB environment for hardware deployment.
Adaptive Median Filters
Adaptive median filtering is a digital image processing technique commonly used to reduce speckle noise and salt-and-pepper noise. A generic median filter replaces the current pixel value with the
median of its neighboring pixel values; it affects all the pixels, whether or not they are noisy, and hence, blurs images with high noise content. The adaptive median filter overcomes this limitation
by selectively replacing the pixel values. It makes the decision by analyzing the median. If the median is skewed by the noise, it adapts itself by defining the median over larger regions.
Because of these advantages, adaptive median filters are commonly used as a preprocessing step for cleaning up the image for further processing. Hardware implementations of the filters are highly
desirable due to the algorithm's computation complexity and high throughput requirements. | {"url":"http://www.mathworks.co.kr/company/newsletters/articles/converting-matlab-algorithms-into-serialized-designs-for-hdl-code-generation.html?nocookie=true","timestamp":"2014-04-23T19:19:25Z","content_type":null,"content_length":"36428","record_id":"<urn:uuid:e84c7a95-d206-4ab4-bd22-b206311d9310>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to AI - Week 3
Each path through tree corresponds to implication like
FORALL r Patrons(r,Full) & WaitEstimate(r,0-10)
& Hungry(r,N) -> WillWait(r)
Hence Decision tree corresponds to conjunction of implications.
Cannot express tests that refer to two different objects like:
EXISTS r[2] Nearby(r[2]) & Price(r,p) & Price(r[2],p[2]) & Cheaper(p[2],p)
Expressiveness essentially propositional logic (no function symbols, no existential quantifier)
Complexity for n attributes is 2^2^n, since for each function 2^n values have to be defined. (e.g. for n=6 there are 2 x 10^19 different functions)
Functions like parity function (1 for even, 0 for odd) or majority function (1 if more than half of the inputs are 1) end in large decision trees.
Some Examples
Ex      Attributes                                                            Goal
        Alt  Bar  Fri  Hun  Pat    Price  Rain  Res  Type     Est      Wait
X[1]    Yes  No   No   Yes  Some   £££    No    Yes  French   0-10     Yes
X[2]    Yes  No   No   Yes  Full   £      No    No   Thai     30-60    No
X[3]    No   Yes  No   No   Some   £      No    No   Burger   0-10     Yes
X[4]    Yes  No   Yes  Yes  Full   £      Yes   No   Thai     10-30    Yes
X[5]    Yes  No   Yes  No   Full   £££    No    Yes  French   >60      No
X[6]    No   Yes  No   Yes  Some   ££     Yes   Yes  Italian  0-10     Yes
X[7]    No   Yes  No   No   None   £      Yes   No   Burger   0-10     No
X[8]    No   No   No   Yes  Some   ££     Yes   Yes  Thai     0-10     Yes
X[9]    No   Yes  Yes  No   Full   £      Yes   No   Burger   >60      No
X[10]   Yes  Yes  Yes  Yes  Full   £££    No    Yes  Italian  10-30    No
X[11]   No   No   No   No   None   £      No    No   Thai     0-10     No
X[12]   Yes  Yes  Yes  Yes  Full   £      No    No   Burger   30-60    Yes
Different Solutions
• Trivial solution: Construct decision tree that has one path to a leaf for each example
It fits the given examples, but generalizes badly to anything else.
• Occam's razor: The most likely hypothesis is the simplest one that is consistent with all observation.
• Finding smallest decision trees is intractable, hence heuristic decisions: test the most important attribute first. Most important = makes most difference to the classification of an example.
Short paths in the trees, small trees
• Compare splitting the examples by testing on attributes (cf. Patrons, Type)
Selecting Best Attributes
Selecting Best Attributes (Cont'd)
Recursive Algorithm
• If there are positive and negative examples, choose "best" attribute to split (i.e., Patrons in the example above)
• If all remaining examples positive (or all negative), then done
• If no examples left, no information, hence default value
• If no attributes left, but both positive and negative examples, then problem. Examples have same description but different classification due to incorrect data (noise), not enough information, or
nondeterministic domain
Then majority vote.
DecTreeLearning ID3
function DecTreeL(ex,attr,default);;; returns a decision tree
;;; ex: set of examples, attr: set of attributes, default: default value for the goal predicate
if ex empty then return default
elseif all ex have same classification then return it
elseif attr empty then return MajVal(ex)
else ChooseAttribute(attr,ex) -> best1
new decision tree with root test best1 -> tree1
for each value v[i] of best1 do
{elements of ex with best1=v[i]} -> ex[i]
DecTreeL(ex[i],attr - {best1},MajVal(ex)) -> subtree1
add branch to tree1 with label v[i] and subtree subtree1
end return tree1
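The same algorithm in runnable form (a Python sketch written for these notes, not the lecture's own code; the attribute-selection function is passed in so it can use the information gain defined later):

from collections import Counter

def majority_value(examples):
    return Counter(e['class'] for e in examples).most_common(1)[0][0]

def dec_tree_learn(examples, attributes, default, choose_attribute):
    if not examples:
        return default
    classes = {e['class'] for e in examples}
    if len(classes) == 1:
        return classes.pop()                      # all examples agree
    if not attributes:
        return majority_value(examples)
    best = choose_attribute(attributes, examples)
    tree = {best: {}}
    for v in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == v]
        tree[best][v] = dec_tree_learn(subset,
                                       [a for a in attributes if a != best],
                                       majority_value(examples),
                                       choose_attribute)
    return tree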
Generated Decision Tree
Discussion of the Result
Comparison of original tree and learned tree:
• Trees differ.
• Learned tree is smaller (no test for raining and reservation, since all examples can be classified without them).
• Detects regularities (waiting for Thai food on weekends).
• Can make mistakes (e.g. a case where the wait estimate is 0-10 minutes, but the restaurant is full and the patron is not hungry).
• Question: if consistent, but incorrect tree, how correct is the tree?
Assessing the Performance
• Collect a large set of examples.
• Divide it into two disjoint sets: training set and test set.
• Learn algorithm with training set and generate hypothesis H.
• Measure percentage of examples in test set that are correctly classified by H.
• Repeat steps 1 to 4 for different sets.
Assessing the Performance (Cont'd)
Learning curve shows increase of the quality of the prediction, when training set grows.
• ID3 used to classify boards in a chess endgame. ID3 had to recognise boards that led to a loss within 3 moves. Classification of half a million positive situations from 1.4 million different
possible boards. Typical learning curve as result.
• Building up an expert system for designing gas-oil separation systems for oil platforms: the gasoil XPS of BP with 2500 rules. Building by hand: 10 person-years; using decision-tree learning: 100 person-days.
• Learning to fly: Flight simulator, generated by watching three skilled human pilots. 90,000 examples and 20 state variables labelled by the action taken. Extract decision tree which was
translated into C code. Program could fly better than its teachers.
Finding Best Attributes
In order to build up small decision trees: select best attributes first (best = most informative)
• measure information in bits
• One bit of information is enough to answer a yes/no question about which one has no idea (e.g. flip of a fair coin)
• If the possible answers u[i] have probabilities P(u[i]), then
I(P(u[1]),... ,P(u[n])) = SUM[i=1]^n -P(u[i])* ld(P(u[i]))
• e.g., fair coin: I(½,½) = 1 (1 bit)
• e.g., if we know already the outcome by 99%, the information of the real outcome has the (expected) information of: I(1/100,99/100)=0.08 bits.
If we know outcome by 100%, no additional information, I=0.
ld(x) (dual logarithm) is defined for every positive real number x such that
2^ld(x) = x
Logarithm (Cont'd)
Some values:
x 1 2 4 8 10 16 1/2 1/4 1/8
ld(x) 0 1 2 3 3.32 4 -1 -2 -3
lim[x -> 0+] ld(x) = -∞
lim[x -> 0+] x*ld(x) = 0
log[10](x) = log[10](2) * ld(x)
Calculations in the Examples
I(½,½)
= SUM[i=1]^2 -P(u[i])* ld(P(u[i]))
= -½*ld(½)-½*ld(½)
= -½* (-1)-½* (-1)
= ½+½
= 1
with P(u[i])=½
Calculations in the Examples (Cont'd)
I(0/100, 100/100)
= SUM[i=1]^2 -P(u[i]) * ld(P(u[i]))
= -0/100 * ld(0/100) - 100/100 * ld(100/100)
*= -0 * (-∞) - 1 * 0
*= 0 + 0
= 0
with P(u[1]) = 0/100 and P(u[2]) = 100/100
(*) Strictly you have to use here
lim[x-> 0+]x*ld (x) = 0.
Calculations in the Examples (Cont'd)
I(1/100, 99/100)
= SUM[i=1]^2 -P(u[i]) * ld(P(u[i]))
= -1/100 * ld(1/100) - 99/100 * ld(99/100)
= -1/100 * (-6.64386) - 99/100 * (-0.0145)
= 0.066439 + 0.014355
= 0.080794
with P(u[1]) = 1/100 and P(u[2]) = 99/100
ld(1/100) = -6.64386 and ld(99/100) = -0.0145
Applied to Attributes
p := number of positive examples
n := number of negative examples
Information contained in correct answer:
I(p/(p+n), n/(p+n)) = -(p/(p+n)) * ld(p/(p+n)) - (n/(p+n)) * ld(n/(p+n))
Restaurant example: p=n=6, hence we need 1 bit of information. A test of one single attribute A will not usually give this, but only some of it. A divides example set E into subsets E[1],... ,E[u].
Each subset E[i] has p[i] positive and n[i] negative examples, so we need in this branch an additional I(p[i]/(p[i]+n[i]), n[i]/(p[i]+n[i])) bits of information, weighted by
(p[i]+n[i])/(p+n) (the probability that a random example falls into E[i]).
HENCE information gain:
Gain(A) = I(p/(p+n), n/(p+n)) - SUM[i=1]^u ((p[i]+n[i])/(p+n)) * I(p[i]/(p[i]+n[i]), n[i]/(p[i]+n[i]))
Choose attribute with largest information gain.
In the restaurant example, initially:
alternative  bar   friday    hungry    patrons
0.0          0.0   0.020721  0.195709  0.540852

price      rain  reservation  type  estimate
0.195709   0.0   0.020721     0.0   0.207519
Hence: choose "patrons"
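These numbers can be reproduced directly from the definition above; the following small check (added here, not part of the original notes) computes the gain for patrons, which splits the 12 examples into None (0 positive, 2 negative), Some (4, 0) and Full (2, 4):

from math import log2

def info(p, n):
    out = 0.0
    for q in (p / (p + n), n / (p + n)):
        if q > 0:
            out -= q * log2(q)
    return out

def gain(splits, p, n):
    # splits: list of (p_i, n_i) pairs produced by testing the attribute
    remainder = sum((pi + ni) / (p + n) * info(pi, ni) for pi, ni in splits)
    return info(p, n) - remainder

print(gain([(0, 2), (4, 0), (2, 4)], 6, 6))              # patrons: about 0.5409
print(gain([(1, 1), (1, 1), (2, 2), (2, 2)], 6, 6))      # type: 0.0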
Noise and Overfitting
• Overfitting: the problem of finding meaningless regularity in the data (example: die rolls characterised by irrelevant attributes like hour, day and month yield a perfect decision tree, as long as no two examples have identical descriptions)
• One possibility: decision tree pruning by detecting irrelevant attributes. Irrelevant = no information gain for an infinitely large sample. Test against the null hypothesis, which assumes that there is no underlying pattern; an attribute is only kept if its deviation is significant (e.g. more than 5%).
• alternative: cross-validation, i.e. take only part of the data for learning and rest for testing the prediction performance. Repeat with different subsets and select best tree. (can be combined
with pruning)
Reinforcement Learning
Assume the following stochastic environment
Each training sequence has the form:
• (1,1)->(1,2)->(1,3)->(2,3) ->(1,3)->(2,3)->(3,3)->(4,3) reward +1
• (1,1)->(2,1)->(1,1)->(2,1) ->(3,1)->(3,2)->(4,2) reward -1
Probability for a transition to a neighbouring state is equal among all possibilities, i.e. each available successor state is equally likely.
Assume utility function is additive, i.e.
U([s[0],s[1],... ,s[n]]) = reward(s[0]) + U([s[1],... ,s[n]])
The expected utility of a state is the expected reward-to-go of that state.
Utility to be Learned
Can be learned by Least Mean Squares approach, short LMS, (also called adaptive control theory). It assumes that the observed reward-to-go on that sequence provides direct evidence of the actual
expected reward-to-go. At end of each sequence: calculate reward-to-go for each state and update utility
Passive Reinforcement Learning
vars U ;;; table of utility estimates
vars N ;;; table of frequencies for states
vars M ;;; table of transition probabilities from state to state
vars percepts ;;; percept sequence, initially empty
function Passive-RL-Agent(e);;; returns an action
add e to percepts
increment N(State(e))
UPDATE(U,e,percepts,M,N) -> U
if Terminal?(e) then nil -> percepts
return action Observe
function LMS-Update(U,e,percepts,M,N);;; returns updated U
if Terminal?(e) then 0 -> reward-to-go;
for each e[i] in percepts (starting at end) do
reward-to-go + Reward(e[i]) -> reward-to-go;
RunningAverage(U(State(e[i])), reward-to-go, N(State(e[i]))) -> U(State(e[i]));
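In Python the same direct utility estimation looks as follows (a sketch written for these notes; the per-step reward of -0.04 is an assumption, since the notes do not state the step reward here):

from collections import defaultdict

U = defaultdict(float)   # utility estimates per state
N = defaultdict(int)     # visit counts per state

def lms_update(trial):
    # trial: list of (state, reward) pairs for one complete training sequence
    reward_to_go = 0.0
    for state, reward in reversed(trial):
        reward_to_go += reward
        N[state] += 1
        U[state] += (reward_to_go - U[state]) / N[state]   # running average

trial = [((1, 1), -0.04), ((1, 2), -0.04), ((1, 3), -0.04), ((2, 3), -0.04),
         ((1, 3), -0.04), ((2, 3), -0.04), ((3, 3), -0.04), ((4, 3), 1.0)]
lms_update(trial)
print(U[(1, 1)], U[(4, 3)])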
Summary - Decision Tree Learning
• Decision Tree Learning: a very efficient way of non-incremental learning that searches the space of decision trees.
□ It adds a subtree to the current tree and continues its search.
□ It does not backtrack.
□ It is highly dependent upon the criteria for selecting properties to test.
□ It can be extended to allow more than two values as result of the classification
□ It can be extended to deal with noise.
Summary - Reinforcement Learning
• Reinforcement Learning: incremental learning approach.
□ We could only give a glimpse of reinforcement learning.
□ We looked only at the example of a passive agent, which observes the world. Typically you will have an active agent, which can make decisions based on its partial knowledge of the world.
□ An active agent has to decide whether it should exploit its current knowledge, or explore the world.
Further Reading
• S. Russell, P. Norvig. Artificial Intelligence - A Modern Approach. 2nd Edition, Pearson Education, 2003. Sections 18.3 & 21.2.
• G. Luger, W. Stubblefield. Artificial Intelligence - Structures and Strategies for Complex Problem Solving. 2nd Edition, The Benjamin/Cummings Publishing Company, 1993.
• J.R. Quinlan, Induction of Decision Trees. Machine Learning, 9(1):81-106, 1986.
• J.R. Quinlan, The effect of noise on concept learning. In Michalski et al., eds., Machine Learning: An Artificial Intelligence Approach, Vol. 2. Morgan Kaufmann. 1986.
© Manfred Kerber, 2004, Introduction to AI
The URL of this page is http://www.cs.bham.ac.uk/~mmk/Teaching/AI/Teaching/AI/l3.html.
URL of module http://www.cs.bham.ac.uk/~mmk/Teaching/AI/ | {"url":"http://www.cs.bham.ac.uk/~mmk/Teaching/AI/l3.html","timestamp":"2014-04-16T17:31:24Z","content_type":null,"content_length":"31360","record_id":"<urn:uuid:ce135e89-b088-47e5-adf6-cc7410c116fc>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mission Viejo Algebra Tutor
...For most of my professional career I worked as an aerospace engineer on the Space Station. Wanting to learn more about science, I obtained a second Bachelors degree in physics. I have worked in
multiple areas including structural design, advanced spacecraft design, database design, and programming.
14 Subjects: including algebra 1, algebra 2, calculus, physics
...Already rockin' the math? I'll help you drive ahead to score that scholarship! I got one and I can help you do the same.
24 Subjects: including algebra 1, calculus, algebra 2, physics
Hello, I am very motivated to help you get the understanding and knowledge you need with computer programming. I have over 10 years experience with visual basic and VBA using Excel. I have created
over the years apps for Excel that have saved me and my coworkers hundreds of hours.
10 Subjects: including algebra 1, Java, Microsoft Excel, C++
...I have elementary math background. I teach math to learning disabled children. There are strategies in math that I can share with students.
8 Subjects: including algebra 1, reading, English, grammar
...Students love working with me because I take the time to listen and understand where they're coming from and I explain difficult concepts in a simple way. WHAT DO I TEACH? Standardized Test
Prep - SAT exam- ACT exam- SSAT exam- SAT IIs (Math and Science related exams)- AP exams (Math and Scienc...
23 Subjects: including algebra 1, algebra 2, chemistry, calculus | {"url":"http://www.purplemath.com/Mission_Viejo_Algebra_tutors.php","timestamp":"2014-04-17T22:02:47Z","content_type":null,"content_length":"23828","record_id":"<urn:uuid:51b4bc2b-d63c-47b3-89a3-3e1096e1547f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about interpreting the y-intercept on Algebra and Geometry Help
Using Linear Models – Graphing a Linear Model, Interpret the y-intercept
Posted on October 30, 2009 by Mr. Pi
This is the second post in the series on working with linear models. Here is the link to the first post: Using Linear Models – Graphing a Linear Model Using the x- and y-Intercepts
In this post I will discuss writing a linear model or linear equation to represent a problem situation. Once the equation is written, I will identify and interpret the y-intercept. The embedded video
will model all of the steps necessary to solve the problem. Here is the problem.
A spring has a length of 8 cm when a 20-g mass is hanging at the bottom end. Each additional gram stretches the spring another 0.15 cm. Write an equation for the length y of the spring as a function
of the mass x of the attached weight. Graph the equation. Interpret the y-intercept.
Understanding the first sentence of the problem and how the values are related is key to writing the equation. From the first sentence it can be concluded that the length of the spring will depend on
the amount of weight attached to the spring. This means the weight is the independent variable and the length of the spring is the dependent variable. The problem even goes as far as stating that
length should be represented by the variable y and the weight is represented with the variable x. x traditionally is the independent variable and y is the dependent variable, as is the case in this
The second sentence reads, “Each additional gram stretches the spring another 0.15 cm.” This relationship between the additional weight and the amount of stretch is a rate of change. The spring will
expand 0.15 cm for every 1 gram added. Remember, the slope of a line is also a rate: the change in y-values over the change in x-values. Because the y-values represent the length
of the spring and the x-values represent the weight added, we can ascertain that the slope is 0.15 cm/1 gram, or m = 0.15.
It is given in the problem, the spring has a length of 8 cm when a 20-g mass is hanging at the bottom end. This can be written as the ordered pair (20, 8 ) and can be used with the slope m = 0.15 to
write a linear equation for this problem. To write this equation, it will be easiest to start with the point-slope form of a linear equation:
y – y[1] = m(x – x[1]) –> Point-Slope Form of a Linear Equation (1)
Substituting the point (20,8) and the slope m = 0.15 into equation 1,
y – 8 = 0.15(x – 20). (2)
You can make a graph from this form, but since interpreting the y-intercept is a part of the problem, using some algebra to put the equation into slope-intercept form will be useful. Remember, the
slope-intercept form of a linear equation is given by y = mx + b, where m is the slope and b is the y-intercept. To put equation 2 into slope-intercept form, first 0.15 must be distributed to give:
y – 8 = 0.15x – 3. (3)
Finally, adding 8 to both sides of equation 3 will give the slope-intercept form:
y = 0.15x + 5. (4)
The y-intercept is (0,5) and means that when the spring has no weight attached, it is 5 cm long. As before, see the video for the graphing portion of this problem.
A related example to this spring problem, what mass would be needed to stretch the spring to a length of 9.5 cm? This is very simple to complete if you understand the meaning of the variables. Y
represents the length of the spring and x represents the amount of weight attached to the spring. For this problem, you must find HOW MUCH WEIGHT must be attached to stretch the spring 9.5 cm. Thus,
you are given the length of the spring or the y-value and you must solve for the x-value.
To complete this task, substitute 9.5 for y in equation 4 to get:
9.5 = 0.15x + 5. (5)
To solve for x, subtract 5 from both sides,
4.5 = 0.15x (6)
And divide both sides by 0.15 to find,
x = 30 (7)
Relating this answer to the problem, it will take 30 grams to stretch the spring to 9.5 cm.
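A quick numeric check of the model (added here; not part of the original post) confirms both the given data point and the answer above:

def spring_length(mass_g):
    return 0.15 * mass_g + 5      # slope 0.15 cm per gram, y-intercept 5 cm (unloaded length)

print(spring_length(20))          # 8.0 cm, the given data point
print((9.5 - 5) / 0.15)           # 30.0 grams needed for a length of 9.5 cm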
Using Linear Models – Graphing a Linear Model Using the x- and y- Intercepts
Posted on October 27, 2009 by Mr. Pi
This video is about using linear models. The exercise models how to write a linear model or linear equation for a given problem situation.
Suppose an airplane descends at a rate of 300 ft/min from an elevation of 8000 ft. Write and graph an equation to model the plane’s elevation as a function of the time it has been descending.
Interpret the intercept at which the graph intersects the vertical axis.
To solve this problem, it is essential that one understands independent and dependent variables. Remember, when graphing on the coordinate plane, the independent variable is graphed along the x-axis
and the dependent variable is graphed along the y-axis. As with most problems involving time, TIME is the independent variable and the countdown will start when the plane starts its descent. The plane’s
plane’s elevation is the dependent variable. It has a distinct starting value of 8000 feet and it will be changing at a rate of -300 feet per minute. To find the plane’s elevation at any given time:
x = time elapsed measured in minutes
y = plane’s elevation measured in feet
The elevation equals the rate of descent times the time plus the starting elevation. The previous sentence translates into equation 1.
y = -300x + 8000 (1)
To graph this linear model, it would be best to use the x- and y-intercepts. The y-intercept will be (0, 8000). This represents the beginning of the descent. The elapsed time is zero and the
plane is at its starting elevation of 8000 feet. The x-intercept is found by substituting 0 in for y in equation 1, giving equation 2.
0 = -300x + 8000 (2)
To find the x-intercept or the amount of time it will take the plane to land, add 300x to both sides of (2) to get:
300x = 8000 (3).
Divide both sides of (3) by 300:
x = 80/3 ≈ 26.7 minutes (4).
There are now two ordered pairs that can be graphed: (0, 8000) and (26.7, 0). See the video for the actual graphing.
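As with the spring example, the two intercepts are easy to verify numerically (an added check, not part of the original post):

def elevation(minutes):
    return -300 * minutes + 8000

print(elevation(0))       # 8000 ft: the y-intercept, the starting elevation
print(8000 / 300)         # about 26.7 minutes until the plane reaches the ground (the x-intercept)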
What are the '10 dimensions' that physicists are always talking about?
The first four are the ones we all know and love: Three dimensions for space and one dimension for time.
The other 6 are called 'internal degrees of freedom' and are related to the number of fundamental symmetries present in the physical world at the quantum scale. The equations that physicists work
with require these additional dimensions so that new symmetries can be defined that allow physicists to understand physical relationships between the various particle families. They think these are
actual, real dimensions to the physical world, only that they are now 'compact' and have finite sizes unlike our 4 dimensions of space and time which seem almost to be infinite in size. Each point in
4 dimensional space-time has another 6 dimensions attached to it which 'particles and forces' can use as extra degrees of freedom to define themselves and how they will interact with each other. Do
not confuse them with 'hyperspace' because the particles do not actually 'move' along these other dimensions. They are not 'spatial' dimensions, but are as unlike space and time as time is unlike space.
Return to
Ask the Astronomer | {"url":"http://www.astronomycafe.net/qadir/ask/a10936.html","timestamp":"2014-04-18T05:56:21Z","content_type":null,"content_length":"2187","record_id":"<urn:uuid:c2edb294-7e9b-4533-b611-7db87bbd6944>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: June 2008 [00578]
Re: Insight into Solve...
• To: mathgroup at smc.vnet.net
• Subject: [mg89849] Re: [mg89815] Insight into Solve...
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Sun, 22 Jun 2008 03:25:49 -0400 (EDT)
• References: <200806210931.FAA15784@smc.vnet.net>
On 21 Jun 2008, at 18:31, David Reiss wrote:
> Some insight requested...:
> When Solving for the center of a circle that goes through two
> specified points,
> Solve[{(x1 - x0)^2 + (y1 - y0)^2 == r^2, (x2 - x0)^2 + (y2 - y0)^2 ==
> r^2}, {x0, y0}]
> the result gives expressions for x0 and y0 that are structurally very
> different even though the symmetry of the problem says that they are,
> in fact expressions that are ultimately very similar.
> My question is what is the reason in the algorithm that Solve uses
> that causes the initial results to structurally look so different. It
> appears that Solve is not aware of the symmetry in the problem.
> Note that if instead of using x0 and y0 one used z0 and y0, then the
> structural forms of the expressions are reversed suggesting that Solve
> is taking variables alphabetically (no surprise here).
> The problem with this sort of result from Solve is that one needs to
> explicitly manipulate the resulting expressions to exploit the
> symmetries in order for the final expressions to structurally/visually
> exhibit those symmetries. FullSimplify does not, starting from the
> results of Solve, succeed in rendering the final expressions into the
> desired form.
> E.g, if temp is the result of the Solve command above,
> FullSimplify[circlePoints,
> Assumptions -> {r >= 0, x1 \[Element] Reals, x2 \[Element] Reals,
> y1 \[Element] Reals, y2 \[Element] Reals}]
> does not sufficiently reduce the expressions in to forms that
> explicitly exhibit the symmetry transform into one another.
> Is there another approach that comes to anyones' mind that will simply
> yield the anticipated results?
> Thanks in advance,
> David
The Groebner basis algorithm used by Solve works, essentially, by
replacing the original system of equations by another system, which is
easy to solve. The system that is obtained depends on the ordering of
all the variables, including the ones you treat as parameters, so in
general, for equations with parameters, symmetry will not be
preserved. The easiest way to get symmetric answers is, I think, by
specifying explicitly the variables that should be eliminated. For
example, in this case you can first find x0 by eliminating y0, and
then, separately, y0 by eliminating x0:
Solve[{(x1 - x0)^2 + (y1 - y0)^2 == r^2, (x2 - x0)^2 + (y2 - y0)^2 ==
r^2}, {x0}, {y0}]
Solve[{(x1 - x0)^2 + (y1 - y0)^2 == r^2, (x2 - x0)^2 + (y2 - y0)^2 ==
r^2}, {y0}, {x0}]
If you compare the answers you will see that the symmetry has been preserved.
Andrzej Kozlowski
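The same variable-by-variable elimination can be illustrated outside Mathematica; the SymPy sketch below (an added illustration with assumed symbol names, not part of the original thread) computes the eliminant in x0 alone and, symmetrically, in y0 alone:

import sympy as sp

x0, y0, x1, y1, x2, y2, r = sp.symbols('x0 y0 x1 y1 x2 y2 r')
eq1 = (x1 - x0)**2 + (y1 - y0)**2 - r**2
eq2 = (x2 - x0)**2 + (y2 - y0)**2 - r**2

poly_in_x0 = sp.resultant(eq1, eq2, y0)   # eliminate y0: a one-variable condition on x0
poly_in_y0 = sp.resultant(eq1, eq2, x0)   # eliminate x0: the mirror-image condition on y0
print(sp.factor(poly_in_x0))
print(sp.factor(poly_in_y0))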
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2008/Jun/msg00578.html","timestamp":"2014-04-17T15:49:05Z","content_type":null,"content_length":"27997","record_id":"<urn:uuid:de888643-bc5a-4fe8-87b3-78a77fd16aff>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
Do you need to say what left-unique and right-unique means?
I am talking about a relation that is what Wikipedia describes as left-unique and right-unique. I never heard these terms before, but I have heard of the alternatives (injective and functional). The
question is, which terminology do you recommend? Should I include short definitions? (The context is a text in the area of formal methods. I'm not sure if this helps.)
These are some trade-offs that I see:
• I think that left-unique and right-unique are not widely known, but I'm not sure at all.
• functional is overloaded
• injective sounds too fancy (subjective, of course)
• left-unique and right-unique are symmetric (good, of course)
Edit: It seems the question is unclear. Here are more details. I describe sets X and Y and then say:
1. now we must find an injective and functional relation between sets X and Y such that...
2. now we must find a left-unique and right unique relation between sets X and Y...
Which one do you recommend? What other information would you add? The relation does not have to be total. For example, various different ranges correspond to different 'feasible' relations.
Technically I should not need to say that the relation does not have to be total, but will many people assume that it has to be total if I don't say it?
I certainly did not know the terms left-unique and right-unique, and moreover when I tried to guess what they meant, I ended up with the opposite meanings. Left-unique, I reasoned, must mean that
a pair in the relation is uniquely determined by its left member, i.e., functional. But that is the definition of right-unique. Go figure – Harald Hanche-Olsen Mar 11 '10 at 13:28
People in formal methods know the standard usages, which are "injective" and "functional". If you're worried about "functional" being taken to mean "higher-order function", then use the phrase
"functional relation", as in "$R$ is a functional relation". – Neel Krishnaswami Mar 11 '10 at 14:21
@Harald: I guessed the same way as you did. @Neel: Thanks. At the moment I'm inclined to say we must find an injective and functional relation between X and Y, and not define injective/functional.
– rgrig Mar 11 '10 at 17:43
1 Answer
Injective and functional are completely standard in this case. This is what you should use. The term "functional" is not overloaded, when you are using it to say that something is a
function. Being functional means exactly that the relation is a function.
A relation that is injective and functional is precisely an injective function on its domain. It is a bijection of its domain with its range.
If you don't want to think of the relation as a function, then you can also describe it as a one-to-one correspondence of its domain with its range.
(And I don't think any of these terms I suggest would need to be defined, since their meaning is fairly universally known. This would definitely not be true of left-unique and
Joel, I think most people take R is a function from X to Y to mean that for each $x\in X$ there is exactly one $y\in Y$ such that $xRy$; similarly, I think most people take R is a
bijection between X and Y to mean that R and its inverse are both functions. Also, I think most people use one-to-one correspondence as a synonym for bijection. That is not what I want
to say. What I want to say is that for each $x\in X$ there is at most one $y\in Y$ such that $xRy$ and vice-versa. (This is what the left-unique and right-unique definitions that I
pointed to say.) – rgrig Mar 11 '10 at 17:38
Well, I never said R should be a function from X to Y, or a bijection between X and Y, but rather, that it is a function on its domain, or a bijection of its domain with its range. The
domain of a relation R is the set of x for which there is y with xRy, and the range is the corresponding set of y. These may not be X and Y, respectively, and this should resolve the
confusion. It is completely correct to say that a relation is functional if and only if it is a function from its domain to its range, and this is why the word functional is used. –
Joel David Hamkins Mar 11 '10 at 18:45
In particular, what I mean to say is that I stand by my answer. – Joel David Hamkins Mar 11 '10 at 18:55
@Joel: I agree. I never said you were wrong. I am just pointing out that neither 'bijection' nor 'one-to-one correspondence' mean the same as (left-unique and right-unique), so they
aren't what I need to say. (I did up-vote your answer :), just so you know.) – rgrig Mar 11 '10 at 19:27
Rgrig, but if you say "bijection of its domain with its range" or "one-to-one correspondence of its domain with its range", as I suggested, then it IS what you mean to say. – Joel
David Hamkins Mar 11 '10 at 19:46
Not the answer you're looking for? Browse other questions tagged terminology or ask your own question. | {"url":"http://mathoverflow.net/questions/17854/do-you-need-to-say-what-left-unique-and-right-unique-means","timestamp":"2014-04-19T19:58:42Z","content_type":null,"content_length":"63126","record_id":"<urn:uuid:2cae68c8-8e2e-4b65-802a-7be2c161d177>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
Weyl Character Formula for Quantum Groups
How much is known about the Weyl character formula for quantum groups? More specifically, has the formula been generalized to the general setting of deformed coordinate algebras $\mathbb{C}[G_q]$ of
semi-simple Lie groups and their associated flag varieties? I am most interested in the non-root of unity case.
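For reference (an editorial addition, not part of the original question), the classical statement whose quantum analogue is being asked about is
$$\mathrm{ch}\,V_\lambda \;=\; \frac{\sum_{w\in W}(-1)^{\ell(w)}\,e^{\,w(\lambda+\rho)}}{\sum_{w\in W}(-1)^{\ell(w)}\,e^{\,w(\rho)}},$$
where $V_\lambda$ is the irreducible module of highest weight $\lambda$, $W$ the Weyl group, $\ell$ the length function, and $\rho$ the half-sum of the positive roots.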
quantum-groups qa.quantum-algebra ag.algebraic-geometry noncommutative-geometry
The label "quantum group" covers several different constructions having some common connection to a given root system or complex semisimple Lie algebra/group. So the question needs careful
formulation. Apparently the "Weyl character formula" is supposed to result from something like global sections of a line bundle? Work from around 1990 by Lusztig, Andersen-Polo-Wen, and others
showed how much carries over from the classical theory for quantum groups attached to a universal enveloping algebra. "Weyl modules" occur naturally here, with the usual character formula, but may
not be simple. – Jim Humphreys Jun 28 '10 at 18:25
I agree with Jim. Your question is rather pointless in the generality you ask. Simple modules for $U_q (g)$ when $q$ is not root of unity will be deformations of simple $g$-modules and satisfy
usual Weyl character formula. Is this what you are asking or are you after roots of unity? – Bugs Bunny Jun 28 '10 at 18:32
I'm asking about the deformed coordinate rings $\mathbb{C}[G_q]$ of semi-simple Lie groups for q not a root of unity. – John McCarthy 0 secs ago – John McCarthy Jun 28 '10 at 18:46
@John: Maybe you should edit this qualification into your question? The representation theory in that case is less familiar to me, but has been studied a lot by DeConcini, Kac, Procesi and others.
For quantized enveloping algebras (including those at a root of unity), the work I mentioned covers the connection with classical Weyl theory. – Jim Humphreys Jun 28 '10 at 19:40
not quite an answer. Grothendieck-Riemann-Roch of usual flag variety of Lie algebra is Weyl character formula. Therefore, "quantum Weyl character formula" should be G-R-R for quantized flag
variety (as a noncommutative projective scheme). The definition of quantized flag variety is given by Lunts-Rosenberg in their paper: "localization for quantum group". I am now trying to calculate
G-R-R for quantized flag variety of $sl_2$ – Shizhuo Zhang Jun 29 '10 at 1:08
1 Answer
The question still feels a bit vague to me, but this started to get too long to be a comment. There are a number of issues:
There's the question of the definition of $\mathbb C[G_q]$, and say an analogue of the Peter-Weyl theorem, and there is also the issue of doing this at a root of unity case, or better
studying things integrally. For this say, a recent paper of Lusztig gives a definition of a quantum coordinate ring for any (finite type) root datum, which specializes to the
Kostant-Chevalley form.
Andersen-Polo-Wen and others have studied quantum induction functors which correspond to taking global sections on the classical flag variety, and these might be what you want
(Ryom-Hansen also proved a version of Kempf vanishing in this context for example). This was also more recently taken up by Kumar and Littelmann in the context of studying Frobenius
splitting. Finally there's the issue of understanding quantum flag varieties as noncommutative spaces as in the previous comment, for which along with the Lunts-Rosenberg paper, there is also
more recent work of Backelin and Kremnizer.
Not the answer you're looking for? Browse other questions tagged quantum-groups qa.quantum-algebra ag.algebraic-geometry noncommutative-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/29810/weyl-character-formula-for-quantum-groups?sort=oldest","timestamp":"2014-04-20T18:35:08Z","content_type":null,"content_length":"57888","record_id":"<urn:uuid:61d7c1bf-49e5-4724-b4d6-abb66b8ea678>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
uniform distribution of a disc
1. The problem statement, all variables and given/known data
[tex]D = \{(x,y) \in \mathbb{R}^2 \mid x^2 + y^2 \leq 1\}[/tex] i.e. a disc of radius 1.
Write down the pdf f_{xy} for a uniform distribution on the disc.
2. Relevant equations
3. The attempt at a solution
[tex] f_{xy} = \frac{1}{\pi} \ \mbox{for}\ x^2 + y^2 \leq 1[/tex], 0 otherwise
as the area of the disc pi and to make it uniform you divide by pi so the probability integrates to 1 | {"url":"http://www.physicsforums.com/showthread.php?t=349730","timestamp":"2014-04-18T08:20:10Z","content_type":null,"content_length":"31040","record_id":"<urn:uuid:ec79a0c4-73eb-444c-9594-0b3b9b4417ce>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Joint Probability density
January 24th 2009, 10:30 AM #1
May 2008
Joint Probability density
Suppose that P, the price of a certain commodity (in dollars), and S, its total sales (in 10,000 units), are random variables whose joint probability distribution can be approximated closely with
the joint probability density
f(p,s)=5pe^(-ps) for 0.2<p<0.4, s>0 and 0 elsewhere
Find the probabilities that
(a) the price will be less than 30 cents and sales will exceed 20,000 units;
(b) the price will between 25 cents and 30 cents and sales will be less than 10,000 units;
(c) the marginal density of P;
(d) the conditional density of S given P=p;
(e) the probability that sales will be less than 30,000 units when p=25 cents.
Suppose that P, the price of a certain commodity (in dollars), and S, its total sales (in 10,000 units), are random variables whose joint probability distribution can be approximated closely with
the joint probability density
f(p,s)=5pe^(-ps) for 0.2<p<0.4, s>0 and 0 elsewhere
Find the probabilities that
(a) the price will be less than 30 cents and sales will exceed 20,000 units;
(b) the price will between 25 cents and 30 cents and sales will be less than 10,000 units;
(c) the marginal density of P;
(d) the conditional density of S given P=p;
(e) the probability that sales will be less than 30,000 units when p=25 cents.
These are all set up and solved from the basic definitions.
(a) $\int_{p=0}^{p = 0.3} \int_{s=2}^{s=+\infty} f(p, s) \, ds \, dp$.
(b) $\int_{p=0.25}^{p = 0.3} \int_{s=0}^{s=1} f(p, s) \, ds \, dp$.
(c) $f_P(p) = \int_{s=0}^{s=+\infty} f(p, s) \, ds$.
(d) $f_S(s | p) = \frac{f(p, s)}{f_P(p)}$.
(e) $\int_{s=0}^{s=3} f_S(s | p = 0.25) \, ds$.
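As a quick sanity check of part (a) (an added note, not from the original thread; the lower limit for p is taken as 0.2 because the density is zero below it):

import sympy as sp

p, s = sp.symbols('p s', positive=True)
f = 5 * p * sp.exp(-p * s)

prob_a = sp.integrate(f, (s, 2, sp.oo), (p, sp.Rational(1, 5), sp.Rational(3, 10)))
print(sp.simplify(prob_a), float(prob_a))   # 5*(exp(-2/5) - exp(-3/5))/2, roughly 0.30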
how to calculate the first part (the ds part, $\int_{s=2}^{s=+\infty} f(p, s) \, ds$.
and I think the dp part is $\int_{p=0.2}^{p=0.3} f(p, s) \, dp$, is it right?
how to calculate the first part (the ds part, $\int_{s=2}^{s=+\infty} f(p, s) \, ds$. Mr F says: Do you know how to integrate? You're integrating a simple exponential function. Where are you
stuck here?
and I think the dp part is $\int_{p=0.2}^{p=0.3} f(p, s) \, dp$, is it right? Mr F says: Why would you think that when the question clearly says "the price will be less than 30 cents "?!
how to calculate the first part (the ds part, . Mr F says: Do you know how to integrate? You're integrating a simple exponential function. Where are you stuck here? the problem is the S is form
2 to infin. there is not exactly number for infin. like if it is from negative infin to 2,then i know the number is from 0 to 2.
and I think the dp part is , is it right? Mr F says: Why would you think that when the question clearly says "the price will be less than 30 cents "?! because the problem is the Price is 0.2<p
<0.4, so think is should be 0.2 to 0.3
Well, that's a good reason why.
how to calculate the first part (the ds part, . Mr F says: Do you know how to integrate? You're integrating a simple exponential function. Where are you stuck here? the problem is the S is form
2 to infin. there is not exactly number for infin. like if it is from negative infin to 2,then i know the number is from 0 to 2.
$\lim_{a \rightarrow +\infty} \int_2^{a} f(p, s) \, ds$ etc.
January 25th 2009, 06:10 PM #6 | {"url":"http://mathhelpforum.com/advanced-statistics/69712-joint-probability-density.html","timestamp":"2014-04-17T10:29:04Z","content_type":null,"content_length":"54289","record_id":"<urn:uuid:3e7e377f-2a9f-4750-a0c7-f6d1a9c1a735>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 311.05113
Autor: Bonnet, R.; Erdös, Paul
Title: The chromatic index of an infinite complete hypergraph: A partition theorem. (In English)
Source: Proc. 1rst Working Sem. Hypergraphs, Columbus 1972, Lecture Notes Math. 411, 54-60 (1974).
Review: [For the entire collection see Zbl 282.00007.]
Let p,m and n be three cardinals such that p < m < n and 1 < p < \omega \leq n, and let S denote a set of cardinality n. The authors show, assuming the axiom of choice, that the m-tuples of S can be
coloured with n^m colours such that every p-tuple of S is contained in exactly one m-tuple of each colour.
Reviewer: J.W.Moon
Classif.: * 05C15 Chromatic theory of graphs and maps
05C99 Graph theory
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │ | {"url":"http://www.emis.de/classics/Erdos/cit/31105113.htm","timestamp":"2014-04-19T20:09:03Z","content_type":null,"content_length":"3483","record_id":"<urn:uuid:2ebc9935-96f7-4528-bc95-81165273222f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relates to the alienation of parkland in the city of Yonkers
• Jun 21, 2013: SUBSTITUTED BY A7362D
• Jun 21, 2013: ORDERED TO THIRD READING CAL.1612
• Jun 13, 2013: PRINT NUMBER 5101C
• Jun 13, 2013: AMEND AND RECOMMIT TO RULES
• Jun 11, 2013: REPORTED AND COMMITTED TO RULES
• Jun 4, 2013: REPORTED AND COMMITTED TO FINANCE
• May 30, 2013: PRINT NUMBER 5101B
• May 30, 2013: AMEND AND RECOMMIT TO LOCAL GOVERNMENT
• May 17, 2013: PRINT NUMBER 5101A
• May 17, 2013: AMEND AND RECOMMIT TO LOCAL GOVERNMENT
• May 8, 2013: REFERRED TO LOCAL GOVERNMENT
VOTE: COMMITTEE VOTE: - Rules - Jun 21, 2013
Ayes (23): Skelos, Libous, Bonacic, Carlucci, Farley, Flanagan, Fuschillo, Larkin, LaValle, Marcellino, Maziarz, Seward, Valesky, Stewart-Cousins, Breslin, Dilan, Hassell-Thompson, Krueger,
Montgomery, Parker, Perkins, Espaillat, Gianaris
Ayes W/R (2): Hannon, Nozzolio
BILL NUMBER:S5101C
TITLE OF BILL: An act authorizing the leasing of certain parkland for development of a parking facility with roof top park occupying the same parkland footprint such that there is no net parkland loss associated with this facility, and for the construction of associated roadways, a pedestrian foot bridge, a loading dock and other new park amenities, required to support new development of an exhibition hall, cultural space, a wellness center, hotel, restaurants, and a community programming not for profit uniquely located on the Hudson River in a historic former power plant, which site is currently an abandoned brownfield site known as the Glenwood Power Plant, uniquely located on the Hudson River adjacent to John F. Kennedy Marina and Trevor Park in the City of Yonkers, which site lacks any land for parking, and which project was contemplated in the approved May 2009 City of Yonkers Alexander Street Master Plan with a new road network connecting Alexander Street to the south past the Glenwood Power Plant through John F. Kennedy Marina and Trevor Park north to provide increased public access to the Hudson River and enhance said parks through the development of a new sustainable riverfront, transit-oriented project by enhancing the existing parks with new amenities
PURPOSE OR GENERAL IDEA OF BILL: To authorize the leasing and conveyance of certain parkland for development of a parking facility with an equivalent size rooftop park, a pedestrian footbridge and roadways required to support a new development, which is currently an abandoned brownfield site uniquely located on the Hudson River at a historic former power plant and in the vicinity of Trevor Park and JFK Marina and Park in the City of Yonkers.
Section 1 authorizes the alienation of certain parkland.
Section 2 authorizes a lease of certain parkland.
Section 3 requires the dedication of replacement parkland.
Sections 4 and 5 contain the metes and bounds of the parkland being alienated and the replacement parkland
Section 6 contains a reverter clause.
Section 7 requires the compliance with the determination of the
Secretary of the Interior if the parkland being alienated received funding from Federal Land and Water Conservation Fund.
Section 8 contains the effective date.
JUSTIFICATION: This legislation will facilitate the redevelopment of a currently abandoned vacant power plant brownfield site on the Hudson River by allowing for construction of a new parking facility and associated roadways in the City of Yonkers, which will benefit the development of an exhibition hall, cultural space, a wellness center, hotel, restaurants and a community programming not for profit and shall provide capital improvements directly in the same parks of
greater value than the fair market value of the parkland being alienated causing said parks to be significantly improved by adding new and improved parkland amenities larger in size and greater in value than what is currently present and also dedicated parkland the equivalent size of the parkland being alienated
PRIOR LEGISLATIVE HISTORY: New bill
FISCAL IMPLICATIONS: None to the State, undermined to the City of Yonkers.
EFFECTIVE DATE: Immediately
Introduced by Sens. STEWART-COUSINS, LATIMER -- read twice and ordered
printed, and when printed to be committed to the Committee on Local
Government -- committee discharged, bill amended, ordered reprinted as
amended and recommitted to said committee -- committee discharged,
bill amended, ordered reprinted as amended and recommitted to said
committee -- reported favorably from said committee and committed to
the Committee on Finance -- reported favorably from said committee and
committed to the Committee on Rules -- committee discharged, bill
amended, ordered reprinted as amended and recommitted to said commit-
AN ACT authorizing the leasing of certain parkland for development of a
parking facility with roof top park occupying the same parkland foot-
print such that there is no net parkland loss associated with this
facility, and for the construction of associated roadways, a pedestri-
an foot bridge, a loading dock and other new park amenities, required
to support new development of an exhibition hall, cultural space, a
wellness center, hotel, restaurants, and a community programming not
for profit uniquely located on the Hudson River in a historic former
power plant, which site is currently an abandoned brownfield site
known as the Glenwood Power Plant, uniquely located on the Hudson
River adjacent to John F. Kennedy Marina and Trevor Park in the City
of Yonkers, which site lacks any land for parking, and which project
was contemplated in the approved May 2009 City of Yonkers Alexander
Street Master Plan with a new road network connecting Alexander Street
to the south past the Glenwood Power Plant through John F. Kennedy
Marina and Trevor Park north to provide increased public access to the
Hudson River and enhance said parks through the development of a new
sustainable riverfront, transit-oriented project by enhancing the
existing parks with new amenities
THE PEOPLE OF THE STATE OF NEW YORK, REPRESENTED IN SENATE AND ASSEM-
BLY, DO ENACT AS FOLLOWS:
EXPLANATION--Matter in ITALICS (underscored) is new; matter in brackets
[ ] is old law to be omitted.
S. 5101--C 2
Section 1. Subject to the provisions of this act, the city of Yonkers,
in the county of Westchester, acting by and through its mayor, is hereby
authorized to discontinue the use of the municipally owned parkland
described in section four of this act that are no longer needed for park
S 2. Subject to the provisions of this act, the city of Yonkers, in
the county of Westchester, acting by and through its mayor, is hereby
authorized to temporarily discontinue use of the parkland described in
section four of this act for the purpose of leasing the land under such
park to Glenwood POH LLC, for a term not to exceed thirty years, to
facilitate the building of a public parking garage and a public park.
The fair market value of such lease shall be dedicated by the city of
Yonkers for the acquisition of additional parkland and/or improvements
to existing parkland.
S 3. The authorization contained in section one of this act shall only
be effective on the condition that the city of Yonkers, in the county of
Westchester, acquire and dedicate as parklands the lands described in
section five of this act, such land to be used for park purposes. If the
replacement lands are less than the fair market value of the lands being
alienated, the city of Yonkers must dedicate the difference for the
acquisition of additional parkland and/or for capital improvements to
existing parkland.
S 4. The lands referred to in sections one and two of this act are
located, bounded and described as follows:
PARCELS FOR LEASE OR EASEMENT ACCESS
PERMANENT PARCEL 1A - #1
All that tract or parcel of lands being part of TREVOR PARK, SECTION
2, BLOCK 2125, LOT 1, city of Yonkers, state of New York more partic-
ularly bounded and described as follows:
BEGINNING at a point, said point being the following two (2) courses
from the southwesterly corner of Lot 1, Block 2125,
a) North 69°46'27" West a distance of 225.88 feet to a point, thence;
b) North 70°54'27" West a distance of 75.00 feet, and running thence;
1. North 73°50'27" West a distance of 18.23 feet to a point, thence;
2. North 13°02'29" East a distance of 110.24 feet to a point, thence;
3. South 76°57'31" East a distance of 18.20 feet to a point, thence;
4. South 13°02'29" West a distance of 111.23 feet to the POINT OF BEGINNING.
Containing an area of 2,016 square feet or 0.046 acres more or less.
PERMANENT PARCEL 1A - #2
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being North 11°46'18" East a distance
of 190.00 feet from the intersection of Ravine Drive and Glenwood
Avenue, and running thence,
1. North 11°46'18" East a distance of 73.56 feet to a point, thence;
2. South 76°57'31" East a distance of 135.37 feet to a point, thence;
3. North 13°02'29" East a distance of 485.00 feet to a point, thence;
4. North 76°57'31" West a distance of 408.00 feet to a point, thence;
5. South 13°02'29" West a distance of 44.26 feet to a point, thence;
6. North 76°57'31" West a distance of 50.00 feet to a point, thence;
7. North 13°02'29" East a distance of 108.34 feet to a point, thence;
8. South 76°57'31" East a distance of 420.67 feet to a point, thence;
9. North 83°26'03" East a distance of 89.65 feet to a point, thence;
10. South 85°41'12" East a distance of 95.95 feet to a point, thence;
11. South 04°18'48" West a distance of 56.00 feet to a point, thence;
S. 5101--C 3
12. South 89°18'09" West a distance of 125.78 feet to a point, thence;
13. South 13°02'29" West a distance of 104.07 feet to a point, thence;
14. South 64°59'09" East a distance of 92.02 feet to a point, thence;
15. South 03°16'20" East a distance of 11.36 feet to a point, thence;
16. North 64°59'09" West a distance of 95.29 feet to a point, thence;
17. South 13°02'29" West a distance of 396.51 feet to a point, thence;
18. South 58°42'41" West a distance of 99.04 feet to a point, thence;
19. North 78°13'42" West a distance of 91.20 feet to the POINT OF BEGINNING.
Containing an area of 65,402 square feet or 1.501 acres more or less.
PERMANENT PARCEL 1A - #3
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point in the westerly right-of-way line of Warburton
Avenue, said point being the following two (2) courses along said right-
of-way from the southeasterly corner of Lot 1, Block 2125,
c) North 08°10'38" East a distance of 101.29 feet to a point, thence;
d) North 04°18'48" East a distance of 149.53 feet, and running thence;
5. North 64°59'09" West a distance of 38.53 feet to a point, thence;
6. On a curve to the right having a radius of 480.00 feet, and arc
length of 10.42 feet, whose chord bears North 08°43'07" East a chord
distance of 10.42 feet to a point, thence;
7. South 64°59'09" East a distance of 37.67 feet to a point, thence;
8. South 04°18'48" West a distance of 10.69 feet to the POINT OF BEGINNING.
Containing an area of 381 square feet or 0.009 acres more or less.
PERMANENT PARCEL 1B
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being North 12°56'23" East a distance
of 107.20 feet from the southwesterly corner of Lot 1, Block 2125, and
running thence,
1. North 12°56'23" East a distance of 34.51 feet to a point, thence;
2. South 76°57'31" East a distance of 105.94 feet to a point, thence;
3. South 13°02'29" West a distance of 34.51 feet to a point, thence;
4. North 76°57'31" West a distance of 105.88 feet to the POINT OF BEGINNING.
Containing an area of 3,654 square feet or 0.084 acres more or less.
PERMANENT PARCEL 2A
J. F. KENNEDY PARK
SECTION 2, BLOCK 2640, LOT 1
CITY OF YONKERS
BEGINNING at a point, said point being North 14°12'46" East a distance
of 225.16 feet from the southeasterly corner of Lot 1, Block 2640, and
running thence,
1. On a curve to the left having a radius of 488.00 feet, an arc
length of 48.26 feet, whose chord bears North 02°42'58" West a chord
distance of 48.24 feet to a point of reverse curvature, thence;
2. On a curve to the right having a radius of 212.00 feet, an arc
length of 52.61 feet whose chord bears North 01°33'37" East a chord
distance of 52.48 feet to a point of tangency, thence;
3. North 08°39'21" East a distance of 365.40 feet to a point of curva-
ture, thence;
4. On a curve to the right having a radius of 213.30 feet, an arc
length of 26.06 feet, whose chord bears North 11°38'58" East a chord
distance of 26.04 feet to a point of tangency, thence;
5. North 15°08'59" East a distance of 230.05 feet to a point, thence;
6. South 74°51'01" East a distance of 25.28 feet to a point, thence;
7. On a curve to the left having a radius of 46.00 feet, an arc length
of 48.97 feet, whose chord bears South 33°24'00" East a chord distance
of 46.69 feet to a point, thence;
8. South 15°10'46" West a distance of 37.11 feet to a point, thence;
9. On a curve to the left having a radius of 45.00 feet, an arc length
of 61.88 feet, whose chord bears South 54°32'47" West a chord distance
of 57.12 feet to a point of tangency, thence;
10. South 15°08'59" West a distance of 117.82 feet to a point to a
point of curvature, thence;
11. On a curve to the left having a radius of 189.30 feet, an arc
length of 21.45 feet, whose chord bears South 11°54'10" West a chord
distance of 21.44 feet to a point of tangency;
12. South 08°39'21" West a distance of 367.36 feet to a point of
curvature, thence;
13. On a curve to the left having a radius of 188.00 feet, an arc length of
13.09 feet, whose chord bears South 06°39'40" West a chord distance of
13.09 feet to a point, thence;
14. South 14°15'25" West a distance of 86.70 feet to the POINT OF BEGINNING.
Containing an area of 18,408 square feet or 0.423 acres more or less.
PERMANENT PARCEL 3A
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being the following two (2) courses
from the terminus of the eleventh (11) course of Permanent Parcel 3B,
e) South 74°48'40" East a distance of 16.54 feet to a point, thence;
f) On a curve to the left having a radius of 145.26 feet, an arc
length of 19.77 feet, whose chord bears South 78°42'40" East a chord
distance of 19.76 feet, and running thence;
5. On a curve to the left, having a radius of 145.26 feet, an arc
length of 49.18 feet, whose chord bears North 87°41'27" East a chord
distance of 48.94 feet to a point, thence;
6. South 04°43'17" West a distance of 116.96 feet to a point, thence;
7. South 15°38'33" West a distance of 217.19 feet to a point, thence;
8. South 10°27'26" West a distance of 310.97 feet to a point, thence;
9. North 76°57'31" West a distance of 76.46 feet to a point, thence;
10. North 13°02'29" East a distance of 587.79 feet to a point, thence;
11. North 23°54'27" East a distance of 43.38 feet to the POINT OF BEGINNING.
Containing an area of 42,926 square feet or 0.985 acres more or less.
PERMANENT PARCEL 3B
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point in the westerly right-of-way line of Warburton
Avenue, said point being the common corner of Lot 1, Block 2125 and Lot
2, Block 2125, and running thence,
1. On a curve to the left having a radius of 1033.00 feet, an arc
length of 70.38 feet, whose chord bears South 11°09'41" West a chord
distance of 70.37 feet to a point, thence;
2. On a curve to the left having a radius of 58.00 feet, an arc length
of 53.99 feet, whose chord bears North 72°10'42" West a chord distance
of 52.06 feet to a point of tangency, thence;
3. South 81°09'16" West a distance of 15.31 feet to a point of curva-
ture, thence;
4. On a curve to the left having a radius of 100.00 feet, an arc
length of 83.21 feet, whose chord bears South 57°18'58" West a chord
distance of 80.83 feet to a point of tangency, thence;
5. South 33°28'39" West a distance of 122.78 feet to a point, thence;
6. South 33°30'09" West a distance of 193.30 feet to a point, thence;
7. South 32°44'29" West a distance of 98.69 feet to a point to a point
of curvature, thence;
8. On a curve to the left having a radius of 824.81 feet, an arc
length of 55.31 feet, whose chord bears South 30°49'14" West a chord
distance of 55.30 feet to a point of tangency, thence;
9. South 28°53'58" West a distance of 69.21 feet to a point of curva-
ture, thence;
10. On a curve to the right having a radius of 145.26 feet, an arc
length of 193.41 feet, whose chord bears South 67°02'39" West a chord
distance of 179.44 feet to a point of tangency, thence;
11. North 74°48'40" West a distance of 16.54 feet to a point, thence;
12. North 15°10'59" East a distance of 30.00 feet to a point, thence;
13. South 74°48'40" East a distance of 16.54 feet to a point of
tangency, thence;
14. On a curve to the left having a radius of 115.26 feet, an arc
length of 153.47 feet, whose chord bears North 67°02'39" East a chord
distance of 142.38 feet to a point of tangency, thence;
15. North 28°53'58" East a distance of 69.21 feet to a point to a
point of curvature, thence;
16. On a curve to the right having a radius of 854.81 feet, an arc
length of 57.32 feet, whose chord bears North 30°49'14" East a chord
distance of 57.31 feet to a point of tangency, thence;
17. North 32°44'29" East a distance of 98.89 feet to a point, thence;
18. North 33°30'09" East a distance of 193.50 feet to a point, thence;
19. North 33°28'39" East a distance of 137.97 feet to a point to a
point of curvature, thence;
20. On a curve to the right having a radius of 132.53 feet, an arc
length of 81.91 feet, whose chord bears North 51°10'57" East a chord
distance of 80.61 feet to a point, thence;
21. North 21°06'45" West a distance of 9.50 feet to a point, thence;
22. On a curve to the right having a radius of 150.00 feet, an arc
length of 106.34 feet, whose chord bears North 84°13'12" East a chord
distance of 104.12 feet to the POINT OF BEGINNING.
Containing an area of 30,111 square feet or 0.691 acres more or less.
TEMPORARY PARCEL 4A
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being the following two (2) courses
from the southwesterly corner of Lot 1, Block 2125,
g) North 69°46'27" West a distance of 225.88 feet to a point, thence;
h) North 70°54'27" West a distance of 75.00 feet, and running thence;
12. North 13°02'29" East a distance of 111.23 feet to a point, thence;
13. South 76°57'31" East a distance of 31.80 feet to a point, thence;
14. South 13°02'29" West a distance of 101.23 feet to a point, thence;
15. South 76°57'31" East a distance of 208.00 feet to a point, thence;
16. North 13°02'29" East a distance of 180.00 feet to a point, thence;
17. South 76°57'31" East a distance of 200.00 feet to a point, thence;
18. South 13°02'29" West a distance of 485.00 feet to a point, thence;
19. North 76°57'31" West a distance of 135.37 feet to a point, thence;
20. North 11°46'18" East a distance of 258.91 feet to a point, thence;
21. North 69°46'27" West a distance of 225.88 feet to a point, thence;
22. North 70°54'27" West a distance of 75.00 feet to the POINT OF BEGINNING.
Containing an area of 89,733 square feet or 2.060 acres more or less.
TEMPORARY PARCEL 4B
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being South 85°35'07" West a distance
of 33.33 feet from the point of beginning of Temporary Parcel 4A, and
running thence,
1. North 13°02'29" East a distance of 180.00 feet to a point, thence;
2. South 76°57'31" East a distance of 208.00 feet to a point, thence;
3. South 13°02'29" West a distance of 180.00 feet to a point, thence;
4. North 76°57'31" West a distance of 208.00 feet to the POINT OF BEGINNING.
Containing an area of 37,440 square feet or 0.860 acres more or less.
TEMPORARY PARCEL 5A
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being the terminus of the sixteenth
(16) course of Permanent Parcel 1A-#2, and running thence,
1. South 64°59'09" East a distance of 95.29 feet to a point, thence;
2. South 02°23'14" West a distance of 220.44 feet to a point, thence;
3. North 78°43'58" West a distance of 134.03 feet to a point, thence;
4. North 13°02'29" East a distance of 240.55 feet to the POINT OF BEGINNING.
Containing an area of 25,807 square feet or 0.592 acres more or less.
TEMPORARY PARCEL 5B - #1
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being the southeasterly corner of Lot
1, Block 2125, and running thence,
1. North 72°40'12" West a distance of 1.77 feet to a point, thence;
2. On a curve to the right having a radius of 203.00 feet, an arc
length of 110.63 feet, whose chord bears North 13°06'52" West a chord
distance of 109.26 feet to a point of tangency, thence;
3. North 02°29'50" East a distance of 64.88 feet to a point of curva-
ture, thence;
4. On a curve to the left having a radius of 650.00 feet, an arc
length of 65.45 feet whose chord bears North 00°23'15" West a chord
distance of 65.43 feet to a point of tangency, thence;
5. North 03°16'20" West a distance of 52.56 feet to a point of curva-
ture, thence;
6. On a curve to the left having a radius of 300.00 feet, an arc
length of 109.38 feet, whose chord bears North 13°42'59" West a chord
distance of 108.77 feet to a point of compound curvature, thence;
7. On a curve to the left having a radius of 37.00 feet, an arc length
of 44.47 feet, whose chord bears North 58°35'34" West a chord distance
of 41.84 feet to a point, thence;
8. North 13°02'29" East a distance of 8.26 feet to a point, thence;
9. On a curve to the right having a radius of 45.00 feet, an arc
length of 51.80 feet, whose chord bears South 57°08'10" East a chord
distance of 48.98 feet to a point of compound curvature, thence;
10. On a curve to the right having a radius of 308.00 feet, an arc
length of 112.29 feet, whose chord bears South 13°43'00" East a chord
distance of 111.67 feet to a point of tangency, thence;
11. South 03°16'20" East a distance of 26.42 feet to a point, thence;
12. On a curve to the right having a radius of 488.00 feet, an arc
length of 21.77 feet, whose chord bears North 09°00'51" East a chord
distance of 21.76 feet to a point of reverse curvature, thence;
13. On a curve to the left having a radius of 187.00 feet, an arc
length of 47.38 feet, whose chord bears North 03°02'00" East a chord
distance of 47.25 feet to a point of tangency, thence;
14. North 04°13'30" West a distance of 23.28 feet to a point, thence;
15. On a curve to the right having a radius of 158.00 feet, an arc
length of 51.90 feet, whose chord bears North 05°11'05" East a chord
distance of 51.66 feet to a point of compound curvature, thence;
16. On a curve to the right having a radius of 48.00 feet, an arc
length of 21.44 feet, whose chord bears North 25°45'54" East a chord
distance of 21.26 feet to a point, thence;
17. North 89°18'09" East a distance of 11.14 feet to a point, thence;
18. On a curve to the left having a radius of 40.00 feet, an arc
length of 25.06 feet, whose chord bears South 30°45'44" West a chord
distance of 24.66 feet to a point of compound curvature, thence;
19. On a curve to the left having a radius of 150.00 feet, an arc
length of 49.39 feet, whose chord bears South 05°12'25" West a chord
distance of 49.16 feet to a point of tangency, thence;
20. South 04°13'30" East a distance of 23.28 feet to a point of curva-
ture, thence;
21. On a curve to the right having a radius of 195.00 feet, an arc
length of 49.41 feet, whose chord bears South 03°02'00" West a chord
distance of 49.27 feet to a point of reverse curvature, thence;
22. On a curve to the left having a radius of 480.00 feet, an arc
length of 65.30 feet, whose chord bears South 06°23'41" West a chord
distance of 65.25 feet to a point of tangency, thence;
23. South 02°29'50" West a distance of 112.48 feet to a point of
curvature, thence;
24. On a curve to the left having a radius of 195.00 feet, an arc
length of 98.31 feet, whose chord bears South 11°56'46" East a chord
distance of 97.27 feet to a point, thence;
25. South 08°10'38" West a distance of 11.54 feet to the POINT OF BEGINNING.
Containing an area of 5,143 square feet or 0.118 acres more or less.
TEMPORARY PARCEL 5B - #2
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being North 85°41'12" East a distance
of 50.91 feet from the intersection of the terminus of the tenth (10)
course of Permanent Parcel 1A-#2 with the westerly right-of-way line of
Warburton Avenue, and running thence,
1. North 85°41'12" West a distance of 9.26 feet to a point thence;
2. On a curve to the right having a radius of 15.00 feet, an arc
length of 3.63 feet, whose chord bears North 18°58'12" West a chord
distance of 3.62 feet to a point of compound curvature, thence;
3. On a curve to the right having a radius of 88.00 feet, an arc
length of 43.12 feet, whose chord bears North 01°59'47" East a chord
distance of 42.69 feet to a point of tangency, thence;
4. North 16°01'59" East a distance of 96.14 feet to a point of curva-
ture, thence;
5. On a curve to the left having a radius of 112.00 feet, an arc
length of 31.89 feet, whose chord bears North 07°52'38" East a chord
distance of 31.78 feet to a point, thence;
6. North 89°43'16" East a distance of 8.00 feet to a point, thence;
7. On a curve to the right having a radius of 120.00 feet, an arc
length of 34.16 feet, whose chord bears South 07°52'38" West a chord
distance of 34.05 feet to a point of tangency, thence;
8. South 16°01'59" West a distance of 96.14 feet to a point of curva-
ture, thence;
9. On a curve to the left having a radius of 80.00 feet, an arc length
of 39.20 feet, whose chord bears South 01°59'47" West a chord distance
of 38.81 feet to a point of compound curvature, thence;
10. On a curve to the left having a radius of 7.00 feet, an arc length
of 1.69 feet, whose chord bears South 18°58'12" East a chord distance of
1.69 feet to a point of tangency, thence;
11. South 25°53'59" East a distance of 4.66 feet to the POINT OF BEGINNING.
Containing an area of 1,403 square feet or 0.032 acres more or less.
TEMPORARY PARCEL 5B - #3
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being South 64°31'02" West a distance
of 27.35 feet from the terminus of the fifth (5) course of Temporary
Parcel 5B-#2, and running thence,
1. North 90°00'00" West a distance of 25.19 feet to a point, thence;
2. North 01°24'12" West a distance of 118.38 feet to a point, thence;
3. North 05°10'41" West a distance of 30.43 feet to a point, thence;
4. North 10°23'36" West a distance of 63.37 feet to a point, thence;
5. North 79°58'41" East a distance of 8.00 feet to a point, thence;
6. South 10°23'36" East a distance of 63.68 feet to a point, thence;
7. South 05°10'41" East a distance of 31.06 feet to a point, thence;
8. South 01°24'12" East a distance of 110.84 feet to a point, thence;
9. North 90°00'00" East a distance of 12.67 feet to a point to a point
of curvature, thence;
10. On a curve to the right having a radius of 5.00 feet, an arc
length of 8.36 feet, whose chord bears South 42°06'16" East a chord
distance of 7.42 feet to a point of tangency, thence;
11. South 05°47'28" West a distance of 2.51 feet to the POINT OF BEGINNING.
Containing an area of 1,838 square feet or 0.042 acres more or less.
TEMPORARY PARCEL 5B - #4
TREVOR PARK
SECTION 2, BLOCK 2125, LOT 1
BEGINNING at a point, said point being the terminus of the first (1)
course of Permanent Parcel 3A, and running thence,
1. South 20°50'24" East a distance of 29.33 feet to a point of curva-
ture, thence;
2. On a curve to the right having a radius of 89.90 feet, an arc
length of 58.07 feet, whose chord bears South 02°14'45" East a chord
distance of 57.06 feet to a point of compound curvature, thence;
3. On a curve to the right having a radius of 216.20 feet, an arc
length of 20.20 feet, whose chord bears South 18°56'05" West a chord
distance of 20.19 feet to a point of reverse curvature, thence;
4. On a curve to the left having a radius of 240.26 feet, an arc
length of 61.23 feet, whose chord bears South 14°18'39" West a chord
distance of 61.06 feet to a point of reverse curvature, thence;
5. On a curve to the right having a radius of 551.36 feet, an arc
length of 143.11 feet, whose chord bears South 14°26'45" West a chord
distance of 142.71 feet to a point of reverse curvature, thence;
6. On a curve to the left having a radius of 357.96 feet, an arc
length of 67.95 feet, whose chord bears South 16°26'38" West a chord
distance of 67.85 feet to a point of reverse curvature, thence;
7. On a curve to the left having a radius of 228.38 feet, an arc
length of 52.62 feet, whose chord bears South 04°24'18" West a chord
distance of 52.51 feet to a point of reverse curvature, thence;
8. On a curve to the right having a radius of 312.62 feet, an arc
length of 88.82 feet, whose chord bears South 05°56'36" West a chord
distance of 88.52 feet to a point of reverse curvature, thence;
9. On a curve to the left having a radius of 436.92 feet, an arc
length of 78.03 feet, whose chord bears South 08°57'59" West a chord
distance of 77.93 feet to a point of compound curvature, thence;
10. On a curve to the left having a radius of 28.68 feet, an arc
length of 33.05 feet, whose chord bears South 29°09'59" East a chord
distance of 31.25 feet to a point, thence;
11. South 27°49'00" West a distance of 8.00 feet to a point, thence;
12. On a curve to the right having a radius of 36.68 feet, an arc
length of 42.27 feet, whose chord bears North 29°09'59" West a chord
distance of 39.97 feet to a point of compound curvature, thence;
13. On a curve to the right having a radius of 444.92 feet, an arc
length of 79.46 feet, whose chord bears North 08°57'59" East a chord
distance of 79.35 feet to a point of reverse curvature, thence;
14. On a curve to the left having a radius of 304.62 feet, an arc
length of 86.55 feet, whose chord bears North 05°56'36" East a chord
distance of 86.26 feet to a point of reverse curvature, thence;
15. On a curve to the right having a radius of 236.38 feet, an arc
length of 54.47 feet, whose chord bears North 04°24'18" East a chord
distance of 54.35 feet to a point of compound curvature, thence;
16. On a curve to the right having a radius of 365.96 feet, an arc
length of 69.47 feet, whose chord bears North 16°26'38" East a chord
distance of 69.36 feet to a point of reverse curvature, thence;
17. On a curve to the left having a radius of 543.36 feet, an arc
length of 141.04 feet, whose chord bears North 14°26'45" East a chord
distance of 140.64 feet to a point of reverse curvature, thence;
18. On a curve to the right having a radius of 248.26 feet, an arc
length of 63.27 feet, whose chord bears North 14°18'39" East a chord
distance of 63.10 feet to a point of reverse curvature, thence;
19. On a curve to the left having a radius of 208.20 feet, an arc
length of 19.45 feet, whose chord bears North 18°56'05" East a chord
distance of 19.45 feet to a point of compound curvature, thence;
20. On a curve to the left having a radius of 81.90 feet, an arc
length of 52.89 feet, whose chord bears North 02°14'38" West a chord
distance of 51.98 feet to a point of tangency, thence;
21. North 20°50'24" West a distance of 12.60 feet to a point, thence;
22. North 04°43'17" East a distance of 18.54 feet to the POINT OF BEGINNING.
Containing an area of 5,015 square feet or 0.115 acres more or less.
TEMPORARY PARCEL 6A
J. F. KENNEDY PARK
SECTION 2, BLOCK 2640, LOT 1
BEGINNING at a point, said point being the terminus of the third (3)
course of Permanent Parcel 1C, and running thence,
1. North 76°15'44" West a distance of 16.12 feet to a point, thence;
2. North 14°07'15" East a distance of 786.44 feet to a point, thence;
3. North 17°50'36" East a distance of 611.83 feet to a point, thence;
4. North 42°35'06" West a distance of 154.38 feet to a point, thence;
5. North 19°22'04" East a distance of 134.89 feet to a point, thence;
6. South 72°15'22" East a distance of 104.93 feet to a point, thence;
7. South 17°44'38" West a distance of 13.51 feet to a point, thence;
8. North 72°15'22" West a distance of 90.00 feet to a point, thence;
9. South 19°22'04" West a distance of 109.18 feet to a point, thence;
10. South 42°35'06" East a distance of 153.64 feet to a point, thence;
11. South 17°50'36" West a distance of 623.93 feet to a point, thence;
12. South 14°07'15" West a distance of 292.13 feet to a point of
curvature, thence;
13. On a curve to the left having a radius of 98.73 feet, an arc
length of 41.27 feet, whose chord bears South 01°24'31" West a chord
distance of 40.97 feet to a point of tangency, thence;
14. South 10°34'00" East a distance of 74.16 feet to a point to a
point of curvature, thence;
15. On a curve to the right having a radius of 165.00 feet, an arc
length of 141.82 feet, whose chord bears South 14°03'25" West a chord
distance of 137.50 feet to a point of tangency, thence;
16. South 38°40'50" West a distance of 74.36 feet to a point of curva-
ture, thence;
17. On a curve to the left having a radius of 100.00 feet, an arc
length of 43.53 feet, whose chord bears South 26°12'33" West a chord
distance of 43.19 feet to a point of tangency, thence;
18. South 13°44'16" West a distance of 139.01 feet to the POINT OF BEGINNING.
Containing an area of 37,632 square feet or 0.864 acres more or less.
TEMPORARY PARCEL 6B - #1
J. F. KENNEDY PARK
SECTION 2, BLOCK 2640, LOT 1
BEGINNING at a point, said point being North 14°07'15" East a distance
of 214.41 feet from the terminus of the sixth (6) course of Temporary
Parcel 6C, and running thence,
1. North 14°07'15" East a distance of 8.06 feet to a point, thence;
2. On a curve to the right having a radius of 96.11 feet, an arc
length of 45.60 feet, whose chord bears South 69°01'07" East a chord
distance of 45.17 feet to a point of reverse curvature, thence;
3. On a curve to the left having a radius of 25.10 feet, an arc length
of 37.68 feet, whose chord bears North 81°33'51" East a chord distance
of 34.24 feet to a point of compound curvature, thence;
4. On a curve to the left having a radius of 136.56 feet, an arc
length of 90.59 feet, whose chord bears North 19°33'03" East a chord
distance of 88.94 feet to a point of compound curvature, thence;
5. On a curve to the left having a radius of 343.87 feet, an arc
length of 88.41 feet, whose chord bears North 06°49'08" West a chord
distance of 88.17 feet to a point, thence;
6. North 75°54'08" West a distance of 46.29 feet to a point, thence;
7. North 17°50'36" East a distance of 8.02 feet to a point, thence;
8. South 75°54'08" East a distance of 94.56 feet to a point, thence;
9. On a curve to the left having a radius of 15.19 feet, an arc length
of 5.67 feet, whose chord bears North 13°54'13" West a chord distance of
5.63 feet to a point of tangency, thence;
10. North 24°35'41" West a distance of 24.19 feet to a point of curva-
ture, thence;
11. On a curve to the right having a radius of 59.00 feet, an arc
length of 77.33 feet, whose chord bears North 12°57'16" East a chord
distance of 71.91 feet to a point of tangency, thence;
12. North 50°30'12" East a distance of 15.54 feet to a point, thence;
13. On a curve to the left having a radius of 25.54 feet, an arc
length of 27.13 feet, whose chord bears North 65°30'42" West a chord
distance of 25.88 feet to a point, thence;
14. On a curve to the right having a radius of 11.00 feet, an arc
length of 28.67 feet, whose chord bears North 62°17'30" West a chord
distance of 21.22 feet to a point, thence;
15. On a curve to the right having a radius of 60.49 feet, an arc
length of 31.29 feet, whose chord bears North 65°23'26" West a chord
distance of 30.94 feet to a point, thence;
16. North 17°50'36" East a distance of 5.42 feet to a point, thence;
17. On a curve to the left having a radius of 55.49 feet, an arc
length of 32.17 feet, whose chord bears South 65°07'19" East a chord
distance of 31.72 feet to a point, thence;
18. On a curve to the right having a radius of 11.00 feet, an arc
length of 29.29 feet, whose chord bears South 63°50'06" East a chord
distance of 21.37 feet to a point, thence;
19. On a curve to the right having a radius of 30.54 feet, an arc
length of 28.83 feet, whose chord bears South 62°50'54" East a chord
distance of 27.77 feet to a point, thence;
20. North 50°30'12" East a distance of 3.58 feet to a point of curva-
ture, thence;
21. On a curve to the left having a radius of 17.73 feet, an arc
length of 9.77 feet, whose chord bears North 33°15'15" East a chord
distance of 9.65 feet to a point of tangency, thence;
22. North 17°27'35" East a distance of 22.19 feet to a point of curva-
ture, thence;
23. On a curve to the left having a radius of 6.00 feet, an arc
length of 9.25 feet, whose chord bears North 24°53'08" West a chord
distance of 8.36 feet to a point of tangency, thence;
24. North 69°01'00" West a distance of 78.83 feet to a point, thence;
25. North 17°50'36" East a distance of 8.01 feet to a point, thence;
26. South 69°01'00" East a distance of 79.27 feet to a point of
curvature, thence;
27. On a curve to the right having a radius of 14.00 feet, an arc
length of 21.42 feet, whose chord bears South 25°11'37" East a chord
distance of 19.39 feet to a point of tangency, thence;
28. South 17°27'35" West a distance of 22.09 feet to a point, thence;
29. On a curve to the right having a radius of 25.73 feet, an arc
length of 14.30 feet, whose chord bears South 33°22'40" West a chord
distance of 14.11 feet to a point of tangency, thence;
30. South 50°30'12" West a distance of 24.23 feet to a point of curva-
ture, thence;
31. On a curve to the left having a radius of 51.00 feet, an arc
length of 66.85 feet, whose chord bears South 12°57'16" West a chord
distance of 62.16 feet to a point of tangency, thence;
32. South 24°35'41" East a distance of 24.67 feet to a point, thence;
33. On a curve to the right having a radius of 16.00 feet, an arc
length of 11.06 feet, whose chord bears South 04°47'51" East a chord
distance of 10.84 feet to a point of tangency, thence;
34. South 14°59'58" West a distance of 7.34 feet to a point, thence;
35. North 75°54'08" West a distance of 40.50 feet to a point of curva-
ture, thence;
36. On a curve to the left having a radius of 4.00 feet, an arc length
of 8.14 feet, whose chord bears South 45°50'05" West a chord distance of
6.80 feet to a point of reverse curvature, thence;
37. On a curve to the right having a radius of 351.87 feet, an arc
length of 79.68 feet, whose chord bears South 05°56'26" East a chord
distance of 79.51 feet to a point of compound curvature, thence;
38. On a curve to the right having a radius of 144.56 feet, an arc
length of 95.89 feet, whose chord bears South 19°33'03" West a chord
distance of 94.15 feet to a point of compound curvature, thence;
39. On a curve to the right having a radius of 33.10 feet, an arc
length of 49.69 feet, whose chord bears South 81°33'51" West a chord
distance of 45.16 feet to a point of reverse curvature, thence;
40. On a curve to the left having a radius of 88.11 feet, an arc
length of 42.75 feet, whose chord bears North 69°19'33" West a chord
distance of 42.33 feet to the POINT OF BEGINNING.
Containing an area of 5,707 square feet or 0.131 acres more or less.
TEMPORARY PARCEL 6B - #2
J. F. KENNEDY PARK
SECTION 2, BLOCK 2640, LOT 1
BEGINNING at a point, said point being South 75°14'25" East a distance
of 23.90 feet from the terminus of the twenty ninth (29) course of
Temporary Parcel 6B-#1, and running thence,
1. North 15°08'59" East a distance of 8.00 feet to a point, thence;
2. South 75°54'08" East a distance of 1.03 feet to a point, thence;
3. North 15°07'37" East a distance of 71.93 feet to a point, thence;
4. South 74°52'23" East a distance of 7.00 feet to a point, thence;
5. South 15°07'37" West a distance of 79.81 feet to a point, thence;
6. North 75°54'08" West a distance of 8.03 feet to the POINT OF BEGINNING.
Containing an area of 567 square feet or 0.013 acres more or less.
TEMPORARY PARCEL 6C
J. F. KENNEDY PARK
SECTION 2, BLOCK 2640, LOT 1
BEGINNING at a point, said point being the terminus of the fourth (4)
course of Permanent Parcel 1C, and running thence,
1. North 13°44'16" East a distance of 19.36 feet to a point of curva-
ture, thence;
2. On a curve to the right having a radius of 100.00 feet, an arc
length of 43.53 feet, whose chord bears North 26°12'33" East a chord
distance of 43.19 feet to a point of tangency, thence;
3. North 38°40'50" East a distance of 74.36 feet to a point of curva-
ture, thence;
4. On a curve to the left having a radius of 165.00 feet, an arc
length of 141.82 feet, whose chord bears North 14°03'25" East a chord
distance of 137.50 feet to a point of tangency, thence;
5. North 10°34'00" West a distance of 74.16 feet to a point of curva-
ture, thence;
6. On a curve to the right having a radius of 98.73 feet, an arc
length of 41.27 feet, whose chord bears North 01°24'31" East a chord
distance of 40.97 feet to a point of tangency, thence;
7. North 14°07'15" East a distance of 292.13 feet to a point, thence;
8. North 17°50'36" East a distance of 623.93 feet to a point, thence;
9. North 42°35'06" West a distance of 15.16 feet to a point, thence;
10. North 14°11'26" East a distance of 57.33 feet to a point, thence;
11. On a curve to the left having a radius of 69.29 feet, an arc
length of 77.51 feet, whose chord bears North 35°44'06" East a chord
distance of 73.53 feet to a point, thence;
12. South 73°14'03" East a distance of 94.41 feet to a point, thence;
13. North 17°18'27" East a distance of 36.93 feet to a point, thence;
14. South 67°46'09" East a distance of 77.83 feet to a point, thence;
15. South 21°46'35" West a distance of 175.02 feet to a point, thence;
16. South 74°36'27" West a distance of 14.03 feet to a point, thence;
17. South 20°59'00" West a distance of 409.94 feet to a point, thence;
18. South 74°51'01" East a distance of 23.02 feet to a point, thence;
19. South 15°08'59" West a distance of 64.00 feet to a point, thence;
20. North 74°51'01" West a distance of 23.98 feet to a point, thence;
21. South 15°07'37" West a distance of 17.81 feet to a point, thence;
22. North 75°54'08" West a distance of 8.03 feet to a point, thence;
23. South 14°38'59" West a distance of 146.25 feet to a point, thence;
24. North 74°51'01" West a distance of 25.28 feet to a point, thence;
25. South 15°08'59" West a distance of 230.05 feet to a point of
curvature, thence;
26. On a curve to the left having a radius of 213.30 feet, an arc
length of 26.06 feet, whose chord bears South 11°38'58" West a chord
distance of 26.04 feet to a point of tangency, thence;
27. South 08°39'21" West a distance of 353.95 feet to a point, thence;
28. North 84°35'23" West a distance of 138.81 feet to the POINT OF BEGINNING.
Containing an area of 179,448 square feet or 4.120 acres more or less.
All said parcels above subject to any easements or restrictions of
record which an accurate title search may discover.
J. F. KENNEDY PARK PARCEL 1.C FOR CONVEYANCE
SECTION 2, BLOCK 2640, LOT 1
BEGINNING at a point, said point being the southeasterly corner of Lot
1, Block 2640, and running thence,
1. North 73°24'27" West a distance of 257.70 feet to a point, thence;
2. North 13°22'17" East a distance of 181.53 feet to a point, thence;
3. South 76°15'44" East a distance of 97.31 feet to a point, thence;
4. North 13°44'16" East a distance of 119.65 feet to a point, thence;
5. South 84°35'23" East a distance of 138.81 feet to a point, thence;
6. South 08°39'21" West a distance of 11.45 feet to a point of curva-
ture, thence;
7. On a curve to the left having a radius of 212.00 feet, an arc
length of 52.61 feet whose chord bears South 01°33'37" West a chord
distance of 52.48 feet to a point of reverse curvature, thence;
8. On a curve to the right having a radius of 488.00 feet, an arc
length of 48.26 feet whose chord bears South 02°42'58" East a chord
distance of 48.24 feet to a point, thence;
9. South 14°12'46" West a distance of 225.16 feet to the POINT OF BEGINNING.
Containing an area of 68,417 square feet or 1.571 acres more or less.
Subject to any easements or restrictions of record which an accurate
title search may discover.
S 5. The lands referred to in section three of this act to be dedi-
cated as parkland are:
all that certain piece or parcel of land lying and being in the city
of Yonkers, county of Westchester and state of New York and being more
particularly bounded and described as follows:
Beginning at a point at the Northerly bounds of Nepperhan Avenue and
the Easterly bounds of New Main Street. Thence;
Along New Main Street N 29°58'00" W a distance of 268.75' to a point
at the Southerly bounds of Ann Street. Thence;
Along the Southerly bounds of Ann Street N 65°41'40" E a distance of
121.40' to a point at the Westerly Bounds of lands now or formerly the
City of Yonkers (Getty Square Parking Area). Thence;
Through the lands now or formerly the City of Yonkers (Getty Square
Parking Area) S 37°29'05" E a distance of 160.61' to a point at the
Westerly bounds of Henry Herz Street. Thence;
Along the Westerly bounds of Henry Herz Street S 31°37'01" E a
distance of 98.04' to a point at the Northerly bounds of Nepperhan
Avenue. Thence;
Along the Northerly bounds of Nepperhan Avenue S 60°13'00" W a
distance of 144.65' to a point which is the point of beginning,
Having an area of 35636.99 square feet, 0.818 acres.
All that certain piece or parcel of land lying and being in the city
of Yonkers, county of Westchester and state of New York and being more
particularly bounded and described as follows:
Beginning at a point, point being S 12°40'23"W 142.89' from the North-
west corner of lands of now or formerly Yonkers CDA. Thence through the
lands of now or formerly Yonkers CDA the following 9 courses and distances:
1. S 03°19'54" W a distance of 39.47' to a point. Thence;
2. S 01°50'41" W a distance of 144.90' to a point. Thence;
3. N 89°01'20" W a distance of 88.02' to a point. Thence;
4. S 04°18'15" W a distance of 51.28' to a point. Thence;
5. S 04°02'40" W a distance of 38.06' to a point. Thence;
6. S 32°58'32" W a distance of 63.42' to a point. Thence;
7. S 12°13'48" W a distance of 248.54' to a point. Thence;
8. On a curve turning to the left with an arc length of 14.24', with a
radius of 32.00', with a chord bearing of S 66°45'37" E, with a chord
length of 14.13' to a point. Thence;
9. On a reverse curve turning to the right with an arc length of
143.20', with a radius of 157.00', with a chord bearing of S 53°22'59"
E, with a chord length of 138.29' to a point at the Northerly bounds of
the now or formerly American Sugar. Thence;
Along the bounds of the now or formerly American Sugar N 87°02'24" W a
distance of 57.60' to the Mean High Water Level. Thence;
Along Mean High Water Level 594± to a point. Thence:
S 78°40'50" E a distance of 145.69' to a point. Thence:
N 12°40'23" E a distance of 187.20' to a point which is the point of beginning,
Having an area of 40691.34 square feet, 0.934 acres.
S 6. Should the leased lands described in section four of this act
ever cease to be used for the purposes described in section two of this
act, the lease shall terminate and those lands and improvements thereto
shall revert to the city of Yonkers for exclusive park and recreational purposes.
S 7. If the parkland that is the subject of this act has received
funding pursuant to the federal land and water conservation fund, the
discontinuance of parkland authorized by the provisions of this act
shall not occur until the municipality has complied with the federal
requirements pertaining to the conversion of parklands, including satis-
fying the secretary of the interior that the discontinuance will include
all conditions which the secretary of the interior deems necessary to
assure the substitution of other lands shall be equivalent in fair
market value and recreational usefulness to the lands being discontinued.
S 8. This act shall take effect immediately.
| {"url":"http://open.nysenate.gov/legislation/bill/S5101C-2013","timestamp":"2014-04-23T20:07:37Z","content_type":null,"content_length":"69863","record_id":"<urn:uuid:aec9df3a-6d64-4262-9b45-48edda18ccbe>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 14
- JOURNAL OF COMPUTER AND SYSTEM SCIENCES, 1994
"... In this paper we consider two party communication complexity when the input sizes of the two players differ significantly, the "asymmetric" case. Most of previous work on communication
complexity only considers the total number of bits sent, but we study tradeoffs between the number of bits the ..."
Cited by 85 (9 self)
In this paper we consider two party communication complexity when the input sizes of the two players differ significantly, the "asymmetric" case. Most of previous work on communication complexity
only considers the total number of bits sent, but we study tradeoffs between the number of bits the first player sends and the number of bits the second sends. These
- In Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing
"... We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed efficiently stored set. Our algorithms are for the
unit-cost word-level RAM with multiplication and extend to give optimal dynamic algorithms. The lower bounds ar ..."
Cited by 63 (0 self)
We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed efficiently stored set. Our algorithms are for the unit-cost
word-level RAM with multiplication and extend to give optimal dynamic algorithms. The lower bounds are proved in a much stronger communication game model, but they apply to the cell probe and RAM
models and to both static and dynamic predecessor problems.
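For orientation, the predecessor problem these bounds concern is easy to state: given a fixed stored set, return the largest key not exceeding a query key. A comparison-based baseline is a few lines of Python with the standard bisect module (our illustration only; the papers above study word-RAM and cell-probe algorithms that beat this O(log n) bound):

import bisect

def predecessor(sorted_keys, x):
    # Largest stored key <= x, or None if every stored key exceeds x
    i = bisect.bisect_right(sorted_keys, x)
    return sorted_keys[i - 1] if i else None

keys = [3, 9, 14, 27, 41]
print(predecessor(keys, 20), predecessor(keys, 2))  # 14 None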
, 1994
"... We prove \Omega\Gamma p log log n) lower bounds on the random access machine complexity of several dynamic, partially dynamic and static data structure problems, including the union-split-find
problem, dynamic prefix problems and onedimensional range query problems. The proof techniques include a ..."
Cited by 49 (3 self)
We prove Ω(√(log log n)) lower bounds on the random access machine complexity of several dynamic, partially dynamic and static data structure problems, including the union-split-find
problem, dynamic prefix problems and one-dimensional range query problems. The proof techniques include a general technique using perfect hashing for reducing static data structure problems (with a
restriction of the size of the structure) into partially dynamic data structure problems (with no such restriction), thus providing a way to transfer lower bounds. We use a generalization of a method
due to Ajtai for proving the lower bounds on the static problems, but describe the proof in terms of communication complexity, revealing a striking similarity to the proof used by Karchmer and
Wigderson for proving lower bounds on the monotone circuit depth of connectivity. 1 Introduction and summary of results In this paper we give lower bounds for the complexity of implementing several
dynamic and sta...
- In 19th Conference on the Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 1999. Advances in Data Structures Workshop
"... The cell probe model is a general, combinatorial model of data structures. We give a survey of known results about the cell probe complexity of static and dynamic data structure problems, with
an emphasis on techniques for proving lower bounds. 1 ..."
Cited by 29 (0 self)
The cell probe model is a general, combinatorial model of data structures. We give a survey of known results about the cell probe complexity of static and dynamic data structure problems, with an
emphasis on techniques for proving lower bounds. 1
, 1997
"... We develop the first nontrivial lower bounds on the complexity of online hyperplane and halfspace emptiness queries. Our lower bounds apply to a general class... ..."
Cited by 14 (1 self)
We develop the first nontrivial lower bounds on the complexity of online hyperplane and halfspace emptiness queries. Our lower bounds apply to a general class...
, 1997
"... We obtain improved lower bounds for a class of static and dynamic data structure problems that includes several problems of searching sorted lists as special cases. These lower bounds nearly
match the upper bounds given by recent striking improvements in searching algorithms given by Fredman and Wil ..."
Cited by 5 (0 self)
We obtain improved lower bounds for a class of static and dynamic data structure problems that includes several problems of searching sorted lists as special cases. These lower bounds nearly match
the upper bounds given by recent striking improvements in searching algorithms given by Fredman and Willard's fusion trees [9] and Andersson's search data structure [5]. Thus they show sharp
limitations on the running time improvements obtainable using the unit-cost word-level RAM operations that those algorithms employ. 1 Introduction Traditional analysis of problems such as sorting and
searching is often schizophrenic in dealing with the operations one is permitted to perform on the input data. In one view, the elements being sorted are seen as abstract objects which may only be
compared. In the other view, one is able to perform certain word-level operations, such as indirect addressing using the elements themselves, in algorithms like bucket and radix sorting.
Traditionally, the second v...
"... An optimal index solving top-k document retrieval [Navarro and Nekrich, SODA’12] takes O(m + k) time for a pattern of length m, but its space is at least 80n bytes for a collection of n symbols.
We reduce it to 1.5n– 3n bytes, with O(m+(k+log log n) log log n) time, on typical texts. The index is u ..."
Cited by 4 (3 self)
An optimal index solving top-k document retrieval [Navarro and Nekrich, SODA’12] takes O(m + k) time for a pattern of length m, but its space is at least 80n bytes for a collection of n symbols. We
reduce it to 1.5n– 3n bytes, with O(m+(k+log log n) log log n) time, on typical texts. The index is up to 25 times faster than the best previous compressed solutions, and requires at most 5 % more
space in practice (and in some cases as little as one half). Apart from replacing classical by compressed data structures, our main idea is to replace suffix tree sampling by frequency thresholding
to achieve compression.
, 1996
"... We consider the problem of maintaining a dynamic ordered set of n integers in the range 0 : : 2^w - 1, under the operations of insertion, deletion and predecessor queries, on a unit-cost RAM
with a word length of w bits. We show that all the operations above can be performed in O(min{log w, 1 log n/ ..."
Cited by 3 (0 self)
We consider the problem of maintaining a dynamic ordered set of n integers in the range 0..2^w − 1, under the operations of insertion, deletion and predecessor queries, on a unit-cost RAM with a
word length of w bits. We show that all the operations above can be performed in O(min{log w, log n/log w}) expected time, assuming the updates are oblivious, i.e., independent of the random
choices made by the data structure. This improves upon the (deterministic) running time of O(min{log w, √(log n)}) obtained by Fredman and Willard. We also give a very simple deterministic data
structure which matches the bound of Fredman and Willard. Finally, from the randomized data structure we are able to derive improved deterministic data structures for the static version of this problem.
"... Abstract. The dynamic trie is a fundamental data structure which finds applications in many areas. This paper proposes a compressed version of the dynamic trie data structure. Our data-structure
is not only space efficient, it also allows pattern searching in o(|P|) time and leaf insertion/deletion ..."
Cited by 1 (0 self)
Abstract. The dynamic trie is a fundamental data structure which finds applications in many areas. This paper proposes a compressed version of the dynamic trie data structure. Our data structure is
not only space efficient, it also allows pattern searching in o(|P|) time and leaf insertion/deletion in o(log n) time, where |P| is the length of the pattern and n is the size of the trie. To
demonstrate the usefulness of the new data structure, we apply it to the LZ-compression problem. For a string S of length s over an alphabet A of size σ, the previously best known algorithms for
computing the Ziv-Lempel encoding (LZ78) of S either run in: (1) O(s) time and O(s log s) bits working space; or (2) O(sσ) time and O(sH_k + s log σ/log_σ s) bits working space, where H_k is the
k-th order entropy of the text. No previous algorithm runs in sublinear time. Our new data structure implies an LZ-compression algorithm which runs in sublinear time and uses optimal working space. More
precisely, the LZ-compression algorithm uses O(s(log σ + log log_σ s)/log_σ s) bits working space and runs in O(s(log log s)^2/(log_σ s · log log log s)) worst-case time, which is
sublinear when σ = 2^(o(log s · log log log s/(log log s)^2)). | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=875809","timestamp":"2014-04-19T01:02:33Z","content_type":null,"content_length":"35093","record_id":"<urn:uuid:42e14bdd-52d5-4533-a110-72fbb1c3770d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Planar Euclidean bipartite matching with squared distances
This is probably a really stupid question, but suppose I have two sets of points in the plane $X$ and $Y$ each with cardinality $|X| = |Y| = n$. For any bipartite matching $M$ between $X$ and $Y$,
let $c_1(M)$ denote the total "cost" of the matching $M$, in which we say that the cost between a pair $(x_i,y_j)$ is simply the Euclidean distance between $x_i$ and $y_j$. Similarly let $c_2(M)$
denote the total cost of the matching $M$ where the cost between a pair $(x_i,y_j)$ is the square of the distance between $x_i$ and $y_j$. Let $M_1$ and $M_2$ denote the optimal matchings with
respect to cost functions $c_1$ and $c_2$ respectively. My question is: what are the point sets $X$ and $Y$ that maximize the ratio $c_1(M_2)/c_1(M_1)$?
perfect-matchings geometry graph-theory co.combinatorics
In other words, if you compute the minimum matching, and then realize you forgot to take square roots when tabulating the distances, how far off the mark can you be? – Johan Wästlund Sep 30 '12 at
1 Answer
The quotient can get as close as we please to $\sqrt{n}$.
Start by putting a red and a blue point distance $\sqrt{n}$ apart (let me use colors instead of "point in $X$"). Then put $n-1$ pairs of coinciding points (or extremely close if you
don't want them to coincide), one red and one blue, as "stepping stones" between them, a unit distance apart, but along a large circular path.
With true distances, a pair of coinciding points of opposite color should always be matched to each other, so $c_1(M_1)=\sqrt{n}$. But with squared distances, the cost of that matching
is $n$, so it will be just as good to use the stepping stones and pay for $n$ edges of cost 1 (and strictly better with a slight perturbation of the points).
Provided the stepping stones are arranged so that under squared distances no shortcut will pay, we get $c_1(M_2) = n$.
Clearly $\sqrt{n}$ is the best we can do. If we take the distance $c_1(M_2)$ and chop it into $n$ pieces, then the sum of the squares of the pieces is minimized when they are equal,
therefore $$\frac{c_1(M_2)^2}n\leq c_2(M_2) \leq c_2(M_1)\leq c_1(M_1)^2.$$
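For readers who want to experiment numerically, here is a small sketch (ours, not the answerer's) using SciPy's assignment solver; it computes both optimal matchings on random points and prints the ratio $c_1(M_2)/c_1(M_1)$:

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
X, Y = rng.random((8, 2)), rng.random((8, 2))              # the two point sets
D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # Euclidean cost matrix

r1, c1 = linear_sum_assignment(D)       # M_1: optimal under true distances
r2, c2 = linear_sum_assignment(D ** 2)  # M_2: optimal under squared distances
print(D[r2, c2].sum() / D[r1, c1].sum())  # the ratio c_1(M_2)/c_1(M_1), always >= 1

Feeding in the stepping-stone configuration from the answer drives this ratio toward $\sqrt{n}$.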
| {"url":"http://mathoverflow.net/questions/108428/planar-eucliean-bipartite-matching-with-squared-distances","timestamp":"2014-04-17T15:48:30Z","content_type":null,"content_length":"52410","record_id":"<urn:uuid:ed8ea3c9-1a57-4a3a-9c28-...","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: General Form of Relationships?
From: Neo <neo55592_at_hotmail.com> Date: 31 Dec 2002 15:15:34 -0800 Message-ID: <4b45d3ad.0212311515.7e449bec@posting.google.com>
> The most general way to view a "relationship" among a set of n variables
> is as an arbitrary subset of n-space. (In your examples, the variables
> may come from other sets S_1, ..., S_n instead of the real number line,
> in which case a "relationship" is a subset of the cartesian product
> S_1 x S_2 x ... x S_n .)
I didn't state my question clearly but your post triggers an idea. Suppose the x-axis represents a set of balls and the y-axis a set of colors, then a dot at x,y would indicate "ball2 is Red". How to
extend this system to represent "ball between box and chair"? Could I suppose 3-axes where each represents the set of all things in a room. Then a dot at x,y,z would indicate what thing is between two
other things? Received on Tue Dec 31 2002 - 17:15:34 CST
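One concrete way to read the 3-axis idea (a hypothetical sketch, not from the thread): an n-ary relationship is simply a subset of the cartesian product of the participating sets, so "between" is a set of triples exactly as "ball is red" is a set of pairs. In Python:

# Hypothetical room contents; each relationship is a set of tuples
color_of = {("ball2", "red")}                  # binary: balls x colors
between = {("ball2", "box1", "chair1")}        # ternary: things x things x things

print(("ball2", "red") in color_of)            # True: "ball2 is Red"
print(("ball2", "box1", "chair1") in between)  # True: "ball2 between box1 and chair1"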
| {"url":"http://www.orafaq.com/usenet/comp.databases.theory/2002/12/31/0284.htm","timestamp":"2014-04-16T11:28:50Z","content_type":null,"content_length":"7056","record_id":"<urn:uuid:980d31e7-cba2-4268-8ea4-7fc7ba59aed6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Electric Force and Charged Spheres
1. The problem statement, all variables and given/known data
Consider the following configuration of fixed, uniformly charged spheres on an xy coordinate system:
·a blue sphere fixed at the origin with positive charge q.
·a red sphere fixed at the point (d1,0) with unknown charge q_red,
·a yellow sphere fixed at the point (d2*cos theta, -d2*sin theta) with unknown charge q_yellow.
The net electric force on the blue sphere is observed to be vector F = (0,-F), where F>0 .
Here is a makeshift sketch:
O = charged particle
O_(q)___________O (d1, 0) q_red
----\O (d2*cos theta, -d2*sin theta) q_yellow
in which the dotted line between q at (0,0) and q_yellow is d2 and theta = angle between d2 and the x-axis (The x-axis is depicted by the __________ pattern.)
What is the sign of the charge on the yellow sphere?
What is the sign of the charge on the red sphere?
2. Relevant equations
Possibly Coulomb’s Law:
F_mag = (k*|q1*q2|)/(r^2), where k = 8.988 * 10^9 N*m^2/C^2
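One way to test the sign reasoning is to treat each contribution as a vector and sum components. A rough sketch (the numeric charges, distances and angle below are placeholders of ours, not values given in the problem):

import math

k = 8.988e9  # N*m^2/C^2

def force_on(q, p, q_src, p_src):
    # Vector Coulomb force on charge q at point p due to charge q_src at p_src
    dx, dy = p[0] - p_src[0], p[1] - p_src[1]
    r = math.hypot(dx, dy)
    f = k * q * q_src / r**2  # signed: positive pushes q away from q_src
    return (f * dx / r, f * dy / r)

q, d1, d2, theta = 1e-6, 0.5, 0.5, math.radians(30.0)  # placeholder values
blue, red = (0.0, 0.0), (d1, 0.0)
yellow = (d2 * math.cos(theta), -d2 * math.sin(theta))

q_red, q_yellow = 1e-6, -1e-6  # try each sign combination here
fr = force_on(q, blue, q_red, red)
fy = force_on(q, blue, q_yellow, yellow)
print(fr[0] + fy[0], fr[1] + fy[1])  # looking for net (0, -F): x sum ~ 0, y sum < 0

Only the yellow sphere can supply a y-component, and the red sphere's x-force must balance yellow's, which constrains both signs once the geometry is drawn.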
3. The attempt at a solution
The y-component of the electric force is –F. The q_red particle does not have a y-component; it only has an x-component. The q_yellow charge has a y-component. If a – sign follows, then the q_yellow
charge is positive since – indicates repulsion????
Is the q_red charge negative? If the electric force’s x-component is 0, then q_red’s x-component must be negative if q_yellow’s x-component is positive??? | {"url":"http://www.physicsforums.com/showthread.php?t=150814","timestamp":"2014-04-19T15:02:30Z","content_type":null,"content_length":"21195","record_id":"<urn:uuid:ecf650e5-aac5-432d-94fa-bcc9e3d14215>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bethlehem, PA Statistics Tutor
Find a Bethlehem, PA Statistics Tutor
...I use Excel now as a teacher in my college classes to demonstrate the math I teach. Geometry is most often thought of as lines, angles, and triangles. It certainly is that, but it is also a
subject that disciplines the thought process.
11 Subjects: including statistics, physics, probability, ACT Math
...In addition to all the calculus courses I had in college, I also taught a calc course while student teaching. While it has been a while since I taught this subject, I do still feel I know it,
and I am willing to put in the time to refresh my memory ahead of any tutoring sessions. I have taught ...
12 Subjects: including statistics, calculus, geometry, algebra 1
...Create tables. 6. Create lists. 7. Format pages and insert headers and footers. 8.
27 Subjects: including statistics, calculus, geometry, algebra 1
...Most people think of geometry as a math course, which, of course, it is. However, it is also a reasoning course and a language course. The reasoning and language parts present the major
stumbling blocks for students.
23 Subjects: including statistics, English, calculus, algebra 1
...I played piano for five years (although I am no longer proficient) as well, so I have played in both clefs. I love to share the importance of music and how it can help bond people of any
culture. I believe phonics is the foundation of learning English.
31 Subjects: including statistics, English, reading, geometry
| {"url":"http://www.purplemath.com/Bethlehem_PA_statistics_tutors.php","timestamp":"2014-04-17T07:48:39Z","content_type":null,"content_length":"23960","record_id":"<urn:uuid:5b5c725b-6c94-4421-9166-2cb986b14c71>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Kings Point, NY Precalculus Tutor
Find a Kings Point, NY Precalculus Tutor
...I prefer to tutor near where I live in Park Slope, Brooklyn, NY. I am willing to travel anywhere reasonable public transportation will take me, but beware that I will charge extra for long
distances requiring more than 45 minutes' travel. Please note that unless you have already submitted a cre...
25 Subjects: including precalculus, chemistry, physics, calculus
I instill a deep level of understanding of the material that will not only prepare the student for tests, but also be able to apply these concepts to real world problems and build upon this
knowledge in future classes. I encourage multiple ways of solving problems and never judge the student. I blend in critical thinking skills and estimations.
9 Subjects: including precalculus, chemistry, physics, calculus
...As a full time Statistician, I continue to use SAS almost on a daily basis. I took Biostatistics at the graduate level during my Master's program and received an A. It dealt with applying
statistical methods in biology and medicine.
18 Subjects: including precalculus, calculus, statistics, algebra 1
...I have tutored students ranging from elementary to high school level in various subjects such as Spanish and mathematics. I find tutoring very rewarding because I can sense students'
difficulties and enjoy explaining concepts, especially those that I may have had trouble with myself. I also enjoy languages, having studied German and Russian for three years, and Spanish for
five years.
29 Subjects: including precalculus, Spanish, physics, writing
...I have over five years of experience in tutoring for ACT Math. I have developed my own content summaries and an outline of test strategies. I have over five years of experience in tutoring for
ACT Science.
22 Subjects: including precalculus, calculus, physics, geometry
Related Kings Point, NY Tutors
Kings Point, NY Accounting Tutors
Kings Point, NY ACT Tutors
Kings Point, NY Algebra Tutors
Kings Point, NY Algebra 2 Tutors
Kings Point, NY Calculus Tutors
Kings Point, NY Geometry Tutors
Kings Point, NY Math Tutors
Kings Point, NY Prealgebra Tutors
Kings Point, NY Precalculus Tutors
Kings Point, NY SAT Tutors
Kings Point, NY SAT Math Tutors
Kings Point, NY Science Tutors
Kings Point, NY Statistics Tutors
Kings Point, NY Trigonometry Tutors
Nearby Cities With precalculus Tutor
Glen Oaks precalculus Tutors
Great Nck Plz, NY precalculus Tutors
Great Neck precalculus Tutors
Great Neck Estates, NY precalculus Tutors
Great Neck Plaza, NY precalculus Tutors
Kenilworth, NY precalculus Tutors
Kensington, NY precalculus Tutors
Manhasset precalculus Tutors
Plandome, NY precalculus Tutors
Port Washington, NY precalculus Tutors
Russell Gardens, NY precalculus Tutors
Saddle Rock, NY precalculus Tutors
Sands Point, NY precalculus Tutors
Thomaston, NY precalculus Tutors
Whitestone precalculus Tutors | {"url":"http://www.purplemath.com/Kings_Point_NY_Precalculus_tutors.php","timestamp":"2014-04-17T21:24:22Z","content_type":null,"content_length":"24509","record_id":"<urn:uuid:1302ac62-62c7-4650-8f9f-13fd5011f6fd>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00363-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: COLLOQUIUM
University of Regina
Department of Mathematics and Statistics
Speaker: Dr. Shaun Fallat
University of Regina, Canada.
Title: Hadamard products, Retractability, and Oppenheim's inequality
Date: Friday, May 11, 2007
Time: 3:30 p.m.
Place: Math & Stats Lounge (CW 307.20)
For two m×n matrices A = [a_{ij}] and B = [b_{ij}], the matrix A ∘ B = [a_{ij} b_{ij}] is called the Hadamard product of A and B. It has long been known that the Hadamard product of two positive semidefinite (PSD) matrices is again PSD. From this fact, many determinantal inequalities have subsequently been derived about det(A ∘ B) when A and B are PSD. One of the most celebrated inequalities is known as Oppenheim's inequality.
For other important positivity classes, such as M-matrices, inverse M-matrices, and totally nonnegative matrices, closure under Hadamard multiplication no longer holds, and thus inequalities like Oppenheim's may no longer be valid. On the other hand, if we specialize to the subsets of these classes that enjoy
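For reference, the standard statement of the inequality named above (added here; it is not part of the announcement itself) is: for n×n positive semidefinite matrices A = [a_{ij}] and B,

\[
  \det(A \circ B) \;\ge\; \Bigl(\prod_{i=1}^{n} a_{ii}\Bigr)\det(B),
\]

and by symmetry the same bound holds with the roles of A and B exchanged. Combined with Hadamard's inequality \(\det(A) \le \prod_{i=1}^{n} a_{ii}\), it yields \(\det(A \circ B) \ge \det(A)\det(B)\).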
FOM: Goedel: truth and misinterpretations
Torkel Franzen torkel at sm.luth.se
Wed Nov 1 02:07:48 EST 2000
V. Kanovei says, with reference to
(1) Even if Goldbach's conjecture is true, it is not necessarily
provable in ZFC
>To be scientifically considerable, "thesis" (1) has to be
>preceded by at least explanation, if not a rigorous definition,
>what is the intended meaning of "true". That has not been made
>clear in the course of the discussion.
I'm a bit surprised by this comment, since I have explicitly stated
that "Goldbach's conjecture is true" in (1) is equivalent to
Goldbach's conjecture itself. That is, (1) says exactly the same thing as
(2) Even if every even number greater than 2 is the sum of two
primes, this is not necessarily provable in ZFC.
You have a difficulty, then, with the use of Goldbach's conjecture
in a context such as (2). Can you explain the nature of this
difficulty? It is insufficient to merely *claim* that we cannot
meaningfully say such things as "every even number greater than 2 is
the sum of two primes" except in certain restricted types of context,
such as "it has been mathematically proved that ...".
Torkel Franzen
Portola Hills, CA Math Tutor
Find a Portola Hills, CA Math Tutor
...I've been tutoring mathematics, chemistry, and physics for almost two years. Right now I'm working as a math tutor at IVC's campus and am also working with a tutoring agency. I took honors calculus in college and started to tutor based on my professors' suggestions for helping other students.
7 Subjects: including calculus, algebra 1, algebra 2, trigonometry
I have more than 20 years of experience teaching and tutoring Mathematics at the elementary, high school, and college levels. I've been tutoring Spanish for several years. I have a Bachelor's degree in Mathematics and a Master's in Mathematics Education.
8 Subjects: including algebra 1, algebra 2, geometry, prealgebra
My name is Jaclyn, and I am currently a PhD student at the University of California-Riverside, where my current major is Educational Psychology. I am originally from the midwest and just recently
moved out here to go to school. I got my undergrad at the University of Kansas in Secondary English Education.
18 Subjects: including prealgebra, algebra 1, geometry, reading
...I have taught Visual Basic in National university and privately to several adults in the US and in India. I conduct online classes in Microsoft Office, desktop publishing and basic web site
development. I have taught computers at Saddleback Valley Unified School district for 5 years.
14 Subjects: including logic, elementary math, algebra 1, prealgebra
...I now do it regularly for business and pleasure. I have helped students write for many different functions including business briefs, project proposals, letters and essays. I won't dictate a
stream of sentences for you to write.
12 Subjects: including algebra 1, algebra 2, SAT math, chemistry
Related Portola Hills, CA Tutors
Portola Hills, CA Accounting Tutors
Portola Hills, CA ACT Tutors
Portola Hills, CA Algebra Tutors
Portola Hills, CA Algebra 2 Tutors
Portola Hills, CA Calculus Tutors
Portola Hills, CA Geometry Tutors
Portola Hills, CA Math Tutors
Portola Hills, CA Prealgebra Tutors
Portola Hills, CA Precalculus Tutors
Portola Hills, CA SAT Tutors
Portola Hills, CA SAT Math Tutors
Portola Hills, CA Science Tutors
Portola Hills, CA Statistics Tutors
Portola Hills, CA Trigonometry Tutors
Nearby Cities With Math Tutor
Balboa Island, CA Math Tutors
Balboa, CA Math Tutors
Box Springs, CA Math Tutors
Coto De Caza, CA Math Tutors
Foothill Ranch Math Tutors
Modjeska Canyon, CA Math Tutors
Modjeska, CA Math Tutors
Naples, CA Math Tutors
Robinson Ranch, CA Math Tutors
Robinson Rnch, CA Math Tutors
San Luis Rey Math Tutors
Smiley Heights, CA Math Tutors
South Laguna, CA Math Tutors
Trabuco, CA Math Tutors
Vista Del Lago, PR Math Tutors | {"url":"http://www.purplemath.com/Portola_Hills_CA_Math_tutors.php","timestamp":"2014-04-20T21:19:01Z","content_type":null,"content_length":"24108","record_id":"<urn:uuid:e1d70e24-907d-4f5d-baf2-0b6a7c9f4476>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: A mod2 Machine.
in reply to A mod2 Machine.
using floats to compute modulo does not seem reasonable to me.
I only included this:
my $n = 534587;
print "even\n" if($n % 2 == 0); # modulo operator of possibly a float or possibly an integer (*)
1. it is equivalent to the OP's "Most obvious (easiest) solution" (in fact code-ninja used this exact example here: Re^2: A mod2 Machine.)
2. it might actually be doing a (possibly comparatively slow) float operation because use integer was not used.
It probably had to go: float 534587 --> integer 534587 --> integer modulo operation on integer 534587
Whereas doing a proper integer bit test should be faster:
use integer;
my $n = 534587;
# Check the LSB with bitwise AND on an integer.  Note the extra parentheses:
# == binds tighter than & in Perl, so "$n & 1 == 0" would not test what you want.
print "even\n" if(($n & 1) == 0);
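If anyone wants to actually measure the difference rather than guess, something like this (an untested sketch using the core Benchmark module; the label names are mine) should do it:

use strict;
use warnings;
use Benchmark qw(cmpthese);

my $n = 534587;

# A negative count tells cmpthese to run each sub for at least that many CPU seconds.
cmpthese(-1, {
    mod_2 => sub { my $even = ($n % 2 == 0) },   # plain modulo test
    and_1 => sub { my $even = (($n & 1) == 0) }, # bitwise LSB test
});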
Department of Mathematics
Professor Goldston wins 2014 AMS Cole Prize in Number Theory
Congratulations to Dan Goldston who, along with three other researchers in Number Theory, has won the 2014 AMS Frank Nelson Cole Prize in Number Theory. All four winners have done research on the gaps between prime numbers, related to the Twin Prime Conjecture, which states that there are an infinite number of pairs of primes separated by only two (such as 11 and 13). One of the four winners proved that there are infinitely many pairs of primes that are at most 70 million apart. This seems to be a very active area of research right now, and another researcher has recently submitted a result which evidently shows that there are infinitely many pairs of primes not more than 600 apart. Many other number theorists around the world are now working feverishly to use these new techniques to reduce the bound as low as possible. More details about this research can be found in the Quanta Magazine article, "Together and Alone, Closing the Prime Gap".
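In the usual notation (a formalization added here for clarity, not part of the department's announcement), with p_n denoting the n-th prime, the two results say

\[
  \liminf_{n \to \infty} (p_{n+1} - p_n) \;\le\; 7 \times 10^{7}
  \qquad\text{and, by the more recent work,}\qquad
  \liminf_{n \to \infty} (p_{n+1} - p_n) \;\le\; 600,
\]

while the Twin Prime Conjecture itself is the assertion that this lim inf equals 2.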
Spring Rare & Endangered Courses
Math 211A: Geometry of Projective Spaces
Tenure-Track Statistics Faculty
SJSU Mathematics and Statistics Department has one Tenure-Track Assistant Professor opening in Statistics starting August 2014. The candidate must teach undergraduate and graduate statistics courses,
maintain an active research program, and supervise student research and industry-sponsored student projects. A PhD is required at the time of appointment. For full consideration, submit all application
materials by December 16, 2013. See the complete position description at http://www.sjsu.edu/statistics/employment.
SJSU is an Equal Opportunity/Affirmative Action Employer committed to the core values of inclusion, civility, and respect for each individual.
Repeating a lower division Math Class more than twice
At the request of colleagues in Physics and Engineering, the Math Department voted on a new policy for students registering in a lower-division math course for the third (or more) time. There is a special form, the Repeating a Course for More Than Two Times Petition at www.sjsu.edu/registrar/docs/Multiple_Repeat.pdf, which the instructor must sign before the student is allowed to register in a course for the third (or more) time. The Math Department is now asking its instructors teaching lower-division math courses during the regular school year not to sign such forms, and to let students know that they are only allowed to take a course for the third time during the summer, when they pay the full cost and are less likely to be taking up a space from a student who is taking the course for the first time.