content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
area under curve
October 25th 2008, 10:48 AM #1
Oct 2008
area under curve
Consider the limit of summed terms (attachment). Could someone please explain why each of the sums in the 'attached' expression gives an over-estimate of the area beneath y=x^2 +3, between x=0
and x=1.
****I'm rephrasing my question in LATEX so that more posters are attracted to assist me (please).
$\lim_{n \to 0} \frac {1}{n} \Sigma^{\infinity}_{k=1}\(\frac{k^2}{n^2} + 3)$
Could someone please explain why each of the sums in the above expression gives an over-estimate of the area beneath the curve $y = x^2 +3$ , between x=0 and x=1 *****
Last edited by tsal15; October 25th 2008 at 08:57 PM.
This is a Riemann Integral calculating the height of rectangle with the value of the function for their right side. That's why it overestimates. If you would have taken the left side you would
have k going from 0 to n-1 instead of 1 to n and in that case it would underestimates.
Although, when you calculate the limit it goes to the same value. You thus only overestimates when there is no limit.
I'm rephrasing my question in LATEX so that more posters are attracted to assist me (please).
$\lim_{n \to 0} \frac {1}{n} \Sigma^{\infinity}_{k=1}\(\frac{k^2}{n^2} + 3)$
Could someone please explain why each of the sums in the above expression gives an over-estimate of the area beneath the curve $y = x^2 +3$ , between x=0 and x=1
This is a Riemann Integral calculating the height of rectangle with the value of the function for their right side. That's why it overestimates. If you would have taken the left side you would
have k going from 0 to n-1 instead of 1 to n and in that case it would underestimates.
Although, when you calculate the limit it goes to the same value. You thus only overestimates when there is no limit.
Sorry vincisonfire, but I haven't come across Riemann Integrals before, could you briefly explain it?
This is a Riemann Integral calculating the height of rectangle with the value of the function for their right side. That's why it overestimates. If you would have taken the left side you would
have k going from 0 to n-1 instead of 1 to n and in that case it would underestimates.
Although, when you calculate the limit it goes to the same value. You thus only overestimates when there is no limit.
Hey vincisonfire, could you please explain it in simpler terms?
Can anyone else voice their opinion on this question?
Thanks in advance.
It's not really easy to explain without drawings, so please refer to this link if you feel you need graphical visualizations.
A Riemann Integral is a definition of the area 'under a curve'. Imagine that you lived in a world with no calculus, and you really really needed to know at least an approximation of the area
under a curve.
Assuming you do know how to calculate the area of rectangles, you'd eventually think about discretizing the curve. That is, splitting it into lots of rectangles, calculate the area of each and
then sum all of them.
Let $f(x)$ be your curve and $a,b$ be your area bounds. If you have $n$ rectangles, each rectangle will have a base of $\frac{b-a}{n}$. Now you need to define arbitrarially what is the height of
your rectangle going to be. Now it's up to you: you can take the leftmost value of $f(x)$ in your rectangle interval, or the rightmost, or the middle, or any other arbitrary value relative to
your rectangle. Notice that if $f(x)$ is crescent, taking the rightmost value will overestimate your rectangle, and thus your total area. I hope you can visualize it: the rectangle will
'overflow' the function in the top.
Obviously, as $n$ grows larger, your area tends to the real area under the curve. That's why the value of the limit when $n\to\infty$ is defined to be the real area under the curve, according to
the Riemann definition.
Notice that the Riemann Integral is INITIALLY unrelated to antiderivatives, or the indefinite integrals. In particular, the indefinite integral is a function, and the Riemann Integral is a
number. They are connected by the Fundamental Theorem Of Calculus, as you may or may not know (yet).
Hope this helps,
Thanks rafael almeida, i think i get it now.
also how do i evaluate $lim_{n \to infinity} \frac {1}{n} \Sigma_{k=1} ^{n} \(\frac {k^2}{n^2} +3)$
using the expression:
$\Sigma_{k=1} ^{n} \k^2 = \frac {1}{6} n(n+1)(2n+1)$
I tried it myself and evaluated it to zero... i'm pretty sure this is wrong...could u show me the correct way thanks
Thanks rafael almeida, i think i get it now.
also how do i evaluate $lim_{n \to infinity} \frac {1}{n} \Sigma_{k=1} ^{n} \(\frac {k^2}{n^2} +3)$
using the expression:
$\Sigma_{k=1} ^{n} \k^2 = \frac {1}{6} n(n+1)(2n+1)$
I tried it myself and evaluated it to zero... i'm pretty sure this is wrong...could u show me the correct way thanks
You want to evaluate $L = \lim_{n\to\infty} \frac {1}{n} \sum_{k=1} ^{n} \left(\frac{k^2}{n^2} + 3\right)$ using the fact that $\sum_{k=1} ^{n} k^2 = \frac{1}{6} n(n+1)(2n+1)$. (hint: click on
the equations to see the LaTeX I used to form them)
My approach would be:
$\lim_{n\to\infty} \frac {1}{n} \sum_{k=1} ^{n} \left(\frac{k^2}{n^2} + 3\right) = \lim_{n\to\infty} \frac {1}{n} \sum_{k=1} ^{n} \left(\frac{k^2}{n^2}\right) + \lim_{n\to\infty} \frac {3n}{n} =
\lim_{n\to\infty} \frac {1}{n^3} \sum_{k=1} ^{n} (k^2) + 3$
And then apply your formula for the sum of the first n squares:
$\lim_{n\to\infty} \left(\frac{n(n+1)(2n+1)}{6n^3}\right) = \lim_{n\to\infty} \left(\frac{2n^3 + 3n^2 + n}{6n^3}\right) = \lim_{n\to\infty} \left(\frac{n^3\left(2 + \frac{3}{n} + \frac{1}{n^2}\
And you can safely cross out the $n^3$ term because as $n \to \infty$, it is far away from zero. Also, the terms $\frac{3}{n}$ and $\frac{1}{n^2}$ go to zero when $n \to \infty$. So this leaves
you with:
$\lim_{n\to\infty} \left(\frac{n^3\left(2 + \frac{3}{n} + \frac{1}{n^2}\right)}{6n^3}\right) = \lim_{n\to\infty} \left(\frac{2}{6}\right)$
So, finally,
$L = \frac{2}{6} + 3 = \frac{10}{3}$
I hope I have not made major mistakes. However, this is the method.
Hope this helps,
Last edited by Rafael Almeida; October 26th 2008 at 06:50 PM.
October 25th 2008, 11:07 AM #2
October 25th 2008, 08:52 PM #3
Oct 2008
October 25th 2008, 08:54 PM #4
Oct 2008
October 26th 2008, 04:37 PM #5
Oct 2008
October 26th 2008, 04:37 PM #6
Oct 2008
October 26th 2008, 05:03 PM #7
Junior Member
Oct 2008
October 26th 2008, 05:11 PM #8
Oct 2008
October 26th 2008, 05:59 PM #9
Junior Member
Oct 2008 | {"url":"http://mathhelpforum.com/calculus/55618-area-under-curve.html","timestamp":"2014-04-16T19:31:58Z","content_type":null,"content_length":"60241","record_id":"<urn:uuid:3bce5e36-c308-49a3-9c63-d1caf1951222>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
limit application of permutative rewrite rules
Major Section: MISCELLANEOUS
See rule-classes for a discussion of the syntax of the :loop-stopper field of :rewrite rule-classes. Here we describe how that field is used, and also how that field is created when the user does not
explicitly supply it.
For example, the built-in :rewrite rule commutativity-of-+,
(implies (and (acl2-numberp x)
(acl2-numberp y))
(equal (+ x y) (+ y x))),
creates a rewrite rule with a loop-stopper of ((x y binary-+)). This means, very roughly, that the term corresponding to y must be ``smaller'' than the term corresponding to x in order for this rule
to apply. However, the presence of binary-+ in the list means that certain functions that are ``invisible'' with respect to binary-+ (by default, unary-- is the only such function) are more or less
ignored when making this ``smaller'' test. We are much more precise below.
Our explanation of loop-stopping is in four parts. First we discuss ACL2's notion of ``term order.'' Next, we bring in the notion of ``invisibility'', and use it together with term order to define
orderings on terms that are used in the loop-stopping algorithm. Third, we describe that algorithm. These topics all assume that we have in hand the :loop-stopper field of a given rewrite rule; the
fourth and final topic describes how that field is calculated when it is not supplied by the user.
ACL2 must sometimes decide which of two terms is syntactically simpler. It uses a total ordering on terms, called the ``term order.'' Under this ordering constants such as '(a b c) are simpler than
terms containing variables such as x and (+ 1 x). Terms containing variables are ordered according to how many occurrences of variables there are. Thus x and (+ 1 x) are both simpler than (cons x x)
and (+ x y). If variable counts do not decide the order, then the number of function applications are tried. Thus (cons x x) is simpler than (+ x (+ 1 y)) because the latter has one more function
application. Finally, if the number of function applications do not decide the order, a lexicographic ordering on Lisp objects is used. See term-order for details.
When the loop-stopping algorithm is controlling the use of permutative :rewrite rules it allows term1 to be moved leftward over term2 only if term1 is smaller, in a suitable sense. Note: The sense
used in loop-stopping is not the above explained term order but a more complicated ordering described below. The use of a total ordering stops rules like commutativity from looping indefinitely
because it allows (+ b a) to be permuted to (+ a b) but not vice versa, assuming a is smaller than b in the ordering. Given a set of permutative rules that allows arbitrary permutations of the tips
of a tree of function calls, this will normalize the tree so that the smallest argument is leftmost and the arguments ascend in the order toward the right. Thus, for example, if the same argument
appears twice in the tree, as x does in the binary-+ tree denoted by the term (+ a x b x), then when the allowed permutations are done, all occurrences of the duplicated argument in the tree will be
adjacent, e.g., the tree above will be normalized to (+ a b x x).
Suppose the loop-stopping algorithm used term order, as noted above, and consider the binary-+ tree denoted by (+ x y (- x)). The arguments here are in ascending term order already. Thus, no
permutative rules are applied. But because we are inside a +-expression it is very convenient if x and (- x) could be given virtually the same position in the ordering so that y is not allowed to
separate them. This would allow such rules as (+ i (- i) j) = j to be applied. In support of this, the ordering used in the control of permutative rules allows certain unary functions, e.g., the
unary minus function above, to be ``invisible'' with respect to certain ``surrounding'' functions, e.g., + function above.
Briefly, a unary function symbol fn1 is invisible with respect to a function symbol fn2 if fn2 belongs to the value of fn1 in invisible-fns-table; also see set-invisible-fns-table, which explains its
format and how it can be set by the user. Roughly speaking, ``invisible'' function symbols are ignored for the purposes of the term-order test.
Consider the example above, (+ x y (- x)). The translated version of this term is (binary-+ x (binary-+ y (unary-- x))). The initial invisible-fns-table makes unary-- invisible with repect to
binary-+. The commutativity rule for binary-+ will attempt to swap y and (unary-- x) and the loop-stopping algorithm is called to approve or disapprove. If term order is used, the swap will be
disapproved. But term order is not used. While the loop-stopping algorithm is permuting arguments inside a binary-+ expression, unary-- is invisible. Thus, insted of comparing y with (unary-- x), the
loop-stopping algorithm compares y with x, approving the swap because x comes before y.
Here is a more precise specification of the total order used for loop-stopping with respect to a list, fns, of functions that are to be considered invisible. Let x and y be distinct terms; we specify
when ``x is smaller than y with respect to fns.'' If x is the application of a unary function symbol that belongs to fns, replace x by its argument. Repeat this process until the result is not the
application of such a function; let us call the result x-guts. Similarly obtain y-guts from y. Now if x-guts is the same term as y-guts, then x is smaller than y in this order iff x is smaller than y
in the standard term order. On the other hand, if x-guts is different than y-guts, then x is smaller than y in this order iff x-guts is smaller than y-guts in the standard term order.
Now we may describe the loop-stopping algorithm. Consider a rewrite rule with conclusion (equiv lhs rhs) that applies to a term x in a given context; see rewrite. Suppose that this rewrite rule has a
loop-stopper field (technically, the :heuristic-info field) of ((x1 y1 . fns-1) ... (xn yn . fns-n)). (Note that this field can be observed by using the command :pr with the name of the rule; see pr
.) We describe when rewriting is permitted. The simplest case is when the loop-stopper list is nil (i.e., n is 0); in that case, rewriting is permitted. Otherwise, for each i from 1 to n let xi' be
the actual term corresponding to the variable xi when lhs is matched against the term to be rewritten, and similarly correspond yi' with y. If xi' and yi' are the same term for all i, then rewriting
is not permitted. Otherwise, let k be the least i such that xi' and yi' are distinct. Let fns be the list of all functions that are invisible with respect to every function in fns-k, if fns-k is
non-empty; otherwise, let fns be nil. Then rewriting is permitted if and only if yi' is smaller than xi' with respect to fns, in the sense defined in the preceding paragraph.
It remains only to describe how the loop-stopper field is calculated for a rewrite rule when this field is not supplied by the user. (On the other hand, to see how the user may specify the
:loop-stopper, see rule-classes.) Suppose the conclusion of the rule is of the form (equiv lhs rhs). First of all, if rhs is not an instance of the left hand side by a substitution whose range is a
list of distinct variables, then the loop-stopper field is nil. Otherwise, consider all pairs (u . v) from this substitution with the property that the first occurrence of v appears in front of the
first occurrence of u in the print representation of rhs. For each such u and v, form a list fns of all functions fn in lhs with the property that u or v (or both) appears as a top-level argument of
a subterm of lhs with function symbol fn. Then the loop-stopper for this rewrite rule is a list of all lists (u v . fns). | {"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/2/1/language/acl2-html-docs/LOOP-STOPPER.html","timestamp":"2014-04-18T06:49:53Z","content_type":null,"content_length":"11062","record_id":"<urn:uuid:329a7ffb-0ce0-4013-a862-6f2bbef6d7d4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00227-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chapter 2 Skill Building
Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support.
Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with
the best way to expand their education.
Below is a small sample set of documents:
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewVector Addition and SubtractionIn general it is best to conceptualize vectors as arrows in space, and then to make calculations with them using their
components. (You must first specify a coordinate system in
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewFree-Body Diagrams: IntroductionLearning Goal: To learn to draw free-body diagrams for various real-life situations. Imagine that you are given a description of
a real-life situation and are asked to analyze
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewApplying Newton's 2nd LawLearning Goal: To learn a systematic approach to solving Newton's 2nd law problems using a simple example. Once you have decided to
solve a problem using Newton's 2nd law, there are s
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewProjectile Motion TutorialLearning Goal: Understand how to apply the equations for 1-dimensional motion to the y and x directions separately in order to derive
standard formulae for the range and height of a
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewUniform Circular MotionLearning Goal: To find the velocity and acceleration vectors for uniform circular motion and to recognize that this acceleration is the
centripetal acceleration. Suppose that a particle
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewNewton's 3rd Law DiscussedLearning Goal: To understand Newton's 3rd law, which states that a physical interaction always generates a pair of forces on the two
interacting bodies. In Principia, Newton wrote: T
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewConservation of Momentum in Inelastic CollisionsLearning Goal: To understand the vector nature of momentum in the case in which two objects collide and stick
together. In this problem we will consider a colli
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewPSS 10.1: College BoredLearning Goal: To practice Problem-Solving Strategy 10.1 for problems involving conservation of mechancial energy. A bored college
student decides to try bungee jumping. He attaches an
Northeastern - LNI - 150
LNI U150 Italian Culture - Fall 2007 Reflection on the main achievements of Italian Renaissance.1) Humanism was the overriding philosophy of the Florentine Renaissance. Based on what we have
discussed in class, research and answer these questions: H
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewIntroduction to Potential EnergyLearning Goal: Understand that conservative forces can be removed from the work integral by incorporating them into a new form
of energy called potential energy that must be ad
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewA Matter of Some GravityLearning Goal: To understand Newton's law of gravitation and the distinction between inertial and gravitational masses. In this problem,
you will practice using Newton's law of gravita
Stevens - PEP - 111
MasteringPhysics: Assignment Print ViewPSS 13.1: Let's Go for a SpinLearning Goal: To practice Problem-Solving Strategy 13.1 for problems involving rotational dynamics. A uniform board of mass and
length is pivoted on one end and is supported in t
N. Illinois - MEE - 350
Thermodynamics Review sheet1-1 Thermodynamics and Energy Thermodynamics can be defined as the science of energy Energy can be viewed as the ability to cause changes Conservations of energy principle
Energy can change from one form to another but th
Carnegie Mellon - ACC - 70122
70-122 Fall 2007 Introduction to Accounting Professor NanName_ Section_Final Exam (Version A Solution)Instructions: This is a close-book, close-notes exam. You are allowed to use a non-programming
calculator. There are two required parts with t
Carnegie Mellon - ACC - 70122
70-122 Fall 2007 Introduction to Accounting Professor NanName_ Section_Mid-term Exam 1-Solution (Version A)Instructions: This is a close-book, close-notes exam. You are allowed to use a
non-programming calculator. There are two parts with total
N. Illinois - MEE - 350
Thermodynamics Chapter 4 Review4.1 Moving Boundary WorkMoving Boundary Work work done through expansion and compression - analysis is done under quasi-equilibrium status - the differential work done
in thei matter is as follows:Wb Fds PAds PdV =
N. Illinois - MEE - 350
Thermodynamics Chapter 6 Review Notes - The second Law of Thermodynamics6.1 Introduction to the Second Law- energy always flows from high potential to low potential - energy has a quality not just a
quantitl - the second law is used to calculate t
N. Illinois - MEE - 350
Thermodynamics Notes Ch 2: 2-1 Intorduction: - energy cannot be created or destroyed by a process, it can only be converted from one form to another electricity is high quality energy and heat is low
quality energy2-2 Forms of energy - the sum of a
N. Illinois - MEE - 350
Thermodynamics Chapter 7 Review Notes: Entropy7.1 Entropy- the second law of thermodynamics leads to inequalities, namely, an irreversible prosess is less efficient than a reversible one. Q Clausius
Inequality: 0 T - this indicates that the cyclic
N. Illinois - MEE - 350
Chapter 5 Review Notes:5.1 Conservation of mass- For a closed system: the mass must remain constant - For a Control Volume mass can enter and leave the system so we must keep track of how much mass
is entering and leaving the system Mass and Volu
Philadelphia College of Osteopathic Medicine - PHYS - 500
Carnegie Mellon - ACC - 70122
70-122 Fall 2007 Introduction to Accounting Professor NanName_ Section_Mid-term Exam 2 Version A SolutionInstructions: This is a close-book, close-notes exam. You are allowed to use a non-programming
calculator. There are two required parts wit
Carnegie Mellon - ACC - 70122
70-122 Fall 2007 Introduction to Accounting Professor Nan Quiz 1Name_ Section_Instructions: This is a close-book, close-notes quiz. You are allowed to use a non-programming calculator. There are two
required parts with totally 100 points (Part I
Carnegie Mellon - ACC - 70122
70-122 Fall 2007 Introduction to Accounting Professor Nan Quiz 2Name_ Section_Instructions: This is a close-book, close-notes quiz. You are allowed to use a non-programming calculator. There are two
required parts with totally 100 points (Part I
Carnegie Mellon - ACC - 70122
70-122 Fall 2007 Introduction to Accounting Professor Nan Quiz 3Name_ Section_Instructions: This is a close-book, close-notes quiz. You are allowed to use a non-programming calculator. There are two
required parts with totally 100 points (Part I
Carnegie Mellon - ACC - 70122
70-122 Fall 2007 Introduction to Accounting Professor Nan Quiz 4Name_ Section_Instructions: This is a close-book, close-notes quiz. You are allowed to use a non-programming calculator. There are two
required parts with totally 100 points (Part I
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Philadelphia College of Osteopathic Medicine - PHYS - 500
Drexel - MATH - 122
UConn - ECON - 253
Economics 253W: Public Finance Spring 2008 Tentative Syllabus Department of Economics University of Connecticut Professor: Dhammika Dharmapala Office: Monteith 306 Office Hours: Tues, Thurs 11-12
Email: dhammika.dharmapala@uconn.edu Outline Economics
Drexel - MATH - 122
Drexel - MATH - 122
Michigan State University - CE - 305
Michigan State University - CE - 305
Michigan State University - CE - 305
Michigan State University - CE - 305
Michigan State University - CE - 305
Michigan State University - CE - 305
Michigan State University - CE - 305
Michigan State University - ME - 361
Michigan State University - ME - 361
Michigan State University - ME - 361
Michigan State University - ME - 361
Michigan State University - ME - 361 | {"url":"http://www.coursehero.com/file/98410/Chapter-2-Skill-Building/","timestamp":"2014-04-20T08:14:57Z","content_type":null,"content_length":"69268","record_id":"<urn:uuid:bbad1982-a6f2-4e1e-ae87-b3dea1ef5382>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tractable Answer-Set Programming with Weight Constraints: Bounded Treewidth Is not Enough
Last modified: 2010-04-27
Cardinality constraints or, more generally, weight constraints are well recognized as an important extension of answer-set programming. Clearly, all common algorithmic tasks related to programs with
cardinality or weight constraints (PWCs) - like checking the consistency of a program - are intractable. Many intractable problems in the area of knowledge representation and reasoning have been
shown to become tractable if the treewidth of the programs or formulas under consideration is bounded by some constant. The goal of this paper is to apply the notion of treewidth to PWCs and to
identify tractable fragments. It will turn out that the straightforward application of treewidth to PWCs does not suffice to obtain tractability. However, by imposing further restrictions,
tractability can be achieved. | {"url":"http://aaai.org/ocs/index.php/KR/KR2010/paper/viewPaper/1368","timestamp":"2014-04-20T03:12:16Z","content_type":null,"content_length":"12549","record_id":"<urn:uuid:063667e8-c7f3-40e9-8771-648ec27fc1f8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
Discrete Rating System
The Discrete Rating System
I think it was in the late 1980s that I began to contemplate the basic idea behind this methodology. From my experience, it seemed that many systems which rate athletic teams tend to be overly
influenced by large score differences. However, if a team loses a game it wasn't suppose to, the overall impression about that team's ability should only decrease slightly, unless the outcome is
because a key player was injured, or some other similar piece of data that most numerical based systems can not take into account.
So I postulated that each team should be assigned an integer rating, and a predicted score differential would be generated before each game from those ratings, and if the actual result of the game
was close to that prediction, then the ratings for those two teams would be considered reasonably accurate. If not, they would be modified by only +1/-1, depending upon if the team did better/worse
than expected. I experimented with this idea, first analyzing professional football data from previous seasons of the NFL. I later switched my focus to college football, utilizing all the games
between what I considered the major college programs from 1965 to 1990 to determine what the threshold should be when determining if the actual score was too far away from the prediction, and how
many points each integer increment in a team's rating should be worth.
For example, let's take the game where Ohio State traveled to Michigan for the last game of each team's regular season in 2003. Michigan was rated as an 8, and Ohio State was rated as a 5. Assuming
each integer represented a 3 point measure of that team's abilities, and then including the 3 point home field advantage, Michigan would be favored by this system to win by that game by 12 points,
and the actual score was 35-21. Given that result, it is fairly obvious that neither team's rating should be updated, as that prediction's level of accuracy is very reasonable.
Using the same 3 point per integer rating value (and "secret" threshold), this system usually predicts about 70-78% of the entire season's games correctly, using the final ratings for a team from one
year to begin the next. (Also, the home field advantage is 3.0001, so that a visiting team must have a rating that is 2 larger than the home team to be favored to win.) Of course, it takes a few
years for the ratings to migrate towards where they should be, but by starting with the games in 1965, I don't think there would be much difference in the initial team ratings for 2004 if I went back
and redid them by using the data from a few years earlier or later than 1965. (I hope to investigate this sort or "retroactive study" if and when time permits.)
Because many teams end up with the same rating, it was previously hard to consider how to incorporate these ratings to uniquely rank the teams like the two human polls (AP and USA Today) do. However,
I had an idea come to me on the weekend of 10/23/2004, and so I tried it and am reasonably happy that it does reflect the relative strength of the teams as well as being a reasonable measure for a
team's level of success that year. Basically, it simply uses the team's integer rating, subtracts the square root of the score difference for each loss, (half a point for a tie, when considering data
before 1996 when overtime was enacted to break such ties) and then multiplies this quantity by 3 (the points per integer rating increment) and finally adds 100 to place most team's ratings above
zero. (If a tie occurs using this ranking strategy, there is a simply, similar methodology to employ that only uses that year's games to determine a team's rating, and that value, after being divided
by 100, is adding to the two teams whose ratings are equal, to break the tie.) The ranking for NCAA Division 1-A college football can be found here.
Back to Prof. Trono's Home page. (This page last modified April 10, 2007 .) | {"url":"http://academics.smcvt.edu/jtrono/DiscreteRatings/Discrete.htm","timestamp":"2014-04-18T05:31:02Z","content_type":null,"content_length":"5354","record_id":"<urn:uuid:f0f7c6ba-5e5c-4439-9008-b9700fbafe19>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Path of a Critical Thinker (part two)
I began this series began by discussing logic, because good thinking - critical thinking - is a skill one must develop, much like learning to play piano. Learning logic is to the thinker what
learning chords and finger placement is to the pianist. Critical thinking can be one of the hardest tasks, for the topics and issues to which critical thinking must be applied are oftentimes the same
topics and issues that make us the most emotional; moreover, we are forced to find balance between two seemingly opposing inclinations, as Carl Sagan explains:
"It seems to me what is called for is an exquisite balance between two conflicting needs: the most skeptical scrutiny of all hypotheses that are served up to us and at the same time a great
openness to new ideas … If you are only skeptical, then no new ideas make it through to you … On the other hand, if you are open to the point of gullibility and have not an ounce of skeptical
sense in you, then you cannot distinguish the useful ideas from the worthless ones."
As we continue on in this series, I must note that, even though it may be important, the path of a critical thinker is often a difficult road. Maybe that's why most people don't bother to attempt it,
and choose instead to remain with their comfortable and familiar beliefs.
• Inductive vs. Deductive Arguments
Two types of arguments * - inductive and deductive - are perhaps best distinguished this way: a deductive argument is one in which, if the argument is sound, it's impossible for the
conclusion to be false, whereas an inductive argument is one in which, if the argument is sound, it's merely improbable for the conclusion to be false. Put another way, a deductive argument
deals with possibility, whereas an inductive argument deals with probability. Consider this argument:
Premise 1: One becomes a Sith Lord only by utilizing the power of Dark Side of the Force. Premise 2: Benedict is a Sith Lord. __________________ ∴ Benedict utilizes the power of the Dark
Side of the Force.
This is a deductive argument because, if the argument is sound - i.e., the argument is valid (has proper form) and all the premises are true - then the conclusion must be true, and it's
impossible for the conclusion to be false. Now here's an example of an inductive argument:
Premise 1: Sith Lords have only ever used red-bladed lightsabers. Premise 2: Benedict is a Sith Lord. __________________ ∴ Benedict uses a red-bladed lightsaber.
This argument is inductive because, if the argument is sound, then the conclusion is only probably true. After all, Benedict might decide to break tradition and start using a green
lightsaber. We can't say for certain that he won't. The best we can say is, since this is how things have been for a certain period of time, it's probable that it won't be much different now
with Benedict's lightsaber. An interesting aspect of inductive arguments is that, with each argument, the level of probability won't be the same. The argument that says, "the sun has risen
each day for as long as the earth has been orbiting the sun; therefore, the sun will rise again tomorrow," has a much greater probability than the argument that says, "Mr. Jones has gone on
his morning walk at 6 AM every day for the past 40 years; therefore, Mr. Jones will go on his walk again tomorrow morning." There are more factors which could render the conclusion false in
the latter argument than in the former, thus making the former argument's conclusion more probable. In some cases, an inductive argument might even be completely wrong. For example, If I flip
a coin and it lands "heads" seven times in a row, I could make an inductive argument saying the coin will land "heads" again, because that's how it's landed each time in the past. We'll come
back to this in more detail when we get to the section on fallacies. All I'll say here is that, while this coin flip argument would technically be inductive, it is definitely not a sound
argument. I find it to be better form to admit the probability in the conclusion of an inductive argument. For example, in my argument for Mr. Jones' morning walk, my conclusion would look
something like this: "Mr. Jones will probably go on his morning walk tomorrow morning." I may even word it like this: "Mr. Jones will probably go on his morning walk, provided nothing out of
the ordinary occurs."
[* There's also abductive reasoning, which I'll discuss later.]
• The Necessity of Logic
I've written about this on my blog before. Perhaps I should have started the series with this section, given that there are people out there who deny the value of logic for several reasons
(whether miseducation or religious bias or some other reason). A key defense of logic is that logic cannot be denied without using logic and assuming the rules of logic are applicable. A
person can say, "logic is bunk," but as soon as she tries to explain why it's bunk, she's appealing to the rules of logic. Likewise (and some people use this as a way of dismissing logic),
one cannot prove that logic is "logical" without using logic to prove it and thus using circular reasoning. They're right, by the way: we can't justify logic without breaking logic's own
rules. Still, though we cannot justify using logic without appealing to logic (leading to circular reasoning), the presumption of logic is necessary for us to understand anything at all. If
we deny the rules of logic, then we must admit that statements like "there is a god" and "there is not a god" are both true – and false. "Obama is a liberal" and "Obama is not a liberal" are
equally true - and equally false. You are reading this article and you are not reading this article. Charles Darwin is dead, and he is alive! And anyone who agrees with this is correct – and
incorrect. Talk about senseless propositions! Without logic, there can be no actual communication, and no conveyance of any kind of truth. Logic is the most foundational assumption one can
Previously: part one | Next: part three
— Dead-Logic
[Read the entire series: The Path of a Critical Thinker]
No comments: | {"url":"http://dead-logic.blogspot.com/2012/02/path-of-critical-thinker-part-two.html","timestamp":"2014-04-21T12:28:42Z","content_type":null,"content_length":"166191","record_id":"<urn:uuid:5e4f8e06-0abf-4073-bd8f-1b79de6372db>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00206-ip-10-147-4-33.ec2.internal.warc.gz"} |
Charles Tweedie, M.A., B.Sc., F.R.S.E.
by E M Horsburgh
The death has occurred in Edinburgh, in September, after a long illness, of Charles Tweedie. Readers of the Gazette will remember him as the author of two brilliant articles in its pages, the one "A
Study of the Life and Writings of Colin Maclaurin" and the other the "Life of James Stirling, the Venetian."
Mr Tweedie was born in 1868. He was a member of a Scottish Border family, and was educated in Edinburgh. He became a student of Edinburgh University in the days of Tait and Chrystal. In 1890 he
graduated M.A. with First-Class Honours in Mathematics and Natural Philosophy. In the same year he took the degree B.Sc. After this he continued his mathematical studies at the Universities of
Göttingen and Berlin. He then returned to Edinburgh University, where he became the Lecturer in Pure Mathematics under Professor Chrystal. He held this appointment for over twenty years.
Mr Tweedie was a man of wide attainments. He was a good linguist, and was well read in the works of the geometers of Britain, France, Italy, and Germany. He wrote a large number of original papers.
He contributed a considerable number of articles to the Proceedings of the Edinburgh Mathematical Society. Of this Society he was a Past-President. He was the author of certain papers in the Proc.
Roy. Soc. Edinburgh, and he produced many mathematical articles for an encyclopaedia. He was keenly interested in the school teaching of mathematics, and, in conjunction with a friend, wrote a
well-known Trigonometry. For a long time he was a University Inspector of Schools under the Scottish Education Department.
Owing to failing health he was obliged to resign his academic duties some years ago, but was able, in the face of overwhelming difficulties, due to illness, to pursue a long-cherished scheme of
writing on the Scottish mathematicians particularly Maclaurin and Stirling. Some of these papers appeared in vols. viii., ix. and x. of the Gazette - two have been mentioned already - others were
published in the Proceedings of the Royal Society of Edinburgh, and in the Proceedings of the Edinburgh Mathematical Society. In continuation of this line of research he published in 1922 a volume
entitled James Stirling, a Sketch of his Life and Works, along with his Scientific Correspondence. His last publication dealt with Gray, the arithmetician, the Scottish Cocker.
This obituary was written by E M Horsburgh and published in The Mathematical Gazette. The reference is:
E M Horsburgh, Obituary: Charles Tweedie, M.A., B.Sc., F.R.S.E., The Mathematical Gazette 12 (179) (1925), 523. | {"url":"http://www-gap.dcs.st-and.ac.uk/~history/Obits2/Tweedie_Gazette_obituary.html","timestamp":"2014-04-19T22:08:41Z","content_type":null,"content_length":"3245","record_id":"<urn:uuid:93866bac-7af3-43c6-90f0-04e543e41812>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
The moment-generating function of the minimum of bivariate normal random variables
Results 1 - 10 of 14
- In DAC , 2004
"... Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both
correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is emplo ..."
Cited by 125 (4 self)
Add to MetaCart
Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both
correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities like arrival times and required arrival times through
the timing graph in this canonical form. At the end of the statistical timing, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities
can then be targeted by manual or automatic optimization methods to improve the robustness of the design. The statistical timing analysis is incremental, and is therefore suitable for use in the
inner loop of physical synthesis or other optimization programs. The second novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in
CPU time, the probability of each edge or node of the timing graph being critical is computed. These criticality probabilities provide additional useful diagnostics to synthesis, optimization, test
generation and path enumeration programs. Numerical results are presented on industrial ASIC chips with over two million logic gates. 1.
- In DAC , 2005
"... The impact of parameter variations on timing due to process and environmental variations has become significant in recent years. With each new technology node this variability is becoming more
prominent. In this work, we present a general Statistical Timing Analysis (STA) framework that captures spa ..."
Cited by 27 (6 self)
Add to MetaCart
The impact of parameter variations on timing due to process and environmental variations has become significant in recent years. With each new technology node this variability is becoming more
prominent. In this work, we present a general Statistical Timing Analysis (STA) framework that captures spatial correlations between gate delays. Our technique does not make any assumption about the
distributions of the parameter variations, gate delay and arrival times. We propose a Taylor-series expansion based polynomial representation of gate delays and arrival times which is able to e#
ectively capture the non-linear dependencies that arise due to increasing parameter variations. In order to reduce the computational complexity introduced due to polynomial modeling during STA, we
propose an e#cient linear-modeling driven polynomial STA scheme. On an average the degree-2 polynomial scheme had a 7.3x speedup as compared to Monte Carlo with 0.049 units of rms error w.r.t Monte
Carlo. Our technique is generic and can be applied to arbitrary variations in the underlying parameters.
- In ICCAD , 2005
"... Abstract — As technology scales into the sub-90nm domain, manufacturing variations become an increasingly significant portion of circuit delay. As a result, delays must be modeled as statistical
distributions during both analysis and optimization. This paper uses incremental, parametric statistical ..."
Cited by 25 (1 self)
Add to MetaCart
Abstract — As technology scales into the sub-90nm domain, manufacturing variations become an increasingly significant portion of circuit delay. As a result, delays must be modeled as statistical
distributions during both analysis and optimization. This paper uses incremental, parametric statistical static timing analysis (SSTA) to perform gate sizing with a required yield target. Both
correlated and uncorrelated process parameters are considered by using a first-order linear delay model with fitted process sensitivities. The fitted sensitivities are verified to be accurate with
circuit simulations. Statistical information in the form of criticality probabilities are used to actively guide the optimization process which reduces run-time and improves area and performance. The
gate sizing results show a significant improvement in worst slack at 99.86 % yield over deterministic optimization. I.
- ECB WORKING PAPER SERIES NO 1042 , 2009
"... ..."
"... Statistical static timing analysis (SSTA) is emerging as a solution for predicting the timing characteristics of digital circuits under process variability. For computing the statistical max of
two arrival time probability distributions, existing analytical SSTA approaches use the results given by C ..."
Cited by 3 (0 self)
Add to MetaCart
Statistical static timing analysis (SSTA) is emerging as a solution for predicting the timing characteristics of digital circuits under process variability. For computing the statistical max of two
arrival time probability distributions, existing analytical SSTA approaches use the results given by Clark in [8]. These analytical results are exact when the two operand arrival time distributions
have jointly Gaussian distributions. Due to the nonlinear max operation, arrival time distributions are typically skewed. Furthermore, nonlinear dependence of gate delays and non-gaussian process
parameters also make the arrival time distributions asymmetric. Therefore, for computing the max accurately, a new approach is required that accounts for the inherent skewness in arrival time
distributions. In this work, we present analytical solution for computing the statistical max operation. 1 First, the skewness in arrival time distribution is modeled by matching its first three
moments to a so-called skewed normal distribution. Then by extending Clark’s work to handle skewed normal distributions we derive analytical expressions for computing the moments of the max. We then
show using initial simulations results that using a skewness based max operation has a significant potential to improve the accuracy of the statistical max operation in SSTA while retaining its
computational efficiency. 1.
"... Abstract—This paper quantifies the approximation error when results obtained by Clark (Oper. Res., vol. 9, p. 145, 1961) are employed to compute the maximum (max) of Gaussian random variables,
which is a fundamental operation in statistical timing. We show that a finite lookup table can be used to s ..."
Add to MetaCart
Abstract—This paper quantifies the approximation error when results obtained by Clark (Oper. Res., vol. 9, p. 145, 1961) are employed to compute the maximum (max) of Gaussian random variables, which
is a fundamental operation in statistical timing. We show that a finite lookup table can be used to store these errors. Based on the error computations, approaches to different orderings for pairwise
max operations on a set of Gaussians are proposed. Experimental results show accuracy improvements in the computation of the max of multiple Gaussians, in comparison to the traditional approach. In
addition, we present an approach to compute the tightness probabilities of Gaussian random variables with dynamic runtime-accuracy tradeoff options. We replace required numerical computations for
their estimations by closed form expressions based on Taylor series expansion that involve table lookup and a few fundamental arithmetic operations. Experimental results demonstrate an average
speedup of 2 × using our approach for computing the maximum of two Gaussians, in comparison to the traditional approach, without any accuracy penalty. Index Terms—Computer-aided design (CAD),
Gaussian approximation, statistical timing, very large-scale integration
"... Skew-normal distributions extend the normal distributions through a shape parameter α; they reduce to the standard normal random variable Z for α = 0 and to |Z | or the half-normal when α → ∞.
In spite of the skewness they (dis)inherit some properties of normal random variables: Square of a skew-nor ..."
Add to MetaCart
Skew-normal distributions extend the normal distributions through a shape parameter α; they reduce to the standard normal random variable Z for α = 0 and to |Z | or the half-normal when α → ∞. In
spite of the skewness they (dis)inherit some properties of normal random variables: Square of a skew-normal random variable has a chi-square distribution with one degree of freedom, but the sum of
two independent skew-normal random variables is not generally skew-normal. We review and explain this lack of closure and other properties of skew-normal random variables via their representations as
a special linear combination of independent normal and half-normal random variables. Analogues of such representations are used to define multivariate skew-normal distributions with a closure
property similar to that of multivariate normal distributions.
"... A central tenet of economics is that people respond to incentives. While an appropriately crafted incentive scheme can achieve the second-best optimum in the presence of moral hazard, the
principal must be very well informed about the environment (e.g. the agent’s preferences and the production tech ..."
Add to MetaCart
A central tenet of economics is that people respond to incentives. While an appropriately crafted incentive scheme can achieve the second-best optimum in the presence of moral hazard, the principal
must be very well informed about the environment (e.g. the agent’s preferences and the production technology) in order to achieve this. Indeed it is often suggested that incentive schemes can be
gamed by an agent with superior knowledge of the environment, and furthermore that lack of transparency about the nature of the incentive scheme can reduce gaming. We provide a formal theory of these
phenomena. We show that random or ambiguous incentive schemes induce more balanced efforts from an agent who performs multiple tasks and who is better informed about the environment than the
principal is. On the other hand, such random schemes impose more risk on the agent per unit of effort induced. By identifying settings in which random schemes are especially effective in inducing
balanced efforts, we show that, if tasks are sufficiently complementary for the principal, random incentive schemes can dominate the best deterministic scheme. (JEL L13, L22)
, 2008
"... A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of
the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal co ..."
Add to MetaCart
A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the
control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal coefficients. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC
simulation examples from Bayesian inference applications demonstrate that the resulting variance reduction can be quite dramatic.
, 2007
"... We propose and compare procedures for inference about mean squared prediction error (MSPE) when comparing a benchmark model against a small number of alternatives that nest the benchmark. We
evaluate two procedures that adjust MSPE differences in accordance with Clark and West (2007); one examines t ..."
Add to MetaCart
We propose and compare procedures for inference about mean squared prediction error (MSPE) when comparing a benchmark model against a small number of alternatives that nest the benchmark. We evaluate
two procedures that adjust MSPE differences in accordance with Clark and West (2007); one examines the maximum t-statistic, the other computes a chi-squared statistic. We also examine two procedures
that do not adjust the MSPE differences: a chi-squared statistic, and White’s (2000) reality check. In our simulations, the two statistics that adjust MSPE differences have most accurate size, and
the procedure that looks at the maximum t-statistic has best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=764798","timestamp":"2014-04-18T13:37:01Z","content_type":null,"content_length":"38191","record_id":"<urn:uuid:998e8c8c-7a7a-485a-ade5-d61dabe984d4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
This Article
Bibliographic References
Add to:
Accelerating a 3D Finite-Difference Earthquake Simulation with a C-to-CUDA Translator
May/June 2012 (vol. 14 no. 3)
pp. 48-59
ASCII Text x
Didem Unat, Jun Zhou, Yifeng Cui, Scott B. Baden, Xing Cai, "Accelerating a 3D Finite-Difference Earthquake Simulation with a C-to-CUDA Translator," Computing in Science and Engineering, vol. 14,
no. 3, pp. 48-59, May/June, 2012.
BibTex x
@article{ 10.1109/MCSE.2012.44,
author = {Didem Unat and Jun Zhou and Yifeng Cui and Scott B. Baden and Xing Cai},
title = {Accelerating a 3D Finite-Difference Earthquake Simulation with a C-to-CUDA Translator},
journal ={Computing in Science and Engineering},
volume = {14},
number = {3},
issn = {1521-9615},
year = {2012},
pages = {48-59},
doi = {http://doi.ieeecomputersociety.org/10.1109/MCSE.2012.44},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote x
TY - MGZN
JO - Computing in Science and Engineering
TI - Accelerating a 3D Finite-Difference Earthquake Simulation with a C-to-CUDA Translator
IS - 3
SN - 1521-9615
EPD - 48-59
A1 - Didem Unat,
A1 - Jun Zhou,
A1 - Yifeng Cui,
A1 - Scott B. Baden,
A1 - Xing Cai,
PY - 2012
KW - Code generation
KW - optimization
KW - emerging technologies
KW - Earth and atmospheric sciences
KW - scientific computing
VL - 14
JA - Computing in Science and Engineering
ER -
GPUs provide impressive computing power, but GPU programming can be challenging. Here, an experience in porting real-world earthquake code to Nvidia GPUs is described. Specifically, an
annotation-based programming model, called Mint, and its accompanying source-to-source translator are used to automatically generate CUDA source code and simplify the exploration of performance
1. D. Unat, X. Cai, and S.B. Baden, "Mint: Realizing CUDA Performance in 3D Stencil Methods with Annotated C," Proc. Int'l Conf. Supercomputing, ACM, 2011, pp. 214–224.
2. J. Virieux, "P-SV Wave Propagation in Heterogeneous Media: Velocity-Stress Finite-Difference Method," Geophysics, vol. 51, no. 4, 1986, pp. 889–901.
3. P. Moczo et al., "The Finite-Difference and Finite-Element Modeling of Seismic Wave Propagation and Earthquake Motion," Acta Physica Slovaca, vol. 57, no. 2, 2007, pp. 177–406.
4. T. Furumura and L. Chen, "Parallel Simulation of Strong Ground Motions During Recent and Historical Damaging Earthquakes in Tokyo, Japan," Parallel Computation, vol. 31, no. 2, 2005, pp. 149–165.
5. Y. Cui et al., "Scalable Earthquake Simulation on Petascale Supercomputers," Proc. 2010 ACM/IEEE Conf. Supercomputing, IEEE CS, 2010, pp. 1–20.
6. K.B. Olsen, "Simulation of Three-Dimensional Wave Propagation in the Salt Lake Basin," Bulletin Seismological Soc. of Am., vol. 85, no. 6, 1995, pp. 1688–1710.
7. K. Olsen et al., "Strong Shaking in Los Angeles Expected from Southern San Andreas Earthquake," Geophysical Research Letters, vol. 33, no. 7, 2006, pp. 2–5.
8. L. Dalguer and S.M. Day, "Staggered-Grid Split-Node Method for Spontaneous Rupture Simulation," J. Geophysical Research, vol. 112, no. B02302, 2007; doi:10.1029/2006JB004467.
9. A. Simone and S. Hestholm, "Instabilities in Applying Absorbing Boundary Conditions to High-Order Seismic Modeling Algorithms," Geophysics, vol. 63, no. 3, 1998, pp. 1017–1023.
10. J. Nickolls et al., "Scalable Parallel Programming with CUDA," Proc. Siggraph, 2008, ACM, pp. 1–14.
11. P. Micikevicius, "3D Finite Difference Computation on GPUs Using CUDA," Proc. 2nd Workshop General Purpose Processing on Graphics Processing Units, ACM, 2009, pp. 79–84.
12. D. Michéa and D. Komatitsch, "Accelerating a Three-Dimensional Finite-Difference Wave Propagation Code Using GPU Graphics Cards," Geophysical J. Int'l, vol. 182, no. 1, 2010, pp. 389–402.
13. S. Lee and R. Eigenmann, "OpenMPC: Extended OpenMP Programming and Tuning for GPUs," Proc. 2010 ACM/IEEE Conf. Supercomputing, IEEE CS, 2010, pp. 1–11.
14. A. Danalis et al., "The Scalable Heterogeneous Computing (SHOC) Benchmark Suite," Proc. 3rd Workshop on General-Purpose Computation on Graphics Processing Units, 2010, pp. 63–74.
15. V. Volkov and J.W. Demmel, "Benchmarking GPUs to Tune Dense Linear Algebra," Proc. 2008 ACM/IEEE Conf. Supercomputing, IEEE Press, 2008, pp. 31:1–31:11.
1. P. Micikevicius, "3D Finite Difference Computation on GPUs Using CUDA," Proc. 2nd Workshop General Purpose Processing on Graphics Processing Units, ACM, 2009, pp. 79–84.
2. D. Michéa and D. Komatitsch, "Accelerating a Three-Dimensional Finite-Difference Wave Propagation Code Using GPU Graphics Cards," Geophysical J. Int'l, vol. 182, no. 1, 2010, pp. 389–402.
3. Nvidia, Tuning CUDA Applications for Nvidia Fermi, v1.2, application note, 2010.
4. S. Song et al., Seismic Wave Propagation Simulation Using Support Operator Method on Multi-GPU System, tech. report, Minnesota Supercomputing Inst., Univ. Minnesota, 2010; http://
static.msi.umn.edu/rreports/201134.pdf .
5. P.E. Geoffrey, M.D. Steven, and J.-B. Minster, "A Support-Operator Method for Viscoelastic Wave Modeling in 3-D Heterogeneous Media," Geophysical J. Int'l, vol. 172, no. 1, 2008, pp. 331–344.
6. J. Tromp, D. Komatitsch, and Q. Liu, "Spectral-Element and Adjoint Methods in Seismology," Comm. Computational Physics, vol. 3, no. 1, 2008, pp. 1–32.
7. D. Komatitsch, D. Michéa, and G. Erlebacher, "Porting a High-Order Finite-Element Earthquake Modeling Application to Nvidia Graphics Cards using CUDA," J. Parallel and Distributed Computing, vol.
69, no. 5, 2009, pp. 451–460.
8. H. Chafi et al., "A Domain-Specific Approach to Heterogeneous Parallelism," Proc. 16th ACM Symp. Principles and Practice of Parallel Programming, ACM, 2011, pp. 35–46.
9. F.V. Lionetti, A.D. McCulloch,, and S.B. Baden, "Source-to-Source Optimization of CUDA C for GPU Accelerated Cardiac Cell Modeling," Proc. 16th Int'l Euro-Par Conf. Parallel Processing: Part I,
Springer-Verlag, 2010, pp. 38–49.
10. M. Christen and H.B.O. Schenk, "Patus: A Code Generation and Autotuning Framework for Parallel Iterative Stencil Computations on Modern Microarchitectures," Proc. Int'l Conf. Parallel and
Distributed Computing Systems, IEEE Press, 2011, pp. 676–687.
11. D. Unat, X. Cai, and S.B. Baden, "Mint: Realizing CUDA Performance in 3D Stencil Methods with Annotated C," Proc. Int'l Conf. Supercomputing, ACM, 2011, pp. 214–224.
12. D. Unat et al., "Auto-Optimization of a Feature Selection Algorithm," Proc. 4th Workshop on Emerging Applications and Many-Core Architecture, 2011; http://sites.google.com/site/eamaworkshop
13. M. Wolfe, "Implementing the PGI Accelerator Model," Proc. 3rd Workshop on General-Purpose Computation on Graphics Processing Units, ACM, 2010, pp. 43–50.
14. S. Lee and R. Eigenmann, "OpenMPC: Extended OpenMP Programming and Tuning for GPUs," Proc. 2010 ACM/IEEE Conf. Supercomputing, IEEE CS, 2010, pp. 1–11.
15. F. Bodin and S. Bihan, "Heterogeneous Multicore Parallel Programming for Graphics Processing Units," J. Scientific Programming, vol. 17, no. 4, 2009, pp. 325–336.
Index Terms:
Code generation, optimization, emerging technologies, Earth and atmospheric sciences, scientific computing
Didem Unat, Jun Zhou, Yifeng Cui, Scott B. Baden, Xing Cai, "Accelerating a 3D Finite-Difference Earthquake Simulation with a C-to-CUDA Translator," Computing in Science and Engineering, vol. 14, no.
3, pp. 48-59, May-June 2012, doi:10.1109/MCSE.2012.44
Usage of this product signifies your acceptance of the
Terms of Use | {"url":"http://www.computer.org/csdl/mags/cs/2012/03/mcs2012030048-abs.html","timestamp":"2014-04-17T22:07:27Z","content_type":null,"content_length":"56122","record_id":"<urn:uuid:840e2bc8-4cfb-42cf-bff3-b6ded68010aa>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: SCI.LOGIC is a STAGNANT CESS PITT of LOSERS!
Replies: 3 Last Post: Nov 24, 2012 3:34 PM
Messages: [ Previous | Next ]
george Re: SCI.LOGIC is a STAGNANT CESS PITT of LOSERS!
Posted: Nov 24, 2012 10:21 AM
Posts: 800
Registered: 8/5/08 On Nov 17, 3:50 am, Graham Cooper <grahamcoop...@gmail.com> wrote:
> Any and All mathematical logicians posting to SCI.LOGIC have all been
> verbally abused 100, 1000 times or more until they all left.
Indeed they have, but BY WHOM??
> Me included.
Since YOU are MOST UNfortunately STILL HERE,
either you are still here in your role as the abusER, NOT an abusEE,
or you are NOT a mathematical logician.
Or both, more likely.
Date Subject Author
11/17/12 SCI.LOGIC is a STAGNANT CESS PITT of LOSERS! Graham Cooper
11/17/12 Re: SCI.LOGIC is a STAGNANT CESS PITT of LOSERS! Mark Thorson
11/24/12 Re: SCI.LOGIC is a STAGNANT CESS PITT of LOSERS! george
11/24/12 Re: SCI.LOGIC is a STAGNANT CESS PITT of LOSERS! Graham Cooper | {"url":"http://mathforum.org/kb/message.jspa?messageID=7927492","timestamp":"2014-04-19T17:32:56Z","content_type":null,"content_length":"19976","record_id":"<urn:uuid:fcbe5e73-ebd6-48bd-ba44-e87a57444fc6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Resurgent Bootstrap and the 3D Ising Model
In recent years the conformal bootstrap has emerged as a surprisingly powerful tool to study CFTs in dimensions greater than two. In this talk I will explain how crossing symmetry of the four-point
function of scalar operators can be used to extract very non-trivial constraints on the spectrum of a putative CFT in arbitrary spacetime dimension. Applying these techniques in D=3 we will find that
the 3D Ising model lies at a special point in the space of CFTs. Moreover, we will show that constraints from conformal invariance can be used to significantly reduce the error in the known estimates
for the dimensions of operators and suggest a method to generalize this to a compute the dimensions of all operators in the theory. Time permitting, I will mention some more general applications of
this technology, including holography.
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible. | {"url":"http://www.newton.ac.uk/programmes/BSM/seminars/2012052914459.html","timestamp":"2014-04-17T21:38:41Z","content_type":null,"content_length":"6156","record_id":"<urn:uuid:2e1b83b0-f0e3-4e92-ae3f-a93f3e3ae5f8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
Average if multiple criteria and contains
I'm new to this forum and hope someone can guide me in the right direction.
I've been stuck on this project for days now, have looked into various forums including ozgrid but still haven't found a solution. I'm thinking about this even while sleeping
1. I have 2 sheets "Criteria" and "Actual Data". The Autofilter Criteria is defined on the "Criteria"
2. The idea is to run macro through the criteria defined on the "Criteria" tab and perform action on the "Actual Data".
3. The macro for autofilter is executed on "Actual Data" sheet. I've managed to assign the autofilter criteria to the cells reference in "Criteria" tab but what I'm struggling with is how to loop
through the autofilter to the next row or until all criterias have been met.
With the loop I have 2 action that needs to be performed
1. Copy values subsequent cell value on "Criteria" tab on column L to the filtered data on "Actual Data" tab (again this is a loop that I' unable to execute)
2. Criteria tab - If YES is defined from column M to P, clear contents of the data on "Actual Data" of columns C to F.
I'm attaching my example so its easy to understand where I'm stuck.
Thanks in anticipation.
Sub Autofilter()
Application.ScreenUpdating = False
ActiveSheet.AutoFilterMode = False
' Filter using the cell reference on the "Criteria" worksheet
Range("4:4").Autofilter Field:=18, Criteria1:=Range("Criteria!A6").Text
Range("4:4").Autofilter Field:=19, Criteria1:=Range("Criteria!B6").Text
Range("4:4").Autofilter Field:=15, Criteria1:=Range("Criteria!C6").Text
Range("4:4").Autofilter Field:=11, Criteria1:=Range("Criteria!D6").Text
Range("4:4").Autofilter Field:=14, Criteria1:=Range("Criteria!E6").Text
Range("4:4").Autofilter Field:=20, Criteria1:=Range("Criteria!F6").Text
Range("4:4").Autofilter Field:=21, Criteria1:=Range("Criteria!G6").Text
Range("4:4").Autofilter Field:=22, Criteria1:=Range("Criteria!H6").Text
Range("4:4").Autofilter Field:=23, Criteria1:=Range("Criteria!I6").Text
Range("4:4").Autofilter Field:=3, Criteria1:=Range("Criteria!J6").Text
Range("4:4").Autofilter Field:=5, Criteria1:=Range("Criteria!K6").Text
' Copy column "L" values on "Criteria" worksheet and paste on the "Actual Data" worksheet on the filtered data for each
Sheets("Actual Data").Select
ActiveCell.Offset(1, 0).Select
Range(Selection, Selection.End(xlDown)).Select
' Clear contents of the column on "Actual Data" worksheet if "YES" is defined - If "NO" no action required
ActiveCell.Offset(1, 0).Select
Range(Selection, Selection.End(xlDown)).Select
ActiveCell.Offset(1, 0).Select
Range(Selection, Selection.End(xlDown)).Select
ActiveSheet.AutoFilterMode = False
Application.ScreenUpdating = True
End Sub
If you like these VB formatting tags please consider sponsoring the author in support of injured Royal Marines | {"url":"http://www.knowexcel.com/view/16557-average-if-multiple-criteria-and-contains.html","timestamp":"2014-04-16T16:17:08Z","content_type":null,"content_length":"69231","record_id":"<urn:uuid:e6d8a38d-b095-44c0-9415-910f9d79c243>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numeracy 528] Re: Another perspective on numbers, operations, and negatives
[Numeracy 528] Re: Another perspective on numbers, operations, and negatives
Archived Content Disclaimer
This page contains archived content from a LINCS email discussion list that closed in 2012. This content is not updated as part of LINCS’ ongoing website maintenance, and hyperlinks may be broken.
Ladnor Geissinger ladnor at email.unc.edu Wed Sep 1 01:21:31 EDT 2010
The notes below are directed to Michael Gyori and anyone else who
views numbers as essentially representing quantities.
I recommend the book Negative Mathematics: How Mathematical Rules Can
Be Positively Bent,
An easy introduction to the study of developing algebraic rules to
describe relations among things, by Alberto Martinez, Princeton Univ.
Press, 2006.
The first 5 chapters are mainly devoted to the history of numerical
algebra. He points out that negative numbers have been widely used at
least since the mid-1500s and there was lots of fuss about them even
among mathematicians and scientists until about the mid-1800s by which
time everything had been agreed upon and their utility was no longer in
MacLaurin justified the introduction of signed numbers into algebra on
the basis of physical utility. Math deals with more than magnitudes, it
must also handle other concepts of physical significance.
In DeMorgan's book of 1831 he uses signs to represent directions, and
says "rules of operation are the results of experience, not of abstract
In chapter 5 on page 103 Martinez says "It was because of symbolical
algebra that mathematics on the whole ceased to be defined as the
_science of quantity_."
Note that among other things, DeMorgan was recognizing that when
negative numbers were introduced and people were deciding how to extend
multiplication to all numbers, they were not forced by pure reason to
define (-1)*(-1) = 1. Along the way some pretty good mathematicians had
decided that they couldn't stomach that and had preferred to define
multiplication so that (-1)*(-1) = -1. But that meant that the
distributive law of multiplication over addition no longer held
universally, which meant that one had to be very careful in doing long
calculations, it led to considering many special cases, and calculating
with variables was difficult. This was so complicated and so
inefficient that eventually everyone decided that always having the
distributive law work was so much easier and gave us a much more useful
new tool for analysis.
Ladnor Geissinger
On 8/30/2010 5:00 AM, Michael Gyori wrote:
I really appreciate Ladnor's response below. I believe I
understand the points being made and have no issue with them whatsoever.
My question, again, ties in with how I teach math (not a
primary undertaking in the context of my work). I've stated often that
I teach numbers as representing quantities, back to when this list got
underway and I contended that negative integers do not exist - a
contention that itself triggered some discussion.
Yes, natural numbers do not even include zero, I agree, but at least
we can demonstrate that nothing is left.
I have too many students who have learned to "hate" math. Most of
them become more favorably inclined towards the subject after I work
with them. One reason is that I demonstrate the relationship between
math and issues that can arise in daily life that require math to
solve problems (along the lines of EFF). I continue to struggle with
establishing such a relationship with negative integers, which is the
reason I address this audience in pursuit of some insight that, in
turn, might benefit me and, in turn, my students.
Thanks again,
Michael A. Gyori
Maui International Language School
www.mauilanguage.com <http://www.mauilanguage.com/>
Ladnor Geissinger, Emer. Prof. Mathematics
Univ. of North Carolina, Chapel Hill NC 27599 USA
-------------- next part --------------
An HTML attachment was scrubbed... | {"url":"http://lincs.ed.gov/pipermail/numeracy/2010/000554.html","timestamp":"2014-04-17T12:36:58Z","content_type":null,"content_length":"21502","record_id":"<urn:uuid:1e204be4-07e8-4b08-94df-a802d34194b5>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
Louisiana Tech University - Student Competencies in Research & Information Use - Meeting Expectations of Information Literacy: Information Literacy in NCTM Program Standards for Initial Preparation of Mathematics Teachers: Secondary
Pedagogy (Standard 8)
In addition to knowing students as learners, mathematics teacher candidates should develop knowledge of and ability to use and evaluate instructional strategies and classroom organizational models,
ways to represent mathematical concepts and procedures, instructional materials and resources, ways to promote discourse, and means of assessing student understanding. This section on pedagogy is to
address this knowledge and skill.
Standard 8: Knowledge of Mathematics Pedagogy
Candidate possesses a deep understanding of how students learn mathematics and of the pedagogical knowledge specific to mathematics teaching and learning.
8.6 Demonstrates knowledge of research results in the teaching and learning of mathematics.
Standard 14: Knowledge of Data Analysis, Statistics, and Probability
Candidates demonstrate an understanding of concepts and practices related to data analysis, statistics, and probability.
14.1 Design investigations, collect data, and use a variety of ways to display data and interpret data representations that may include bivariate data, conditional probability and geometric
14.2 Use appropriate methods such as random sampling or random assignment of treatments to estimate population characteristics, test conjectured relationships among variables, and analyze data.
14.3 Use appropriate statistical methods and technological tools to describe shape and analyze spread and center.
14.4 Use statistical inference to draw conclusions from data.
14.5 Identify misuses of statistics and invalid conclusions from probability.
14.6 Draw conclusions involving uncertainty by using hands-on and computer-based simulation for estimating probabilities and gathering data to make inferences and conclusions.
14.7 Determine and interpret confidence intervals.
Last modified February 13, 2007
by Boris Teske, Prescott Memorial Library,
Louisiana Tech University, Ruston, LA 71272 | {"url":"http://www.latech.edu/library/infolit/IL_in_NCTM_program_standards_initial_2003_secondary.htm","timestamp":"2014-04-21T10:10:05Z","content_type":null,"content_length":"6024","record_id":"<urn:uuid:5f381663-edc6-48ce-95f1-e6be3d327a4e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
invariant polynomial
invariant polynomial
An invariant polynomial is a polynomial $P$ that is invariant under a (compact) Lie group $\Gamma$ acting on a vector space $V$. Therefore $P$ is $\Gamma$-invariant polynomial if $P(\gamma x)=P(x)$
for all $\gamma\in\Gamma$ and $x\in V$.
• GSS Golubitsky, Martin. Stewart, Ian. Schaeffer, G. David: Singularities and Groups in Bifurcation Theory (Volume II). Springer-Verlag, New York, 1988.
Mathematics Subject Classification
no label found
Added: 2003-06-10 - 14:30 | {"url":"http://planetmath.org/invariantpolynomial","timestamp":"2014-04-20T13:43:03Z","content_type":null,"content_length":"33153","record_id":"<urn:uuid:5075b761-6079-4fc0-868b-efdc8ad682db>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
Property value estimation for inhaled therapeutic binary gas mixtures: He, Xe, N
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
Med Gas Res. 2011; 1: 28.
Property value estimation for inhaled therapeutic binary gas mixtures: He, Xe, N[2]O, and N[2 ]with O[2]
The property values of therapeutic gas mixtures are important in designing devices, defining delivery parameters, and in understanding the therapeutic effects. In the medical related literature the
vast majority of articles related to gas mixtures report property values only for the pure substances or estimates based on concentration weighted averages. However, if the molecular size or
structures of the component gases are very different a more accurate estimate should be considered.
In this paper estimates based on kinetic theory are provided of density, viscosity, mean free path, thermal conductivity, specific heat at constant pressure, and diffusivity over a range of
concentrations of He-O[2], Xe-O[2], N[2]O-O[2 ]and N[2]-O[2 ]mixtures at room (or normal) and body temperature, 20 and 37°C, respectively and at atmospheric pressure.
Property value estimations have been provided for therapeutic gas mixtures and compared to experimental values obtained from the literature where possible.
Inhaled therapeutic gases in use today include helium (He) for respiratory treatments, and xenon (Xe) and nitrous oxide (N[2]O) for anesthesia. For clinical applications these gases are used in the
form of mixtures with oxygen in a range of concentrations (typically starting from 20% oxygen (O[2]) concentration by volume, which is equivalent to a mole fraction of 0.20) so as to maintain
adequate oxygenation. Other gases, such as nitric oxide (NO) for pulmonary vascular dilation, are used only in trace amounts.
The property values of therapeutic gas mixtures are important in designing devices, defining delivery parameters, and in understanding the therapeutic effects. Properties of interest include density,
viscosity, mean free path, thermal conductivity, specific heat, and diffusivity. In the medical literature the vast majority of articles related to gas mixtures report property values only for the
pure substances or estimates based on (volume or molar) concentration weighted averages [1-7]. However, if the molecular size or structures of the component gases are very different a more accurate
estimate could be considered [8-10]. For this reason property values of helium and xenon mixtures should be considered for more accurate estimation.
Starting with kinetic theory for molecules treated as hard spheres as a basis, a rich literature has developed regarding the modeling of property values based on first principles and increasing
complexity of the molecular interactions; in particular, the attraction and repulsion of molecules as first formulated by Chapman and Enskog [8,9]. The empirically determined Lennard-Jones potential
energy function has proved to be a good model for many applications. Extensive measurements of the viscosity of gases using oscillating-disk viscometry have primarily been published by Kestin and his
colleagues [11-16]. Other equilibrium and transport properties have been extrapolated from the viscosity measurements using the models described above [8,9]. There also exists limited thermal
conductivity data measured using a hot wire method [17].
The objective of this short communication is to give a straightforward reference to the applied scientist, engineer, and medical personnel who perform research with therapeutic gas mixtures. We
anticipate that this information will assist both in the design and interpretation of experiments. Estimates of density, viscosity, mean free path, thermal conductivity, specific heat at constant
pressure, and diffusivity are provided over a range of concentrations of He-O[2], Xe-O[2], and N[2]O-O[2 ]mixtures at room (or normal) and body temperature, 20 and 37°C, respectively and at
atmospheric pressure; based on kinetic theory and compared to experimental values obtained from the literature where it is possible. For further comparison N[2]-O[2 ]mixtures will be included because
this mixture makes up the composition of medical air.
All of the mixtures can be evaluated as ideal gases under the conditions considered. As such the density is based on the state equation as,
where ρ[mix ]is the mixture density, p is the pressure, T is the absolute temperature and R[mix ]is the gas constant defined for the mixture as
In Equation (2) R[univ ]is the universal gas constant, X[i ]is the mole fraction of the pure gas component, and MW[i ]is the molecular weight of the pure gas component (32 is the molecular weight for
oxygen). The units of R[mix ]depends on the value chosen for R[univ ](e.g., 8314 N-m/kgmol-K).
For viscosity we use a semi-empirical method by Wilke [8] that extends the model for collisions between hard spheres to mixtures.
μ[i ]and $μO2$are the viscosities of the pure gas component and oxygen, respectively. The pure gas viscosity estimates are based on the Lennard-Jones empirical function for the potential:
where r is the distance between the molecules, ε is a characteristic energy of the interaction between molecules and σ is a characteristic diameter, or collision diameter. Equation (5) is a viscosity
formula based on the Lennard-Jones parameters in units of kg/s-m derived for monatomic gases that has also been shown to work well for polyatomic gases [8],
where Ω[μ ]is a function of ε. Lennard-Jones parameters are tabulated for common gases [8,9] and for the gases herein in Table Table11.
Molecular parameters and Lennard-Jones potential parameters for the pure gas components [9].
Mean Free Path
The estimation of mean free path is based on the Chapman-Enskog formulation for hard spheres [18], where the mixture viscosity and density account for the interactions of the different molecules:
The input values are obtained from Equations 1-3.
Specific Heat at Constant Pressure
The specific heat at constant pressure (on a per unit mass basis) for all of the mixtures can be evaluated assuming ideal gas behavior and therefore the specific heat curve is a linear function of
the mass fraction, though nonlinear in terms of the mole fraction
where $cpmix$ and $cpi$ are the specific heats of the mixture and of the pure gas component, respectively. The pure gas values for the monatomic gases are based on the theoretical value $cpi=
2.5RunivMWi$ The polyatomic estimates are based on empirically derived 4^th order polynomials in temperature found in Poling et al. [9].
Thermal Conductivity
Thermal conductivity is treated in an analogous manner to viscosity, where Equation (8a) is equivalent to Equation (3a) and the coefficients are exactly the same based on the pure gas viscosity
The pure gas conductivity estimates are based on a modified Eucken approximation found in Poling et al. [9].
The self diffusivity for a binary system D[ij], represents the movement of species i relative to the mixture, where D[ij ]= D[ji]. The presentation here is based on the method of Fuller et al. given
in Poling et al [9], which uses empirically obtained atomic diffusion volumes (Σv).
In Equation (10) j always represents oxygen, the diffusivity is in m^2/s, T is the temperature in degrees Kelvin, p is the pressure in bar and the atomic diffusion volumes are given in Table Table11
for each gas. $DiO2$is almost independent of composition at low pressures so only a single value will be calculated for each binary gas pair [8].
Of much practical interest is the diffusivity of water vapor or carbon dioxide through the gas mixtures. Values are calculated for these mixtures based on Blanc's law [9].
Where m represents the therapeutic gas mixture considered, j represents the specific therapeutic gas, and k corresponds to H[2]O or CO[2]. The diffusion constants in Equation 11 of H[2]O or CO[2]
through the therapeutic gas or oxygen are calculated using Equation 10 with atomic diffusion volumes of 13.1 and 26.9 for H[2]O or CO[2], respectively.
The molecular weights, gas constants, Lennard-Jones parameters, and atomic diffusion volumes for the pure gases are given in Table Table1.1. The mixture results are given in tabular and graphical
forms. Tables Tables2,2, ,3,3, ,4,4, and and55 give the property values for He, Xe, N[2]O, and N[2 ]with O[2 ]mixtures, as a function of mole fraction at 20°C. Tables Tables6,6, ,7,7, ,8,8,
and and99 are the analogous tables for 37°C. Table Table1010 gives binary diffusivities for the gas mixtures. Figures Figures1,1, ,2,2, ,3,3, ,4,4, and and55 are plots of the 20°C data of
density, viscosity, mean free path, thermal conductivity, and specific heat, respectively.
He-O[2 ]property values at 20°C and 1 atm.
Xe-O[2 ]property values at 20°C and 1 atm.
N[2]O-O[2 ]property values at 20°C and 1 atm.
N[2]-O[2 ]property values at 20°C and 1 atm.
He-O[2 ]property values at 37°C and 1 atm.
Xe-O[2 ]property values at 37°C and 1 atm.
N[2]O-O[2 ]property values at 37°C and 1 atm.
N[2]-O[2 ]property values at 37°C and 1 atm.
Binary diffusivities at 1 atm.
Density of gas mixtures at 20°C and 1 atm.
Viscosity of gas mixtures at 20°C and 1 atm.
Mean free path of gas mixtures at 20°C and 1 atm.
Thermal conductivity of gas mixtures at 20°C and 1 atm.
Specific heat of gas mixtures at 20°C and 1 atm.
In this paper thermophysical property values have been presented for inhaled therapeutic binary gas mixtures. Pure substance values at 20°C and 37°C and mixing formulas based on kinetic theory were
used to estimate the mixture values. The approach was to use relatively simple estimates for nonpolar gases [8]. That is, more complex intermolecular interactions that occur, for example, at high
pressure, were not included.
Whereas many therapeutic gases (e.g.; CO and NO) are used at trace concentrations such that property values of the bulk mixture are essentially equivalent to those of air, mixtures considered herein
have significantly different properties than air which change as a function of component concentration. Mechanical property values of density and viscosity are fundamental to the understanding of gas
transport and airway resistance. The thermal properties of conductivity and capacity are necessary to accurately predict how gas treatments will affect the temperature and humidity of the respiratory
tract. They also will influence the thermodynamic interaction of inhaled aerosols with the gas, and thus the deposition distribution which is particularly relevant for helium-oxygen mixtures.
Diffusion is a key mode of gas transport deep in the lung potentially affecting exchange with the blood.
Bird et al. [8] note that the concept of the mean free path is applicable only if there are no long range forces associated with the hard sphere kinetic theory models. For this reason it is not
typically an element of modern kinetic theory. Nevertheless, it is an important parameter in modeling the interaction of aerosols and gases [19], and thus for combination therapies involving aerosols
and gas mixtures. In contrast to the scheme employed by Loeb [20], the estimation method employed here does not directly take into account the molecular collisions. However, Equation (6) for the mean
free path does account for the collisions of different molecules through the mixture viscosity. As the utility of this parameter in aerosol mechanics is to estimate a reduced drag on small particles
where their size is comparable to the mean free path, this approach would appear to be self consistent.
A comparison of estimated data based on Equation (3) to experimental data for the viscosity at 20°C of helium-oxygen mixtures [14] is shown in Figure Figure6,6, along with the linear curve
representing the concentration weighted average. The maximum relative difference of 0.9% between the theory and experiment occurs at X[He ]= 0.82. For the concentration weighted average value the
maximum relative error of 7.9% occurs at X[He ]= 0.67.
Viscosity of He-O[2 ]mixtures using Equation (3), based on a weighted average of the molar fractions and from experimental measurements [14].
Figure Figure77 shows comparisons of experimental thermal conductivity values [17] for helium-oxygen and xenon-oxygen mixtures at 30°C compared to theoretical values calculated using Equation (8).
The maximum relative differences between the theory and experiment are 4.2% at X[He ]= 0.68 and 4.7% at X[Xe ]= 0.27, respectively.
Thermal conductivity at 30°C for He-O[2 ]and Xe-O[2 ]mixtures using Equation (8), based on a weighted average of the molar fractions and from experimental measurements [17].
Table Table1111 shows a good agreement between experimental data for binary diffusivity of He-O[2 ]and Xe-O[2 ][14,21] with theoretical data calculated using Equation (10). For the diffusivity of
water vapor or carbon dioxide, the simplifying assumption leading to Blanc's law is for a trace component diffusing into a homogeneous, binary mixture. A quantitative definition of trace for the
applicability of this assumption was not found. However, experiments testing diffusion of He, CO and SF[6 ]through gas mixtures similar to alveolar gas (14% O[2], 6% CO[2 ]and 80% N[2]) did not show
significant departures from values predicted on the basis of binary diffusion coefficient values weighted according to fractional concentrations [22] in agreement with Blanc's law. These experiments
were performed with test gas concentrations varying from 0 to 10% suggesting Blanc's law would be appropriate for typical applications of the gases considered herein.
Comparison of experimental and theoretical binary diffusivities based on Equation (10).
In conclusion, the methods presented above allow accurate estimation of thermophysical property values for inhaled therapeutic binary gas mixtures, including He-O[2], Xe-O[2], and N[2]O-O[2], over a
range of concentrations.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All of the authors have read and approved the final manuscript.
IK determined the appropriate models, wrote the software to implement the models and drafted the manuscript.
GC provided assistance with determining the models, implementing the software and edited the manuscript.
AM provided assistance with determining the models, implementing the software and edited the manuscript.
PA provided experimental data from the literature and edited the manuscript.
We thank Paul Finlay for performing some of the calculations.
• Anderson M, Svartengren M, Bylin G, Philipson K, Camner P. Deposition in asthmatics of particles inhaled in air or helium-oxygen. Am Rev Respir Dis. 1993;147:524–528. [PubMed]
• Baumert J-H, Reyle-Hahn M, Hecker K, Tenbrinck R, Kuhien R, Rossaint R. Increased airway resistance during xenon anaesthesia in pigs is attributed to physical properties of the gas. Brit
Anaesthesia. 2002;88:540–545. doi: 10.1093/bja/88.4.540. [PubMed] [Cross Ref]
• Darquenne C, Prisk GK. Aerosol deposition in the human respiratory tract breathing air and 80:20 heliox. J Aerosol Med. 2004;17:278–285. doi: 10.1089/jam.2004.17.278. [PMC free article] [PubMed]
[Cross Ref]
• Frazier MD, Cheifetz IM. The role of heliox in paediatric respiratory disease. Paediatric Respiratory Reviews. 2010;11:46–53. doi: 10.1016/j.prrv.2009.10.008. [PubMed] [Cross Ref]
• Hess DR, Fink JB, Venkataraman ST, Kim IK, Meyers TR, Tano BD. The history and physics of heliox. Respir Care. 2006;51:608–612. [PubMed]
• Mihaescu M, Gutmark E, Murugappan S, Elluru R, Cohen A, Willging P. Modeling flow in a compromised pediatric airway breathing air and heliox. Laryngoscope. 2009;119:145–151. doi: 10.1002/
lary.20015. [PubMed] [Cross Ref]
• Palange P, Valli G, Onorati P, Antonucci R, Paoletti P, Rosato A, Manfredi F, Serra P. Effect of heliox on lung dynamic hyperinflation, dyspnea, and exercise endurance capacity in COPD patients.
J Appl Physiol. 2004;97:1637–1642. doi: 10.1152/japplphysiol.01207.2003. [PubMed] [Cross Ref]
• Bird GA, Stewart WE, Lightfoot EN. Transport Phenomena. New York: John Wiley & Sons; 1960.
• Poling BE, Prausnitz JM, O'Connel JP. The Properties of Gases and Liquids. 5. New York: McGraw-Hill; 2007.
• Reid RC, Sherwood TK. The Properties of Gases and Liquides: Their Estimation and Correlation. 2. New York: McGraw-Hill; 1966.
• Bzowski J, Kestin J, Mason EA, Uribe FJ. Equilibrium and transport properties of gas mixtures at low density: Eleven polyatomic gases and five noble gases. J Phys Chem Ref Data. 1990;19
:1179–1232. doi: 10.1063/1.555867. [Cross Ref]
• Hellemans JM, Kestin J, Ro ST. The viscosity of oxygen and of some of its mixtures with other gases. Physica. 1973;65(2):362–375. doi: 10.1016/0031-8914(73)90351-0. [Cross Ref]
• Hellemans JM, Kestin J, Ro ST. On the properties of multicomponent mixtures of monatomic gases. Physica. 1974;71(1):1–16. doi: 10.1016/0031-8914(74)90043-3. [Cross Ref]
• Kestin J, Khalifa HE, Ro ST, Wakeham WA. The viscosity and diffusion coefficients of eighteen binary gas systems. Physica. 1977;88A:242–260.
• Kestin J, Khalifa HE, Wakeham WA. The viscosity and diffusion coefficients of the binary mixtures of xenon with the other noble gases. Physica A: Statistical and Theoretical Physics. 1978;90
(2):215–228. doi: 10.1016/0378-4371(78)90110-3. [Cross Ref]
• Kestin J, Knierim K, Mason EA, Najafi B, Ro ST, Waldman M. Equilibrium and transport properties of the noble gases and their mixtures at low density. J Phys Chem Ref Data. 1984;13:229–303. doi:
10.1063/1.555703. [Cross Ref]
• Srivastava BN, Barua AK. The dilute gas thermal conductivity of the binary mixtures O[2 ]- He, O[2 ]- Ne, O[2 ]- Kr and O[2 ]- Xe is measured at 30 C and 45 C for various compositions by using
the thick-wire variant of the hot-wire method. J Chem Phys. 1960;32:427–435. doi: 10.1063/1.1730711. [Cross Ref]
• Bird GA. Definition of mean free path for real gases. Phys Fluids. 1983;26:3222–3223. doi: 10.1063/1.864095. [Cross Ref]
• Issacs KK, Rosati JA, Martonen TB. In: Aerosols Handbook: Measurement, Dosimetry, and Health Effects. Ruzer LS, Harley NH, editor. CRC Press LLC; 2005. Mechanisms of particle deposition; pp.
• Loeb LB. The Kinetic Theory of Gases. 3. New York: Dover Publications, Inc; 1961.
• Dunlop PJ, Bignelli CM. The temperature and concentration dependences of diffusion coefficients of the systems Ne-O[2], K-O[2], Xe-O[2 ]and He-NO. Ber Bunsen-Ges. 1992;96:1847–1848.
• Worth H, Piper J. Diffusion of helium, carbon monoxide and sulfur hexafluoride in gas mixtures similar to alveolar gas. Respir Physiol. 1978;32:155–166. doi: 10.1016/0034-5687(78)90106-8. [PubMed
] [Cross Ref]
Articles from Medical Gas Research are provided here courtesy of BioMed Central
• PubMed
PubMed citations for these articles
Your browsing activity is empty.
Activity recording is turned off.
See more... | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3354644/?tool=pubmed","timestamp":"2014-04-20T01:48:39Z","content_type":null,"content_length":"108609","record_id":"<urn:uuid:1db235bf-527e-4371-8994-cb06dc3c0e85>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
Progress in the PRIDE technique for rapidly comparing protein three-dimensional structures
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
BMC Res Notes. 2008; 1: 44.
Progress in the PRIDE technique for rapidly comparing protein three-dimensional structures
Accurate and fast tools for comparing protein three-dimensional structures are necessary to scan and analyze large data sets.
The method described here is not only very fast but it is also reasonable precise, as it is shown by using the CATH database as a test set. Its rapidity depends on the fact that the protein structure
is represented by vectors that monitors the distribution of the inter-residue distances within the protein core and the structure of which is optimized with the Freedman-Diaconis rule.
The similarity score is based on a χ^2 test, the probability density function of which can be accurately estimated.
Although numerous methods for comparison protein three-dimensional (3D) structures were designed, we still lack a unique, commonly accepted procedure to measure the structural diversity between
proteins [1]. In particular, the structures of distantly related proteins should be expressed by the appropriate way allowing their comparison and the 3D structure representations used in modern
algorithms are described in the reviews [2,3]. The most accurate protein structure comparison methods produce protein structure alignments that are computationally intensive. Slower techniques may be
preferable to analyze and classify sufficiently small data sets. However, the time criterion is crucial in the case of integrated survey of large databases, like the Protein Data Bank or the domain
collections CATH and SCOP [4]. This problem is very similar to that encountered few years ago in the case of macromolecular sequence databases, which was solved by the development of tools like FASTA
[5], BLAST [6] or PSI-BLAST [7] that allow one to effectively scan enormous databases like UniProt [8], which presently contain several millions of entries. Although protein 3D structure databases
are still much smaller, several representations of protein structure suitable for rapid comparison without alignment were proposed [9-13]. One of the fast and automatic techniques for protein
structural comparison is PRIDE [9]. In this method the protein structure is represented via a series of distributions of inter-atomic distances allowing the use rapid comparison procedure without
In the present communication, some improvements of the original PRIDE technology are presented. They make it more accurate than the original version without decreasing its speed. The classification
ability of the method was tested on the CATH database.
The PRIDE methodology
In original PRIDE version, a protein structure in defined by the distributions of the distances between C[αi ]and C[α(i+n) ]atoms, where n, which ranges from 3 to 30, is the number of C[α ]atoms
between them in the backbone joint. The comparison between two protein 3D structures is reduced to the comparison between distributions of inter-residue distances. This is performed by chi-square
contingency table analysis, which estimates whether two distributions represent the same overall population and allows one to compute a probability of identity P, ranging from 0 and 1. Since 28 pairs
of histograms are compares, 28 P values are obtained and then averaged to give the overall PRobability of IDEntity (PRIDE) between the two protein 3D structures. Such a similarity score can range, by
definition, from 0 to 1, the latter value indicating the identity between the two protein structures. In the next sections, four modifications, introduced into this computational procedure, will be
Amount of structural information
The maximal value of n, which was equal to 30 in the old PRIDE version, is now selected as a function of the protein dimension. Obviously, the histograms, in which inter-residue distances are binned,
must have a sufficiently high number of observations to be compared via any statistical tool. The number of observations in the histograms increases with the length of the protein and decreases with
n. Therefore, histograms were generated for all n values larger than 3 and lower than n[max], where n[max ]is the value for which there are only 20 C[αi]-C[α(i+n) ]distances. Clearly, if n > n[max],
the histograms would contain less than 20 observations and they were thus ignored. Therefore, the numbers of histograms are different for proteins of different length in the modified PRIDE version.
In the comparison of two domains, represented by series of C[αi]-C[α(i+n) ]histograms, with 3 ≤ n ≤ n[max1 ]for the first domain and 3 ≤ n ≤ n[max2 ]for the second domain, the maximal value of n (n
[max]) was defined as
Moreover, only distances between residues belonging to helices and/or strands were taken into account in the modified PRIDE version, in order to increase the computational speed of the method. The
STRIDE package, based on the detection of hydrogen bonds patterns and backbone torsions, was used for secondary structure assignment [14].
Optimization of the dimension of the histogram intervals
The building of a regular histogram from continuous data demands a cautious specification of the number of bins. In the old version of PRIDE, each bin width was arbitrarily set to 0.5 Å, and adjacent
bins were merged together so that at least 5% of the observations were included in each bin. Here a more rigorous approach was followed. Firstly, inter-residue distances were binned in the histograms
with a fixed bin width of 0.1 Å, a value close to the average expected uncertainty of protein atomic coordinates obtained with crystallographic methods [15]. Then bin widths are changed automatically
to their optimal value BS by using the Freedman-Diaconis rule [16]
where k is the number of observations in the sample x; iqr(x) is the interquartile range of the data of sample x, that is the range between the third and first quartiles. The iqr is expected to
include about half of the data. The optimal BS values are computed for a query protein structure, and then they are used to change the histogram bins for all domains in the scanned database. New
optimal BS values must be recomputed for a new query. Despite this might seem to be rather complicated and time consuming, we verified that once the histograms for the entire database are
pre-computer and stored with very small bins of 0.1 Å, all of them can be re-shaped to the optimal BS very rapidly (see the paragraph "Computational speed" below).
Distribution comparisons
While in the original version of PRIDE, the C[αi]-C[α(i+n) ]distance distributions were compared using the contingency tables [17], another statistical procedure is applied now. Contingency tables
are more suitable to analyze relationships between nominal (categorical) variables and can be applied to compare continuous distributions only by carefully selecting an arbitrary bin size in such a
way that each bin contains sufficient data. Here we adopted another approach that is more suitable to compare continuous distributions and that is computationally not more demanding than the
contingency table analysis. By assuming that the distributions of both binned data sets of inter-residue distances are equally unknown, it is possible to use the chi-square test to disprove the null
hypothesis that the two data sets can be described by the same distribution. If R[i ]is the number of observations in bin i for the first protein and S[i ]is the number of observations in the same
bin i for the second protein, then the chi-square statistics is
χ^2 ranges from 0 to the positive infinity. A large value of χ^2 indicates that the null hypothesis is rather unlikely and that the two proteins are considerably different, and χ^2 can thus be used
as a statistical measure of proximity between two protein 3D structures. On the contrary, two identical protein 3D models are associated with a χ^2 value equal to 0.
Furthermore, the degree of proximity between two protein structures can be also expressed by an incomplete gamma function determining the chi-square probability density function:
where N[b ]is the number of histogram bins, that corresponds to a number of degrees of freedom for histograms with an unequal number of observations. In this case the proximity measure P ranges from
0 to 1 corresponding, respectively, to the completely different and to the identical protein folds.
and P[n ]are computed for each pair of histograms of the C[αi]-C[α(i+n) ]distances for 3 = n = n[max]. Then they are averaged to estimate the global degree of protein structural proximity. It must be
observed that while χ^2 is a distance measure of proximity, with lower values associated with two domains that are similar, P is a measure of similarity, with higher values associated with two
domains that are similar. Beside this difference, both can be used as structural similarity scores and monitor exactly the same protein structural features. However, P has the definite lowest and
highest limits that are equivalent to the similarity score used in the old PRIDE version.
Computational speed
Given the extreme simplicity of the algorithm, it is not surprising that computations can be very fast. The most time consuming step is the computation of the histograms of the C[αi]-C[α(i+n) ]
distributions. However, they can be pre-computed and stored in about 850 seconds (Xenon 3 GHz processor) for the 34,035 protein domains of Table Table1,1, 29,098 of which are long enough to be
represented by at least 30 histograms and 4,937 of which are smaller and can be represented by 10–30 histograms. The comparison of a query with all the database entries takes on average 170 seconds
(by using all the queries of Table Table1),1), 20 of which are needed for the optimization of the bin size, according to the Freedman-Diaconis rule. The overall speed is nearly identical to the
speed of the old PRIDE version. By comparison, the same amount of computations can be performed in about 4,000 seconds by using the SHEBA downloaded software [18]. Other computer programs, like for
example VAST [18], are available only as web-servers and it is thus impossible to compare their computational speed with that of the new PRIDE version. However, it was observed the VAST server is not
particularly fast [19], though this does not demonstrate that the VAST algorithm is not.
The content of the datasets and the query lists used for PRIDE testing
Data sets
The new structure comparison method was benchmarked against the CATH v3.0.0 database [20], which is a hierarchical classification of protein domains according to the class C (prevalence of secondary
structural types), architecture A (the number, type, and reciprocal orientation of the secondary structural elements), topology T (the topological connection of the secondary structural elements) and
homologous superfamily H (a common evolutionary origin supported either by significant sequence similarity or significant structural and functional similarity). Two datasets were created (Table
(Table1),1), one with domains large enough to be represented by at least 30 distributions of C[αi]-C[α(i+n) ]distances, and the other with smaller domains, for which 10 < n[max ]< 30. Domains
containing more then one polypeptide chain were disregarded since, by definition, PRIDE cannot handle them.
Query lists
A non-redundant series of CATH entries were randomly selected from different superfamilies to be used as queries, by ensuring that all the three principal classes C of the database are equally
represented (Table (Table1).1). Some were large domains (n[max ]> 30) and other small domains (10 < n[max ]< 30). About half of them were considered to be "easy" queries, in the sense that they
belong to a CATH fold cluster containing at least 50 domains, and the others were "difficult" queries that belong to small CATH fold groups having no more than 3 domains.
Performance evaluation
The performance of the new PRIDE version can be examined by the computation and the analysis of the ROC curves. The P value, which is a similarity score, is used to calculate ROC curve in the present
study. A threshold similarity is consecutively decreased, with subsequent decrements equal to 0.01, in the entire range of possible P values, from 1 to 0. At each step, each of the queries (Table
(Table1)1) was compared to all the entries of the databases (Table (Table1).1). As a consequence, 4,335,602 comparisons were performed by considering the dataset of large protein domains and 207,354
comparisons were necessary by considering the dataset of small protein domains.
Each comparison can be classified in one of four categories, according to the CATH classification of two domains and their P value. It can be i) a true positive (TP), if the similarity between the
query and the entry is higher that the threshold value and if the query and the entry belong to the same CATH fold; ii) false positive (FP) if the similarity between the query and the entry is higher
that the threshold value despite the fact that they have different CATH classification; iii) a false negative (FN), if the entry and the query are in the same fold cluster despite their estimated
similarity is lower than the threshold value; iv) a true negative (TN), if the similarity is estimated to be smaller that the threshold value and if the query and the entry are actually classified
into different CATH fold groups. On the basis of these definitions it is possible to compute, for each threshold value, the sensitivity and the specificity
Sensitivity = TP/(TP + FN)
Specificity = TN/(TN + FP)
and the ROC curve is obtained by potting Sensitivity against (1-Specificity) for the entire range of possible threshold values. Figure Figure11 shows the ROC curves obtained as described above. It
is necessary to remember that the line through the origin with slope 1, that is the diagonal, would correspond to the similarity detection based on a random measure. Therefore, the area under ROC
curve equal to 0.5 is related to a random similarity measure, larger values indicate better than random estimations, and a value equal to 1 indicates perfect similarity. The areas under the ROC
curves, shown in Figure Figure1,1, are 0.87 and 0.82 for the first and second datasets of Table Table1,1, respectively. Not surprisingly, the area under the ROC curve is larger (0.87) for the first
dataset of Table Table1,1, which contains larger protein domains that can be described with at least 30 histograms of C[αi]-C[α(i+n) ]distances, and smaller (0.82) for the second dataset, which
contains smaller proteins that are represented by a lower number of histograms. Such values are considerably better than that obtained by using the old version of PRIDE (0.55). These values are also
comparable to those obtained with two other procedures for evaluating protein structure similarity – SHEBA (0.93) and VAST (0.90) that are computationally much more demanding then the methods
described in the present manuscript [18]. The areas under the ROC curves were also computed by using separately queries that are classified into the α, β, and α/β classes within the CATH database in
order to estimate the performance of PRIDE on different types of proteins. Values of 0.90, 0.90, and 0.83 were obtained by scanning the database of 29,098 domains with the query sets containing 49 α
proteins, 50 β proteins, and 50 α/β proteins (dataset number 1 of Table Table1),1), indicating that proteins containing both helices and strands are more difficult to be correctly identified,
probably because of the higher structural diversity of protein domains containing different types of secondary structural elements. Additional information is available at [21] (Downloads section).
ROC curves. The solid line shows a ROC curve obtained by comparing 149 CATH domains with 29 098 CATH entries of the first dataset of Table 1 that contains large protein domains; the dashed line
represents a ROC curve calculated for the 42 small CATH domains ...
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
OC supervised and coordinated the project. SK developed the algorithm, carried out the analyses, and prepared, with OC, in the writing of the manuscript. All authors read and approved the manuscript.
This work was supported by the BIN-II network of the GEN-AU Austrian project.
• Kolodny R, Petrey D, Honig B. Protein structure comparison: implications for the nature of 'fold space', and structure and function prediction. Curr Opin Struct Biol. 2006;16:393–398. [PubMed]
• Carugo O. Rapid methods for comparing protein structures and scanning structure databases. Curr Bioinformatics. 2006;1:75–83.
• Carugo O. Recent progress in measuring structural similarity between proteins. Curr Protein Pept Sci. 2007;8:219–241. [PubMed]
• Aung Z, Tan KL. Rapid retrieval of protein structures from databases. Drug Disco Today. 2007;in press [PubMed]
• Pearson WR, Lipman DJ. Improved tools for biological sequence comparison. Proc Natl Acad Sci USA. 1988;85:2444–2448. [PMC free article] [PubMed]
• Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ. Basil local alignment search tool. J Mol Biol. 1990;215:403–410. [PubMed]
• Altschul SF, Madden TL, Schaffer AA, Zhang J, Zhang Z, Miller W, Lipman DJ. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucl Acids Res. 1997;25:3389–3402. [
PMC free article] [PubMed]
• Leinonen R, Diez FG, Binns D, Fleischmann W, Lopez R, Apweiler R. UniProt archive. Bioinformatics. 2004;20:3226–3227. [PubMed]
• Carugo O, Pongor S. Protein fold similarity estimated by a probabilistic approach based on C(alpha)-C(alpha) distance comparison. J Mol Biol. 2002;315:887–898. [PubMed]
• Rogen P, Fain B. Automatic classification of protein structure by using Gauss integrals. Proc Natl Acad Sci USA. 2003;100:119–124. [PMC free article] [PubMed]
• Bostick DL, Shen M, Vaisman A simple topological representation of protein structure: implications for new, fast, and robust structural classification. Proteins. 2004;56:487–501. [PubMed]
• Zotenko E, Dogan RI, Wilbur WJ, O'Leary DP, Przytycka TM. Structural footprinting in protein structure comparison: the impact of structural fragments. BMC Struct Biol. 2007;7:53. [PMC free
article] [PubMed]
• Choi IG, Kwon J, Kim SH. Local feature frequency profile: a method to measure structural similarity in proteins. Proc Natl Acad Sci USA. 2004;101:3797–3802. [PMC free article] [PubMed]
• Frishman D, Argos P. Knowledge-based protein secondary structure assignment. Proteins. 1995;23:566–579. [PubMed]
• Cruickshank DWJ. Coordinate uncertainty. In: Rossmann MG, Arnold E, editor. International Tables for Crystallography. F. Dordrecht , Kluwer Academic Publisher; 2001. pp. 403–418.
• Freedman D, Diaconis P. On the histogram as a density estimator: L2 theory. Probability Theory and Related Fields. 1081;57:453–476.
• Dowdy S, Wearden S, Chilko D. Statistics for research. Hoboken , John Wiley & Sons; 2004.
• Sam V, Tai CH, Garnier J, Gibrat JF, Lee B, Munson PJ. ROC and confusion analysis of structure comparison methods identify the main causes of divergence from manual protein classification. BMC
Bioinformatics. 2006;7:206. [PMC free article] [PubMed]
• Novotny M, Madsen D, Kleywegt GJ. Evaluation of protein fold comparison servers. Proteins. 2004;54:260–270. [PubMed]
• Orengo CA, Michie AD, Jones S, Jones DT, Swindells MB, Thornton JM. CATH--a hierarchical classification of protein domain structures. Structure. 1997;5:1093–1108. [PubMed]
• Website of Department of Biomolecular Structural Chemistry http://www.univie.ac.at/biochem/
Articles from BMC Research Notes are provided here courtesy of BioMed Central
• PubMed
PubMed citations for these articles
Your browsing activity is empty.
Activity recording is turned off.
See more... | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2535597/?tool=pubmed","timestamp":"2014-04-19T07:10:54Z","content_type":null,"content_length":"77052","record_id":"<urn:uuid:bf84308e-b519-4191-872f-a1413b38a74e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determining whether a numeric value has at most two decimal places
I was working on a .NET website and wanted to write some C# code to validate that a value input by the user was numeric and had at most two decimal places. This is useful, for example, when
validating that the input represents a dollar amount.
I first tried this, which didn't work:
try { // must be numeric value double d = double.Parse(s); // max of two decimal places if (100 * d != (int)(100 * d)) // max of two decimal places
return false;
return true;
} catch { return false; }
The above is unreliable because, since d is a floating-point number, 100 * d isn't always exactly equal to (int)(100 * d), even when d has two or fewer decimal places. For example, 100 * 1.23 might
evaluate to, say, 122.9999999.
This post on
offers several solutions, but none of them looked right for my purpose. Instead, I came up with this:
try { // must be numeric value double d = double.Parse(s); // max of two decimal places
if (s.IndexOf(".") >= 0)
if (s.Length > s.IndexOf(".") + 3)
return false;
return true; catch { return false; }
The same thing could be accomplished using a regular expression, if you prefer. | {"url":"http://progblog10.blogspot.com/2011/04/determining-whether-numeric-value-has.html","timestamp":"2014-04-16T10:09:55Z","content_type":null,"content_length":"80717","record_id":"<urn:uuid:432d6b4e-676b-4104-b95e-cbe23f676290>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proof Sorter - the Square Root of 2 Is Irrational
Copyright © University of Cambridge. All rights reserved.
'Proof Sorter - the Square Root of 2 Is Irrational' printed from http://nrich.maths.org/
Full screen version
This text is usually replaced by the Flash movie.
This method of proof can easily be generalised to prove that $\sqrt n$ is irrational when $n$ is not a square number .
What is the length of the diagonal of a square with sides of length 2?
How do we find the value of $\sqrt 2$?
What number has 2 as its square?
What is the side of a square which has area 2?
Now $(1.4)^2=1.96$, so the number $\sqrt 2$ is roughly $1.4$. To get a better approximation divide $2$ by $1.4$ giving about $1.428$, and take the average of $1.4$ and $1.428$ to get $1.414$.
Repeating this process, $2\div 1.414 \approx 1.41443$ so $2\approx 1.414 \times 1.41443$, and the average of these gives the next approximation $1.414215$. We can continue this process indefinitely
getting better approximations but never finding the square root exactly.
If $\sqrt 2$ were a rational number, that is if it could be written as a fraction $p/q$ where $p$ and $q$ are integers, then we could find the exact value. The proof sorter shows that this number is
IRRATIONAL so we cannot find an exact value. | {"url":"http://nrich.maths.org/1404/index?nomenu=1","timestamp":"2014-04-19T22:26:40Z","content_type":null,"content_length":"5143","record_id":"<urn:uuid:2b101be8-dd4b-428b-a4fc-b175f6c8a567>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
How do I manipulate infinitesimals? How do I know what assumption is valid?
May 21st 2010, 06:14 PM
How do I manipulate small finite increments? How do I know what assumption is valid?
I know certain things about small finite increments, like how the general formula for dy/dx is derived, but I have no idea how to manipulate the infinitesimals. The assumptions made seems
arbitrary to me, how do I know that it is ok to assume one thing, but wrong to assume in case of others? I mean I can understand that this follows from that assumption and etc, but what made the
assumptions valid I have no idea. Can you please explain to me the concept of delta x, or give me links to some resources so that I can understand at least well enough to get sums like these done
from scratch: (Nod) (Nod)
May 21st 2010, 10:29 PM
I know certain things about infinitesimals, like how the general formula for dy/dx is derived, but I have no idea how to manipulate the infinitesimals. The assumptions made seems arbitrary to me,
how do I know that it is ok to assume one thing, but wrong to assume in case of others? I mean I can understand that this follows from that assumption and etc, but what made the assumptions valid
I have no idea. Can you please explain to me the concept of delta x, or give me links to some resources so that I can understand at least well enough to get sums like these done from scratch:
(Nod) (Nod)
There are no infinitesimals in your attachments there are small finite increments (that is what $\delta x$ denotes) which are subject to all the usual rules of arithmetic and common algebra and
there are limiting processes converting sums with finite increments to integrals as increments become arbitrary small.
May 21st 2010, 11:22 PM
There are no infinitesimals in your attachments there are small finite increments (that is what $\delta x$ denotes) which are subject to all the usual rules of arithmetic and common algebra and
there are limiting processes converting sums with finite increments to integrals as increments become arbitrary small.
Sorry, i didn't know what delta x was called :D
Ok, here is my problem, in the following diagram the height (length) of the cylinder:
My calculations: $Rcos(\theta)-Rcos(\theta+\delta \theta)$= $Rcos(\theta)-R(cos(\theta)cos(\delta \theta)+sin(\theta)sin(\delta \theta)$
$=Rsin\theta*\delta\theta$ (assumptions $cos\delta\theta$becomes 1, $sin\delta\theta$ becomes $\delta \theta$
But the book says that the height is R\delta \theta, what am I doing wrong here?
May 22nd 2010, 01:43 AM
Sorry, i didn't know what delta x was called :D
Ok, here is my problem, in the following diagram the height (length) of the cylinder:
My calculations: $Rcos(\theta)-Rcos(\theta+\delta \theta)$= $Rcos(\theta)-R(cos(\theta)cos(\delta \theta)+sin(\theta)sin(\delta \theta)$
$=Rsin\theta*\delta\theta$ (assumptions $cos\delta\theta$becomes 1, $sin\delta\theta$ becomes $\delta \theta$
But the book says that the height is R\delta \theta, what am I doing wrong here?
You are computing different things, you are computing the x extent of the ring while the book is computing the slant height of the ring.
May 22nd 2010, 08:59 AM
But a cylinder has no slant height, it is a property of cones, and the book instructs me to use elemental cylinders. How do I know that I have to calculate slant height when it tells me to use
cylinders, I calculated what I knew to be height of the cylinders!(Nod)
May 22nd 2010, 10:41 AM
You are going to be computing the mass of a slice of a spherical shell, this is approximately a part of a cone and does have a slant height (and that is what you need to calculate its volume and
hence mass.
May 22nd 2010, 11:14 PM
If I take this as part of a cone, why do I compute its mass using the formula of a cylinder's volume? And if using the formula for cylinder's volume is ok, why is it not ok to use cylinder's
height (which I calculated) instead of slant height?
May 23rd 2010, 12:37 AM
You have not posted the image file showing what you say, so my guess is that you have misconstrued the use of the area formula for a parallelogram for that of a rectangle.
May 23rd 2010, 01:12 AM
Because, for constant density, weight is equal to "density times volume". And if density is a variable you choose your "pieces" so that the density is at least approximately constant in each
piece- sum the pieces and take the limit as their size goes to 0 giving you of the form "density times dVolume".
And if using the formula for cylinder's volume is ok, why is it not ok to use cylinder's height (which I calculated) instead of slant height?
If you are talking about problem 19, a hemi-spherical shell, each band has bases of two different sizes- part of a cone, not a cylinder.
May 23rd 2010, 05:26 AM
May 23rd 2010, 05:31 AM
Because, for constant density, weight is equal to "density times volume". And if density is a variable you choose your "pieces" so that the density is at least approximately constant in each
piece- sum the pieces and take the limit as their size goes to 0 giving you of the form "density times dVolume".
If you are talking about problem 19, a hemi-spherical shell, each band has bases of two different sizes- part of a cone, not a cylinder.
How do I learn to make a valid assumption for similar sums? Can you give me some website links etc?
May 23rd 2010, 07:21 PM
June 1st 2010, 01:52 PM
It was the question, not my own drawing :D
June 1st 2010, 03:52 PM | {"url":"http://mathhelpforum.com/calculus/145911-how-do-i-manipulate-infinitesimals-how-do-i-know-what-assumption-valid-print.html","timestamp":"2014-04-16T04:30:03Z","content_type":null,"content_length":"23367","record_id":"<urn:uuid:78dd6833-a309-4eb1-8797-388b715ec9ed>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
line integral, vector calculus
Evaluate the line integral
Force field is the integral in the form of integrand ( (2x dx+ 2y dy + 2 zdz)/r^2).
the domain of integral is C,
C = C1 + C2. C1 is the line segment from (1; 2; 5) to (2; 3; 3). C2,arc
of the circle with radius 2 and centre (2; 3; 1) in the plane x = 2. The arc has initial point
(2; 3; 3) and terminal point (2; 1; 1). The abbreviations r = <x,y,z> and |r| = r
3. The attempt at a solution
I tried the integral with curve one, since the curve one from (1; 2; 5) to (2; 3; 3) is a line segment the function of the following curve I computed with the equation of line r(t)= (1-t)ro + t(r1)
where 0<= t <= 1
ro is (1; 2; 5),
r1 is (2; 3; 3),
i was able to find the r (t) and then took the line integral using the force field above,
Now, I have a problem with the C2 which happens to be an arc, i do know that i can parametrize this C2 with respect to t by saying r(t) = <a cost + x0, a sin t + yo, z0>
but not sure how to get the domain t as i am unable to draw this arc by hand. I would really appreciate if some one tell me how to find out the domain of t in C2 or how to draw it
Sorry for the long post,
Thanks in advance, | {"url":"http://www.physicsforums.com/showthread.php?t=381610","timestamp":"2014-04-21T09:56:19Z","content_type":null,"content_length":"23204","record_id":"<urn:uuid:916724d4-5181-4c61-963d-18953d5567c6>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00629-ip-10-147-4-33.ec2.internal.warc.gz"} |
Conversion Using Unit Analysis ( Read ) | Measurement
Have you ever wondered how much of one unit is found in another unit? Sound confusing? Take a look.
Jeff made 4 liters of lemonade for a party. He is wondering how many milliliters there are in his lemonade.
Do you know how to figure this out?
Using proportions and unit analysis is the way to solve this problem. This Concept will teach you what you need to know.
Proportions can help us convert from one unit of measure to another. For example, suppose you needed to convert from liters to milliliters.
A pitcher holds 4 liters of water. Determine how many milliliters of water the pitcher holds. Use the unit conversion: 1 liter = 1000 milliliters
First, let's set up a proportion to solve this problem.
The first ratio can use the unit conversion and compare liters to milliliters. The second ratio can compare 4 liters to the unknown number of milliliters, $n$
$\frac{liters}{milliliters} = \frac{1}{1000} \qquad \quad \frac{liters}{milliliters} = \frac{4}{n}$
These are equivalent ratios, so we can use them to write a proportion.
$\frac{1}{1000} = \frac{4}{n}$
Consider which strategy to use. Should we solve for $d$
The relationship between the terms in the numerators is easy to see––we can multiply 1 by 4 to get 4. So, the computation will probably be simpler if we use proportional reasoning and multiply both
terms of the first ratio by 4.
$\frac{1}{1000} = \frac{1 \times 4}{1000 \times 4} = \frac{4}{4000} = \frac{4}{n}$
From the work above, we can see that when the first term is 4, the second term is 4000. So, $n = 4000$
The pitcher holds 4000 milliliters of water.
Another strategy for solving a problem like the one with the pitcher is to use unit analysis. In unit analysis, we write ratios as fractions, just as we do when we write a proportion. However, in
unit analysis, we do not want the terms in the fractions to be consistent. Instead, the units in the fractions are written so that certain units cancel one another out.
This will be easier to understand if we consider an example. Let’s go back to the last example with the liters and the pitcher. We can use unit analysis to solve it.
The problem requires us to convert 4 liters to milliliters. Our answer should be in milliliters, so the number of milliliters is unknown.
The measure we are given is 4 liters.
We know that 1 liter (L) = 1000 milliliters (mL). This can be expressed as either $\frac{1L}{1000mL}$$\frac{1000mL}{1L}$conversion factor by which we might multiply 4 liters.
We should start by writing 4 liters as a fraction over 1. We can do this because $4L = \frac{4L}{1}$
We want our answer to be in milliliters, not in liters. So, we want the liters to cancel each other out. Since liters is in the numerator of the fraction above, we should make sure that liters is in
the denominator of the conversion factor we use, like this:
$\frac{4L}{1} \times \frac{1000 mL}{1L}$
Since liters appears in the numerator of one factor and in the denominator of another factor, we can cancel them out, like this:
$\frac{4 \bcancel{L}}{1} \times \frac{1000mL}{1 \bcancel{L}}$
Now we can multiply what is left as we would multiply any fractions.
$\frac{4}{1} \times \frac{1000mL}{1} = \frac{4 \times 1000 mL}{1 \times 1} = \frac{4000mL}{1} = 4000 mL$
The pitcher holds 4000 milliliters of water.
Now it's your turn to try a few conversions.
Example A
How many milliliters in 2.5 liters? Write a proportion and solve.
Solution: 2500 mL
Example B
How many meters in 11 kilometers? Write a proportion and solve.
Solution: 11,000 meters
Example C
How many inches are there in 18 feet? Use a proportion and solve.
Solution: 216 inches
Here is the original problem once again.
Jeff made 4 liters of lemonade for a party. He is wondering how many milliliters there are in his lemonade.
Do you know how to figure this out?
To figure this out, we can start with the unit scale from liters to milliliters.
This means that there are 1000 milliliters in 1 liter.
Now we can set up a proportion.
Cross multiply and solve.
There are 4000 mL in the lemonade container.
Unit Analysis
comparing the number of units by using fractions and canceling out common values.
Scaling Strategy
looking at a common number of units to figure out a best buy. You may compare the amount in two different objects to a common unit and then figure out the best value based on the comparison.
Guided Practice
Here is one for you to try on your own.
Arnaldo needs to buy olive oil. He could buy a 15-ounce bottle of Brand A olive oil for $3, or he could buy a 20-ounce bottle of Brand B olive oil for $5. Which is the better buy?
One way to solve this problem is to find the unit price for each bottle.
Find the unit price for the 15-ounce bottle. Remember, you can find the unit price by dividing the first term by the second term.
$&\ 3 \ \text{for} \ 15 \ oz = \frac{\3}{15oz} && \overset{ \ \ \0.20}{15 \overline{ ) {\3.00 \;}}}\\&&& \quad \underline{-30\;\;\;\;}\\&&& \qquad \ \ \ \ 0\\&&& \qquad \ \ \underline{-0\;\;}\\&&& \
qquad \quad \ \ 0$
Find the unit price for the 20-ounce bottle.
$&\ 5 \ \text{for} \ 20 \ oz = \frac{\5}{20oz} && \overset{ \ \ \0.25}{20 \overline{ ) {\5.00 \;}}}\\&&& \quad \ \underline{-40\;\;\;\;}\\&&& \qquad \ 100\\&&& \quad \ \ \underline{-100\;\;}\\&&& \
qquad \quad \ 0$
Since $0.20 < $0.25, the 15-ounce bottle of Brand A olive oil has the cheaper unit price and is the better buy.
Finding the unit rate is not the only strategy we could have used to solve a problem like the one in the last example. Instead of determining the unit rate, we could have imagined buying several
bottles of each brand until we had the same number of ounces of Brand A oil as Brand B oil. Then we could have compared those costs. This strategy is known as the scaling strategy. Take a look at the
next problem.
Find a common number of ounces for both brands.
The first few multiples of 15 are: 15, 30, 45, 60, and 75.
The first few multiples of 20 are: 20, 40, 60, 80, and 100.
The least common multiple of 15 and 20 is 60. So, we can find the cost of buying 60 ounces of each brand of oil.
A 15-ounce bottle of Brand A oil costs $3.
$15 \times 4 = 60$$\frac{ounces}{price} = \frac{15}{3} = \frac{15 \times 4}{3 \times 4} = \frac{60}{12}$
The cost of 60 ounces of Brand A oil (four 15-ounce bottles of oil) is $12.
A 20-ounce bottle of Brand B oil costs $5.
$20 \times 3 = 60$$\frac{ounces}{price} = \frac{20}{5} = \frac{20 \times 3}{5 \times 3} = \frac{60}{15}$
The cost of 60 ounces of Brand B oil (three 20-ounce bottles of oil) is $15.
Since $12 < $15, it would cost less to buy 60 ounces of Brand A olive oil than to buy 60 ounces of Brand B olive oil. So, the 15-ounce bottle of Brand A olive oil is the better buy.
Directions: Use unit analysis to solve each problem.
1. How many feet in 1 mile?
2. How many feet in 18.5 miles?
3. How many milliliters in 3.75 liters?
4. How many milliliters in 18.25 liters?
5. How many pounds in 3 tons?
6. How many pounds in 2.5 tons?
7. How many pounds in 4.75 tons?
8. How many feet in 18 yards?
9. How many inches in 4 feet?
10. How many inches in 8.75 feet?
11. How many milliliters in 29.5 liters?
Directions: Solve each problem.
12. Fred needs to buy vanilla extract to bake a cake. He could buy a 4-ounce bottle of vanilla extract for $8, or a 6-ounce bottle of vanilla extra for $15. Which bottle is the better buy?
13. A rope is 3 yards long. How many inches long is the rope? Use these unit conversions: 1 yard = 3 feet and 1 foot = 12 inches.
14. At the farmer's market, Maureen can buy 6 ears of corn for $3. At that price, how much would it cost to buy 9 ears of corn?
15. James bought a 128-ounce bottle of apple juice. How many pints of apple juice did James buy? Use these unit conversions: 1 cup = 8 fluid ounces and 1 pint = 2 cups. | {"url":"http://www.ck12.org/measurement/Conversion-Using-Unit-Analysis/lesson/Conversion-Using-Unit-Analysis/","timestamp":"2014-04-19T13:35:11Z","content_type":null,"content_length":"114528","record_id":"<urn:uuid:5eee64ef-9642-471a-975a-ea1e59017203>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
David W. Deley
• Research & Development
• Computer Science & Programming
• Technical Writer
• Systems Design Einstein the plumber
• INTP Rational (the “Knowledge Seeking” personality)
How We Understand and Reason About Abstract Concepts
The New 21st-Century Paradigm Shift
Surprisingly, the answer is, "No."
“It is hard to underestimate how far the idea that concepts are physically embodied, using the sensory motor system of the brain, is from disembodied Enlightenment reason—from the usual view of
concepts as disembodied abstractions, entirely separate from the sensory motor system.”—George Lakoff, professor of Cognitive Science, UC Berkeley
A Picture Tour of Santa Barbara, California, from a long time resident's point of view. Get the inside story of what really happens in Santa Barbara.
Newage music cross reference.
Find more of the newage music you like.
For people who love newage music
THE PENTIUM DIVISION FLAW.
A computer which can't do math? After two million original Intel Pentium(TM) processors were sold in the first year, it was discovered they gave incorrect results in rare cases. It is quite
interesting to study the details behind the flaw, as it brings together many aspects of computer science. Includes a detailed description of the radix 4 SRT division algorithm used by the Pentium.
How do computers, which are completely deterministic machines, generate random numbers?
Advanced Kindle Formatting
a big book I recently published.
Brief Introduction to Unicode
a little booklet I recently published.
Understanding what a page fault is can sometimes be crucial to writing efficient code. An overview of virtual memory, page faults, and array addressing in code is presented.
How cyclic redundancy codes work
Article describing how Cyclic Redundancy Codes work
EDX Spelling Checker
Fast embedded spelling checker code.
A low battery alert for laptop computers. (free!)
Find Next Prime
A routine for finding the next highest prime given a starting number. Useful for making hash tables. Also routines for generating prime numbers and for finding the prime factors of a given number.
VMS file organizations
Explanation of all the Record Management Services (RMS) internal file organizations and how files are stored on disks.
Analyze System Crash
An example analysis of a system crash using the Symbolic Dump Analyzer.
Optimize Function
Given a scalar function F, which is a function of several scalar variables {x(1),x(2),...,x(n)}, find the values of {x(1),x(2),...,x(n)} which MINIMIZE the value of function F. Code provides a
choice of the Conjugate Direction Method of Fletcher and Reeves (CDM), or POWELL's method.
Artificial Pancreas System
I coded the first version of an Artificial Pancreas System in C# in October of 2011 while working for the University of California Santa Barbara (UCSB) and the Sansum Diebetic Research Institute.
An example of a digital feedback control system. Intense mathematics.
I was chief editor of the Multi-Edit 2006 User's Manual
PROF. DELEY TALKS ABOUT...
│ ││
♠ My List of Fun Sites to visit
♣ Spritzy Web Page For People Who Are Sensory Stimulation Deprived
For those who feel this page isn't fancy enough and yearn to see the greatest most complex gaudiest graphical page ever I made this one just for you.
You're mad. Bonkers. Off your head.
But I'll tell you a secret:
all the best people are.
—Alice, to the Mad Hatter
What is to give light must endure burning.
—Anton Wildgans^1
[How to break out of frames]
[Site Map]
I hand coded this website using HTML and Multi-Edit.
Today's homepage number is:
Last Update: February 22, 2014
All documents at this site (
) are Copyright © David W. Deley. Permission is granted to use them freely as long as the original author is cited. (Commercial companies should contact me to make sure.)
1. Toward the end of the visit, Einstein asked Leo about his profession. “I'm a plumber,” he said. “Good,” replied Einstein, “Can you tell me where the bathroom is?” | {"url":"http://www.daviddeley.com/","timestamp":"2014-04-21T04:32:10Z","content_type":null,"content_length":"32743","record_id":"<urn:uuid:366004b7-3d9f-4729-8cd0-b96aac31809b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving Systems of Linear Equations by Substitution Examples Page 1
We want to solve one equation for either y or x. Since y is all by itself (has a coefficient of 1) in the second equation, that's the easiest variable to solve for. We like our variables like we like
our cheese: easy.
We solve the second equation for y and get y = 7 – 2x.
Next, we can plug this in for y in the first equation:
Now we have an equation with only an x, so we solve for x. First, distribute the 3 to get
2x + 21 – 6x = 5.
Simplify to get 16 = 4x, then simplify further to find x = 4.
We have half our point, but we'd still like the other half. Don't rag on us...we won't let it ruin our appetite. We still need to find y. We already solved for y using the second equation, so we know
that y = 7 – 2x.
Therefore, if x = 4,
We feel fairly certain that the solution to the system of equations is (4, -1). However, it would be better to feel completely certain. We can accomplish that glorious feeling by making sure this
solution works in both equations.
Is the point (4, -1) a solution to the equation 2x + 3y = 5?
When x = 4 and y = -1, the left-hand side of this equation is
2(4) + 3(-1) = 8 – 3,
which is indeed 5.
Is the point (4, -1) a solution to the equation y + 2x = 7?
When x = 4 and y = -1, the left-hand side of this equation is
(-1) + 2(4) = -1 + 8,
which is indeed 7.
These values work in both equations, so the final solution to the system of equations is (4, -1). It better get itself a work visa, because this bad boy is working everywhere. | {"url":"http://www.shmoop.com/linear-equation-systems/substituting-linear-equation-systems-examples.html","timestamp":"2014-04-19T09:42:48Z","content_type":null,"content_length":"39093","record_id":"<urn:uuid:1767b22c-ef85-418d-bfa2-2a6498862d8f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00080-ip-10-147-4-33.ec2.internal.warc.gz"} |
DOCUMENTA MATHEMATICA, Vol. 3 (1998), 261-272
DOCUMENTA MATHEMATICA
, Vol. 3 (1998), 261-272
Michael Scheutzow, Heinrich v. Weizsäcker
Which Moments of a Logarithmic Derivative Imply Quasiinvariance?
In many special contexts quasiinvariance of a measure under a one-parameter group of transformations has been established. A remarkable classical general result of A.V. Skorokhod \cite{Skorokhod74}
states that a measure $\mu$ on a Hilbert space is quasiinvariant in a given direction if it has a logarithmic derivative $\beta$ in this direction such that $e^{a|\beta|}$ is $\mu$-integrable for
some $a > 0$. In this note we use the techniques of \cite{Smolyanov-Weizsaecker93} to extend this result to general one-parameter families of measures and moreover we give a complete characterization
of all functions $\psi:[0,\infty) \rightarrow [0,\infty)$ for which the integrability of $\psi(|\beta|)$ implies quasiinvariance of $\mu$. If $\psi$ is convex then a necessary and sufficient
condition is that $\log \psi(x)/{x^2}$ is not integrable at $ \infty$.
1991 Mathematics Subject Classification: 26 A 12, 28 C 20, 60 G 30
Full text: dvi.gz 23 k, dvi 53 k, ps.gz 77 k.
Home Page of DOCUMENTA MATHEMATICA | {"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/DMJDMV/vol-03/09.html","timestamp":"2014-04-21T09:39:16Z","content_type":null,"content_length":"1859","record_id":"<urn:uuid:936cbac8-e293-4a74-953a-817955b60000>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
Big O notation... can you check my answers for each code segment?
September 18th, 2013, 05:16 PM #1
Join Date
Mar 2013
Thanked 0 Times in 0 Posts
int sum = 0;
for (int i = 0; i < n; i += 2)
sum++; //I said this is O(log n) because if n=10 then this loop executes half the time. Not sure though. Or is it O(n)?
int sum = 0;
for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++)
sum++; // I said O(n) for this one because it executes 2n times which is therefore O(n).
int sum = 0;
for (int i = 0; i < n; i++)
for (int j = 0; j < n[SUP]2[/SUP]; j++)
sum++; // my answer is O(n[SUP]3[/SUP])
int sum = 0;
for (int i = 0; i < n; i++)
for (int j = 0; j < 2*i; j++)
sum++; //I think this is O(n[SUP]2[/SUP])
int sum = 0;
for (int i = 0; i < n; i++)
for (int j = 0; j < n2; j++)
for (int k = 0; k < j; j++)
sum++; //I think this is O(n[SUP]3[/SUP])
int sum = 0;
for (int i = 1; i <= n; i = i[SUP]2[/SUP])
sum++; //and finally I think this one is O(log n)
Could you correct me? I want to see how well I know this.
1 is wrong. Hint: write out the expression for how many times the loop executes.
3 I'm assuming you meant the inner loop to be 0 to n*n (a.k.a. n squared)? If so, yes it is n cubed.
5 I think you have a typo in the actual code. Perhaps you meant this?
int sum = 0;
for (int i = 0; i < n; i++)
for (int j = 0; j < n2; j++)
for (int k = 0; k < j; k++)
If so, you're not quite right because n2 does not have to equal n. Both terms should appear in the big O, though O(n^3) is not a bad approximation if n2 is approximately the same as n.
The Following User Says Thank You to helloworld922 For This Useful Post:
EDale (September 18th, 2013)
So for #1 if n=10 then the loop executes 5 times. So if it's not O(log n) then it must be O(n) because the number of times it executes is proportional to n? Is that right?
#3 yes that's what I meant, I guess the code didn't work! Thank you.
#5 so then it is O(n^2)? I'm a little lost, however thank you!
The reasoning for 1 is a bit convoluted. It is O(n) because you are running it half the time: O(n/2). But big-O drops the constant factors, so we're left with O(n)
#5 is not O(n^2). O(n^3) isn't a bad approximation, but it's a somewhat lazy answer. Read my hints about including n2 and n in the big-O (unless there were two typos in the original code).
O(log n) would basically be if each iteration you are dividing the remaining elements in half (i.e.- traversing a binary search tree). So an example of something that would look like O(log n) in
a for loop could be:
for(int i = 1; i <= n; i = i * 2)
//some code
if n was 32, i would progress as follows: 1, 2, 4, 8, 16, 32. Notice if there were 32 elements, it only took until the 5th iteration to get to 32. log (base 2) 32 = 5 or 2^5 = 32. Hence it is O
(log n). Now this could change depending on what was in place of //some code. For instance, inside the loop, if you looked through the entire set of n elements (or had the potential to look
through them all, say in a linear search) then you could say O(n log n). This was a very basic explanation but hopefully it makes at least O(log n) a little easier to understand.
September 18th, 2013, 06:38 PM #2
Super Moderator
Join Date
Jun 2009
Thanked 619 Times in 561 Posts
Blog Entries
September 18th, 2013, 06:49 PM #3
Join Date
Mar 2013
Thanked 0 Times in 0 Posts
September 18th, 2013, 07:20 PM #4
Super Moderator
Join Date
Jun 2009
Thanked 619 Times in 561 Posts
Blog Entries
October 1st, 2013, 01:31 AM #5
Junior Member
Join Date
Oct 2013
Thanked 0 Times in 0 Posts | {"url":"http://www.javaprogrammingforums.com/algorithms-recursion/31673-big-o-notation-can-you-check-my-answers-each-code-segment.html","timestamp":"2014-04-16T11:38:10Z","content_type":null,"content_length":"79171","record_id":"<urn:uuid:c50523b1-afc3-4480-88af-ec65657813f9>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00575-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating MPG from VSS and MAF from OBD2
11-06-2007, 03:22 PM #12
10-31-2007, 04:26 PM #11
I use LtFt in the equation from something I saw here a few years ago, but running a few old OBD2 logs shows Lightner's code works quite well and faster with only 2 PIDs. It's a little sensitive
when my MAF reads < 0.10 once in a while, so filtering that out helps. I wouldn't display it directly though. Average it out over several seconds using a circular buffer. I actually use a 3
second and 5 minute average to show more realistic numbers. I've also decided stoplights are bad for our economy.
okay now how do you do it in a car without all those bells and whistles? (say a 63 Fairlane with a 221ci V-8 engine.)...some function of airflow to rpm?...
There are a number of senders that you can place inline. I don't have links handy, but the last time I went shopping for this, I found several options between $100-200 for kit built airplanes and
marine applications. The aircraft ones are a little more expensive because they are designed not to be able to restrict fuel flow in the event of a failure. The one I spec'd sent a single pulse
for every so many units of fuel that went through and cost about $180
I'd be going this route with my project if I hadn't found out about the OBD port. That only cost me $35 for the translator.
Good luck,
actual fuel consumption
as many have posted already, the formulas given here are for stoichiometric fuel ratios. Unfortunately, modern cars don't run at stoichiometric fuel ratios. They run as lean as possible, just on
the cusp of lean missing when cruising. They run rich anywhere from half-throttle to WOT. They automatically compensate for sub-standard fuels and fuel additives that affect octane.
My suggestion is, use the formula for calculating stoichiometric fuel consumption, but use the LTFT (Long Term Fuel Trim) to find what the vehicle is actually using. LTFT is given in +/- %
(percentage). An example of the formula to use is
MPG = (14.7 * (1 + LTFT/100) * 6.17 * 454 * VSS * 0.621371) / (3600 * MAF / 100)
MPG = 710.7 * VSS / MAF * (1 + LTFT/100)
This formula uses the car's fuel trim value to compensate for the difference in actual AFR and stoichiometric AFR, however I'm not 100% sure that the LTFT value is % difference from
stoichiometric. If anybody has any ideas as to how correct my thoughts are, please respond.
I like this idea, and even older vehicles (like E30 BMWs) used it to calculate the MPG for the on-board computer, whilst also taking a speed reading from the speedometer (which was pulsed from
the diff rather than a mechanical cable)
How would you measure pulse width though, if not available via OBD? wouldn't you have to build your own circuit?
I like this idea and the logic seems sound. I'm going to adjust my formula's and see how things work out. I'm using a calculated adjustment percentage at the moment because my calculations were
coming out too optimistic. (eg. 50mpg when it's really 20mpg)
Since stoich is 0, and it varies +- from 0 I'm pretty sure it is % difference from stoich. I've got a inline-4 so just the bank 1 measurement should be needed.
98' Honda CR-V
*OBDMPG, the RR OBD Plugin*
Yep, I'd agree. It's more accurate to use LTFT. Just one is fine unless your engine has problems. I use a little different math, but here are some ideas.
x = something like 30 in raw form. You'll notice when you let off on the gas, the MAF rate can drop fast and shows 500 MPG or more. I just skip showing it until it gets back to normal.
And LTFT must be positive for this math to work. So...
if(MAF < x ) return
if(LTFT < 0.0) LTFT = -LTFT
MPG = 710.7 * VSS / MAF * (1 + LTFT/100)
Modern cars circa '96 do run at stiochiometric as soon as possible...
as many have posted already, the formulas given here are for stoichiometric fuel ratios. Unfortunately, modern cars don't run at stoichiometric fuel ratios. They run as lean as possible, just on
the cusp of lean missing when cruising. They run rich anywhere from half-throttle to WOT. They automatically compensate for sub-standard fuels and fuel additives that affect octane.
My suggestion is, use the formula for calculating stoichiometric fuel consumption, but use the LTFT (Long Term Fuel Trim) to find what the vehicle is actually using. LTFT is given in +/- %
(percentage). An example of the formula to use is
MPG = (14.7 * (1 + LTFT/100) * 6.17 * 454 * VSS * 0.621371) / (3600 * MAF / 100)
MPG = 710.7 * VSS / MAF * (1 + LTFT/100)
This formula uses the car's fuel trim value to compensate for the difference in actual AFR and stoichiometric AFR, however I'm not 100% sure that the LTFT value is % difference from
stoichiometric. If anybody has any ideas as to how correct my thoughts are, please respond.
and as much as possible because this represents the ideal mixture of air/fuel that produces the exhaust gases best processed by the 3-way converters.
Under non-open loop operation (cold start, hard acceleration, closed throttle coast down) the controller can go open loop and then can feed teh engine nearly any air/fuel mixture it wants.
All fuel consumption calculations are going to be imprecise unless one knows the weight the gas being consumed to a precise value. Gas weight varies due to differences in blends that arise from
octane differences, or gas targeted for a season or a region that has some botique blend requirement and temperature to name a few factors that I can think of. There could be more.
But fuel consumption based on air consumption is probably good enough for all but labs needs.
So if you happened to have your A/F ratio from say.. a wideband O2 sensor, you could plug that in and get an even more accurate guesstimate right?
Yep, I'd agree. It's more accurate to use LTFT. Just one is fine unless your engine has problems. I use a little different math, but here are some ideas.
x = something like 30 in raw form. You'll notice when you let off on the gas, the MAF rate can drop fast and shows 500 MPG or more. I just skip showing it until it gets back to normal.
And LTFT must be positive for this math to work. So...
if(MAF < x ) return
if(LTFT < 0.0) LTFT = -LTFT
MPG = 710.7 * VSS / MAF * (1 + LTFT/100)
LTFT does not have to be positive to work correctly. I convert the % LTFT value to a decimal value to multiply the result by. This means if the computer is using 5% more fuel than what hard-coded
fuel tables tells it to use, then the formula compensates by adding 5% to the result. Similarly, if the computer is using 5% less than what the fuel tables say to use, the formula subtract 5%.
Notice that when LTFT if +5%, the formula multiplies by 1.05, and when the LTFT is -5%, it multiplies by 0.95.
There's only two issues I'm having with the formula I came up with. The first is that LTFT is really only the difference in pulse width of the injectors. This means it also compensates for
different tolerance levels in the injectors output themselves. For example, a partially clogged injector will be given a positive LTFT to compensate for the smaller amount of fuel it outputs as
compared to the other injectors.
Which brings me to the second problem: Modern vehicles are starting to have LTFT's for each individual cylinder, which throws this formula out the window because it only pays attention to one
over-all LTFT. I suppose you could take each available LTFT (whether it's 2 banks or 8 banks), and find the average, which should theoretically find the overall LTFT, but then again, LTFT could
also be compensating for defective injectors or any other number of factors.
Another method to directly figure the amount of fuel used is to use a home-made injector-tester kit. You could create a microcontroller that outputs 25% PW, 50% PW, 75% PW, and 100% PW to each
individual injector for an arbitrary amount of time, say 10 seconds for each level. Use a calibrated test tube to measure the volume of fuel dispensed by each fuel injector, then graph the
results. Find a formula that closely fits the curve of the graph, and use that to interpolate fuel amounts of each injector with the PCM's give PW.
One problem I see with this method, however, is that the fuel rail is under different amounts of pressure at various times, and the vacuum in the intake is different at various times. However,
the fuel pressure is changed proportionately to the amount of vacuum. This is because as vacuum increases, it has a tendency to "pull" the fuel out of the injectors, and when vacuum is minimal,
it takes more pressure to "push" the same amount of fuel out. The idea is to even it out so the computer doesn't have to do so much work and computations to get the right fuel mixture. Hopefully,
this idea means that the varying pressures and vacuums shouldn't matter.
Anywho, if anybody has any ideas to further these lines of thoughts, please post a reply
06-03-2008, 09:02 AM #13
Join Date
Jun 2008
08-07-2008, 01:02 AM #14
Join Date
Aug 2004
08-07-2008, 09:06 AM #15
08-07-2008, 12:54 PM #16
Variable Bitrate
Auto Apps:loading...
Join Date
Nov 2004
Blog Entries
08-07-2008, 03:07 PM #17
08-07-2008, 03:58 PM #18
Join Date
Jul 2008
08-07-2008, 04:50 PM #19
North of the land of Hey Huns
Auto Apps:loading...
Join Date
Jun 2004
Westminster, MD
08-08-2008, 01:29 AM #20
Join Date
Aug 2004 | {"url":"http://www.mp3car.com/engine-management-obd-ii-engine-diagnostics-etc/75138-calculating-mpg-from-vss-and-maf-from-obd2-2.html","timestamp":"2014-04-16T13:26:46Z","content_type":null,"content_length":"100786","record_id":"<urn:uuid:b27a9299-80cb-4825-9836-a0d3dd4967ca>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
Measurement Session 10, Grades 3-5: Classroom Case Studies
Solutions for Session 10, Part C
See solutions for Problems: C1 | C2 | C3 | C4
Problem C1
Using 24 square tiles, you can make four distinct rectangles: 1 by 24; 2 by 12; 3 by 8; and 4 by 6. Even though the lengths and widths differ from rectangle to rectangle, their product is always
the same, and equal to the area of the rectangles -- 24. Each of these rectangles represents an array of tiles with particular dimensions. If we count the tiles in each array, we get the area of
the rectangle. The rule for finding the area of any rectangle, given its length and width, is length multiplied by width.
Answers to Questions:
a. This problem prompts students to derive the formula for area of a rectangle. It also delves into their understanding of multiplication by having them explore the relationship between
multiplication and determining area.
b. This problem builds on students' prior experiences with multiplication, area, and patterns. It also lays the groundwork for the kind of pattern recognition and generalization students will use
in algebra.
c. The concept of area and the derivation of its formula were extensively explored in the course. Different arrangements of tiles also draw upon ideas such as conservation of area.
d. How do you know you've found all the combinations? Given an area and one dimension of a rectangle, find the missing dimension.
e. Use this pattern approach for deriving the area formula for parallelograms. In this case, it would be helpful to have students work on grid or graph paper so that they could count side length
and area. Students could also work on finding other "formulas" that involve multiplication; for example, finding the total number of students when given a class size and number of classes, or
finding the number of feet or toes when given a set number of people.
<< back to Problem C1
Problem C2
The measurements aren't the same because the students may have used different measuring tools and techniques. Also, physically measuring an object is likely to produce some degree of measurement
error. The measurement process is, by its nature, never exact. Precision is affected by the measuring tool. The smaller the unit on a measuring tool, the more precise it is.
Answers to Questions:
a. The measurement content of this lesson is the idea that measurements are approximations and that differences in units affect precision.
b. The idea of measurement error forms the basis for the study of standard deviation. This problem builds on students' prior experiences with measurement units, measuring tools, and decimals.
c. This problem relates to one of the big ideas of the course, namely that measurement is an approximation. Also, concepts such as measurement error, precision, and accuracy are evident in this
type of problem.
d. How do you decide at what point a measurement is inaccurate? In other words, how much error is acceptable? How important is measurement precision in different contexts (i.e., building a bridge
or cutting a piece of wrapping paper to wrap a box)?
e. Have students measure time and discuss the accuracy of the measurements. Using stopwatches or wristwatches with a stopwatch function, have students try to record a set length of time. Using
another watch or clock to track the time, tell the students to "start" their stopwatches. Fifteen seconds later, say "stop" to have the students stop their watches. Students should then write
down the time, as precisely as possible, on their watch (e.g., 00:15:09 or 00:15:13 or 00:14:97). Theoretically, all the students should have recorded the same length of time. Their times,
however, will likely vary because measurement is an estimate. Discuss why measurements aren't the same, if they are accurate, and what affects precision.
<< back to Problem C2
Problem C3
The number of centimeter cubes that will fit into a 4-by-6-by-2 box is 48. The number of 1-by-2-by-2 cubes that will fit into this box is 24. Answers will vary for the type of strategy that can be
used to predict how many packages of a particular size will fit into the box. One strategy is to first see how many centimeter cubes are needed to cover the base of the box (4-by-6 rectangle), which
is 24. Then, because the height of the box will be 2, multiply the number of cubes in the base by 2 to get 48. Similarly, you can find out how many 1-by-1-by-2 cubes will cover the base of the box.
Since the height of the box is the same as one dimension of the 1-by-1-by-2 cube, you can place the cubes vertically to determine how many will cover the base of the larger box, and in the process,
entirely fill the space that will be encompassed by the folded box. Answers will vary on what kinds of approaches help determine the number of cubes in a box. One method is to multiply all three
dimensions of a box to determine the number of centimeter cubes that will fit in that box
Answers to Questions:
a. The measurement content of this problem is the determination of the volume of a rectangular prism. The big ideas include understanding volume, cubic units, moving from two dimensions to three
dimensions, and developing a formula for finding the volume of a rectangular prism.
b. This problem builds on students' prior experiences with three-dimensional figures and discovering formulas (as in Problem C1). It also leads to a whole host of packing problems. Filling a box
with packages of different dimensions (for example, 1 by 1 by 2) is harder than filling it with unit cubes, because you may not be able to fill the box completely, depending on how you arrange
the packages. This can help students develop number sense. Students can also try packing more complicated shapes, such as spheres, into rectangular boxes. Now that they cannot fill the space
completely, what is optimal?
c. The concepts of volume and the derivation of its formula were covered in this course. Understanding the relationship between two-dimensional representations and their three-dimensional
counterparts was also a part of this course.
d. Many students will not "see" the formula with only two examples. This activity may instead lead them to compare volume and surface area of a solid. Most students will need many more exercises
just like these (giving dimensions, cutting out a net, predicting volume, and checking volume based on given dimensions and a net, etc.) before they can generalize a formula. As they attempt
more examples, it may be helpful for students to keep their data organized in a table. For students who already have a good grasp of volume of a solid, consider the following questions: Why
does a 1-by-1-by-2 package completely fill the box? Are there other smaller-sized packages that will completely fill the box? Why or why not?
e. A certain toy company makes sets of children's blocks. The blocks are 1 in. cubes. The company is looking for a rectangular box that will hold a set of 64 blocks with no leftover space. Design
a box for the company. Explain why yours is the best design. Additionally, students could explore the relationship between surface area and volume of a rectangular prism by generating all
possible rectangular prisms with a given volume.
<< back to Problem C3
Problem C4
Depending on the triangles drawn, the measurements of the particular angles will vary (with the exception of the equilateral triangle, whose angles will all measure 60 degrees). The sum of the
angles for every triangle is 180 degrees, and this will remain true for any triangle. Testing more triangles will demonstrate this further, but it is not a proof.
Answers to Questions:
a. The measurement content of this problem is the discovery of the fact that the sum of the interior angles of a triangle is 180 degrees.
b. This problem builds on students' prior experiences finding patterns, identifying various types of triangles, and measuring angles. It lays the groundwork for using this (sum of interior angles
equals 180 degrees) and other attributes of triangles, as well as angle relationships, to determine angle measures without using measuring tools such as protractors. The patterning nature of
this problem continues to build a strong generalizing foundation.
c. The concepts of measurement of angles and the sum of angles in a triangle were discussed in this course.
d. Students could apply what they now know about the sum of the angles in a triangle to find missing angle measures without measuring. For example, given two angle measures in a triangle, find the
measure of the third angle.
e. Have students use geometry software or pattern blocks to determine the sum of interior angles of other regular polygons, such as squares and hexagons. Students can cover a new shape with
triangles and use the fact that the sum of interior angles in a triangle is always 180 degrees to determine the sum of the angles in the new shape. If the students use pattern blocks, they can
also find each angle of a regular polygon by placing several of that same type of polygon side by side around a point until the blocks fill up 360 degrees. The students can then divide 360 by
the number of polygons around that point to obtain the measure of each angle. For example, it takes three hexagons meeting at a single point to complete 360 degrees. When you divide 360 degrees
by 3, you get 120 degrees, which is the measure of each interior angle of a regular hexagon.
<< back to Problem C4 | {"url":"http://learner.org/courses/learningmath/measurement/session10/35solutions_c.html","timestamp":"2014-04-16T13:08:02Z","content_type":null,"content_length":"67003","record_id":"<urn:uuid:764c363c-13dd-404f-b7e8-6c6152f7518a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Acoustic Metrics, and the OPERA neutrino result
Replies: 5 Last Post: Feb 16, 2012 11:15 AM
Messages: [ Previous | Next ]
united Re: Acoustic Metrics, and the OPERA neutrino result
Posted: Feb 15, 2012 8:59 PM
Posts: 4
Registered: 12/7/04 On 14/02/2012 18:54, Rock Brentwood wrote:
> But how do you propose to do that? It would be a lot like getting
> partisans on the Conservative and Liberal side of the aisle to quietly
> shack up together.
Well, I think they may have accidentally already done it ... or at
least, "rogue elements" may have already done a key part of it.
I don't think that the relativity community appreciated just how much
leeway the quantum gravity guys were given to explore alternative
structures. They were given a wider remit, on the understanding that it
was acknowledged that some of the systems that the QG guys would be
working on were strictly temporary, and that they were therefore able to
be explored and mapped out even if they didn't necessarily agree with
special relativity.
Acoustic metrics are fascinating (IMO), but nobody from the core
relativity community was able to study them (or even ask the questions
that led to them), for about half a century because of the deeply-held
belief that anything that didn't agree with SR was automatically not
credible, and not worth investigating, let alone publishing. It was a
I mean, it was /literally/ a non-subject, there was even a slightly
self-serving clause in Misner, Thorne and Wheeler's "Gravitation" that
went as far as defining a metric theory as being "a theory that had a
metric and reduced to special relativity".
... So a metric theory founded on an acoustic metric supposedly /wasn't/
a metric theory. It wasn't anything. If you thought "rigorously" using
MTW definitions, then the very /idea/ of an acoustic metric theory was
literally "unthinkable", in the Orwellian sense. The language was too
customised to support the current system to allow even the concept of a
serious alternative, and an entire field of math and physics research
was eliminated by our hacking the definitions to make alternative
approaches impossible "by definition".
... until the late 1990s.
What the guys studying acoustic metrics did with their "acoustic black
hole" models was to study how the statistical behaviour of Hawking
radiation seemed to agree with the statistical behaviours of a system in
which special relativity's rules weren't the ones in operation. With an
"acoustic" black hole, the horizon isn't a starting point, a limit that
you draw on your map that nothing can cross. It's an "emergent" end
result. It's more like the effect that you get when you use a laser to
project a line onto the surface of a rippling lake. You never get to see
a "naked singularity" section of line with an exposed dead end, but you
/do/ get loops of line that constantly bud off from the main line and
return. The geometry of the line-boundary appears to fluctuate acausally
according to the description produced by the projection, but there's an
additional dimension at work, and that the horizon represented by the
line isn't a fundamental barrier, but a cross-section between a
projected limit and classical wave behaviour.
The projected line shows discontinuously-disconnecting and -reconnecting
loops, but the physics of the water surface is entirely classical. The
"quantum" behaviour is "visibly" real, but you're essentially dealing
with projective artefacts.
It's like when you see someone walking behind a tree and reemerging at
the other side ... it's not that they're disappearing from the universe
and reemerging from "nowhere" (although that's how it appears in a 2D
projected view of the situation), it's more that we simply don't have a
complete set of data, and don't get to see what behind the tree. We can
deduce what's behind the tree by assuming a deeper continuity, or by
making use of indirect signalling, which in QM terms counts as making
use of virtual particles, which again brings us back to the language
that QM adopts for Hawking radiation, in describing particles whose
presence can only be sensed indirectly.
The significance here of the "acoustic horizon" description is that it
allows the horizon surface to fluctuate in response to events that occur
behind it (the unseen person behind the tree is capable of shaking the
tree, so that we see the shaking). Information leaks out.
It's also analogous (if you want to take a big cross-subject leap) to
the behaviour of a cosmological horizon. A "cosmological" horizon is
noisy, leaky, and emits the analogue of Hawking radiation entirely
classically. If an object nominally behind the horizon accelerates
towards us, it can warp the local geometry and make the effective
horizon surface "jump" to a new location that's behind it. In terms of a
GR1915-style geometrical description, the object has discontinuously
"tunnelled" from behind the horizon to in front of it, but in the
cosmological case, it's actually the effective horizon's position that
has jumped backwards to behind the object, rather then the object
physically jumping forwards.
So cosmological horizons are actually acoustic horizons, and obey the
rules of acoustic metrics rather than those of SR-based theory. If we
can then use topology to map the laws of cosmological horizons onto
gravitational problems, then we get the same sort of description of
leaky, fluctuating limits, and a classical explanation of how objects
with gravitational horizons can give off indirect radiation.
This didn't get done because we were originally convinced that black
holes /couldn't/ radiate, because this violated GR1916. If we unified
the descriptions of "cosmological" curved spacetime and "gravitational"
curved spacetime, we'd have had to say that there was a classical
mechanism for indirect radiation from black holes, which couldn't be
explained by the current version of general relativity ... and we
couldn't modify general relativity to include that behaviour without
then invalidating special relativity's concept of how a lightspeed
barrier was supposed to work, which GR kinda inherited though the
proviso that gravitational physics was supposed to reduce to SR physics.
SR's lightspeed barrier, like GE1916's gravitational horizon, is
prescriptive rather than emergent. To allow gravitational horizons to
wobble and "warp", and leak information classically from behind r=2M
meant that we'd have to also allow the SR-style lightspeed barrier
(which could be described as a form of horizon) to wobble and warp, to
allow physically-accelerated particles to be able to advance forwards of
the main wavefront, due to local warpage produced by the acceleration.
And that behaviour appears to correspond with what now /seems/ to have
been spotted at OPERA.
To me, the "cosmological" case provides an apparently unavoidable
example of "acoustic metric" physics appearing to be physically correct
in real life. The question then seems to me to be: does the universe
support multiple types of physics with different laws and rules for
different types of curvature ... or does it have a single set of
geometrical rules that apply everywhere? If we're applying Occam's Razor
ruthlessly, we could argue that if acoustic metrics apply in cosmology,
and if quantum mechanics' descriptions of black holes seem to be
indistinguishable from what we'd expect if the same "acoustic" laws
applied to gravitation, then perhaps we really do have a single set of
laws for both situations, which then have to apply to the sorts of
problems that we typically try to describe using special relativity, too.
Cosmology provides a theoretical proof-of-concept that this logic
appears to be physically real, quantum mechanics appears to prevent us
from arguing that black holes don't obey the same "acoustic" laws, and
the OPERA result ... maybe ... could turn out to be the third corner of
our triangle, showing the same "acoustic" behaviour in particle physics.
Eric Baird
Date Subject Author
2/12/12 Acoustic Metrics, and the OPERA neutrino result united
2/12/12 Re: Acoustic Metrics, and the OPERA neutrino result Androcles
2/14/12 Re: Acoustic Metrics, and the OPERA neutrino result Rock Brentwood
2/14/12 Re: Acoustic Metrics, and the OPERA neutrino result Rock Brentwood
2/15/12 Re: Acoustic Metrics, and the OPERA neutrino result united
2/16/12 Re: Acoustic Metrics, and the OPERA neutrino result Shmuel (Seymour J.) Metz | {"url":"http://mathforum.org/kb/message.jspa?messageID=7669306","timestamp":"2014-04-17T18:47:21Z","content_type":null,"content_length":"30156","record_id":"<urn:uuid:0142427d-29ad-49fa-8245-8708ca0af8c2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
David Chudzicki's Blog
Alternate title:
I'm a big fan of the Fruchterman & Reingold graph embedding algorithm.
Recently I created
a site
for trying to predict the pattern of colors in pooled knitting.
Next I wanted to be able to predict the
of a knitted object. I'll explain more in a minute, but first here are some examples (click on the images for a gif movie):
A hat:
Some flat knitting (doesn't look flat to you? I'll explain in a minute):
So, what's going on here? I've done two things:
1. Construct an abstract graph corresponding to something someone might knit
2. Embed that graph in 3D (determine where each of the points of the graph should go)
The tricky part is (2), although, as you'll see, I didn't do any of the work on that myself. I was thinking that I should approach the embedding by simulating each vertex as acted on by two forces:
• Attractive: Applies to each pair of vertices connected by an edge, tries to pull them close to each other like a spring.
• Repulsive: Applies to each pair of vertices (regardless of whether they are connected by an edge), pushes them apart. Primarily a close-range force, infinitely strong in the limit as the points
get very close together.
As it turns out, this is a very popular algorithm from Fruchterman & Reingold for graph embeddings (already implemented in iGraph). The
classic paper
is all about embeddings in 2D, but there's no reason not to do this in 3D. In their algorithm, the attractive force increases as distance^2 (unlike springs), and the repulsive force increases as (1/
So, I just have to construct graphs that correspond to things you might knit. To illustrate the idea, here's a very simple example:
And the code (after I wrote the appropriate methods) to generate it:
g = KnittedGraph() #Create new "KnittedGraph" object
g.AddStitches(5) #Add five stitches
g.NewRow() #Create a new row
g.FinishRow() #Finish the new row
This creates the graph, adds five stitches, starts a new row, and then finishes that row. The "flat knitting" above was created just like this: "cast on" 50 stitches, and then knit 20 rows back and
forth. Why didn't it lie flat, though? The real answer: Forcing it to lie flat isn't what would make sense! The real knitting (just like a real "flat") piece of paper has the option to bend in one
direction at a time. The algorithm has a certain distance apart that it wants each pair of linked vertices to be, and the object doesn't have to lie flat to achieve that.
Extending this a little, I can add an "increase" (two stitches connected to the same stitch in the previous row):
g = KnittedGraph() #Create new "KnittedGraph" object
g.AddStitches(5) #Add five stitches
g.NewRow() #Create a new row
g.FinishRow() #Finish the new row
g.Increase() #Add an "increase" (one stitch connected to two in the previous row)
We can also knit "in the round" by knitting a row and then connecting the current stitch to the first stitch:
g = KnittedGraph()
g.ConnectToZero() #Connect the current stitch to the original stitch
g.AddStitches(16) #Knit two more rows (by adding 16 stitches)
Here's an example of the same thing (but larger -- 20 circular rows of 50 stitches each) embedded in 3d:
As a final example, let's try to make something with constant negative curvature (
a pseudo-sphere
). Geeky knitters often do this for fun by knitting in a circle (as above) and increasing the number of stitches at a constant rate:
The way I like to think about this mathematically, negative curvature for a surface means that small discs of a given radius have more volume than on a flat plane (and the opposite for positive
curvature). Think of squashing a saddle flat and having extra material, vs. squashing a hemisphere flat and not having enough.
And here's mine:
Here's my code on GitHub.
(Including a modified version of
this code
for plotting in 3D. Thanks Michael!)
[Edit: Using Ubuntu 10.04.3 LTS & Python 2.6.5, I installed python-iGraph with:
sudo aptitude install python-igraph
Apparently that's not working in Ubuntu 11.10. See
this G+ thread
for more. Thanks, Shae.]
10 comments:
1. Very cool- can you combine the shape and the color information?
2. Yeah, I don't think that would be too hard. I'm also thinking about maybe a nice web front-end (like the color thing) for non-techie users to enter their own patterns.
3. Here is the ppa link for igraph supported by the core team, https://launchpad.net/~igraph/+archive/ppa. Just go there and follow the instructions and your igraph sadness should disappear. --
Peter G. Marshall
4. I bet Ravelry.com would love if you could build a friendly UX on top of this.
5. Work that up to take KnitML input? :)
6. Nice idea! I'd never heard of KnitML.
It looks like someone has done that, probably better than mine, too.
7. Are you assuming a certain stitch pattern like garter stitch? If it were stockinette, I would expect the edges to roll, which would be really interesting to model.
8. It's not really sophisticated enough to take that sort of thing into account. I guess for that maybe I would have to specify that certain edges in the graph like to be at certain angles with each
9. The hyperbolic item pictured is best done in crotchet, as the pseudo-sphere pictured appears to be. Its easy to do in crotchet as the needle is only in contact with the last stitch at any given
time. If knitted, the continuous enlargening of the row being worked on would require circular needles, and would eventually get too big to fit onto any circular needle. In knitting, the stitches
of a row are all in continuous contact with the needles. If using a stitch 1 add 1 formula the object would quickly become unmanageable. The wonderful Hyperbolic Coral Reef project is done in
10. Yeah, you're right. | {"url":"http://blog.davidchudzicki.com/2011/11/simulated-knitting.html","timestamp":"2014-04-19T11:59:04Z","content_type":null,"content_length":"90125","record_id":"<urn:uuid:8617c7bf-43f9-4652-b9f3-9c7ce73ec478>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Please help! if-then-else structure, ass
I'm new to C++ and currently enrolled in an intro class. Due to some uncontrollable events I have fallen behind in the class. Would appreciate help with the following:
CSIT 575 – Programming Fundamentals for Computer Science Lab #3A
To learn to code, compile and run a program containing SELECTION structures
A. Plan and code a C++ program utilizing the if-then-else structure to solve the following:
B. Write an interactive C++ program to determine the day of the week for any given date from 1900-2099. Following are the steps you should use:
C. Input the date in 3 separate parts, month, day and 4-digit year. Error check to make sure the month is between 1 and 12, the day between 1 and 31, and the year between 1900-2099. If there is an
error, identify the type of error and stop the program.
D. If the data is good, divide the last two digits of the year by 4. Store the quotient (ignoring the remainder) in Total. For example, for 1983, divide 83 by 4 and store 20 in Total.
E. Add the last 2 digits of the year to Total.
F. Add the two digits of the day of the month to Total.
G. Using the following table, find the "value of the month" and add it to Total.
January = 1 May = 2 September = 6
February = 4 June = 5 October = 1
March = 4 July = 0 November = 4
April = 0 August = 3 December = 6
H. If the year is 2000 or later, add 6. If the year is 2000 exactly and the month is January or February, subtract 1.
I. Then, determine if the year is a leap year. A leap year is divisible by 4 but not divisible by 100. If it is a leap year and the month is January or February, subtract 1. (The year 2000 is NOT a
leap year).
J. Find the remainder when the Total is divided by 7. Use the remainder to find the day:
1= Sunday 3 = Tuesday 5 = Thursday
2 = Monday 4 = Wednesday 6 = Friday
0 = Saturday
Your three initials, the month, day, and 4-digit year of the date to analyze.
All of the input fields, a message if the data is OK and the day of the week of the date. Output the date and the appropriate error message if the data is bad.
Turn in
Program listing and program output.
Test today’s date, your birthdate, and the dates your instructor gives you and error conditions.
Do you have a specific question or did you just wanted to share the assignment?
There are plenty of tutorials on the net, and here on this site to get you started.
Go documentation and then C++ language tutorial. I suggest you try first and then post your problems
Was stressed out because I'm playing catch up. I'll see what I can come up with and post back here when Ive figured something out or if Im having trouble.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/81023/","timestamp":"2014-04-18T18:22:03Z","content_type":null,"content_length":"9781","record_id":"<urn:uuid:7303cc31-ecc5-4c63-9885-38764fea0ec2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
FS Master's Mathematics Education
Master's in Mathematics Education
You have learned so much from teaching mathematics—about mathematics, about students, about yourself as a teacher. Are you ready to take the next step as a mathematics teacher? Can you picture
yourself becoming a leader of mathematics teachers? If so, challenge yourself by enrolling in the Master of Arts in Education (MAEd) in Mathematics Education.
The MAEd in Mathematics Education has something for the mathematics teacher at any level. Participants choose to concentrate at the elementary, middle grades, or high school level. Regardless of the
level, students take mathematics courses for that concentration level as well as courses beyond the concentration level. Elementary and middle grades mathematics programs are offered in a totally
distance education format. The high school concentration is offered in conjunction with the Department of Mathematics and although the mathematics courses are offered on campus, all other course work
is done online.
The 36-hour MAEd is typically completed in two years by part-time graduate students who are full-time teachers. Faculty work hard to help students develop into a professional network of mathematics
educators and many graduates use the master’s degree work as a springboard to National Board Certification. Specific mathematics education courses are offered online during the academic year while
MAEd students are teaching. This allows participants to use their classroom as a laboratory for testing ideas, piloting assessment items, and performing action research.
Click on a grade level link below to learn more.
Elementary, Middle, High School | {"url":"http://www.ecu.edu/cs-educ/msite/Math/FSmastersmatheducation.cfm","timestamp":"2014-04-20T03:39:53Z","content_type":null,"content_length":"28365","record_id":"<urn:uuid:d0f4cdda-1e15-4edc-abd0-91ab95ba850d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
The case for affliction (WARNING: lots of words and numbers)
Back when Burning Crusade first came out, a lot of theorycraft shot around the warlock forums regarding the best DPS spec, and the math behind those assertions. The math was done using numbers close
to 1k shadow damage, and conclusions were reached, cookie cutter specs were settled into and all was right with the world. Now the warlock forums are overflowing with "OMG WUTS TEH BEST DSP SPECZ"
threads, and the answers are starting to get choppy.
Here's the conclusion I have reached, given my own experience, REAL math, and real raiding scenarios.
0/21/40 is the superior DPS spec for a warlock who has reached hit cap. Period. End of story.
Looking through threads on theorycrafting, I usually see numbers close to 2000 damage per second for Destruction, and I see numbers close to 1300 damage per second for Affliction. These numbers are
crap, there is utterly no basis for asserting that ANY DPS class is capable of putting out 2000 damage every second for a sustained period of time. To assert this is moronic.
Debunking bad numbers:
My actual experience with destruction shows that over an entire raid, I do about 1100 sustained damage per second. With 1100 shadow damage, 30-33% crit and hit cap, my average shadowbolt hits for
around 2600. In order to pull down 2000 Damage/second, your average shadowbolt, a 2.5 second cast, would need to hit for 5,000. Show me a warlock who does this consistently, and I'll show you a
warlock who has a private server.
In order for affliction to produce 1400 Damage/second, the 4 DoTs you have to rotate and keep up would have to hit INCREDIBLY hard. Your DoTs and a shadowbolt would need to produce an average 3
second "tick" of 4200 combined. Even with some very generous +damage estimates, your UA would tick for 600, agony would average 480, corruption would be 600 again, and immolate would be around 400.
These are very generous estimates when you factor in partial resists and latency, and they require about as much +damage as you can possibly get end-game. What's that you say? I forgot about
Shadowbolts?! How dare I do such a thing... An affliction warlock with 1500 shadow damage would see an average shadowbolt hit for around 2000 and given the emphasis on +damage over crit, you wouldn't
really see a huge boost with additional crit. (which is why SM/Ruin builds suffer end-game)
An ideal 18 second starting cycle for an affliction lock would look like this:
1 Immolate for 1.5 seconds (2600)
1 UA for 2 seconds (3200)
1 Corruption for 1.5 seconds (3400)
1 Dark Pact for 1.5 seconds (0)
2 shadowbolts for 5 seconds (4000)
1 Immolate for 1.5 seconds (1300 due to only seeing 6 seconds till we hit 18)
2 Shadowbolts (1 crit) for 5 seconds (5000)
1 Nightfall proc for 1.5 seconds (2000)
So, given these generous estimates with immaculate timing, and not counting our initial 1.5 second cast for immolate, we see about 14k damage go out. This scenario is very generous and even includes
a nightfall proc, and works out to 775 damage put out per second.
So, now that we have established that real numbers do not get posted on the lock forums, let's talk real-world average cases.
My current gear swap-outs allow two sets of stats while hit capped:
1100 +shadow damage, 30-33% crit and max hit for destro.
1250 +shadowdamage, 18-20% crit and max hit for affliction.
I do 650-700 DPS as affliction and 1100 DPS as destruction. I have *decent* gear for
both specs, and know how to play both, one is clearly better for damage output. (By a wide margin)
So why is affliction still used in BT/Hyjal?
Affliction is still conidered viable in BT/Hyjal for three reasons:
1)Malediction- This talent takes curse of shadows/elements from 10% to 13% and is seen as a huge boon to shadowpriests and mages.
2)Shadow Embrace- This talent reduces all damage done by whatever has a DoT on it by 5% and is considered invaluable for fights with big spike damage, and fairly valuable for all other scenarios.
3)Self-sufficiency- The affliction lock rarely lifetaps, is incredibly mobile and can heal themselves quite well.
The reality is this:
1)In order for Malediction to profit the overall raid, that 3% boost to CoS/Elements would need to total 500 damage per second. given an average scenario of 2 shadowpriests, 3 mages and 2 other
warlocks, your other damage classes would need to pump out nearly 17,000 damage per second, or startling 2500 DPS EACH. Malediction also returns 3% more mana/health via the s-priests, but it works
out to a negligible amount that can easily be replaced by spellsurge.
2)Reducing all damage a mob does by 5% sounds like a great idea, right? 5% of 8000 is 400, the reality is that your healers still have to heal 95% of whatever that mob does...
Given a raid with 14 DPS players, and an average of 750DPS per person (once again, generous) you end up with a total of 10,500DPS. If you were to bump that number up by 500 DPS you would find that
the mobs die 5% faster anyway, and your healers still heal 5% less, have a lower chance of going OOM and your tank dying anyway.
3)Mobility and self-sustaining power are a benefit on a lot of fights, but just not worth the damage decrease of 50-freaking-percent. I could wait 1.5 seconds in between each of my shadowbolts and
still put up better numbers than affliction. My priest buddy really doesn't think that tossing me a renew is a burden, and just one allows me to tap to full every time.
In summation, when using real numbers, affliction doesn't compete, and you can argue versatility 'till you're blue in the face. For my money, a mob that dies faster dies better.
1 comment:
Lung said...
TLDR! (kidding) | {"url":"http://shadowboltspam.blogspot.com/2008/03/case-for-affliction-warning-lots-of.html","timestamp":"2014-04-18T05:40:19Z","content_type":null,"content_length":"38118","record_id":"<urn:uuid:49a15e9a-9023-4bb5-a596-1f2245438c70>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00079-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is a Mandelbrot Set?
The diagram to the left of this text is a “Mandelbrot Set” named after mathematician Benoit Mandelbrot. This mathematical feedback loop illustrates the “profound connections between fractal
geometry and chaos theory,” according to Mandelbrot. As one of Mandelbrot’s most famous illustrations, the Mandelbrot set is created from fractal geometry, or the study of roughness and
Mandelbrot cites many applications for fractal geometry including data compression techniques, brain wave analysis, design of radio attennae, fiber optics, anatomy and yes—global finance as well.
Upon reviewing the Gaussian models which underpin modern financial theory, Mandelbrot proposed that markets and pricing do not follow smooth bell curve distributions. Instead, Mandelbrot proposed a
better model to describe mathematical laws of chance—fractals. This theory—still hotly contested even after the recent financial meltdown in 2008—holds that markets and prices are not independent,
prices leap (are not continuous), and investors are not rational. Instead, Mandelbrot says we need to, “assume the market is not efficient, instead a wild and complex shape.”
What does any of this have to do with marketing?
Marketers who seek to understand customer behavior by using current and historical data and statistical analysis would be wise to remember the following takeaways from Mandelbrot’s research:
1. Correlation does not equate to causation (a major mistake made by marketing and finance executives alike)
2. A mathematical forecast can never be fully accurate and precise as there are too many (actually infinite) variables to consider that affect the forecast
3. Relying solely on a mathematical forecast or model for decisioning leads to certain death (just ask LTCM, Magnetar Capital, Bear Stearns, AIG et al)
4. Markets and customers are extremely complicated (good luck figuring them out!)
5. Acts of one person, team, company do not exist in a vacuum and can have large repercussions on the whole
6. Customers are not “coin flips” meaning that their actions are not independent of each other and indeed customers have long memories
7. Complexity is often birthed from simplicity (simple rules build complex structures)
8. Power laws exist and are prevalent in markets. This means that marketing executives should always be thinking about the next Black Swan on the horizon. We may not be able to predict exactly what
it is or when it happens, but we can do our best to prepare.
9. Over reliance on statistical techniques (such as root mean square) for forecasting is dangerous. My favorite Mandelbrot quote; “Almost all of our statistical tools are obsolete—least squares,
spectral analysis, all our established theory, closed distributions.”
10. There is no such thing as normal
4 thoughts on “What is a Mandelbrot Set?”
1. Love it. Every bit of the 10 points. Now I need to translate this for some of my clients and figure out what’s the best way to present it. :-)
2. Forget, Benoit, Paul. Mandelbrot are amazingly delicious Jewish cookies, almost like biscotti. :)
□ Now that’s funny Elaine! I may have to try and find some!
☆ Let me know when you’ll be in my neck of the woods, Paul, and I’ll bake you some! | {"url":"http://paulbarsch.wordpress.com/what-is-a-mandelbrot-set/","timestamp":"2014-04-21T02:18:34Z","content_type":null,"content_length":"63343","record_id":"<urn:uuid:1b58552c-c630-4e43-b20b-37c85c3c08cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Universal Gas Constant (idea)
universal gas constant
is also extremely useful in
chemical thermodynamics
, as in the equation:
delta G = delta G (STP) + R * T * ln(Q)
delta G: the change in free energy of a reaction
delta G (STP): the known change in free energy at standard temperature and pressure (25 degree celsius, 1 atm).
R: universal gas constant
T: absolute temperature
Q: reaction quotient
Since G is in joules/mole, T is in Kelvin, and Q is in ignored log units, the first expression of R (8.31451 J/mol*k) is used when the other quantities are in SI units. | {"url":"http://everything2.com/user/quijote/writeups/Universal+Gas+Constant","timestamp":"2014-04-19T20:47:53Z","content_type":null,"content_length":"20520","record_id":"<urn:uuid:63e8946c-c1ed-4bc7-93b6-d35fc0d632ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00458-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Simplify ln((e^(3)/e^(x))
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50ce8a1de4b0031882dc8b03","timestamp":"2014-04-18T00:24:41Z","content_type":null,"content_length":"51157","record_id":"<urn:uuid:43bcb756-6909-4005-a322-abbf740ed7cf>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integration by Parts
hi MathsIsFun,
I've always taught this as
Of course it comes to the same thing, but it avoids having a double integral in the rule, so it looks easier to take in.
Also you can easily 'prove' it from the product rule of differentiation.
For choosing u and v I say look for a 'u' that gets easier/simpler when you differentiate it and a 'dv/dx' that doesn't get any more complicated when you integrate it. You could mention this when you
do the e^x times x example.
Good examples .... I especially liked the one that just gets more complicated ... good because it will happen to every student at some time and this shows what to do about it.
Also: would it be better to give the first example BEFORE saying "to help you remember" and the steps?
Your first example is a good starter. If you adopt my suggestion (i) below, then you could pose the problem, introduce what u and dv/dx might be and show how the rule can provide a solution. I would
suggest that you also differentiate the answer to demonstrate that it worked. (And include a comment that doing that is a test for any integral). Then you are reday to state the rule formally.
So, suggestions
(i) consider the alternative formulation of the rule.
(ii) maybe include after some examples a 'why does it work' bit.
(iii) add this example
After one application of the rule the problem gets no simpler.
Apply it again and you get back minus the original integral!
But, now re-arrange to make the original once more the subject and you have the solution. I just loved that example when I first met it.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=18845","timestamp":"2014-04-17T03:59:26Z","content_type":null,"content_length":"17251","record_id":"<urn:uuid:ffd672ad-4371-4c66-9f45-d481aa13d8a3>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00443-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Compare Two Independent Population Averages
You can compare numerical data for two statistical populations or groups (such as cholesterol levels in men versus women, or income levels for high school versus college grads) to test a claim about
the difference in their averages. (For example, is the difference in the population means equal to zero, indicating their means are equal?) Two independent (totally separate) random samples need to
be selected, one from each population, in order to collect the data needed for this test.
The null hypothesis is that the two population means are the same; in other words, that their difference is equal to 0. The notation for the null hypothesis is
You can also write the null hypothesis as
emphasizing the idea that their difference is equal to zero if the means are the same.
The formula for the test statistic comparing two means (under certain conditions) is:
To calculate it, do the following:
1. Calculate the sample means
are given.) Let n[1] and n[2] represent the two sample sizes (they need not be equal).
2. Find the difference between the two sample means:
Keep in mind that because
is equal to 0 if H[0] is true, it doesn’t need to be included in the numerator of the test statistic. However, if the difference they are testing is any value other than 0, you subtract that
value from x[1]-x[2] in the numerator of the test statistic.
3. Calculate the standard error using the following equation:
4. Divide your result from Step 2 by your result from Step 3.
To interpret the test statistic, add the following two steps to the list:
5. Look up your test statistic on the standard normal (Z-) distribution (see the below Z-table) and calculate the p-value.
6. Compare the p-value to your significance level, (such as 0.05). If it’s less than or equal to your significance level, reject H[0]. Otherwise, fail to reject H[0].
The conditions for using this test are that the two population standard deviations are known and either both populations have a normal distribution or both sample sizes are large enough for the
Central Limit Theorem to be applied.
For example, suppose you want to compare the absorbency of two brands of paper towels (call the brands Stats-absorbent and Sponge-o-matic). You can make this comparison by looking at the average
number of ounces each brand can absorb before being saturated. H[0] says the difference between the average absorbencies is 0 (nonexistent), and H[a] says the difference is not 0. In other words, one
brand is more absorbent than the other. Using statistical notation, you have
Here, you have no indication of which paper towel may be more absorbent, so the not-equal-to alternative is the one to use.
Suppose you select a random sample of 50 paper towels from each brand and measure the absorbency of each paper towel. Suppose the average absorbency of Stats-absorbent (x[1]) for your sample is 3
ounces, and assume the population standard deviation is 0.9 ounces. For Sponge-o-matic (x[2]), the average absorbency is 3.5 ounces according to your sample; assume the population standard deviation
is 1.2 ounces. Carry out this hypothesis test by following the 6 steps listed above:
1. Given the above information, you know
2. The difference between the sample means for (Stats-absorbent – Sponge-o-matic) is
(A negative difference simply means that the second sample mean was larger than the first.)
3. The standard error is
4. Divide the difference, –0.5, by the standard error, 0.2121, which gives you –2.36. This is your test statistic.
5. To find the p-value, look up –2.36 on the standard normal (Z-) distribution — see the above Z-table. The chance of being beyond, in this case to the left of, –2.36 is equal to 0.0091. Because H
[a] is a not-equal-to alternative, you double this percentage to get 2 ∗ 0.0091 = 0.0182, your p-value.
6. This p-value is quite a bit less than 0.05. That means you have fairly strong evidence to reject H[0].
Your conclusion is that a statistically significant difference exists between the absorbency levels of these two brands of paper towels, based on your samples. And Sponge-o-matic comes out on top,
because it has a higher average. (Stats-absorbent minus Sponge-o-matic being negative means Sponge-o-matic had the higher value.)
The temptation is to say, Well, I knew the claim that the absorbency levels were equal was wrong because one brand had a sample mean of 3.5 ounces and the other was 3.0 ounces. Why do I even need a
hypothesis test? All those numbers tell you is something about those 100 paper towels sampled. You also need to factor in variation using the standard error and the normal distribution to be able to
say something about the entire population of paper towels. | {"url":"http://www.dummies.com/how-to/content/how-to-compare-two-independent-population-averages.navId-811047.html","timestamp":"2014-04-21T02:27:06Z","content_type":null,"content_length":"58630","record_id":"<urn:uuid:bc69c6f9-31f2-4899-8a46-65644cbcc363>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lone Tree, CO Math Tutor
Find a Lone Tree, CO Math Tutor
...After you learn it, math is fun. This approach requires two things - practice and patience. The best way to learn to talk this kind of math is by doing it - so let's dive in and talk math!
18 Subjects: including geometry, Microsoft PowerPoint, discrete math, logic
My name is Logan and I'm enthusiastic about math and science. I'm patient, friendly, and easy-going while being goal-oriented, practical, and encouraging. My mission is to provide my clients with
the best tools possible to solve their own problems and succeed on their own.
13 Subjects: including calculus, elementary (k-6th), reading, study skills
...I have worked as a tutor a couple of years in high school, and worked as an office assistant in a tutor's office as an undergrad. I am also a Bible Study Teacher at my church for 3 years, and
a puppet ministry director for 1 year. As a freshman in college, I have received the Colorado Scholars ...
31 Subjects: including prealgebra, ACT Math, SAT math, algebra 1
...I have Certificate of Senior Chinese Language Teacher authenticated and awarded by IPA. I was a Chinese teacher in Shanghai, China, and also am a Chinese teacher in Denver, USA. I have working
experience of teaching various learners in different forms, including: students, company managers and ...
34 Subjects: including geometry, trigonometry, calculus, ACT Math
I have over ten years of experience teaching and tutoring at the high school and college levels. I received my bachelor's degree in Physics from Lewis and Clark College in Portland, Oregon, and
my master's degree in Physics from the University of Utah. Subjects I have taught include the following:...
11 Subjects: including algebra 1, algebra 2, calculus, geometry
Related Lone Tree, CO Tutors
Lone Tree, CO Accounting Tutors
Lone Tree, CO ACT Tutors
Lone Tree, CO Algebra Tutors
Lone Tree, CO Algebra 2 Tutors
Lone Tree, CO Calculus Tutors
Lone Tree, CO Geometry Tutors
Lone Tree, CO Math Tutors
Lone Tree, CO Prealgebra Tutors
Lone Tree, CO Precalculus Tutors
Lone Tree, CO SAT Tutors
Lone Tree, CO SAT Math Tutors
Lone Tree, CO Science Tutors
Lone Tree, CO Statistics Tutors
Lone Tree, CO Trigonometry Tutors
Nearby Cities With Math Tutor
Aurora, CO Math Tutors
Bow Mar, CO Math Tutors
Castle Rock, CO Math Tutors
Centennial, CO Math Tutors
Cherry Hills Village, CO Math Tutors
Columbine Valley, CO Math Tutors
Englewood, CO Math Tutors
Foxfield, CO Math Tutors
Glendale, CO Math Tutors
Greenwood Village, CO Math Tutors
Highlands Ranch, CO Math Tutors
Littleton, CO Math Tutors
Lonetree, CO Math Tutors
Parker, CO Math Tutors
Sheridan, CO Math Tutors | {"url":"http://www.purplemath.com/Lone_Tree_CO_Math_tutors.php","timestamp":"2014-04-16T07:26:12Z","content_type":null,"content_length":"23918","record_id":"<urn:uuid:6dad9398-1d47-4862-b5aa-592e0c0a165e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jacobis Formula
A selection of articles related to jacobis formula.
Original articles from our library related to the Jacobis Formula. See Table of Contents for further available material (downloadable resources) on Jacobis Formula.
Magick >> Qabalah
Magick >> Qabalah
Jacobis Formula is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Jacobis Formula books and related discussion.
Suggested Pdf Resources
Suggested Web Resources
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site. | {"url":"http://www.realmagick.com/jacobis-formula/","timestamp":"2014-04-16T22:31:05Z","content_type":null,"content_length":"25943","record_id":"<urn:uuid:2a09912e-55c8-4896-8339-b00027f2bdf4>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z
Department of Physics and Astronomy
PHYS 105. Foundations of Physics.
1 credit.
An introduction to the study of physics and the physics department. Presentations are given by faculty and students to acquaint the students with current research opportunities in the department and
the application of physics to broad spectrum of topics.
3 credits.
The first semester of a non-calculus sequence in general physics. Topics include principles of mechanics, thermal properties of matter, wave motion and sound. A working knowledge of algebra and
trigonometry is required.
3 credits.
The second semester of a non calculus sequence in general physics. Topics include electric charges, circuits, magnetism, optics, atomic and nuclear physics. Prerequisite: PHYS 140 with a grade of
"C-" or higher.
PHYS 140L*-150L. General Physics Laboratories.
1 credit each semester.
These laboratory courses are designed to complement and supplement the PHYS 140-150 and PHYS 240-250 lecture courses. Prerequisite or corequisite for PHYS 140L: PHYS 140 or PHYS 240. Prerequisite for
PHYS 150L: PHYS 140L and either PHYS 140 or PHYS 240. Prerequisite or corequisite for PHYS 150L: PHYS 150 or PHYS 250.
PHYS 215. Energy and the Environment.
3 credits.
Energy use, sources and trends; fossil fuels, heat-work conversions, thermodynamic restrictions and electric power production; nuclear fission reactors and fusion energy; solar energy and
technologies; alternative energy sources; energy storage; energy conservation; issues of waste and safety. Environmental, social and economic aspects will be discussed. Not open to ISAT majors
scheduled to take ISAT 212 as part of their degree requirements. Prerequisites: One college course in science and one in mathematics.
*PHYS 240. University Physics I.
3 credits.
Kinematics, dynamics, energy and momentum conservation, oscillatory motion, fluid mechanics and waves. Corequisite: MATH 232 or MATH 235.
PHYS 246. Data Acquisition and Analysis Techniques in Physics I.
1 credit.
This laboratory supplements PHYS 240 by establishing the experimental basis of physics. Topics include conception, design and performance of experiments in physics emphasizing data acquisition,
analysis of experimental data, and the handling of experimental uncertainties. Prerequisite: PHYS 240.
PHYS 247. Data Acquisition and Analysis Techniques in Physics II.
1 credit.
This laboratory completes the introductory physics lab sequence and is designed to supplement the PHYS 240 and PHYS 250 lecture courses. Topics include conception, design and performance of
sophisticated experiments in physics, computer simulation of physical processes, analysis of experimental data, including uncertainty estimation, and error propagation. Prerequisites: PHYS 250 and
PHYS 246.
PHYS 250. University Physics II.
3 credits.
Electric forces, fields and potentials; capacitance, dielectrics, resistance and DC circuits; magnetic fields, induced electric fields, inductance and AC circuits; geometrical optics, interference,
diffraction and polarization. Prerequisite: PHYS 240 with a grade of "C-" or higher. Corequisite: MATH 236.
PHYS 260. University Physics III.
4 credits.
Rotational kinematics and rotational dynamics; static equilibrium and elasticity; universal gravitation and orbital mechanics; temperature, heat, heat engines, entropy and kinetic theory; Gauss' law,
electric potential and capacitance; magnetic fields, induced electric fields and inductance; displacement current and electromagnetic waves; and the special theory of relativity. Prerequisite: "C" or
better in PHYS 250 or PHYS 150. Corequisites: MATH 237 and PHYS 247 or PHYS 150L.
PHYS/MATH 265. Introduction to Fluid Mechanics.
4 credits.
Introduces the student to the application of vector calculus to the description of fluids. The Euler equation, viscosity and the Navier-Stokes equation will be covered. Prerequisites: MATH 237 and
PHYS 260.
4 credits.
A course in modern physics, consisting of a discussion of the experimental basis for and fundamental principles of quantum physics, with applications to atomic structure and nuclear physics.
Prerequisite: PHYS 260 or permission of instructor.
PHYS/CHEM/MATS 275. An Introduction to Materials Science.
3 credits.
An introduction to materials science with emphasis on general properties of materials. Topics will include crystal structure, extended and point defects and mechanical, electrical, thermal and
magnetic properties of metals, ceramics, electronic materials, composites and organic materials. Prerequisite: CHEM 131, PHYS 150 or PHYS 250, ISAT 212 or permission of the instructor.
PHYS 295. Laboratory Apparatus Design and Construction.
1 credit.
An introduction to the design and fabrication of laboratory apparatus using machine tools. Prerequisites: PHYS 250 and permission of the instructor.
1-4 credits each semester.
Topics in physics at the second year level. May be repeated for credit when course content changes. Topics selected may dictate prerequisites. Students should consult instructor prior to enrolling
for course. Prerequisite: Permission of the instructor.
3 credits.
Physical models are used to explain biological systems. Topics from biology include cell division, replication, transcription, and translation of DNA, protein folding, and molecular motors. Physics
topics include entropy and free energy, diffusion, and statistical mechanics of two state systems. Experimental tools for biophysics are also discussed. Prerequisite: PHYS 150 or PHYS 250.
PHYS 333. Introduction to Particle Physics.
3 credits.
This is an introduction to current themes and ideas which confront the fundamental nature of matter and interactions. The most widely accepted theory, the Standard Model, will be explored. Possible
extension, beyond the Standard Model physics, will be discussed. Basic properties such as charge, mass, and lepton number will be examined within these frameworks. Experiments that illuminate the
basic nature of matter and ideas such as symmetry and quantum physics will be reviewed and assessed. Prerequisite: PHYS 270.
PHYS/MATS 337. Solid State Physics.
3 credits.
A study of the forces between atoms, crystal structure, lattice vibrations and thermal properties of solids, free electron theory of metals, band theory of solids, semiconductors and dielectrics.
Prerequisite: PHYS 270 or permission of instructor.
3 credits.
An introduction to the study of the atomic nucleus. Topics covered include static nuclear properties and movements, the force between nucleons, the deuteron, nucleon scattering, isospin, nuclear
structure, radioactivity, decay kinematics and selection rules, fission, and fusion. Prerequisite: PHYS 270.
PHYS 339. Introductory Nuclear Science.
4 credits.
An introduction to nuclear science that will provide a solid foundation for experimental work in applied nuclear physics. Detection of ionizing radiation, as it applies to nuclear physics, will be
additionally covered in the laboratory-component of the course. Topics include concepts of radioactive decays, radiation transport and interaction with matter, basics of radiation detection devices,
dosimetry, radiation therapy, X-ray production, and fission nuclear reactors. Prerequisite: PHYS 270 or permission of instructor.
3 credits.
Application of fundamental laws of mechanics to particles and rigid bodies. Topics include statics, dynamics, central forces, oscillatory motion and generalized coordinates. Prerequisites: PHYS 260
and MATH 238.
PHYS/MATH 341. Nonlinear Dynamics and Chaos.
3 credits.
Introductory study of nonlinear dynamics and chaos intended primarily for upper-level undergraduates in science or mathematics. Topics include stability, bifurcations, phase portraits, strange
attractors, fractals and selected applications of nonlinear dynamics in pure and applied science. Computers may be utilized for simulations and graphics. Prerequisites: MATH 238 and MATH 248.
3 credits.
A continuation of PHYS 340 including Lagrangian dynamics, rigid body motion and the theory of small oscillations. Prerequisite: PHYS 340.
PHYS 344. Advanced Physics Laboratory I.
1 credit.
The first course in a three-course laboratory sequence. A set of advanced laboratory experiences in which students are introduced to experimentation in several areas of physics while gaining
experience in experiment design, data analysis, formal report writing and presentations. Prerequisite: PHYS 247.
PHYS 345. Advanced Physics Laboratory II.
1 credit.
This is the second course in a three-course laboratory sequence. A set of advanced laboratory experiences in which students are introduced to experimentation in several areas of physics while gaining
experience in experiment design, data analysis, formal report writing and presentations. Prerequisite: PHYS 344.
PHYS 346. Advanced Physics Laboratory III.
1 credit.
This is the third course in a three-course laboratory sequence. A set of advanced laboratory experiences in which students are introduced to experimentation in several areas of physics while gaining
experience in experiment design, data analysis, formal report writing and presentations. Prerequisite: PHYS 345.
PHYS 350. Electricity and Magnetism.
3 credits.
A study of the electrostatic field, the magnetic field, direct and alternating currents and electromagnetic waves. Prerequisites: PHYS 260 and MATH 238.
PHYS 360. Analog Electronics (2, 4).
4 credits.
DC and AC circuits, spectral and pulse circuit response, semiconductor physics and simple amplifier and oscillator circuits. Prerequisite: PHYS 250 or permission of the instructor.
PHYS/MATH 365. Computational Fluid Mechanics.
3 credits.
Applications of computer models to the understanding of both compressible and incompressible fluid flows. Prerequisites: MATH 248, either MATH 238 or MATH 336, MATH/PHYS 265, and PHYS 340.
PHYS/MATH 366E. Computational Solid Mechanics.
3 credits.
Development and application of mathematical models and computer simulations to investigate problems in solid mechanics, with emphasis on numerical solution of associated boundary value problems.
Prerequisites: MATH/PHYS 266, MATH 238 and MATH 248, or permission of the instructor.
PHYS 371. Introductory Digital Electronics (2, 4).
2 credits.
Transistors, integrated circuits, logic families, gates, latches, decoders, multiplexers, multivibrators, counters and displays. Prerequisite: A grade of "C" in PHYS 150 or PHYS 250 or permission of
the instructor.
PHYS 372. Microcontrollers and Their Applications (2, 4).
2 credits.
Microcontrollers, their instructions, architecture and applications. Prerequisite: PHYS 371 or permission of the instructor.
PHYS 373. Interfacing Microcomputers (2, 4).
2 credits.
A study of the personal computer and its input/output bus, input/output functions, commercially available devices, proto-typing circuit boards and programs for device control. Prerequisite: PHYS 371.
PHYS 380. Thermodynamics and Statistical Mechanics.
3 credits.
A treatment of the thermal properties of matter from both macroscopic and microscopic viewpoints. Topics include the laws of thermodynamics, heat, work, internal energy, entropy, elementary
statistical concepts, ensembles, classical and quantum statistics and kinetic theory. Approximately equal attention will be given to thermodynamics and statistical mechanics. Prerequisites: PHYS 270.
PHYS/MATS 381. Materials Characterization (Lecture/Lab course).
3 credits.
A review of the common analytical techniques used in materials science related industries today, including the evaluation of electrical, optical, structural and mechanical properties. Typical
techniques may include Hall Effect, scanning probe microscopy, scanning electron microscopy, ellipsometry and x-ray diffraction. Prerequisite: PHYS/MATS 275, ISAT/MATS 431 or GEOL/MATS 395.
PHYS 386. Robots: Structure and Theory.
3 credits.
An introduction to the study of autonomous robotic platforms. Topics include robot structure, propulsion systems, robot kinematics, sensors used in robotics, and sensor integration. The course
combines lectures with laboratory activities in which students will get hands-on experience in designing, building, programming, and testing autonomous robotic platforms. Prerequisite: completion of
the basic preparation courses required for the robotics minor or permission of the instructor.
PHYS 390. Computer Applications in Physics.
3 credits.
Applications of automatic computation in the study of various physical systems. Problems are taken from mechanics of particles and continua, electromagnetism, optics, quantum physics, thermodynamics
and transport physics. Prerequisites: MATH/CS 248, PHYS 240, PHYS 250 and six additional credit hours in major courses in physics, excluding PHYS 360, PHYS 371 and PHYS 372.
1 credit per year.
Participation in the department seminar program. Prerequisites: Junior or senior standing and permission of the instructor.
1-4 credits each semester.
Topics in physics at intermediate level. May be repeated for credit when course content changes. Topics selected may dictate prerequisites. Students should consult instructor prior to enrolling for
course. Prerequisite: Permission of the instructor.
PHYS/ASTR 398. Independent Study in Physics or Astronomy.
1-3 credits, repeatable to 4 credits.
An individual project related to some aspect of physics or astronomy. Must be under the guidance of a faculty adviser. A student may not earn more than a total of four credits for PHYS/ASTR 398.
3 credits.
A study of the kinematic properties and physical nature of light including reflection, refraction, interference, diffraction, polarization, coherence and holography. Prerequisites: PHYS 260, PHYS 270
and MATH 237.
PHYS 446. Electricity and Magnetism II.
3 credits.
A continuation of PHYS 350. Emphasis will be placed on the solutions of Maxwell's equations in the presence of matter, on solving boundary-value problems and on the theory of electromagnetic
radiation. Prerequisite: PHYS 350.
PHYS/CHEM 455. Lasers and Their Applications to Physical Sciences (2, 3).
3 credits.
An introduction to both the theoretical and practical aspects of lasers and their applications in the physical sciences. Prerequisite: PHYS 270, CHEM 331 or permission of the instructor.
3 credits.
Principles and applications of quantum mechanics. Topics include wave packets and the uncertainty principle, the Schroedinger equation, one- dimensional potentials, operators and eigenvectors,
three-dimensional motion and angular momentum and the hydrogen atom. Prerequisite: PHYS 340.
PHYS 491-492. Physics Assessment and Seminar.
1 credit per year.
Principal course activities are participation in the departmental assessment program and attendance at departmental seminars. Prerequisite: PHYS 392.
PHYS 494. Internship in Physics.
1-6 credits.
Students participate in research or applied physics outside of the university. A proposal must be approved prior to registration, and a final paper will be completed. Prerequisites: Physics major
with a minimum of 12 physics credit hours and permission of the department head and the instructor.
1-4 credits each semester.
Topics in physics at the advanced level. May be repeated for credit when course content changes. Topics selected may determine prerequisites. Students should consult instructor prior to enrolling for
course. Prerequisite: Permission of the instructor.
PHYS/ASTR 498R. Undergraduate Research in Physics or Astronomy.
1-4 credits, repeatable to 6 credits.
Research in a selected area of physics as arranged with a faculty research adviser. A student may not earn more than a total of six credits for PHYS/ASTR 498R. Prerequisite: Proposal for study must
be approved prior to registration.
6 credits, (Year course, 3 credits each semester).
Participation in this course must be approved during the second semester of the junior year. | {"url":"http://www.jmu.edu/catalog/13/courses/PHYS.html","timestamp":"2014-04-20T08:15:23Z","content_type":null,"content_length":"30263","record_id":"<urn:uuid:98772abf-05cd-46bd-915a-86606bf67f35>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bloomfield, NJ
Tenafly, NJ 07670
Math Made Easy & Computer Skills Too
...I have an undergraduate degree in Mathematics and Master's and PhD degrees in Computer Science. I have been a user of spreadsheet programs like Excel for 30 years. I can help you with similarity
and congruency of triangles and other shapes, the Pythagorean Theorem,...
Offering 10+ subjects including algebra 1, algebra 2 and geometry | {"url":"http://www.wyzant.com/Bloomfield_NJ_Maths_tutors.aspx","timestamp":"2014-04-19T23:57:31Z","content_type":null,"content_length":"61122","record_id":"<urn:uuid:ba4887a1-1ac0-4200-87e4-ebf202aa540b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
array min value
I have a task where I need to find the minimum number in the array. and function must start with this :
void min (float table [], int size, float & value, int & index)
I do not know how to tackle this.help.
Do you have any code at all?
You will need a variable that stores the "minimum found so far" as you look through the array. Imagine you have an array with the numbers [7, 3, 10, 2].
Look at the first number (7), since it is the first number, make it the minimum so far.
Look at the second number (3), is it less than the minimum so far? Yes, so make it the minimum so far.
Look at the third number (10), is it less than the minimum so far? No.
Look at the fourth number (2), is it less than the minimum so far? Yes, so make it the minimum so far.
Now you've reached the end of the array and you have the minimum. You will also need to keep track of the index of the "minimum so far" as you follow the steps above. Now just translate this process
into code.
i think the best way is you try yourself first and post it here, then the good people of this site will help you further. if you want solutions to your home assignments then you are better off going
to a site where you can rentacoder.
as PP said, you need first to think of the algorithm, here's a rephrase
you have an array of values which you want to traverse
int table[size]
you need an index value to check each element in a
int index // 0..size-1, in C/C++ arrays start at index 0
so you need to go through the array looking for the smallest value
how do you know it is the smallest value? by always storing the smallest value encountered in a variable
before you start, you set smallestvalue to some very large value, then you check array element
loop index = 0 to size-1
if table[index] < smallestvalue then smallestvalue = table[index]
end loop
now you return the value by assigning the 'value' which was passed by reference and thus reflects modifications to it
value = smallestvalue
Just one thing you should consider... if you set the smallest value to "some very large value" as the above poster suggested, your function may not work for all possible inputs. Imagine you set the
smallest value to 1,000,000.. well if the array only has values over 1,000,000, your answer will end up incorrect. The best way to handle this is to treat the first value in the array as the smallest
before the loop begin.
Last edited on
yep i agree that would be the better deal in a real solution, or maxint from limits.h but i think here the difficulty is on another level...
Last edited on
Very helpful guys.Thanks.
I'll try to write a code now and post it here, than we'll see...
int size;
something is wrong in the above lines
you should store the minimum in a varible and then out put it outside the loop.
*hint: u need to deciede somehing(if statement)
I'm assuming this problem was assigned to you, in which case I think you have a small misunderstanding about what the function is supposed to do. From the function definition you gave, it sounds like
your function needs to take an array of values and return (via the parameters value and index) the minimum and the index of the minimum. The function that you wrote does not take a table of values,
but rather assumes the table is empty and gets input from the user to fill the table. Unless the problem specifically tells you to get input for the table entries inside the function, you should
already have the table filled with values once it is passed to the function. Essentially the loop you wrote to get input for the table should be done in your main, not in the function that finds the
Also, as for your question about array size limitations, there is no easy way to deal with this without using a different data structure than a normal array. Using a vector or some kind of linked
list would work best for storing dynamic amounts of data. Because you are dealing with arrays though, you will just have to allocate a lot of space for the table and hope the size the user wants
doesn't exceed it.
Yes i make size=5, cuz i don't like to enter 100× but array is still 100 if i understand right.? i have 100 indices but using just 5 of them, but value in other indices (95) is 0.
I'm confused to with that val. but it's work and i don't know whay if value is unknown..?
nvm then i m confused to...
i m a newb
It's starting value is really low because you don't initialize it to anything...therefore you will never get anything lower then it. Inside of your min function you need to set it to be equal to the
first element of the array.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/5689/","timestamp":"2014-04-16T21:55:56Z","content_type":null,"content_length":"25190","record_id":"<urn:uuid:e4abc7d7-acc5-4c36-9943-51a54a1dade9>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
convergence in the product and box topology
February 8th 2011, 11:02 AM #1
Feb 2011
convergence in the product and box topology
Hi. Can I have some help in answering the following questions? Thank you.
Let {f_n} be a sequence of functions from N(set of natural numbers) to R(real nos.) where
f_n (s)=1/n if 1<=s<=n
f_n (s)=0 if s>n.
Define f:N to R by f(s)=0 for every s>=1.
a) Does {f_n} (n=1 to inf) converge to f in the R^N (cartesian product) endowed with the product topology?
b) when endowed with the box topology?
Thanks again.
Consider the open set (in the box topology) given by $\prod_{n=1}^\infty(-\frac1{n+1},\frac1{n+1})$.
February 8th 2011, 05:15 PM #2 | {"url":"http://mathhelpforum.com/differential-geometry/170567-convergence-product-box-topology.html","timestamp":"2014-04-16T20:43:48Z","content_type":null,"content_length":"32133","record_id":"<urn:uuid:7a064b1b-df75-46f0-bc84-055cabb0f2e6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Data Parallel
, 1993
"... We give a brief description of what we consider to be data parallel programming and processing, trying to pinpoint the typical problems and pitfalls that occur. We then proceed with a short
annotated history of data parallel programming, and sketch a taxonomy in which data parallel languages can be ..."
Cited by 5 (0 self)
Add to MetaCart
We give a brief description of what we consider to be data parallel programming and processing, trying to pinpoint the typical problems and pitfalls that occur. We then proceed with a short annotated
history of data parallel programming, and sketch a taxonomy in which data parallel languages can be classified. Finally we present our own model of data parallel programming, which is based on the
view of parallel data collections as functions. We believe that this model has a number of distinct advantages, such as being abstract, independent of implicitly assumed machine models, and general.
, 1987
"... We present a mathematical model of parallel computing in a hypercubical parallel computer. This is based on embedding binary trees or forests into the n-dimensional hypercube. We consider three
different models corresponding to three different computing situations. First we assume that the processin ..."
Add to MetaCart
We present a mathematical model of parallel computing in a hypercubical parallel computer. This is based on embedding binary trees or forests into the n-dimensional hypercube. We consider three
different models corresponding to three different computing situations. First we assume that the processing time at each level of the binary tree is arbitrary, and develop the corresponding
mathematical model of an embedding of a binary tree into the hypercube. Then we assume that the processing time at each level of the binary tree is the same for all processors involved at that level,
and for this we develop the mathematical model of a loop embedding of a binary tree into the hypercube. The most general case is that in which only certain neighboring levels are active. Here we
assume for simplicity that only the processors corresponding to two neighboring levels are active at the same time, and correspondingly we develop the mathematical model of a level embedding of a
binary tree into the hypercube to cover this case. Both loop embeddings and level embeddings allow us to use the same processor several times during the execution Of a program. 0 1989 Academic press,
IIIC. With the advent of VLSI the use of a parallel processor system consisting of many small processors which are connected homogeneously becomes a feasible approach to the design of massively
parallel computer architecture design. The connection machine of [ 21 consisting of 64,000 processors is an example of the combination of VLSI technology with highly homogeneous hypercubical
connection to achieve a massively parallel computer system. Having many processors connected in parallel, the next crucial problem | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=7990752","timestamp":"2014-04-20T19:53:54Z","content_type":null,"content_length":"15478","record_id":"<urn:uuid:b600ab67-bde7-4caa-8f58-e129602eba96>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patterns In Randomness: The Bob Dylan Edition
The human brain is very good — quite excellent, really — at finding patterns. We delight in puzzles that involve pattern recognition... consider word-search puzzles, the “Where’s Waldo” stuff, and
the game Set. We’re also great at giving patterns amusing interpretations, as we do when we fancy that clouds look like ducks or castles — or when we claim to see images of Jesus in Irish hillsides,
pieces of wood, paper towels, and store receipts. Remember the cheese sandwich with the Virgin Mary on it, which sold on eBay for $28,000 in 2004? Miraculous, indeed.
It’s with the knowledge that we find apparent patterns in randomness that I approach this puzzling aspect of the “random play” feature of my car stereo. I’ve stuck in a microSD card that has about
4000 songs on it. I’ve put it on random play. And it appears to be playing songs in random order.
But it sure seems to be playing a lot of Dylan.
Bob, not Thomas. I like Bob Dylan, of course; that’s why I have quite a bit of him on the microSD card. But, for instance, on one set of local errands, it played two Dylan songs, something else,
another Dylan, two other songs, then another Dylan. Four out of seven? Seems a bit odd.
Now, I know that if you ask a typical person which sequence is more likely to come up in a lottery drawing, 1-2-3-4-5, or 57-12-31-46-9, he will say not only that the latter is more likely, but that
if the former came up he’d be sure something was amiss. In fact, they’re equally likely, and are as likely as any other pre-determined five-number sequence, but the one that looks like a pattern is
one we think “can’t be random.” Similarly, it’s certainly possible to randomly pick four Dylan songs out of seven — or even four in a row, for that matter. And if there’s a bug in the algorithm that
the audio system uses, why would it opt for Dylan, and not, say, Eric Clapton or the Beatles, both of which I also have plenty of on the chip?
So I played around with some numbers. Let’s make some simplifying assumptions, just to test the general question. Assume I have 20 songs from each artist, and a total of 4000 songs (and, so, 200
artists). If I play seven songs, how likely is it that two will be by the same artist?
It’s easier to figure out how likely it is that there won’t be repetitions. The first song can be anything. The likelihood that the second will be of a different artist than the first is (4000-20)/
3999, about 99.5%. The likelihood that the third will differ from both of those is (4000-40)/3998. Repeat that four more times and multiply the probabilities: there’s a 90.4% chance of seven
different artists in seven songs... meaning that there’s about a 9.6% chance of at least one repetition. Probably more likely than we might think.
Let’s look at Dylan, specifically. I have about 120 of his songs on there (3% of the total; maybe I should delete some, but that’s a separate question). What are the chances of having no Dylan in
seven songs? No Dylan for the first is 3880/4000, 97% (makes sense: 3% chance of Dylan in any one selection). Continuing, no Dylan, still, for the second is 3879/3999. Repeat five more times and
multiply: 71.3% chance of no Dylan, so there’s a 28.7% chance of at least one Dylan song if we play seven.
What about the chances of at least two Bob Dylan songs... a repetition of Dylan? Well, we figured out no Dylan above. Let’s figure out exactly one, and then add them. For the first to be Dylan and
none of the others, we have 120/4000 * 3880/3999 * 3879/3998 * 3878/3997 * 3877/3996 * 3876/3995 * 3875/3994. About 2.5%. It’s the same for one Dylan in any other position — the numerators and
denominators can be mixed about. So the chances of exactly one Dylan song out of seven is 2.5 * 7, or 17.5%. Add that to the chances of zero, 71.3 + 17.5 = 88.8%, so there’s an 11.2% chance of at
least two Dylan songs in a mix of seven songs.
In other words, it’s a better than one in four chance that I’ll hear at least one Bob Dylan song, and a better than one in ten chance that I’ll hear at least two of them every time I take a 20- or
30-minute ride. Thrown in some confirmation bias, where I forget about the trips that had Clapton and the Beatles and Billy Joel and Carole King, but no Dylan, and I guess the system is working the
way it’s supposed to.
But, damn, it plays a lot of Bob Dylan!
Mi Cro | 12/15/11 | 16:18 PM
I appreciate your math, but why in the world would you pollute the sample with non-Dylan songs. Bob should be 100%. :-)
I have 500 Dylan songs on an iTunes playlist. Starting from zero, how many random (shuffled) plays will it take to hear every song at least once? Why does Catfish play 15 times and Restless Farewell
never? Does God play dice with Dylan songs? :-)
All the best, - Fabe
Fabe (not verified) | 12/16/11 | 08:08 AM
HA! I was going to answer "Lucky You, Don't count, just be grateful!"to Mr. Leiba- but your answer gave me a laugh- 100% indeed. I can't get enough of that man.
I have my ipod docked in a clock radio and he is the first thing I hear every morning... and sometimes the last thing at night.
hmmm... I wonder if that's why I'm unattached. LOL
ohmercy (not verified) | 12/17/11 | 13:55 PM
Huh... 500 eh? Step up to the big leagues and call me when you hit 1200...
Anonymous (not verified) | 12/17/11 | 14:20 PM
There is no such thing as a random number.
1ifbyrain2ifbytrain (not verified) | 12/16/11 | 11:11 AM
This reminds me of a math class I once had. The instructor assigned the task of flipping a coin 10000 times and recording the results. He could tell when someone cheated and just wrote their own
sequence of "H" and "T" without flipping the coin (or more intelligently simulating the task by computer). A person just writing the sequence will typically never write 8 H's in a row. However, in
10000 real flips, it's almost a certainty that such a sequence will occur. People think they know what "random" looks like, but for the most part our intuitions about randomness are pretty bad.
sean t (not verified) | 12/16/11 | 11:26 AM
WHAT? Only 120 Dylan songs out of 4000? Something is amiss and the "random" incident is trying to tell you: Add More Bob!
loved the post- numbers, random and otherwise are always fascinating even with my less than rudimentary understanding.
ohmercy (not verified) | 12/17/11 | 14:00 PM
OH GOD- I'm so sorry about the three posts in a row- I kept getting a message that I was putting some information in wrong so I redid it got same info-redid it again with some additional- and now I
see three have shown up... is this random?
Oddly, the comment made directly to you, the author didn't come up. Now I wonder if I should post it again?
ohmercy (not verified) | 12/17/11 | 14:07 PM
"In a book that nobody can write."
Nice post that stimulates the brain cells this early in the morning.
First time I've visited here, got here through Expecting Rain.
I'm a systems guy and a songwriter, so it goes without saying I love brain exercises, puzzles, patterns etc.
If you are interested in hearing some of my music, go to my music web page. www.philipbruno.com
I'd recommend They're Gone, Dad I Miss You and The Old Man of the Mountain (Yes, the one from NH).
Keep up the good work.
Phil T.
Phil T. Listener (not verified) | 12/17/11 | 16:52 PM
Barry, your anti spam efforts are appreciated (learned about them through your science 2.0 bio)...'Enjoyed your article and here's a link to my favorite Dylan song with lyrics:
Enrico Uva | 12/17/11 | 21:08 PM
Thanks for all the comments, everyone (and it's amusing to see all the fellow Dylan fans; quick response to that: I love Dylan, but I love a lot of other music as well).
I've just posted an update to this. The frequency of “B” artists in the mix was due to more than an error in randomness.
Barry Leiba | 12/21/11 | 11:24 AM | {"url":"http://www.science20.com/staring_empty_pages/patterns_randomness_bob_dylan_edition-85585","timestamp":"2014-04-18T23:23:58Z","content_type":null,"content_length":"58249","record_id":"<urn:uuid:458a60bd-6181-4fd0-9c13-8e51b62e0cd6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Polystyrene Rod Consisting Of Two Cylindrical ... | Chegg.com
A polystyrene rod consisting of two cylindrical portions AB and BC is restrained at both ends and supports two 6 kip loads as shown. Knowing that E = 0.45x10^6 psi, determine (a) the reactions at A
and C, (b) the normal stress in each portion of the rod.
Mechanical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/polystyrene-rod-consisting-two-cylindrical-portions-ab-bc-restrained-ends-supports-two-6-k-q2068271","timestamp":"2014-04-18T12:57:30Z","content_type":null,"content_length":"21104","record_id":"<urn:uuid:7814443e-833a-4e39-bfae-94b3e3eded8c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00112-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Homework 101 - Permutation Perspiration
If you remember any high school math, you'll recall the following problem - in how many unique ways can the letters of the word MISSISSIPPI be arranged? Notice there is repetition of some letters -
I and S each appear four times, while P appears twice.
Since it's an arrangement, order matters, which is to say that MISSISSIPPI is a different arrangement from IMSSISSIPPI, obtained by switching only the first two letters. If there were no repetition,
we would use the permutation formula symbolized by [11]P[11], and find out there are almost 40 million arrangements (39,916,800 to be exact). Because of the repetition, many of those arrangements
are the same, so we have to divide that result by products of factorials for each of the repeating letters. (As a reminder, four factorial, symbolized by 4!, means four times three times two times
one, which equals 24.) So, four factorial equals 24, and there are two of those for the letters I and S. For the letter P, we use two factorial which equals two. So, we must divide the huge number
above by the product of 24 times 24 times 2:
Without the repetition, of course, there are enormously fewer arrangements. That's all you'll see in most high school math books with regard to permutations with repetition.
What happens though if one of your bright students asks the following question: How many unique arrangements can be formed from the letters in the word MISSISSIPPI if you want to form arrangements
less than 11 letters long? For example, how many unique five-letter arrangements can be formed? This new problem is not hard, but it will be immeasurably useful to go back to the original problem
and look at it in a different way. Let's do that now, and you'll thank me for it later.
In the original problem, we wanted to form arrangements using all of the letters in the word. Consider that there are only four different types of letters in the word MISSISSIPPI - in order of
decreasing frequency of appearance, they are I, S, P, and M. We can now start the problem by asking: How many ways can we arrange the four I's in the 11 places we must fill? Since the four I's are
indistinguishable, we would use the combination formula represented by [11]C[4], and get 330 ways. There are seven places left to fill, so let's move to the letter S and ask how many ways can we
arrange the four S's in those seven spots - this would be [7]C[4], or 35 ways. There are three ways to arrange the two P's in the three remaining spots, which we get from [3]C[2], and finally [1]C
[1] gives us one way to put the M in the last remaining spot. The counting principle tells us to multiply those four numbers together to get the total number of ways those letters can be arranged:
([11]C[4])([7]C[4])([3]C[2])([1]C[1]) = (330)(35)(3)(1) = 34,650.
Notice that we have obtained the result we got earlier using a single permutation! It is worthwhile to note that the counting principle gave us the unique number of arrangements (permutations) after
we used combinations to take care of all the repetition. Very nice, don't you think? In certain situations, then, combinations + the counting principle = permutations.
Now, back to our bright student who has been waiting patiently for an answer. Armed with what we now know, it is easy to answer his question. If we are forming five-letter arrangements, we start
again and ask: How many ways can the four I's be arranged to fill the five places? This would be [5]C[4], giving 5 ways. We have just filled four of the five spots, leaving only one to be filled.
There are three remaining types of letters, so we can simply multiply by three and we have our answer:
([5]C[4])(3) = (5)(3) = 15.
Hopefully this article will help students and all those teachers out there who find themselves at the mercy of little combinatorial geniuses who happen to work their way into your classrooms.
For more information, see A Discrete Transition To Advanced Mathematics by Bettina and Thomas Richmond, published by the American Mathematical Society. Chapter four of the text is very helpful on
this topic.
Lowell Parker, Ph.D.
Empire State College | {"url":"https://www.24houranswers.com/blog/9/Math-Homework-101---Permutation-Perspiration/","timestamp":"2014-04-17T12:28:52Z","content_type":null,"content_length":"21331","record_id":"<urn:uuid:6ecde5cd-81c1-4426-a99d-a40063b589be>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multivariable calculus, Section 105
MATH 200: Multivariable calculus, Section 105
• Instructor: Julia Gordon.
• Where and when: Tuesday and Thursday, 9:30-11am, at Buchanan A, room 201.
• Instructor's office: Math 217.
• e-mail: gor at math dot ubc dot ca
• Office Hours: Tuesday 2-4pm, Thursday 11:05am -- noon, and by appointment.
All the information about the course is at
the common course website.
• Current:
□ Review session for the final exam: Wednesday December 11, 12-2pm, in BIOL 2000.
□ Office hours after the end of term:
☆ Thursday December 12, 10am -- noon
☆ Friday December 13, 3-5pm.
• Older:
□ With apologies, the office hours on October 15 and 17 are cancelled, since I am away. The lectures are happening this week at the usual times, of course. There will be some extra office hours
next week when I come back, please follow the announcements. With exam marking questions, please wait until I come back, and resist asking Prof. Adams to change any marks.
□ Office hours before Midterm 1:
☆ Wednesday October 2, 3:45-6pm,
☆ Thursday October 3, 11:10am-12:20pm.
☆ Monday October 7, 12:30-2:30pm.
□ Review session for Midterm 1: Thursday October 3, 5-7 pm, in LSK 460.
Section-specific notes (posted occasionally, when there's something unusual)
• September 12: Notes from September 12 (by Prof. Silberman)
• September 24: In-class worksheet (distance between skew lines) from September 24, with solutions, is here .
• September 26/October 1: Here is the contour plot of a certain function f(x,y), that was discussed at the end of the class on Thursday September 26. There is a small prize for asking the best
question about this plot the next class (Tue, Oct. 1). Hint: look for things that are a bit weird about this plot, and if anything seems strange, ask about it the next class! The best questions
and explanations will be posted here.
Sadly, the prize was not awarded.
One feature some people noticed was the strange behaviour of the plot near the line y=-3x where the function is undefined.
However, no-one was surprised to see level curves meeting at a point ( here is a picture zoomed in around that point; you can see the coordinates of the point on top). Note that level curves can
never have a common point in the domain of the function; here the point at which they appear to "meet" is outside the domain. What the picture shows is that the function f(x,y) approaches
different values as (x,y) approaches this point, depending on the curve along which you are approaching. More about such phenomena is in 14.2 (which is not part of the course).
Another surprising feature is that we do not see any level curves in a certain region in the middle; on the other hand, every point in the domain lies one some level curve. What happens is, not
all level curves are hyperbolas as it seems; in fact, there are some ellipses in that region that initially seemed empty -- you just have to force the computer to draw them. A couple are pictured
here and here .
• October 1: Please read this post (by Joseph Lo) on tangent planes, and geometric meaning of partial derivatives. It was discussed on October 1, and will be discussed again on October 3. (You
might have to log in with CWL to read it).
• Required reading by Thursday October 3: 14.3 and 14.4.
• Required reading by Thursday October 10: 14.4 (was covered on October 8), 14.5 (we started it on Otober 8), and 14.6 (will mostly talk about 14.6 on Thursday October 10, and will come back to
14.5 next week). | {"url":"http://www.math.ubc.ca/~gor/Math200_2013/s105.html","timestamp":"2014-04-20T18:24:03Z","content_type":null,"content_length":"5736","record_id":"<urn:uuid:b480b2cd-67bc-4ec0-b375-89486d5c1074>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matrix norms, sup,max and headache
September 16th 2010, 02:48 AM
Matrix norms, sup,max and headache
I am reading (again) about matrix norms and have a few questions.
The definition I have says that given any norm $||*||$ on the space $\mathbb{R}^n$ of $n$-dimensional vectors with real entries, the subordinate matrix norm on the space $\mathbb{R}^{n x n}$ of
$n x n$ matrices with real entries is defined by
$||A|| = \max_{x\in\mathbb{R}^n\backslash\{0\}}\frac{||Ax|| }{||x||}.$
I've also read that it can be defined as,
$||A|| = \sup_{||x||=1}||Ax||.$
The way I "understand" the second definition is that we take all the vectors
whose norm is one, and multiply them with the matrix A to create a set of vectors, say,
I believe that there is an infinite number of vectors with norm one..
We then take the norm of all these new vectors to get a set of real numbers. We now find the smallest real number that is larger than all the real numbers in this set, and use it as the norm of
the matrix
. By the definition of
, I believe that this number that is larger than all numbers in our set is not part of the set.
As for the first definition, I am tempted to say that we take all vectors
and divide then by
to make a unit vector.We then multiply this unit vector by
to get the same set of vectors (
) as before. We take the norm of these vectors and get a set of numbers. Since
is involved I guess that this implies that the upper bound is in this set, and not outside of it as it is with
... I do not understand this..
I read somewhere that since
is a scalar, we have that
$||A||_p = \sup_{xeq 0}\frac{||Ax||_p}{||x||} = \sup_{xeq 0}||\frac{Ax}{||x||_p}||_p$
not sure how that works either..
By the way, I've also seen the matrix norm defined as,
$<br /> ||A|| = \sup_{x\in\mathbb{R}^n\backslash\{0\}}\frac{||Ax|| }{||x||}.$
I am confused. Hope someone can take the time and explain this to me, thanks.
September 16th 2010, 04:04 AM
After a bit more research I've found that if
$||A||=\sup_{||x||=1}||Ax||$ the domain of the continuous function $||Ax||$ of $x$ is closed and bounded and therefore the function will achieve its maximum and minimum values. We may then
replace $\sup$ with $\max$..
Sorry for all the ranting.. | {"url":"http://mathhelpforum.com/advanced-algebra/156386-matrix-norms-sup-max-headache-print.html","timestamp":"2014-04-17T14:13:56Z","content_type":null,"content_length":"10762","record_id":"<urn:uuid:a5d1d7b8-95e5-4679-8bc6-042190f5a547>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00505-ip-10-147-4-33.ec2.internal.warc.gz"} |
lim sin 4x/sin 6x as x approaches zero - Math Central
We have two responses for you
Hi Josh,
The theorem that I am sure you know that needs to be used hers is that the limit of sin(t)/t approaches 1 as t approaches 0. Thus if you had
you could make the variable substitution t = 4x. As x approaches 0 certainly 4x = t approaches zero so the limit above is 1. Similarly
But this isn't your problem, mine has an extra 6x in the numerator and an extra 4x in the denominator, but
Now if you take the limit of the right side as x approach er zero the first fraction approaches 1, the second fraction approaches 1 and the third fraction is (4x)/(6x) = 4/6 = 2/3. Thus the limit is
I hope this helps,
Hi Josh.
Looking at the fraction, I can see that it goes to zero divided by zero, so my first guess would be to see if I can use l'Hôpital's Rule on it: If, as x approaches 0, lim sin(4x) = 0 [and it does]
and lim sin(6x) = 0 [it does] and lim (sin 4x)′ / (sin 6x)′ has a finite value is +/- inifinity [we'll check this], then the lim sin(4x)/sin(6x) = lim (sin 4x)′ / (sin 6x)′.
So we need the derivative of sin 4x. Using the chain rule we have (sin 4x)′ = (4x)′ cos(4x) = 4 cos(4x).
And the same with the denominator.
So lim (sin 4x)′ / (sin 6x)′ = lim [4 cos(4x)] / [6 cos 6x]. As x approaches 0, this converges to 2/3. That's a finite value, so that's the same as the original limit.
Hope this helps,
Stephen La Rocque. | {"url":"http://mathcentral.uregina.ca/QQ/database/QQ.09.07/h/josh3.html","timestamp":"2014-04-19T12:11:56Z","content_type":null,"content_length":"8544","record_id":"<urn:uuid:e6e16300-23d0-42a7-ac21-cd0d1628bb58>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00485-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computer Science, Math and Clever Dolphins
If $a, b, c$ are any three integers, then show that $$(abc)(a^3-b^3)(b^3-c^3)(c^3-a^3)$$ is divisible by $7$.
If you know a bit about number theory, one of the first methods we can guess might work on this problem would be modular arithmetic.
Basically, through a bit of magic, it turns out that if you're only considering the reminders of some expression against, say, $7$, you can replace all the numbers with their remainders by $7$.
So, as an example, if we want to find out the remainder of $15 \cdot 37$ by $7$, we can replace the $15$ by a $1$ (since $15$ gives $1$ as a remainder when divided by $7$) and we can replace the $35$
by a $2$.
Of course, mathematicians don't go about writing long sentences every time they want to use modular arithmetic, so, they use a short notation. The example just discussed can be represented by $$15 \
cdot 37 \equiv 1 \cdot 2 \equiv 2 (\pmod{7})$$
(notice the number of lines in equivalence sign, and notice how the calculation breaks down)
And, sure enough, $15*37=555$, which gives a remainder of $2$ when divided by $7$.
Now, back to the problem.
Basically, the problem is saying that we must prove $$(abc)(a^3-b^3)(b^3-c^3)(c^3-a^3) \equiv 0 (\pmod 7)$$
Firstly, we notice that we can replace each of $a$, $b$ and $c$ with their respective remainders by $7$ (often called residues mod $7$).
So, we only have to consider $$6 \ge a, b, c \ge 0$$
Hey, that's not too shabby! Can we just brute force it? Um... not quite, if you crunch some numbers (i.e. $6^3$), we see we'd still have to try $216$ different a's, b's and c's.
Let's try narrowing this down.
Well, if any of $a$, $b$ or $c$ are $0$ (remember, by that we mean divisible by $7$!), then $$(abc)(a^3-b^3)(b^3-c^3)(c^3-a^3) \equiv 0 (\pmod 7)$$
And, notice that the equation we're dealing with is symmetric; if you switch around $a$, $b$ and $c$, it doesn't make any difference! So, we only need to consider combinations of numbers, which means
that $a=1$, $b=2$, $c=3$ will give the same result as $a=2$, $b=1$, $c=3$.
Also, if $a=b$, $a=c$ or $c-a$, then our expression is still divisible by $7$, because the difference of cubes expressions will turn out to be $0$.
Now, we've weeded out most of what we can just by looking at the equation, now, we have to do a bit of work to see if we can get the rest of what's missing.
What about these $a^3$, $b^3$ and $c^3$ terms? Can't we do something about those?
Let's take a bit of detour (solving a problem is a bit like climbing a mountain; you've got to retrace your steps, take detours, maybe even start over to reach the top).
What are values can $a$ take on? $$0, 1, 2, 3, 4, 5, 6$$
Of course, we don't really care when its $0$, because that means our expression is divisible by $7$, so, the ones to consider: $$1, 2, 3, 4, 5, 6$$
We can also represent these are negative remainders: $$1, 2, 3, -3, -2, -1$$
What happens when we cube these (to get $a^3$)? We get: $$1, 8, 27, -27, -8, -1$$
And, these give these residues mod $7$: $$1, 1, 6, 1, 6, 6$$
But, since there are only two distinct residues, that means in order to pick three, we'll have to repeat one, so, we've proved $$(abc)(a^3-b^3)(b^3-c^3)(c^3-a^3) \equiv 0 (\pmod 7)$$!
What you just saw (i.e. looking at $a^3$) is known as a cubic residue; there are also quadratic residues and all kinds of other interesting structures derived from these.
In order to gain the most out of this problem, you'll probably want to read this discussion again. We had a few very important moves; firstly, only considering the remainders (since its a
divisibility problem) was essential, then, observing the expression to see what we could do with it was also incredibly important, and then, by looking at the cubic residues (and using the symmetric
nature of the problem), we finished the problem. | {"url":"http://poincare101.herokuapp.com/post/17","timestamp":"2014-04-20T18:22:56Z","content_type":null,"content_length":"9338","record_id":"<urn:uuid:cf41f4bc-1f6e-4a9c-a2d0-e8dc20c920db>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra 2 @ KHS
Algebra 2 @ KHS This is a book designed to accompany Algebra 2 @ Kokomo High School. It is not complete in detail, but meant to mainly serve as a reference page to help students negotiate this
course. Explanations are more succinct than complete (other places on the web give more complete definitions). Also, practice problems with work will be included.
Table of Contents
Topic 1: Beyond Linear Functions
Topic 2: Understanding Inverse Relations
Topic 3: Transformations on Parent Functions
Topic 4: Introduction to Systems
Topic 5: Quadratic Functions
Topic 6: Building New Functions
Topic 7: Polynomial Functions
Topic 8: Polynomial Equations
Last modified on 20 December 2012, at 21:15 | {"url":"https://en.m.wikibooks.org/wiki/Algebra_2_@_KHS","timestamp":"2014-04-21T09:38:14Z","content_type":null,"content_length":"14769","record_id":"<urn:uuid:55a52fb7-debe-4dad-a32b-ce5996b429c9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Whats my line?
Re: Whats my line?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
hi bobbym
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
hi bobbym
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
hi bobbym
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
what cultural misconception?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
Well done Stefy! You get the second point.
bobbym 1
And the chance to baffle us with another if you like.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Whats my line?
hi guys
who's the next poser? if it's me then i give the posing to the one of you who wants it.if not then,bobbym,post away.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
I have 6.7 billion more!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
k,then post away bobbym.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
By popular demand I have another one.
Clue #1) He is a controversial guy.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
hi bobbym
1) Is his work good or bad for the society?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
I would have to say good.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
2)Is his work connected to computers?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
Question is too general to give a good answer. He uses a computer.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
hi bobbym
so the answer is yes. -.-''
3)Is his work connected to the media?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
Again, too general. He has been on television but his line is not in media.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
3)Does he work alone?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
When he is working there are always other people.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
4) Is his work connected to sport?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Whats my line?
Nothing to do with sports.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Whats my line?
5) Is his work connected to music or films?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=203910","timestamp":"2014-04-21T09:45:22Z","content_type":null,"content_length":"40762","record_id":"<urn:uuid:a0fe43c5-722d-4d58-95cc-29fd3c98115f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
1989]: ‘Herbrand-analysen zweier Beweise des Satzes von Roth: Polynomial Anzahlschranken
Results 1 - 10 of 19
- Journal of Symbolic Computation , 2000
"... A new cut-elimination method for Gentzen’s LK is defined. First cut-elimination is generalized to the problem of redundancy-elimination. Then the elimination of redundancy in LK-proofs is
performed by a resolution method in the following way: A set of clauses C is assigned to an LK-proof ψ and it is ..."
Cited by 28 (10 self)
Add to MetaCart
A new cut-elimination method for Gentzen’s LK is defined. First cut-elimination is generalized to the problem of redundancy-elimination. Then the elimination of redundancy in LK-proofs is performed
by a resolution method in the following way: A set of clauses C is assigned to an LK-proof ψ and it is shown that C is always unsatisfiable. A resolution refutation of C then serves as a skeleton of
an LK-proof ψ ′ with atomic cuts; ψ ′ can be constructed from the resolution proof and ψ by a projection method. In the last step the atomic cuts are eliminated and a cut-free proof is obtained. The
complexity of the method is analyzed and it is shown that a nonelementary speed-up over Gentzen’s method can be achieved. Finally an application to automated deduction is presented: it is
demonstrated how informal proofs (containing pseudo-cuts) can be transformed into formal ones by the method of redundancy-elimination; moreover, the method can even be used to transform incorrect
proofs into correct ones. 1.
- Philosophia Mathematica , 2003
"... Elementary arithmetic (also known as “elementary function arithmetic”) is a fragment of first-order arithmetic so weak that it cannot prove the totality of an iterated exponential function.
Surprisingly, however, the theory turns out to be remarkably robust. I will discuss formal results that show t ..."
Cited by 17 (5 self)
Add to MetaCart
Elementary arithmetic (also known as “elementary function arithmetic”) is a fragment of first-order arithmetic so weak that it cannot prove the totality of an iterated exponential function.
Surprisingly, however, the theory turns out to be remarkably robust. I will discuss formal results that show that many theorems of number theory and combinatorics are derivable in elementary
arithmetic, and try to place these results in a broader philosophical context. 1
- In Odifreddi [53 , 1996
"... Through his own contributions (individual and collaborative) and his extraordinary personal influence, Georg Kreisel did perhaps more than anyone else to promote the development of proof theory
and the metamathematics of constructivity in the last forty-odd years. My purpose here is to give ..."
Cited by 11 (0 self)
Add to MetaCart
Through his own contributions (individual and collaborative) and his extraordinary personal influence, Georg Kreisel did perhaps more than anyone else to promote the development of proof theory and
the metamathematics of constructivity in the last forty-odd years. My purpose here is to give
, 2000
"... This paper is a case study in proof mining applied to non-effective proofs in nonlinear functional analysis. More specifically, we are concerned with the fixed point theory of nonexpansive
selfmappings f of convex sets C in normed spaces. We study the Krasnoselski iteration as well as more general ..."
Cited by 11 (10 self)
Add to MetaCart
This paper is a case study in proof mining applied to non-effective proofs in nonlinear functional analysis. More specifically, we are concerned with the fixed point theory of nonexpansive
selfmappings f of convex sets C in normed spaces. We study the Krasnoselski iteration as well as more general so-called Krasnoselski-Mann iterations. These iterations converge to fixed points of f
under certain compactness conditions. But, as we show, already for uniformly convex spaces in general no bound on the rate of convergence can be computed uniformly in f . This is related to the
non-uniqueness of fixed points. However, the iterations yield even without any compactness assumption and for arbitrary normed spaces approximate fixed points of arbitrary quality for bounded C
(asymptotic regularity, Ishikawa 1976). We apply proof theoretic techniques (developed in previous papers of us) to non-effective proofs of this regularity and extract effective uniform bounds on the
rate of the asymptotic re...
- MATHEMATICAL KNOWLEDGE MANAGEMENT (MKM) 2006, VOLUME 4108 OF LECTURE NOTES IN ARTIFICIAL INTELLIGENCE , 2006
"... Cut-elimination is the most prominent form of proof transformation in logic. The elimination of cuts in formal proofs corresponds to the removal of intermediate statements (lemmas) in
mathematical proofs. The cut-elimination method CERES (cut-elimination by resolution) works by constructing a set o ..."
Cited by 9 (8 self)
Add to MetaCart
Cut-elimination is the most prominent form of proof transformation in logic. The elimination of cuts in formal proofs corresponds to the removal of intermediate statements (lemmas) in mathematical
proofs. The cut-elimination method CERES (cut-elimination by resolution) works by constructing a set of clauses from a proof with cuts. Any resolution refutation of this set then serves as a skeleton
of an LK-proof with only atomic cuts. In this paper we present an extension of CERES to a calculus LKDe which is stronger than the Gentzen calculus LK (it contains rules for introduction of
definitions and equality rules). This extension makes it much easier to formalize mathematical proofs and increases the performance of the cut-elimination method. The system CERES already proved
efficient in handling very large proofs.
, 2008
"... The distinction between analytic and synthetic proofs is a very old and important one: An analytic proof uses only notions occurring in the proved statement while a synthetic proof uses
additional ones. This distinction has been made precise by Gentzen’s famous cut-elimination theorem stating that s ..."
Cited by 9 (7 self)
Add to MetaCart
The distinction between analytic and synthetic proofs is a very old and important one: An analytic proof uses only notions occurring in the proved statement while a synthetic proof uses additional
ones. This distinction has been made precise by Gentzen’s famous cut-elimination theorem stating that synthetic proofs can be transformed into analytic ones. CERES (cut-elimination by resolution) is
a cut-elimination method that has the advantage of considering the original proof in its full generality which allows the extraction of different analytic arguments from it. In this paper we will use
an implementation of CERES to analyze Fürstenberg’s topological proof of the infinity of primes. We will show that Euclid’s original proof can be obtained as one of the analytic arguments from
Fürstenberg’s proof. This constitutes a proof-of-concept example for a semi-automated analysis of realistic mathematical proofs providing new information about them.
, 2007
"... This survey reports on some recent developments in the project of applying proof theory to proofs in core mathematics. The historical roots, however, go back to Hilbert’s central theme in the
foundations of mathematics which can be paraphrased by the following question ..."
Cited by 9 (1 self)
Add to MetaCart
This survey reports on some recent developments in the project of applying proof theory to proofs in core mathematics. The historical roots, however, go back to Hilbert’s central theme in the
foundations of mathematics which can be paraphrased by the following question
"... This short report presents the main topics of methods of cut-elimination which will be presented in the course at the ESSLLI'99. It gives a short introduction addressing the problem of
cut-elimination in general. Furthermore we give a brief description of several methods and refer to other papers ad ..."
Cited by 9 (8 self)
Add to MetaCart
This short report presents the main topics of methods of cut-elimination which will be presented in the course at the ESSLLI'99. It gives a short introduction addressing the problem of
cut-elimination in general. Furthermore we give a brief description of several methods and refer to other papers added to the course material. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=656174","timestamp":"2014-04-16T21:45:59Z","content_type":null,"content_length":"34761","record_id":"<urn:uuid:8f47f35f-7fce-46a9-9678-c013720363cc>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
John Nash Jr by Reececlarke
John Forbes Nash Jr. was born June 13, 1928 in Bluefield, West Virginia. Mr. Nash Jr. is an American mathematician who won the 1994 Nobel Prize for his works in the late 1980’s on game theory. Game
theory is the study of strategic decision making or more formally known as the mathematical models of conflict and cooperation between intelligent and rational decision makers. Game theory is mainly
used in economics, political science, and psychology, as well as logic and biology. Mr. Nash Jr. has also contributed numerous publications involving differential geometry, and partial differential
equation (PDE). Differential geometry is a mathematical discipline that uses differential calculus and integral calculus, linear algebra and multi linear algebra to study geometry problems. Partial
differential equation is a differential equation that contains unknown multivariable functions and their partial derivatives. These are used to formulate problems involving functions of several
variables. Mr. Nash Jr. used all of these skills and is known for developing the Nash embedding theorem. The Nash embedding theorem stated that every Riemannian manifold ( a real smooth manifold
equipped with an inner product on each tangent space that varies smoothly from point to point) can be isometrically embedded into some Euclidean space ( a three dimensional space of Euclidean
geometry, distinguishes these spaces from the curved spaces of Non-Euclidean geometry). An example used on Wikipedia is the bending of a piece of paper with out stretching or tearing the paper gives
it an isometric embedding of the page into Euclidean space because curves drawn on the page retain the same arclenth however the page is bent. John Nash Jr. also made significant contributions tp
parabolic partial differential equations and to singularity theory. While... | {"url":"http://www.studymode.com/essays/John-Nash-Jr-1287248.html","timestamp":"2014-04-19T01:54:48Z","content_type":null,"content_length":"31787","record_id":"<urn:uuid:f5ca5449-dcef-416f-a9f8-02e3202f9ceb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00372-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: On the regularization of mixed complementarity
R. Andreani \Lambda J. M. Mart'inez y B. F. Svaiter z
November 8, 1999
A variational inequality problem (VIP) satisfying a constraint qual
ification can be reduced to a mixed complementarity problem (MCP).
Monotonicity of the VIP implies that the MCP is also monotone. In
troducing regularizing perturbations, a sequence of strictly monotone
mixed complementarity problems are generated. It is shown that, if
the original problem is solvable, the sequence of computable inexact
solutions of the strictly monotone MCP's is bounded and every accu
mulation point is a solution. Under an additional condition on the
precision used for solving each subproblem, the sequence converges to
the minimum norm solution of the MCP.
Keywords. Variational inequalities, complementarity, perturbations,
inexact solutions, minimization algorithms, reformulation.
AMS: 90C33, 90C30
1 Introduction
The variational inequality problem was introduced as a tool in the study of | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/980/3873633.html","timestamp":"2014-04-19T01:56:33Z","content_type":null,"content_length":"8178","record_id":"<urn:uuid:1e399f84-05e8-40c6-acec-c2636ef936a7>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Would I Construct The Program For This Question ... | Chegg.com
How would I construct the program for this question in ASP?
Image text transcribed for accessibility: A directed graph G can be described by a set of vertices, represented by facts vertex (a). vertex (b), Â…and a set of edges, represented by facts edge(a. b).
edge(a. c), . . . Use ASP to define the relation connected(X. Y) which holds off the re is a path in G connecting vertices X and Y. Consider the following graph for your assignment: Vertices = {a. b,
c, d, e} Edges = {(a. 6), (6?a), (a, c), (c, a), (b. d), (c, d). (d, e), (c, b)} Test your program using clingo and submit the corresponding output. (HINT: Use a recursive definition to define
connected(X. YJ))
Computer Science | {"url":"http://www.chegg.com/homework-help/questions-and-answers/would-construct-program-question-asp-q3803375","timestamp":"2014-04-18T14:52:26Z","content_type":null,"content_length":"18613","record_id":"<urn:uuid:5eff7e18-b22f-47ab-bcab-50d304c60533>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Comments on Walk Like a Sabermetrician: Discounted Team ChampionshipsA variation on this is exponential smoothing, whic...This is very interesting. Here's something to con...
18057215403741682609noreply@blogger.comBlogger2125tag:blogger.com,1999:blog-12133335.post-86522924583162654082007-06-08T11:28:00.000-04:002007-06-08T11:28:00.000-04:00A variation on this is
exponential smoothing, which you could apply to winning percentage as well. Your current state equals x (x a number between 0 and 1, reflecting the discount rate) times your current period's value
(could be championships, fractional championships (e.g., 1 for WS title, .5 for WS loss, .25 for playoffs), winning percentage) plus (1-x) times last month's state. March forward in time and keep
track.<BR/><BR/>An advantage of this is that it always gives you a value between 0 and 1 that means exactly what your measure is. Instead of the Cubs having "1378 discounted wins", they would have a
.469 discounted winning percentage.<BR/><BR/>This method is used for performance measures tracked by inspectors employed by the government to track aviation safety.parinellahttp://www.blogger.com/
profile/03802604259779936852noreply@blogger.comtag:blogger.com,1999:blog-12133335.post-13662470854602427202007-06-05T15:19:00.000-04:002007-06-05T15:19:00.000-04:00This is very interesting. Here's
something to consider to try to make this a bit more complicated and "accurate".<BR/><BR/>Presuming that a fan can only appreciate a World Series once he's 12 years old, then any World Series won by
a team prior to today would count as 0 for those fans. All 13 year olds will only appreciate a World Series won in the last year, etc, etc. <BR/><BR/>Secondly, your discount rate can apply (be it 3%
or 10% or whatever). That part doesn't change.<BR/><BR/>Then, you simply multiply the frequency of each group by the discounted championships that you figured for that age group, and all them all up
for each age group.<BR/><BR/>This will give you the average discounted championship for the average living fan.Tangotigerhttp://www.blogger.com/profile/11864323151591103655noreply@blogger.com | {"url":"http://walksaber.blogspot.com/feeds/1651567478869160613/comments/default","timestamp":"2014-04-19T00:26:02Z","content_type":null,"content_length":"6525","record_id":"<urn:uuid:4e3bd824-52eb-4b28-9496-609b661836ac>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
PEC 8th Class Maths Model Paper
PEC 8th Class Maths Model Paper
Model paper of Maths includes the paper pattern or the format of 8th Class Maths Exam of Punjab Board. This Maths model paper will help the 8th Class Students to prepare for their Maths exam. This
paper pattern is assigned by the PEC for conducting 8th grade Maths exam. Through this Maths model paper the students can get the idea that how and what type of question will appear in the Maths
exam. PEC conducts the exam of Maths according to the paper pattern given below since the year 2009.
To download this Maths Model Paper of 8th Class to your computer click the link below.
Downloads: 3,640 Times
File Size: 200.0 KiB
Description: Punjab Board 8th Class Maths Model Paper for the exams conducted by PEC (Punjab Education Commission)
If the students want any help regarding Maths subject or the model paper of Maths, then they should let us know by the comment section.
22 Comments
1. yara yea tu bhot easy paper hai ab touf hu ga uffffffffffff allah g hamar akia banay ga
3. dont woory is mein se kuch bhi nahi ayey ga
4. To the Admin!
Thanks a lot for uploading model paper. Its quite helpful for teachers too, like me.
By the way,can you please provide the solution of this paper too.
Abide by my request as soon as possible. I need it urgently.
□ zaroori nai k paper main ya hi aaya or easy kaha say ha???????
8. plzzzzzzzzzzzz solution de dia kare
9. Nic it is good for student who have net.
10. respected sir i need the previous paper of board exams
□ We are working on it, will be available soon.
11. ashraf kya baat hai tum to result nahi datey
12. mjhe is saal ka board ka paper chahiye
□ We will update the past papers of 8th garde soon.
13. Is website se help h jati h
14. THIS INFORMATION IS VERY USE FULL FOR STUDENT BUT SIR, I ALSO WANT ALL BOOK EXERCISE SOLED EXAMPLES OF 8TH CLASS MATHS FOR SELF JUDGEMENT.
15. jo search karo results kuch aur detay hain ………banday pagal karnay wali website hai yeeeeeeeee……
□ Aap ko kia nahi mil raha yahan pe?
16. plzz gave the modle papaerzz of 2011 | {"url":"http://www.booknstuff.com/boards/pec/8th-class/pec-8th-class-maths-model-paper","timestamp":"2014-04-16T18:56:23Z","content_type":null,"content_length":"77265","record_id":"<urn:uuid:b4f4e09c-35b4-4be1-9793-5c125082d179>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
Permutations and Combinations: How to Solve Problems
Main Index > Permutations and Combinations
How to Solve Permutations and Combinations Problems: Overview
The hardest part about solving permutations and combinations problems is: Which is which? Combinations sounds familiar; Think of combining ingredients, or musical chords. Permutation isn’t a word you
use in everyday language. With combinations, order doesn’t matter: Flour, salt and water in a bowl is the same as salt, water and flour. Permutations are the more complex of the two. Every little
detail matters. Eggs first? Then salt? Or flour first? Combinations and permutations each have their own formula:
This is just multiplication and division. The “!” is the factorial symbol. That’s just a special way of multiplying numbers. See: What is a Factorial?
Permutations and Combinations: Sample Problems
Sample problem #1: Five bingo numbers are being picked from a ball containing 100 bingo numbers. How many possible ways are there for picking different number combinations?
Step 1: Figure out if you have permutations or combinations. Order doesn’t matter in Bingo. Or for that matter, most lottery games.
Step 2: Put your numbers into the formula. The number of items (Bingo numbers) is “n.” And “k” is the number of items you want to put in order. You have 100 Bingo numbers and are picking 5 at a time,
Step 3: Solve:
That’s it! Sample problem #2. Five people are being selected for president, vice president, CEO, and secretary. How many different ways can the positions be filled?
Step 1: Figure out if you have permutations or combinations. You can’t just throw people into these positions; They are selected in a particular order for particular jobs. Therefore, it’s a
permutations problem.
Step 2: Put your numbers into the formula. There are five people who you can put on the committee. Only four positions are available. Therefore “n” (the number of items you have to choose from) is 5,
and “k” (the number of available slots) is 4:
Step 3: Solve:
That’s it!
Note: Oddly enough, a combination lock has the wrong name. It should be a permutation lock. Why? Because the order matters. Try entering the numbers in the wrong order and see if the lock opens :)
Check out our Youtube channel for more stats tips! More videos added every week. Subscribe for updates. Comments always welcome.
Questions? Ask our stats guy on our FREE forum. | {"url":"http://www.statisticshowto.com/how-to-solve-permutations-and-combinations-problems/","timestamp":"2014-04-19T12:08:10Z","content_type":null,"content_length":"22955","record_id":"<urn:uuid:e9e8302a-07d0-4852-aadf-f563fb072490>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
P.: On the local convergence of quasi-Newton methods for constrained optimization
Results 1 - 10 of 20
, 1997
"... Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here
we consider problems with general inequality constraints (linear and nonlinear). We assume that first deriv ..."
Cited by 328 (18 self)
Add to MetaCart
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we
consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse.
, 1995
"... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ..."
Cited by 114 (2 self)
Add to MetaCart
this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can
- SIAM JOURNAL ON OPTIMIZATION , 1995
"... ..."
, 1995
"... . In this paper some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences fx k g
converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence ra ..."
Cited by 17 (6 self)
Add to MetaCart
. In this paper some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences fx k g
converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence rate in x is also attained. Other features of these algorithms are that the solution of
linear systems of equations only is required at each iteration and that the strict complementarity assumption is never invoked. First the superlinear or quadratic convergence rate of a Newton-like
algorithm is proved. Then, a simpler version of this algorithm is studied and it is shown that it is superlinearly convergent. Finally, quasi-Newton versions of the previous algorithms are considered
and, provided the sequence defined by the algorithms converges, a characterization of superlinear convergence extending the result of Boggs, Tolle and Wang is given. Key Words. Inequality constrained
optimization, New...
, 1996
"... . In this paper we give a global convergence analysis of a basic version of an SQP algorithm described in [2] for the solution of large scale nonlinear inequality-constrained optimization
problems. Several procedures and options have been added to the basic algorithm to improve the practical perform ..."
Cited by 17 (4 self)
Add to MetaCart
. In this paper we give a global convergence analysis of a basic version of an SQP algorithm described in [2] for the solution of large scale nonlinear inequality-constrained optimization problems.
Several procedures and options have been added to the basic algorithm to improve the practical performance; some of these are also analyzed. The important features of the algorithm include the use of
a constrained merit function to assess the progress of the iterates and a sequence of approximate merit functions that are less expensive to evaluate. It also employs an interior point quadratic
programming solver that can be terminated early to produce a truncated step. Key words. Sequential Quadratic Programming, Global Convergence, Merit Function, Large Scale Problems. AMS subject
classifications. 49M37, 65K05, 90C30 1. Introduction. In this report we consider an algorithm to solve the inequalityconstrained minimization problem, min x f(x) subject to: g(x) 0; (1.1) where x 2 R
n , and...
- I⋅E I +w S⋅E S ES EI located Pareto optimum (a) (b) ZR E=w I⋅E I +w S⋅E S , 1999
"... The sequential quadratic programming (SQP) algorithm has been one of the most successful general methods for solving nonlinear constrained optimization problems. We provide an introduction to
the general method and show its relationship to recent developments in interior-point approaches. We emph ..."
Cited by 5 (0 self)
Add to MetaCart
The sequential quadratic programming (SQP) algorithm has been one of the most successful general methods for solving nonlinear constrained optimization problems. We provide an introduction to the
general method and show its relationship to recent developments in interior-point approaches. We emphasize large-scale aspects. Key words: sequential quadratic programming, nonlinear optimization,
Newton methods, interior-point methods, local convergence, global convergence ? Contribution of Sandia National Laboratories and not subject to copyright in the United States. Preprint submitted to
Elsevier Preprint 1 July 1999 1 Introduction In this article we consider the general method of Sequential Quadratic Programming (hereafter denoted SQP) for solving the nonlinear programming problem
minimize f(x) x subject to: h(x) = 0 g(x) 0 (NLP) where f : R n ! R, h : R n ! R m , and g : R n ! R p . Broadly defined, the SQP method is a procedure that generates iterates converging ...
, 1997
"... The constrained least-squares regularization of nonlinear ill-posed problems is a nonlinear programming problem for which trust-region methods have been developed. In this paper the convergence
theory of one of those methods is addressed. It will be proved that, under suitable hypotheses, local (sup ..."
Cited by 5 (1 self)
Add to MetaCart
The constrained least-squares regularization of nonlinear ill-posed problems is a nonlinear programming problem for which trust-region methods have been developed. In this paper the convergence
theory of one of those methods is addressed. It will be proved that, under suitable hypotheses, local (superlinear or quadratic) convergence holds and every accumulation point is second-order
stationary. Key words. Trust-region methods, Regularization, Ill Conditioning, Ill-Posed Problems, Constrained Minimization, Fixed-Point QuasiNewton methods. 1 Introduction Many practical problems in
applied sciences and engineering give rise to ill-conditioned (linear or nonlinear) systems F (x) = y (1) where F : IR n ! IR m . Neither "exact solutions" of (1) (when they exist), nor global
minimizers of kF (x) \Gamma yk have physical meaning since they are, to a great extent, contaminated by the influence of measuring and rounding errors and, perhaps, uncertainty in the model
formulation. From the ...
, 2009
"... As is well known, superlinear or quadratic convergence of the primal-dual sequence generated by an optimization algorithm does not, in general, imply superlinear convergence of the primal part.
Primal convergence, however, is often of particular interest. For the sequential quadratic programming (SQ ..."
Cited by 5 (5 self)
Add to MetaCart
As is well known, superlinear or quadratic convergence of the primal-dual sequence generated by an optimization algorithm does not, in general, imply superlinear convergence of the primal part.
Primal convergence, however, is often of particular interest. For the sequential quadratic programming (SQP) algorithm, local primal-dual quadratic convergence can be established under the
assumptions of uniqueness of the Lagrange multiplier associated to the solution and the second-order sufficient condition. At the same time, previous primal superlinear convergence results for SQP
required to strengthen the first assumption to the linear independence constraint qualification. In this paper, we show that this strengthening of assumptions is actually not necessary. Specifically,
we show that once primal-dual convergence is assumed or already established, for primal superlinear rate one only needs a certain error bound estimate. This error bound holds, for example, under the
second-order sufficient condition, which is needed for primal-dual local analysis in any case. Moreover, in some situations even second-order sufficiency can be relaxed to the weaker assumption that
the multiplier in question is noncritical. Our study is performed for a rather general perturbed SQP framework, which covers in addition to SQP and quasi-Newton SQP some other algorithms as well. For
example, as a by-product,
- of Unkown Multipath Channels Based on Block Precoding and Transmit Diversity,” in Asilomar Conference on Signals, Systems, and Computers
"... An algorithm for general nonlinearly constrained optimization is presented, which solves an unconstrained piecewise quadratic subproblem and a quadratic programming subproblem at each iterate.
The algorithm is robust since it can circumvent the difficulties associated with the possible inconsistency ..."
Cited by 5 (4 self)
Add to MetaCart
An algorithm for general nonlinearly constrained optimization is presented, which solves an unconstrained piecewise quadratic subproblem and a quadratic programming subproblem at each iterate. The
algorithm is robust since it can circumvent the difficulties associated with the possible inconsistency of QP subproblem of the original SQP method. Moreover, the algorithm can converge to a point
which satisfies a certain first-order necessary optimality condition even when the original problem is itself infeasible, which is a feature of Burke and Han's methods(1989). Unlike Burke and Han's
methods(1989), however, we do not introduce additional bound constraints. The algorithm solves the same subproblems as Han-Powell SQP algorithm at feasible points of the original problem. Under
certain assumptions, it is shown that the algorithm coincide with the Han-Powell method when the iterates are sufficiently close to the solution. Some global convergence results are proved and local
superlinear co...
, 2008
"... This paper addresses the need for nonlinear programming algorithms that provide fast local convergence guarantees no matter if a problem is feasible or infeasible. We present an active-set
sequential quadratic programming method derived from an exact penalty approach that adjusts the penalty paramet ..."
Cited by 4 (2 self)
Add to MetaCart
This paper addresses the need for nonlinear programming algorithms that provide fast local convergence guarantees no matter if a problem is feasible or infeasible. We present an active-set sequential
quadratic programming method derived from an exact penalty approach that adjusts the penalty parameter appropriately to emphasize optimality over feasibility, or vice versa. Conditions are presented
under which superlinear convergence is achieved in the infeasible case. Numerical experiments illustrate the practical behavior of the method. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1106109","timestamp":"2014-04-24T09:31:14Z","content_type":null,"content_length":"37864","record_id":"<urn:uuid:4d9a21e7-5931-48ed-aa2c-a699b5cc89ba>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
Eclipse Community Forums: OCL » Checking whether "self" is a reference of it's "container"
Checking whether "self" is a reference of it's "container" [message #42386] Wed, 31 October 2007 20:48
Originally posted by: marco.pantel.unibw.de
Good morning!
I've been trying to find a solution to my problem for a while now but I
just can't find it...
Imagine the following situation in my ecore:
an EClass, e.g. "Calculation", contains two EReferences, let's say
"function" and "argument".
Another EClass, "Variable", which can be each of these references has
got an ocl-constraint.
Though the constraint should not be active when Variable acts as
"function", I wanted to check, if the "argument" of the container
(=Calculation) of Variable is the Variable itself.
My first approach was something like "self.container.argument=self" but
that is neither beautiful nor did it work... ;-)
Can anybody help me?
Regards, Marco
Re: Checking whether "self" is a reference of it's "container" [message #42541 is a reply to message #42386] Thu, 01 November 2007 10:28
Originally posted by: cdamus.ca.ibm.com
Hi, Marco,
If you have a container reference defined from Variable to Calculation
(i.e., an EReference whose eOpposite is the containment EReference from
Calculation to Variable), then you will be able to do what you need.
So, if you have
EClass Calculation
EReference variable : Variable [0..*]
{eOpposite = Variable::calculation}
EClass Variable
EReference calculation : Calculation [0..1]
{eOpposite = Calculation::variable}
then you can do
self.calculation.argument = self
in the context of the Variable EClass.
Of course, if and when https://bugs.eclipse.org/bugs/show_bug.cgi?id=152003
is implemented (which is more feasible now that we have the new-in-M3
support for OCL parser warnings and problem options), you would be able to
self.eContainer().oclIsKindOf(Calculation) and
self.eContainer().oclAsType(Calculation).argument = self
which is more ugly anyway. :-)
Marco Pantel wrote:
> Good morning!
> I've been trying to find a solution to my problem for a while now but I
> just can't find it...
> Imagine the following situation in my ecore:
> an EClass, e.g. "Calculation", contains two EReferences, let's say
> "function" and "argument".
> Another EClass, "Variable", which can be each of these references has
> got an ocl-constraint.
> Though the constraint should not be active when Variable acts as
> "function", I wanted to check, if the "argument" of the container
> (=Calculation) of Variable is the Variable itself.
> My first approach was something like "self.container.argument=self" but
> that is neither beautiful nor did it work... ;-)
> Can anybody help me?
> Regards, Marco
Powered by
. Page generated in 0.01588 seconds | {"url":"http://www.eclipse.org/forums/index.php/t/13434/","timestamp":"2014-04-19T06:56:23Z","content_type":null,"content_length":"29843","record_id":"<urn:uuid:da7a30e2-01c1-4c57-b046-5d0fc250d832>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Khan Academy
OUHS is a charter school that serves the east side of Oakland. They started their pilot with incoming 9th graders, although 10th graders and upperclassmen also used Khan Academy by the end of the
year. Students used Khan Academy in a learning lab 4 days a week in addition to their normal math class.
Teachers primarily used Khan Academy to 1) identify and fill gaps in student knowledge, and 2) support lessons being taught in math class. Throughout the year, students gained more confidence and
took responsibility over their own learning, which translated into significant academic achievement. After this pilot, OUHS plans to scale Khan Academy into all math classes.
In September 2011, Oakland Unity High School (OUHS) in Oakland, Calif. introduced a rotation blended learning program using Khan Academy in Algebra 1 and Algebra Readiness. This program served all
incoming high school freshmen.
Oakland Unity High School is a high expectation school with a safe environment, rigorous curriculum, and intensive supports. Approximately 95% of students are Latino or African-American and 85%
receive free or reduced lunch at OUHS.
In the summer of 2010, the school conducted a diagnostic test with all incoming freshman to evaluate basic algebra and arithmetic skills. Most students needed to retake an Algebra Readiness or
Algebra 1 course. The number of students scoring below basic (approximate score of 40%) decreased from 77% to 28%. The number scoring above proficient (approximate score of 60%) increased from 9% to
In September 2011, the school implemented a rotation blended learning model with a Khan Academy learning lab for all students enrolled in an Algebra Readiness or Algebra 1 course to close a “learning
and confidence” gap with its students. Students work in an online environment during the Learning Lab to work on content that directly supports instruction in their math course. Instructors
collaborate on a weekly basis to ensure that course curriculum is aligned and sequenced.
This year’s Algebra Readiness and Algebra 1 students scored consistently higher on solving equations, absolute value and the first semester final exams. That margin grew substantially for the most
rigorous test on systems of equations (from 37% to 74%). This suggested that improved habits through the Khan Academy approach were creating real improvements among students. This evidence of
superior performance was reinforced by the increased portion of scores above 80% on all of the tests.
Many Khan Academy features increased the quality and quantity of practice work among the OUHS students:
• Most exercises are not multiple-choice, which eliminates guessing
• Questions are randomly generated, which eliminates copying
• The short video clips engaged students and allowed them to replay the material until they understood it; and
• The online environment and Khan Academy’s overall design appeals to the students, resulting in significant engagement time.
• Students completed the Khan Academy exercises, finished written homework, paid attention in class and gained confidence to approach challenging problems. We truly believe that the Khan Academy
approach met student’s learning needs in order to deliver real learning in math proficiency.
Findings are from the report “Are We Asking the Right Questions?” assembled by David Castillo and Peter McIntosh. Learn more at unityhigh.org.
Oakland Unity in the press: | {"url":"http://www.khanacademy.org/coach-res/reference-for-coaches/oakland-unity/a/about-oakland-unity","timestamp":"2014-04-18T11:11:05Z","content_type":null,"content_length":"122270","record_id":"<urn:uuid:ed9a1eaf-9c19-4fdb-8b3e-33c22693c9f0>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reference request: Simple facts about vector-valued Sobolev space
up vote 3 down vote favorite
Let $V,H$ be separable Hilbert spaces such that there are dense injections $V \hookrightarrow H \hookrightarrow V^*$. (For example, $H = L^2(\mathbb{R}^n)$, $V = H^1(\mathbb{R}^n)$, $V^* = H^{-1}(\
mathbb{R}^n)$.) We can then define the vector-valued Sobolev space $W^{1,2}([0,1]; V, V^*)$ of functions $u \in L^2([0,1]; V)$ which have one weak derivative $u' \in L^2([0,1], V^*)$. Such spaces
arise often in the study of PDE involving time.
I would like a reference for some simple facts about $W^{1,2}$. For example:
• Basic calculus, like integration by parts, etc.
• The "Sobolev embedding" result $W^{1,2} \subset C([0,1]; H)$;
• The "product rule" $\frac{d}{dt} ||u(t)||\_H^2 = (u'(t), u(t))_{V^*, V}$
• $C^\infty([0,1]; V)$ is dense in $W^{1,2}$.
These are pretty easy to prove, but they should be standard and I don't want to waste space in a paper with proofs.
Some of these results, in the special case where $V$ is Sobolev space, are in L. C. Evans, Partial Differential Equations, section 5.9, but I'd rather not cite special cases. Also, in current
editions of this book, there's a small but significant gap in one of the proofs (it is addressed briefly in the latest errata). So I'd prefer something else.
reference-request fa.functional-analysis ap.analysis-of-pdes sobolev-spaces
I think this is a rather common situation in PDE's and analysis, because there are so many slight variants possible but it is often difficult to identify the right general formulation that covers
them all. Whenever I encountered this, my solution was to write out careful statements and proofs of what I needed and put them all into an appendix of my paper. – Deane Yang Feb 4 '12 at 11:37
@Deane: That's what I have in my current draft, and it was a good exercise, but the appendix is half as long as the paper itself. – Nate Eldredge Feb 4 '12 at 15:43
Keep it unless a referee or editor makes you take it out. You won't regret it. – Deane Yang Feb 5 '12 at 19:08
add comment
3 Answers
active oldest votes
J. Wloka "Partial differential equations", § 25 (p. 390 on, in my 1992 CUP edition) has an account of the space $W(0,T)=W_2^1(0,T)$ which is essentially the space $W^{1,2}
up vote 3 down vote ([0,T];V,V^*)$.
This looks like just what I want. Thanks, and thanks to all for the other suggestions, which also look good. – Nate Eldredge Feb 6 '12 at 17:43
add comment
Herbert Ammann's book on parabolic problems contains an excellent introduction.
up vote 4 down vote
add comment
If you read French then this book is the place you are looking for
Brézis, H. Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. (French) North-Holland Mathematics Studies, No. 5. Notas de Matemática (50).
North-Holland Publishing Co., Amsterdam-London; American Elsevier Publishing Co., Inc., New York, 1973. vi+183 pp.Inc.,
up vote 2 Another source
down vote
Barbu, Viorel(R-IASIM) Nonlinear differential equations of monotone types in Banach spaces. Springer Monographs in Mathematics. Springer, New York, 2010. x+272 pp. ISBN:
add comment
Not the answer you're looking for? Browse other questions tagged reference-request fa.functional-analysis ap.analysis-of-pdes sobolev-spaces or ask your own question. | {"url":"http://mathoverflow.net/questions/87486/reference-request-simple-facts-about-vector-valued-sobolev-space?sort=oldest","timestamp":"2014-04-18T16:13:45Z","content_type":null,"content_length":"64004","record_id":"<urn:uuid:4210dda9-3c3b-4804-b0c5-bd51ce9a6a50>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
(4b-2)(5b-4) would this = 9b-6
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50954161e4b0d0275a3c7594","timestamp":"2014-04-16T08:10:41Z","content_type":null,"content_length":"53593","record_id":"<urn:uuid:5786148b-ea0d-4244-b0ba-5773f764332f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
find (dy/dx) by implicit differentiation (x/y)=(y/x)
• 6 months ago
• 6 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/525c6124e4b002bdb0909865","timestamp":"2014-04-18T23:21:10Z","content_type":null,"content_length":"158355","record_id":"<urn:uuid:cd64992c-9126-43c1-a0b2-dad99206dfed>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00239-ip-10-147-4-33.ec2.internal.warc.gz"} |
This is an application of the statistical theory of extreme values to astronomy. Once again the name of the illustrious statistician Sir Ronald Fisher comes up. for it was he who developed the theory
of extreme values. The problem has to do with the remarkably small dispersion (M[1], of the first brightest galaxies in clusters. This has made them indispensable as "standard candles", and they
have, for this reason, become the most powerful yardsticks in observational cosmology. There has existed, however disagreement as to the nature of these objects. The two opposing viewpoints have
been: that they are a class of "special" objects (Peach 1969; Sandage 1976; Tremaine and Richstone 1977); and at the other extreme, that they are "statistical", representing the tail end of the
luminosity function for galaxies (Peebles 1968, Geller and Peebles 1976).
In 1928, in a classic paper, R.A. Fisher and L.H.C. Tippett had derived the general asymptotic form that a distribution of extreme sample values should take - independent of the parent distribution
from which they are drawn! This work was later expanded upon by Gumbel (1958).
From extreme value theory, for clusters of similar size, the probability density distribution of M[1] for the "statistical" galaxies is given (Bhavsar and Barrow 1985) by the distribution in equation
6, which is often referred to as the first Fisher-Tippett asymptote, or Gumbel distribution.
where a is the parameter which measures the steepness of the luminosity function at the tail end. M[0] is the mode of the distribution and is a measure of the cluster mass, but with only a
logarithmical dependence on the cluster mass.
We can compare the above distribution with the data to answer the question: "Are the first-ranked galaxies statistical?" Figure 9 shows the maximum likelihood fit of equation (6) to the magnitudes of
the 93 first-ranked galaxies from a homogeneous sample of richness 0 and 1 Abell clusters. This excellent data for M[1] was obtained by Hoessel, Gunn and Thuan (1980, [HGT]). The fit is very bad. A
Kolmogorov-Smirnov test also rejects the statistical hypothesis with 99% confidence. If all the galaxies are special, it is not possible to have a theoretical expression for their distribution in
magnitude. Though we can make a simple argument that if they are formed from normal galaxies as a standard "mold", we expect a Gaussian distribution for their magnitudes or luminosities. This is in
fact what is assumed for their distribution by most observers, as seen from the available literature. This possibility is explored also, and figure 9 shows a Gaussian with the same mean and variance
as the data compared with the data for both cases where the magnitudes or the luminosities have a Gaussian distribution. The case where the luminosities are distributed as a Gaussian is called
expnormal (in analogy to lognormal) for the magnitudes. Neither, it can be seen from figure 9 is acceptable.
This result does not necessarily imply that all brightest galaxies are special. It does demand though, that not all first ranked galaxies in clusters are statistical. The question as to their nature,
implied by their distribution in magnitudes, was recently addressed by this author (Bhavsar 1989).
It was shown that a "two population model" (Bhavsar 1989) in which the first brightest galaxies are drawn from two distinct populations of objects - a class of special galaxies and a class of
extremes of a statistical distribution - is needed. Such a model can explain the details of the distribution of M[1] very well. Parameters determined purely on the basis of a statistical fit of this
model with the observed distribution of the magnitudes of the brightest galaxies, are exceptionally consistent with their physically determined and observed values from other sources.
The probability density distribution of the magnitudes of the special galaxies is assumed to be a Gaussian. This is the most general expression for the distribution of either the magnitudes or
luminosities of these galaxies, if they arise from some process which creates a standard mold with a small scatter, arising because of intrinsic variations as well as experimental uncertainty in
measurement. The distribution of M[1] for the special galaxies is given by
If a fraction, d, of the rich clusters have a special galaxy which competes with the normal brightest galaxy for first-rank, then the distribution that results is derived in Bhavsar (1989). This
expression, f (M[1]), which describes the distribution of M[1] for the first-ranked galaxies in rich clusters, for the two population model is given by equation (8)
The first term in the above expression gives the probability density distribution of special galaxies with the condition that the brightest normal galaxy in that cluster is always fainter. The second
term gives the probability density distribution of first-ranked normal galaxies, in clusters containing a special galaxy, but the special galaxy is always fainter. The last term gives the probability
density of normal galaxies in clusters that do not have a special galaxy. Equation (8) is our model's predicted distribution of M[1] for the brightest galaxies in rich clusters. The parameters in
this model are: i) [sp] - the standard deviation in the magnitude distribution of the special galaxies; ii) M[sp] - the mean of the absolute magnitude of the special galaxies; iii) a - the measure of
the steepness of the luminosity function of galaxies at the tail end; iv) M[ex] - the mean of the absolute magnitude of the statistical extremes given by M[ex] = M[0] - .577/a, we shall instead use
the parameter b = M[sp] - M[ex], the difference in the means of the magnitudes of special galaxies and statistical extremes; and v) d - the fraction of clusters that have a special galaxy.
We have chosen the maximum-likelihood method, being the most bias free, and therefore best suited, to determine the values of the parameters. There are five independent parameters and 93 values of
data. We maximize the likelihood function, defined by
where the function f (M[1][i]) is the value of f (M[1]) defined in equation (8), evaluated at each of the 93 values of M[1] respectively for i = 1 to 93. The values of the parameters that maximize £
give the maximum-likelihood fit of the model to the data. The parameters, thus determined, have the following values:
Figure 10 compares the data to the model [equation (8)] evaluated for the parameter values determined above. The fit is very good. Note that the fit is calculated using all 93 independent
observations, and not tailored to fit this particular histogram.
A further detailed statistical analysis by Bhavsar and Cohen (1989) of alternatives to the assumed Gaussian distribution of the magnitudes of special galaxies, along with a study of the confidence
limits of the parameters determined by maximum-likelihood and other statistical methods has determined a "best" model. It turns out that the models in which the luminosities have a Gaussian
distribution work marginally better. This may have been expected, luminosity being the physical quantity. This model requires 73% of the richness 0 and 1 clusters to have a special galaxy which is on
average half a magnitude brighter than the average brightest normal galaxy. As a result, about 66% of all first-ranked galaxies in richness 0 and 1 clusters are special. This is because in 7% of the
clusters, though a special galaxy is present, it is not the brightest.
Although it is generally appreciated that some of the brightest galaxies in rich dusters are a morphologically distinct class of objects (eg cD galaxies); we have approached the problem from the
viewpoint of the statistics of their distribution in M[1], and conclude that indeed some of the brightest galaxies in rich clusters are a special class of objects, distinct from the brightest normal
galaxies. Further we have been able to model the distribution of these galaxies. We have presented statistical evidence that the magnitudes of first-ranked galaxies in rich clusters are best
explained if they consist of two distinct populations of objects; a population of special galaxies having a Gaussian distribution of magnitudes with a small dispersion (0.21 mag), and a population of
extremes of a statistical luminosity function. The best fit model requires that 73% of the clusters have a special galaxy that is on average 0.5 magnitudes brighter than the brightest normal galaxy.
The model also requires the luminosity function of galaxies in clusters to be much steeper at the very tail end, than conventionally described. | {"url":"http://ned.ipac.caltech.edu/level5/Sept01/Bhavsar/Bhavsar4.html","timestamp":"2014-04-17T15:30:36Z","content_type":null,"content_length":"12981","record_id":"<urn:uuid:785e79b2-c092-450f-8e22-029cb640945f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00096-ip-10-147-4-33.ec2.internal.warc.gz"} |
Petaluma Algebra Tutor
...He enjoys music, hiking and geocaching.Dr. Andrew G. has a Ph.D. from Caltech in environmental engineering science with a minor in numerical methods. In addition he has over 30 years experience
as a practicing atmospheric scientist and dispersion modeler.
13 Subjects: including algebra 1, algebra 2, calculus, physics
...I look forward to talking with parents about their concerns for their students. We CAN make a difference!The study of psychology at all levels, from high school through undergraduate, Master's
and Doctoral levels, including statistics, experimental design and methodology. As a doctoral student ...
20 Subjects: including algebra 1, algebra 2, calculus, trigonometry
...It's pretty awesome. And if I can help show at least one student that fact, then I'll have done my job. Oh by the way, it also doesn't hurt that my tutoring has also consistently raised grades
too- an added, (but important!) bonus.
11 Subjects: including algebra 2, algebra 1, calculus, geometry
...The writing section involves composing two separate essays under time constraints, so I recommend some forethought and practice; and I have considerable experience teaching writing. I have been
assisting undergraduates and grad students with sociology papers and theses for many years. During my...
42 Subjects: including algebra 1, English, reading, writing
...I played Violin at three different Churches in Santa Rosa. I currently am the Violinist at the 9AM mass at St Eugene's Cathedral. I satisfactorily completed 160 hours classroom training in
Industrial Sewing and was awarded a Certificate of Completion from Durham Technical Community College on April 1, 1991.
41 Subjects: including algebra 2, algebra 1, English, geometry | {"url":"http://www.purplemath.com/petaluma_algebra_tutors.php","timestamp":"2014-04-17T07:38:15Z","content_type":null,"content_length":"23710","record_id":"<urn:uuid:18e73269-6b68-46a2-adb2-db1d9d9ed16e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Petaluma Algebra Tutor
...He enjoys music, hiking and geocaching.Dr. Andrew G. has a Ph.D. from Caltech in environmental engineering science with a minor in numerical methods. In addition he has over 30 years experience
as a practicing atmospheric scientist and dispersion modeler.
13 Subjects: including algebra 1, algebra 2, calculus, physics
...I look forward to talking with parents about their concerns for their students. We CAN make a difference!The study of psychology at all levels, from high school through undergraduate, Master's
and Doctoral levels, including statistics, experimental design and methodology. As a doctoral student ...
20 Subjects: including algebra 1, algebra 2, calculus, trigonometry
...It's pretty awesome. And if I can help show at least one student that fact, then I'll have done my job. Oh by the way, it also doesn't hurt that my tutoring has also consistently raised grades
too- an added, (but important!) bonus.
11 Subjects: including algebra 2, algebra 1, calculus, geometry
...The writing section involves composing two separate essays under time constraints, so I recommend some forethought and practice; and I have considerable experience teaching writing. I have been
assisting undergraduates and grad students with sociology papers and theses for many years. During my...
42 Subjects: including algebra 1, English, reading, writing
...I played Violin at three different Churches in Santa Rosa. I currently am the Violinist at the 9AM mass at St Eugene's Cathedral. I satisfactorily completed 160 hours classroom training in
Industrial Sewing and was awarded a Certificate of Completion from Durham Technical Community College on April 1, 1991.
41 Subjects: including algebra 2, algebra 1, English, geometry | {"url":"http://www.purplemath.com/petaluma_algebra_tutors.php","timestamp":"2014-04-17T07:38:15Z","content_type":null,"content_length":"23710","record_id":"<urn:uuid:18e73269-6b68-46a2-adb2-db1d9d9ed16e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Browse by Author
Number of items: 2.
Collier, J
Collier, J. R. and McInerney, D. and Schnell, S. and Maini, P. K. and Gavaghan, D. J. and Houston, P. and Stern, C. D. (2000) A cell cycle model for somitogenesis: mathematical formulation and
numerical simulation. Journal of Theoretical Biology, 207 (3). pp. 305-316.
Collier, J. R. and Monk, N. A. M. and Maini, P. K. and Lewis, J. H. (1996) Pattern formation by lateral inhibition with feedback: a mathematical model of Delta-Notch intercellular signalling. Journal
of Theoretical Biology, 183 (4). pp. 429-446.
Gavaghan, D
Collier, J. R. and McInerney, D. and Schnell, S. and Maini, P. K. and Gavaghan, D. J. and Houston, P. and Stern, C. D. (2000) A cell cycle model for somitogenesis: mathematical formulation and
numerical simulation. Journal of Theoretical Biology, 207 (3). pp. 305-316.
Houston, P
Collier, J. R. and McInerney, D. and Schnell, S. and Maini, P. K. and Gavaghan, D. J. and Houston, P. and Stern, C. D. (2000) A cell cycle model for somitogenesis: mathematical formulation and
numerical simulation. Journal of Theoretical Biology, 207 (3). pp. 305-316.
Lewis, J
Collier, J. R. and Monk, N. A. M. and Maini, P. K. and Lewis, J. H. (1996) Pattern formation by lateral inhibition with feedback: a mathematical model of Delta-Notch intercellular signalling. Journal
of Theoretical Biology, 183 (4). pp. 429-446.
Maini, P
Collier, J. R. and McInerney, D. and Schnell, S. and Maini, P. K. and Gavaghan, D. J. and Houston, P. and Stern, C. D. (2000) A cell cycle model for somitogenesis: mathematical formulation and
numerical simulation. Journal of Theoretical Biology, 207 (3). pp. 305-316.
Collier, J. R. and Monk, N. A. M. and Maini, P. K. and Lewis, J. H. (1996) Pattern formation by lateral inhibition with feedback: a mathematical model of Delta-Notch intercellular signalling. Journal
of Theoretical Biology, 183 (4). pp. 429-446.
McInerney, D
Collier, J. R. and McInerney, D. and Schnell, S. and Maini, P. K. and Gavaghan, D. J. and Houston, P. and Stern, C. D. (2000) A cell cycle model for somitogenesis: mathematical formulation and
numerical simulation. Journal of Theoretical Biology, 207 (3). pp. 305-316.
Monk, N
Collier, J. R. and Monk, N. A. M. and Maini, P. K. and Lewis, J. H. (1996) Pattern formation by lateral inhibition with feedback: a mathematical model of Delta-Notch intercellular signalling. Journal
of Theoretical Biology, 183 (4). pp. 429-446.
Schnell, S
Collier, J. R. and McInerney, D. and Schnell, S. and Maini, P. K. and Gavaghan, D. J. and Houston, P. and Stern, C. D. (2000) A cell cycle model for somitogenesis: mathematical formulation and
numerical simulation. Journal of Theoretical Biology, 207 (3). pp. 305-316.
Stern, C
Collier, J. R. and McInerney, D. and Schnell, S. and Maini, P. K. and Gavaghan, D. J. and Houston, P. and Stern, C. D. (2000) A cell cycle model for somitogenesis: mathematical formulation and
numerical simulation. Journal of Theoretical Biology, 207 (3). pp. 305-316.
This list was generated on Sat Apr 19 07:24:47 2014 BST. | {"url":"http://eprints.maths.ox.ac.uk/view/author/Collier=3AJ=2E_R=2E=3A=3A.html","timestamp":"2014-04-19T06:53:54Z","content_type":null,"content_length":"14256","record_id":"<urn:uuid:62a9ca23-2017-4bd8-a921-d20cae02f0fb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00344-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebraic theories
Algebras and modules
Higher algebras
Model category presentations
Geometry on formal duals of algebras
In an associative algebra $A$, the commutant of a set $B \subset A$ of elements of $A$ is the set
$B' = \{a \in A | \forall b \in B: a b = b a \}$
of elements in $A$ that commute with all elements in $B$.
The operation of taking a commutant is a contravariant map $P(A) \to P(A)$ that is adjoint to itself in the sense of Galois connections. In other words, we have for any two subsets $B, C \subseteq A$
the equivalence
$B \subseteq C' \qquad iff \qquad C \subseteq B'.$
Hence $B \subseteq B''$ and also $B' = B'''$.
Revised on July 27, 2011 15:57:10 by
Urs Schreiber | {"url":"http://www.ncatlab.org/nlab/show/commutant","timestamp":"2014-04-16T21:56:23Z","content_type":null,"content_length":"22242","record_id":"<urn:uuid:b491ef91-5092-48cd-9e4e-fb0f5de06160>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
the Common Core Sta
Explore high school geometry with varied and dynamic teaching material, like exciting interactive activities, flexible graphs and diagrams, and summary quizzes. With hundreds of slides of interactive
teaching material, the resources are correlated to each of the eight Standards for Mathematical Practice in the Common Core State Standards, as well as state learning standards. | {"url":"http://www.boardworkseducation.com/hs-geometry-common-core_295/product-showcase","timestamp":"2014-04-20T05:53:09Z","content_type":null,"content_length":"40063","record_id":"<urn:uuid:c832c2ac-0398-41f4-8886-4e727850963f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00467-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about quadratic equations on The Math Projects Journal
I usually tell this analogy in the spring. I use it as a hook during the Algebra unit on Quadratics, when introducing problems involving projectiles (Mission to Mars lesson). I am sharing it now
because it is timely, being that the rover Curiosity just landed on Mars this week. (Note: It would be useful to … Continue reading
Study parabolic curves through the design or water arcs SUBJECT: Algebra (Beginning & Advanced) TOPICS: Writing, graphing and solving quadratic equations; finding the roots, vertex, axis of symmetry,
directrix, and focus; solving by factoring, completing the square, and the quadratic formula; systems of quadratics PAGES: 3 Download PDF | {"url":"http://mathprojects.com/tag/quadratic-equations/","timestamp":"2014-04-18T18:11:24Z","content_type":null,"content_length":"31064","record_id":"<urn:uuid:834bcedb-a0ca-4e51-bfc8-81a4540ed6bf>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
diffalg[power_series_solution] - expand the non-singular zero of a characterizable differential ideal into integral power series
Calling Sequence
power_series_solution (point, order, J, 'syst', 'params')
point - list or set of names or equations
order - non-negative integer
J - characterizable differential ideal
syst - (optional) name
params - (optional) name
• Important: The diffalg package has been deprecated. Use the superseding package DifferentialAlgebra instead.
• The function power_series_solution computes a formal integral power series solution of the differential system equations , inequations . Such a system is formally integrable. See the last example
• The parameter point furnishes the point of expansion of the formal power series. It is a set or a list of equations where is one of the derivation variables and is its value.
If point is a singular point of equations (J), then power_series_solution returns FAIL. Nevertheless, this does not mean that no formal power series solution exists at that point.
• When point is not singular, the series is truncated at the order given by the parameter order. They could be expanded up to any order, though convergence is not guaranteed.
The result is presented as a list of equations , where the are the differential indeterminates and the are series in the derivation variables.
• The series involve parameters corresponding to initial conditions to be given.
The parameters appear as u, where u is a differential indeterminate if it represents the value of the solution at point, or _Cu_x, where x is some derivation variable, if it represents the value of
the value of the first derivative of according to x at point.
The parameters must satisfy a triangular system of polynomial equations and inequations given by syst in terms of the parameters involved in the power series solution.
If present, the variable params receives the subset of the parameters involved in the power series solution that can almost be chosen arbitrarily if not for some inequations in syst.
• If J is a radical differential ideal represented by a list of characterizable differential ideals, the function power_series_solution is mapped on its component.
• The command with(diffalg,power_series_solution) allows the use of the abbreviated form of this command.
Important: The diffalg package has been deprecated. Use the superseding package DifferentialAlgebra instead.
Let us explain now why, in general, we have to start from a characterizable differential system instead of any differential system. Consider the differential system given by these two differential
We are looking for a solution starting as:
It seems that we can choose an initial condition () and that, by differentiating the equations, all the coefficients in the expansion can be expressed in terms of .
The first terms do not lead to any problem:
To compute the next term we can either differentiate or . The problem is that the results obtained are not compatible.
The system is not formally integrable as it stands. The only solution of the system is:
See Also
diffalg(deprecated), diffalg(deprecated)/differential_algebra, diffalg(deprecated)/differential_ring, diffalg(deprecated)/initial_conditions, diffalg(deprecated)/Rosenfeld_Groebner,
DifferentialAlgebra[PowerSeriesSolution], simplify, solve, subs
Was this information helpful?
Please add your Comment (Optional)
E-mail Address (Optional)
What is This question helps us to combat spam | {"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=diffalg(deprecated)/power_series_solution","timestamp":"2014-04-19T09:32:30Z","content_type":null,"content_length":"164853","record_id":"<urn:uuid:a8ce5971-f32a-4c24-b89d-4e19787b65d0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
How "generalized eigenvalues" combine into producing the spectral measure?
up vote 4 down vote favorite
Hi... I am wondering how 'eigenvalues' that don't lie in my Hilbert space combine into producing the spectral measure. I study probability and I am quite ignorant in the field of spectral analysis of
operators on Hilbert spaces so please go easy on me :), yet i tried reading parts of the classical Simon and Reeds "Functional analysis" volume 1, and other books. I feel i am very far from an
answer. At least, now i can formulate my question.
The mathematical setting is the following: I consider a general (possibly unbounded) operator $A$ on a Hilbert space H with scalar product $(. , .)$, say $H = L^2( \mathbb{R} )$. $D(A)$ will be a
dense domain. I do understand that the spectrum $\sigma(A)$ is defined as the complementary of the resolvent set, and can be broken into continuous, residual and point spectra. In the case of
self-adjoint operators (hence closable), a very abstract version the Von-Neumann spectral theorem asserts that $A$ can be diagonalized using a spectral decomposition of the identity. The full setting
would look like (cf "Quantum physics for mathematicians" by Leon Takhtajan):
• There are a spectrum-indexed family of projectors $P_\lambda(.)$ $\lambda \in \sigma(A)$. These are the "spectral projectors" and reduce in finite dimension to $P_\lambda(x) = \sum_{ \mu \leq \
lambda }(x, e_\mu) e_\mu$, $e_\mu$ being the orthonormal diagonalizing basis. And when $\lambda$ is in the point spectrum: $dP_\lambda(x) = (x, e_\lambda) e_\lambda$
• The image of spectral projectors grows with respect to the spectral parameters so that the following identity is true: $$ \forall f \in H, P_\lambda \circ P_\mu(f) = P_{min(\lambda,\mu)}(f)$$
• The behavior regarding the $\lambda$ parameter is such that: $$ \forall (f,g) \in H^2, (P_\lambda(f), g) = \mu_{f,g}(]-\infty; \lambda])$$ with $\mu_{f,g}$ a measure. Because of this property,
$P_.(f)$ can be seen as a measure on H itself whatever that means.
• The spectral decomposition of the identity (trivial in finite dimension): $$ \forall f \in H, f = \int_{-\infty}^\infty dP_\lambda(f) $$
• The spectral decomposition of our operator: $$ \forall f \in H, Af = \int_{-\infty}^\infty \lambda d P_\lambda(f) $$ This last one being the generalization of the very basic linear algebra
identity valid for hermitian matrices: $$ \forall x \in \mathbb{R}^n, A(x) = \sum_{ \lambda \in \sigma(A) } \lambda (x, e_\lambda) e_\lambda $$ It is well known that at $\lambda$ an eigenvalue
(in the point spectrum), the spectral measure has a Dirac, as we find ourselves in same situation as the finite dimensional case. I am interested in "generalized eigenfunctions" that are
functions not necessarily in $L^2$, but that verify still $ Af = \lambda f$ for a certain $\lambda$ in the general spectrum. I am now including two classical examples.
In the case of the "position" operator: $$ D(A) = (\{ f \in H, \int x^2 f(x)^2 dx < \infty \}) $$ $$ (Af)(x) = x f(x) $$ The spectrum is well known and only continuous: $\sigma(A) = \mathbb{R}$. It
is obvious that the operator is already diagonal, and that matter is reflected by the fact that the "generalized eigenfunctions" are Dirac distributions: $$ (A \delta_\lambda)(x) = x \delta_\lambda
(x) = \lambda \delta_\lambda(x)$$ $$ \forall f \in H, Af = \int_{\mathbb{R}} \lambda f(\lambda) \delta_\lambda(.) $$
In the case of $A = -\Delta$: $$ D(A) = (\{ f \in H, \int f''(x)^2 dx < \infty \}) $$ The spectrum is well known and only continuous: $\sigma(A) = \mathbb{R}^+$. The operator is diagonalized using
the Fourier transform $\mathcal{F}$, and that matter is reflected by the fact that the "generalized eigenfunctions" are complex unitary characters $e^{i\sqrt{\lambda}x}$ and $e^{-i\sqrt{\lambda}x}$.
The spectral theorem takes a simple shape thanks to the Fourier transform. Indeed: $$ \forall f \in H, \mathcal{F}(Af)(k) = k^2 \mathcal{F}(f)(k) $$ Then: $$ \forall f \in H, (Af)(x) = \frac{1}{2 \
pi} \int_{\mathbb{R}} k^2 e^{-i k x} \mathcal{F}(f)(k) dk = \frac{1}{2 \pi} \int_{\mathbb{R}^+} \lambda ( e^{-i \sqrt{\lambda} x}\mathcal{F}(f)(\sqrt{\lambda}) - e^{i \sqrt{\lambda} x}\mathcal{F}(f)
(-\sqrt{\lambda}) ) d\lambda$$ This last way of writing the operator 'diagonalization' shows that the spectral measure is a superposition of the two types of 'waves' (positively propagating $e^{i\
sqrt{\lambda}x}$ and negatively propagating $e^{-i\sqrt{\lambda}x}$) with a weight given by the Fourier transform of f, whatever that really means.
In those two cases, we see that those "generalized eigenfunctions" (Diracs and unitary complex characters) combine in a special way in order to produce the spectral measure. I read somewhere a
sentence that left me puzzled: those eigenfunctions combine into a Schwartz kernel. I think i read that in "Quantum physics for mathematicians" by L. Takhtajan. Now i get the feeling that fully
diagonalizing a self-adjoint operator can be a very hard task. Can you also provide me for other references than those i used to far?
In the end, my question could be formulated as the following, even if i am not satisfied with it, as it is still too vague: Suppose that by some mean, i know all or some "generalized eigenfunctions".
Then can i express the spectral measure in terms of those eigenfunctions? If so, how?
Side questions:
• It seems natural to ask my generalized eigenfunctions to be a complete orthonormal system, whose cardinal is the cardinal of the spectrum at least (if no multiplicities). It then makes me feel
that those functions (or distributions) will lie in a non-separable space as all orthonormal systems is separable spaces are countable. For the same reason, an operator's point spectrum is always
countable. This makes me think that generalized eigenfunctions have to be looked for in a very big space with a special topology.
• Why some "generalized eigenfunctions" count and others don't? I am thinking of the Laplacian case, as any $f(x) = e^{zx}, z \in \mathbb{C}$ verifies $f'' = z^2 f$. But clearly, only those with
imaginary $z$ count. There are also the harmonic polynomials. Is this related to the fact of being unitary?
I would very much appreciate any references or hints. And i am sorry if my question is not stated in the proper terms of spectral analysis/operator theory.
Phew that was long to type...
sp.spectral-theory oa.operator-algebras differential-operators fa.functional-analysis
1 -1 for two points. First of all, $(P_\lambda(f),g)=\mu(]-\infty;\lambda]$ is certainly false. How could it be true for every $f$ and $g$ in $H$ ? Second, you employ $\mu$ once for a measure, then
in the next line for a number (in $\min(\lambda,\mu)$). – Denis Serre Oct 10 '10 at 13:50
1 I don't think it is such a big deal. It's just notation, and the problem is simply solved by renaming the measure $\mu$ as $\mu_{f,g}$. – Martin Argerami Oct 10 '10 at 14:24
True... i'll edit my post... that part i believe is not very clear... Sorry i am no expert and just learned these things... – Reda Oct 10 '10 at 14:41
add comment
1 Answer
active oldest votes
Wouldn't Theorem 4.2 in here answer your question?
up vote 4 down
vote accepted
Wow, thanks... This definitely answers quite a lot... I indeed suspected there was the need for "completing" the space... So i'll read more about those Hilbert-Schmidt
riggings... I may be able to better formulate or even answer my side questions... Thanks again – Reda Oct 10 '10 at 15:28
Gee i should be ashamed... That was the first google hit for "generalized eigenfunctions"... I had tried a bunch of things on google, but not that... – Reda Oct 10 '10 at 15:39
3 Actually, that's how I found it. I had never thought of the possibility of finding "eigenfunctions" outside the natural Hilbert space, so I just googled "generalized
eigenfunction". – Martin Argerami Oct 10 '10 at 16:28
add comment
Not the answer you're looking for? Browse other questions tagged sp.spectral-theory oa.operator-algebras differential-operators fa.functional-analysis or ask your own question. | {"url":"http://mathoverflow.net/questions/41675/how-generalized-eigenvalues-combine-into-producing-the-spectral-measure/41681","timestamp":"2014-04-24T00:16:17Z","content_type":null,"content_length":"65149","record_id":"<urn:uuid:34b1cf70-f332-4fc3-bfa2-bb409359e0c0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00619-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fundamental Counting Principle
Using the Fundamental Counting Principle to Determine the Sample Space
As we dive deeper into more complex probability problems, you may start wondering, "How can I figure out the total number of outcomes, also known as the sample space?"
We will use a formula known as the fundamental counting principle to easily determine the total outcomes for a given problem. First we are going to take a look at how the fundamental counting
principle was derived, by drawing a tree diagram.
Example 1
We were able to determine the total number of possible outcomes (18) by drawing a tree diagram. However, this technique can be very time consuming.
The fundamental counting principle will allow us to take the same information and find the total outcomes using a simple calculation. Take a look.
Example 1 (continued)
As you can see, this is a much faster and more efficient way of determining the total outcomes for a situation.
Let's take a look at another example.
Example 2
I would not want to draw a tree diagram for Example 2! However, we were able to determine the total outcomes by using the fundamental counting principle.
Let's look at one more example and see how probability comes into play.
Example 3
Now it's your turn. No tree diagrams!
Practice Problem
Answer Key
Although you may think that drawing the tree diagrams is fun, it's much easier to use the formula, isn't it? I hope you had fun - now it's time to move on.
We would love to hear what you have to say about this page!
Probability Lessons
Subscribe To This Site | {"url":"http://www.algebra-class.com/fundamental-counting-principle.html","timestamp":"2014-04-19T17:06:34Z","content_type":null,"content_length":"27414","record_id":"<urn:uuid:a37e30c5-c8ad-4247-b3ed-4329b3c8588a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00318-ip-10-147-4-33.ec2.internal.warc.gz"} |
A symbolic framework for operations on linear boundary problems
Rosenkranz, Markus and Regensburger, Georg and Tec, Loredana and Buchberger, Bruno (2009) A symbolic framework for operations on linear boundary problems. In: Gerdt, V.P. and Mayr, E.W. and
Vorozhtsov, E.V., eds. Proceedings of the 11th International Workshop on Computer Algebra in Scientific Computing. Lecture Notes in Computer Science, 5743 . Springer, pp. 269-283. ISBN 9783642041020.
(The full text of this publication is not available from this repository)
We describe a symbolic framework for treating linear boundary problems with a generic implementation in the Theorema system. For ordinary differential equations, the operations implemented include
computing Green’s operators, composing boundary problems and integro-differential operators, and factoring boundary problems. Based on our factorization approach, we also present some first steps for
symbolically computing Green’s operators of simple boundary problems for partial differential equations with constant coefficients. After summarizing the theoretical background on abstract boundary
problems, we outline an algebraic structure for partial integro-differential operators. Finally, we describe the implementation in Theorema, which relies on functors for building up the computational
domains, and we illustrate it with some sample computations including the unbounded wave equation.
• Depositors only (login required): | {"url":"http://kar.kent.ac.uk/29967/","timestamp":"2014-04-20T15:58:44Z","content_type":null,"content_length":"22209","record_id":"<urn:uuid:72ee999e-3123-4179-9485-f2f8291522e2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Metrics on the Phase Space and Non-Selfadjoint Operators
This is a four-hundred-page book on the topic of pseudodifferential operators, with special emphasis on non-selfadjoint operators, a priori estimates and localization in the phase space. The first
chapter, Basic Notions of Phase Space Analysis, is introductory and gives a presentation of very classical classes of pseudodifferential operators, along with some basic properties. The second
chapter, Metrics on the Phase Space, begins with a review of classical structures. We expose as well some elements of the so-called Wick calculus. We present some key examples related to the
Calderón-Zygmund decompositions such that the Fefferman-Phong inequality. We give a description of the construction of Sobolev spaces attached to a pseudodifferential calculus. The third and last
chapter is entitled Estimates for Non-Selfadjoint Operators. We discuss the details of the various types of estimates that can be proved or disproved, depending on the geometry of the symbols. We
start with a rather elementary section containing examples and various classical models such as the Hans Lewy example. The following sections are more involved; in particular we start a discussion on
the geometry of condition (Ψ) with some known facts on flow-invariant sets, but we expose also the contribution of N. Dencker in the understanding of that geometric condition, with various
inequalities satisfied by symbols. Then we enter into the discussion of estimates with loss of one derivative; we start with a detailed proof of the Beals-Fefferman result on local solvability with
loss of one derivative under condition (P). Following the author's counterexample, we show that an estimate with loss of one derivative is not a consequence of (Ψ). Finally, we give a proof of an
estimate with loss of 3/2 derivatives under condition (Ψ), following the articles of N. Dencker and the author's. It is our hope that the first two parts of the book are accessible to graduate
students with a decent background in Analysis. The third chapter is directed more to researchers but should also be accessible to the readers able to get some good familiarity with the first two
chapters, in which the main tools for the proofs of Chapter 3 are provided.
If you are interested in this book, you may go to the springer website, but you can also take a look separately at each chapter below.
Table of Contents, Preface
Chapter 1: Basic Notions of Phase Space Analysis
1.1. Introduction to pseudodifferential operators
1.2. Pseudodifferential operators on an open subset of R^n
1.3. Pseudodifferential operators in harmonic analysis
Chapter 2: Metrics on the Phase Space
2.1. The structure of the phase space
2.2. Admissible metrics
2.3. General principles of pseudodifferential calculus
2.4. The Wick calculus of pseudodifferential operators
2.5. Basic estimates for pseudodifferential operators
2.6. Sobolev spaces attached to a pseudodifferential calculus
Chapter 3: Estimates for Non-Selfadjoint Operators
3.1. Introduction.
3.2. The geometry of condition (Ψ)
3.3. The necessity of condition (Ψ)
3.4. Estimates with loss of k/k+1 derivative
3.5. Estimates with loss of 1 derivative
3.6. Condition (Ψ) does not imply solvability with loss of 1 derivative
3.7. Condition (Ψ) implies solvability with loss of 3/2 derivatives
3.8. Concluding remarks
Appendix, References, Index | {"url":"http://www.math.jussieu.fr/~lerner/index.pse.html","timestamp":"2014-04-20T03:11:30Z","content_type":null,"content_length":"4548","record_id":"<urn:uuid:ce7f3824-4853-418b-a605-d322305269b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 98
At t = 0, an automobile traveling north begins to make a turn. It follows one-quarter of the arc of a circle of radius 10.1 m until, at t = 1.64 s, it is traveling east. The car does not alter its
speed during the turn. (a) Find the car's speed. (b) Find the change in its ...
A circular wire loop of mass M and radius R carries a current I. It hangs from a hinge that allows it to swing in the direction perpendicular to the plane of the loop. A uniform magnetic field B
exists everywhere in space pointing upwards. What angle θ does the plane of t...
A metal spring can be used as a solenoid. The spring is stretched slightly, and a current is passed through it. The resulting magnetic field will cause the spring to a. collapse (shorten) b. stretch
(lengthen) c. not change (nothing happens) d. ranslate along the current direc...
A charged particle with charge q is moving with speed v perpendicular to a uniform magnetic field. A second identical charged particle is moving with speed 2v perpendiculuar to the same magnetic
field. The time to complete one full circular revolution for the first particle is...
work out the velocity from right hand rule. Remember what current means in terms of conventional current.
electricity and magnets
Think about conventional current.
what statements?
not yet
So did this work out Lora?
That seems really small.
whats h? is that the height 10 cm?
There are two things in life, physics and everything else is stamp collecting.
Magnetic effect
Use kinematic equations for 1. Then you can work out the radius. For 2 what is the equation for potential energy of a magnetic field?
You have a pulley 13.9 cm in diameter and with a mass of 1.7 kg. You get to wondering whether the pulley is uniform. That is, is the mass evenly distributed, or is it concentrated toward the center
or the rim? To find out, you hang the pulley on a hook, wrap a string around it...
2 moles of helium (He) are pumped from a gas canister into a weather balloon. The balloon at this point has a volume V1. 1 mole of nitrogen gas (N2) is added at the same temperature and pressure. The
volume of the balloon changes to V2. How will the new volume of the balloon c...
When NH3 is added to solid Hg2Cl2, some of the mercury ions are oxidized and some are reduced. A reaction where the same substance is both oxidized and reduced is called a disproportionation
reaction. Write equations for each of these half-reactions. HELP
algebra 1
just like normal equations
physical science
two cars 200meters apart drive towards one another. If one car travels at 12 meters/s and the other at 26 meters/s how long does it take for them to collide?
10 grade algebra
10(x+5)=130 x+5=130/10 x+5=13 x=13-5 x=8
I divide exactly by 100 my first two digits are equal the sum of my four digits is 8
Vector A has a magnitude of 5.00 units, vector has a magnitude of 5.00 units, and the dot product A·B has a value of 16. What is the angle between the directions of A and ?
if 400 customers come to a grand opening and every 5th customers receives a coupon for jeans and every 7th customers receives a coupon for a free sweater. how many customers will leave with both
AP Chem
(MnO40- + (C2O4)2- + H+ = CO2 + H2O + ? OR (MnO4)- + (C2O4)2-+ OH-=CO2 n+ H2O +? Can somebody please help me predict the products? I can balance it i just need help determining what to balance.
a driver sees a horse on the road and applies the brakes so hard that they lock and the car skids to a stop in 24 m. the road is level and the coefficient of kinetic friction between the tires and
the road is 0.7. How fast was the car going when the brakes were applied?
AP Chemistry
thank you that was the only one that i couldn't get. that and number 7. im a little confused on how somebody could do that?
AP Chemistry
thank you that was the only one that i couldn't get. that and number 7. im a little confused on how somebody could do that?
AP Chemistry
The Synthesis of an Iron Oxalato Complex Salt ( Green Crystals ) In this experiment you will synthesize a compound that will contain the elements potassium, iron, carbon, hydrogen, and oxygen. Carbon
and oxygen will be present in the compound as oxalate (C2O4-2) wher...
The height, h(t), of a projectile, in metres, can be modelled by the equation h(t) = 14t 5t2, where t is the time in seconds after the projectile is released. Can the projectile ever reach a height
of 9m? Explain.
We are having an issue trying to solve for purple...
A toy train has 20 cars- blue, purple and orange. There are 4 times as many blue cars as purple cars. There are 5 times as many orange cars as purple cars. How many of each color are there?
Marie can afford a $250 per month car payment. She s found a 5 year loan at 7% interest. a. How expensive of a car can she afford? b. How much total money will she pay the loan company? c. How much
of that money is interest?
How much money will I need to have at retirement so I can withdraw $60,000 a year for 20 years from an account earning 8% compounded annually? a. How much do you need in your account at the beginning
b. How much total money will you pull out of the account? c. How much of that...
You deposit $1000 each year into an account earning 8% compounded annually. a. How much will you have in the account in 10 years? b. How much total money will you put into the account? c. How much
total interest will you earn?
How much would you need to deposit in an account now in order to have $20,000 in the account in 4 years? Assume the account earns 5% interest.
How do yo calculate a number to the power of 2.08 i.e. not a whole number. so 4 to the power of 2.08 Thanks.
The mole fraction of an aqueous solution of magnesium chlorate is 0.371. Calculate the molarity (in mol/L) of the magnesium chlorate solution, if the density of the solution is 1.39 g mL-1.
calculate the pH of 20.00mL of 2.00 M HCl diluted to 0.500 L
The equilibrium constant for the reaction: 2NO(g)+Br2(g)<==>2NOBr(g) Kc=1.3x10^-2 at 1000 Kelvin Calculate Kc for: NOBr(g)<==>NO(g)+ 1/2Br2(g)
Calculate the [OH-] for a solution in which [H+] is 1000 times greater than [OH-]? and will the final result be acidic, basic, or neutral?
assuming complete dissociation of the solute, how many grams of KNO3 must be added to 275 mL of water to produce a solution that freezes at -14.5 C? The freezing point for pure water is 0.0 C and is
equal to 1.86 C/m .
AP Physics
m1= mass of the bullet m2= mass of the block Vf=sqrt(2gh) v1= initial velocity of bullet v2= initial velocity of the block = 0 so first is conservation of momentum m1v1+m2v2=(m1+v2)Vf solve for v1
(note m2v2=0) v1=((m1+m2)/m2)/Vf to find Vf look at conservation of energy KE+PE...
whats 17.50x30
In the following experiment, a coffee-cup calorimeter containing 100 mL of H20 is used. The initial temperature of the calorimeter is 23.0 C . If 3.60 g of CaCl2is added to the calorimeter, what will
be the final temperature of the solution in the calorimeter? The heat of the ...
the molarity of a solution composed of 25.4g KHCO3 diluted with water to a total volume of 200 mL
A solution of .12 L of 0.160 M KOH is mixed with a solution of .3 L of 0.230 M NiSO4. the equation for this reaction is: 2KOH (aq)+NiSO4 (aq) ----->K2SO4 (aq)+Ni(OH)2 (s) .89 grams of Ni(OH)2
precipitate form I need to know the concentration remaining in the solution of: 1...
Chemistry 101
A solution of .12 L of 0.160 M KOH is mixed with a solution of .3 L of 0.230 M NiSO4. the equation for this reaction is: 2KOH (aq)+NiSO4 (aq) ----->K2SO4 (aq)+Ni(OH)2 (s) .89 grams of Ni(OH)2
precipitate form I need to know the concentration remaining in the solution of: 1)...
A scientist wants to make a solution of tribasic sodium phosphate, Na3PO4, for a laboratory experiment. How many grams of Na3PO4 will be needed to produce 400 mL of a solution that has a
concentration of Na+ ions of 1.00 M?
Find the slope of the line passing through the points A(-4, 3) and B(4, 2)
what dose porosity mean?
CHEMISTRY URGENT!!!
Determine the pH (to two decimal places) of the solution that is produced by mixing 30.3 mL of 1.00×10-4 M HBr with 829 mL of 5.77×10-4 M Ca(OH)2.
A mixture containing CH3OH, HF, CH3F, and H2O, all at an initial partial pressure of 1.04 atm, is allowed to achieve equilibrium according to the equation below. At equilibrium, the partial pressure
of H2O is observed to have increased by 0.43 atm. Determine Kp for this chemic...
Determine the volume (in mL) of 0.844 M hydrochloric acid (HCl) that must be added to 38.0 mL of 0.650 M sodium hydrogen phosphate (Na2HPO4) to yield a pH of 6.92. Assume the 5% approximation is
valid and report your answer to 3 significant figures. Look up pKa values.
9x-12 = 78 (add 12 to both sides) 9x-12+12 = 78+12 (divide by 9 to isolate x) 9x=90 9x/9 = 90/9 x = 10 Check * 9(10) - 12 = 78 (9 x 10) - 12 = 78 90 - 12 = 78 78 = 78
Determine the molarity (in mol/L) of NO2− ions in solution when 9.77×101 g of sodium nitrite is added to enough water to prepare 3.94×102 mL of the solution.
Physics II
Jose, whose mass is 75 kg, has just completed his first bungee jump and is now bouncing up and down at the end of the cord. His oscillations have an initial amplitude of 11.0 m and a period of 4.0 s.
From what height above the lowest point did Jose jump? k=185 n/m Vmax = 17 m/s
The mole fraction of an aqueous solution of sodium dichromate is 0.367. Calculate the molarity (in mol/L) of the sodium dichromate solution, if the density of the solution is 1.42 g mL-1.
chemistry question. How many grams of silver sulfide can be produced by heating 45 grams of silver excess sulfer?
A roller coaster at an amusement park has a dip that bottoms out in a vertical circle of radius r. A passenger feels the seat of the car pushing upward on her with a force equal to 2.6 times her
weight as she goes through the dip. If r = 20.0 m, how fast is the roller coaster ...
a 5-kg crate is resting on a horizontal plank. the coefficient of static friction is .5 and the coefficient of kinetic friction is .4. after one end of the plank is raised so the plank makes an angle
of 25 degrees with the horizontal, the force of friction is?
3rd grade
array for 5 times 1
we have a problem that my parents cant help with. 846*2=(...+40+...)*2=(800 over ...+...over2+6 over...)=(400+...+3)...=... ...means blank space for a number. Can you help with this problem, so that
I can figure out the rest of the problems.Thanks
5th grade
how do you break apart to divide: 648 by 6 using compatible numbers to break apart the dividend.
what about The Story of My Life
I need a new written add on verse of bob dylan chimes of freedom song has to be 8 lines long and the last line is An' we gazed upon the chimes of freedom flashing.
Its this symbol^ the computer on our webassign online doesn't recognize it so we need to replace that symbol^with a number so thats where I am lost
Need help showing me the steps without using the carrot function: Find the product. (2y + 2z)(2y - 3z)
The area A of a triangle is given by the formula A=1/2bh If the base of the triangle with height 12 inches is doubled, its area will be increased by 48 square inches. Find the base of the orignial
The area A of a triangle is given by the formula A=1/2bh If the base of the triangle with height 12 inches is doubled, its area will be increased by 48 square inches. Find the base of the orignial
Gr.7 - math
4. The dimensions of a king-size waterbed are 2.1m, 1.8m, and 23 cm. Find the volume and mass of the water it contains when full.
7th grade
5. What is the mass of the water? a. that fills a 1m^3 tank b. that fills a 2L bottle c. that would fill you empty classroom Thanks for your help!
I think your essay was good,but in the first paragraph you said "Throughout the year, the children that couldn t afford designer clothes were made fun of constantly for their attire. Keeping up with
the latest fashion trends was a must in our busy school and whoever ...
What is the definition of the word embers?
what is a ernmeyer flask and a granulated cylander used for?
Friedman test: Can !?
I have 28 participants. Each completed five tests. The maximum score a participant could receive on any one test was 4/4. The lowest score anyone could receive was zero. I will do a Friedman test for
this paired data. My question: given that I have a very large number of ident...
Thanks very much Bob. I much appreciate your taking the trouble to respond. All the best! Colin
I have 28 participants. Each completed five tests. The maximum score a participant could receive on any one test was 4/4. The lowest score anyone could receive was zero. I will do a Friedman test for
this paired data. My question: given that I have a very large number of ident...
Why is it drifting farther away
What is happening to the distance between Canada and europe
What is happening to the distance between Canada and Europe? Which plates meet near the west coast of Canada? In whihc direction are they moving?
Thank you
Name the five largest plates and on which plate Canada is located?
10th grade
Whatis the difference between the Hudson Bay Lowlands and the Arctic Lowlands?
potential energy is energy of _______ or chemistry?
what is the gcf of 24,32
a 10kg table needs to be pushed across the floor (µ=3). u stand behind it and push it at an anjgle of 30 degrees down with a force of 60N. wats the acceleration of the table?
Algebra (word problems)
thank u
Algebra (word problems)
dude i did my own effort. the problem is i dont know what im doing with these. i missed class cuz i was sick but if i can get the answers or if you would be so kind as to take the time to explain to
me how to do a problem of 2 that would be great. id give u what i have but i k...
Algebra (word problems)
1) a small pipe can fill a tank in 16 min more time than it takes a large pipe to fill the same tank. Working together, both pipes can fill the tank in 6 min. How long would it take each pipe working
alone to fill the tank? 2) a car travels 240 mi. a second car, travelling 12 ...
Okay i have a few function questions im gonna post. Im completely confused with these so any help would be great. Minimum or maximum value of: f(x)=2x^2+4x+1 and f(x)=x^2-3x+2 then if f(x)=x^2+9 and
g(x)=x-4, find (g*f)(-3) next is if f(x)=x^3+x^2-x-1 and g(x)=x+1, find (f/g)(...
What is the ratio of sulfur atoms to oxygen atoms in SO2?
simplify (numbers AFTER the variable represent exponents) ALSO these are all fractions, theres just no way to put the line x2-64 . x2-x-6 _ 1 x2-4x-12 x2+5x-24 x-6
a cessna can fly at a rate of 220 mph in calm air. traveling with the wind, the cessna flew 780 mi in the same amount of time as it flew 540 mi against the wind. find the rate of the wind. how do i
set this problem up? please explain in detail cuz i dont get these word problem...
an amtrak trainand a car leave new york city at 2 PM and head for a town 300 miles away. the rate of the amtrak train is 1 1/2 times the rate of the car. the train arrives 2h ahead of the car, find
the rate of the car
pigskin geography
where are the answers
pigskin geography
i hate pigskin geography
I know how to get the speed if it accelerated the whole time. How do I calculate the cruising speed? Starting from rest, a car travels 1,350 meters in 1 min. It accelerated at 1 m/s^2 until it
reached its cruising speed. Then it drove the remaining distance at constant velocit... | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Colin","timestamp":"2014-04-18T04:51:42Z","content_type":null,"content_length":"27425","record_id":"<urn:uuid:477f0545-49d8-46c8-a7c5-f61f4c00cb68>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
Description: Based at the University of Plymouth, the Centre for Innovation in Mathematics Teaching has developed many instructional materials designed to help both novice and experienced math
teachers. This particular area of the website provides access to a number of interactive mathematics tutorials. The materials cover a variety of basic algebra and mathematics topics. Example
questions and exercises are provided within each unit. Topics include linear equations, decimals, fractions, basic arithmetic, fractions and more.
Date Of Record Release: 2011-01-10 03:00:01 (W3C-DTF) Date Last Modified: 2011-01-05 10:17:08 (W3C-DTF) | {"url":"https://amser.org/index.php?P=FullRecord&ResourceId=16389","timestamp":"2014-04-19T20:53:55Z","content_type":null,"content_length":"31627","record_id":"<urn:uuid:e3e26b19-68bd-4121-aad0-448f0435749a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Middle School Math Research Electives
Date: Feb 15, 2013 7:40 PM
Author: nymathteacher
Subject: Middle School Math Research Electives
I was wondering if any of your schools (particularly middle schools) had mathematics electives? Especially mathematics research electives - classes where students would have enrichment in mathematical topic not taught by the standards (i.e. the golden ratio, fibonacci sequence, Pascal's triangle, etc.). If so, could you tell me a little bit about these programs in your school? How are they run? What is a typical class like?
Thank you | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8342185","timestamp":"2014-04-20T09:02:01Z","content_type":null,"content_length":"1417","record_id":"<urn:uuid:9b9caee8-39d1-423e-8aa8-cd1dc7d869d1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: RE: AW: Sample selection models under zero-truncated negative b
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: RE: AW: Sample selection models under zero-truncated negative binomial models
From sjsamuels@gmail.com
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: AW: Sample selection models under zero-truncated negative binomial models
Date Tue, 2 Jun 2009 13:21:27 -0400
A potential problem with Jon's original approach is that the use of
services is an event with a time dimension--time to first use of
services. People might not use services until they need them.
Instead of a logit model (my preference also), a survival model for
the first part might be appropriate.
With later first-use, the time available for later visits is reduced,
and number of visits might be associated with the time from first use
to the end of observation. Moreover, people with later first-visits
(or none) might differ in their degree of need for subsequent visits.
To account for unequal follow-up times, I suggest a supplementary
analysis in which the outcome for the second part of the hurdle model
is not the number of visits, but the rate of visits (per unit time at
On Tue, Jun 2, 2009 at 12:22 PM, Lachenbruch, Peter
<Peter.Lachenbruch@oregonstate.edu> wrote:
> This could also be handled by a two-part or hurdle model. The 0 vs. non-zero model is given by a probit or logit (my preference) model. The non-zeros are modeled by the count data or OLS or what have you. The results can be combined since the likelihood separates (the zero values are identifiable - no visits vs number of visits).
> Tony
> Peter A. Lachenbruch
> Department of Public Health
> Oregon State University
> Corvallis, OR 97330
> Phone: 541-737-3832
> FAX: 541-737-4001
> -----Original Message-----
> From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Martin Weiss
> Sent: Tuesday, June 02, 2009 7:02 AM
> To: statalist@hsphsun2.harvard.edu
> Subject: st: AW: Sample selection models under zero-truncated negative binomial models
> <>
> Try
> *************
> ssc d cmp
> *************
> HTH
> Martin
> -----Ursprüngliche Nachricht-----
> Von: owner-statalist@hsphsun2.harvard.edu
> [mailto:owner-statalist@hsphsun2.harvard.edu] Im Auftrag von John Ataguba
> Gesendet: Dienstag, 2. Juni 2009 16:00
> An: Statalist statalist mailing
> Betreff: st: Sample selection models under zero-truncated negative binomial
> models
> Dear colleagues,
> I want to enquire if it is possible to perform a ztnb (zero-truncated
> negative binomial) model on a dataset that has the zeros observed in a
> fashion similar to the heckman sample selection model.
> Specifically, I have a binary variable on use/non use of outpatient health
> services and I fitted a standard probit/logit model to observe the factors
> that predict the probaility of use. Subsequently, I want to explain the
> factors the influence the amount of visits to the health facililities. Since
> this is a count data, I cannot fit the standard Heckman model using the
> standard two-part procedure in stata command -heckman-.
> My fear now is that my sample of users will be biased if I fit a ztnb model
> on only the users given that i have information on the non-users which I
> used to run the initial probit/logit estimation.
> Is it possible to generate the inverse of mills' ratio from the probit model
> and include this in the ztnb model? will this be consistent? etc...
> Are there any smarter suggestions? Any reference that has used the similar
> sample selection form will be appreciated.
> Regards
> Jon
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-06/msg00056.html","timestamp":"2014-04-20T19:16:55Z","content_type":null,"content_length":"11118","record_id":"<urn:uuid:bf8af5c7-9af2-4c0b-a50c-88b08b75852d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can Gaussian Process Regression Be Made Robust Against Model Mismatch?
Peter Sollich
In: Lecture notes in computer science (2005) Springer .
Learning curves for Gaussian process (GP) regression can be strongly affected by a mismatch between the `student' model and the `teacher' (true data generation process), exhibiting e.g. multiple
overfitting maxima and logarithmically slow learning. I investigate whether GPs can be made robust against such effects by adapting student model hyperparameters to maximize the evidence (data
likelihood). An approximation for the average evidence is derived and used to predict the optimal hyperparameter values and the resulting generalization error. For large input space dimension, where
the approximation becomes exact, Bayes-optimal performance is obtained at the evidence maximum, but the actual hyperparameters (e.g. the noise level) do not necessarily reflect the properties of the
teacher. Also, the theoretically achievable evidence maximum cannot always be reached with the chosen set of hyperparameters, and maximizing the evidence in such cases can actually make
generalization performance worse rather than better. In lower-dimensional learning scenarios, the theory predicts---in excellent qualitative and good quantitative accord with simulations---that
evidence maximization eliminates logarithmically slow learning and recovers the optimal scaling of the decrease of generalization error with training set size.
PDF - Requires Adobe Acrobat Reader or other PDF viewer.
Postscript - Requires a viewer, such as GhostView | {"url":"http://eprints.pascal-network.org/archive/00000957/","timestamp":"2014-04-19T17:06:37Z","content_type":null,"content_length":"9116","record_id":"<urn:uuid:c1219565-55a9-44d5-bb6e-e4015233e890>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
the definition of dot notation
Computing Dictionary
dot notation definition
networking Berkeley Unix
notation for an
Internet address
, consisting of one to four numbers (a "dotted quad") in
(leading 0x),
(leading 0), or (usually) decimal. It represents a 32-bit address. Each leading number represents eight bits of the address (high byte first) and the last number represents the rest. E.g. address
0x25.32.0xab represents 0x252000ab. By far the most common form is four decimal numbers, e.g. 146.169.22.42.
Many programs accept an address in dot notation in place of a | {"url":"http://dictionary.reference.com/browse/dot+notation?qsrc=2446","timestamp":"2014-04-16T04:44:07Z","content_type":null,"content_length":"91868","record_id":"<urn:uuid:f2016c19-4b9e-40f8-982f-5180208af198>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Complex Product and Quotient Using Similar Triangles
Two similar triangles can be used to construct the product of two complex numbers. The triangle with vertices is similar to the triangle with vertices . Similarly (pun intended), quotients can also
be constructed by noticing that the triangle with vertices is similar to the triangle with vertices .
By dragging the locators at and , a series of questions can be tackled in the classroom. For instance, when are , and collinear? When is a real number? When is an imaginary number (with real part
zero)? What happens if is a real number? What happens if is imaginary? When is a real number? When is an imaginary number? | {"url":"http://demonstrations.wolfram.com/ComplexProductAndQuotientUsingSimilarTriangles/","timestamp":"2014-04-21T12:24:36Z","content_type":null,"content_length":"43663","record_id":"<urn:uuid:c1ca00d2-c553-4a75-94f4-0b9cdecdad48>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Property related to denseness
Replies: 8 Last Post: Jan 16, 2013 4:34 PM
Messages: [ Previous | Next ]
Re: Property related to denseness
Posted: Jan 16, 2013 4:34 PM
On Jan 16, 5:17 am, quasi <qu...@null.set> wrote:
> [. . .]
> The terminologies
> "the trivial topology on X"
> "the indiscrete topology on X"
> are both accepted, but I think "the indiscrete topology" is the
> one more commonly used.
> Willard mentions both but favors "the trivial topology".
> Wikipedia has "the trivial topology" first
> <http://en.wikipedia.org/wiki/Trivial_topology>
> but indicates that "the indiscrete topology" is a perfectly
> acceptable alternate.
> Personally, I like "the trivial topology" better, but it's not
> a big deal either way.
No big deal if you're an AI. For us humans, there is a lot of
terminology to keep track of, and it is worthwhile to keep it as
natural and easy to remember as possible. Personally, I never had any
trouble remembering which of the two extremal topologies is "discrete"
and which is "indiscrete"; the words themselves are pretty good clues.
However, as far as I can see it's completely arbitrary which of those
two topologies is "trivial"; it's just another bit of jargon to
Date Subject Author
1/12/13 Property related to denseness Paul
1/12/13 Re: Property related to denseness Butch Malahide
1/14/13 Re: Property related to denseness Michael Stemper
1/14/13 Re: Property related to denseness Butch Malahide
1/15/13 Re: Property related to denseness Michael Stemper
1/16/13 Re: Property related to denseness Paul
1/16/13 Property related to denseness William Elliot
1/16/13 Re: Property related to denseness quasi
1/16/13 Re: Property related to denseness Butch Malahide | {"url":"http://mathforum.org/kb/message.jspa?messageID=8079679","timestamp":"2014-04-19T05:07:34Z","content_type":null,"content_length":"26541","record_id":"<urn:uuid:30f40bf0-902c-4176-aeca-eae781b8024e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maxwell's Displacement and Einstein's Trace
Maxwell’s Displacement and Einstein’s Trace
There’s a remarkable parallel between Maxwell’s development of the field equations of electromagnetism and Einstein’s development of the field equations of general relativity. Recall that when
Maxwell began his work there had already been established a set of relations describing the forces of electricity and magnetism. The earliest quantitative theories of electricity and magnetism were
formulated in terms of forces acting at a distance, analogous to Newton’s law of gravitation. John Michell had shown in 1750 that the poles of stationary magnets exert on each other a force directed
along the line between them with a magnitude inversely proportional to the square of the distance. Joseph Priestly (prompted by a suggestion from Benjamin Franklin) discovered in 1766 that the force
of attraction or repulsion between stationary electrically charged particles is of this same form, a fact which was verified by Charles Coulomb in 1785. These discoveries, together with the success
of the inverse-square law of gravity, led to the belief among most scientists that all the fundamental interactions of nature were to be understood in terms of inverse-square forces acting at a
distance along the direct line of sight. However, in 1820 Oersted discovered that a moving charge (such as a current flowing through a wire) produces a magnetic force, and that this force evidently
does not act along the direct line in sight. In 1830 Faraday discovered the reciprocal effect, i.e., that a moving magnet induces electric current in a wire. Ampere interpreted these effects, still
within the framework of action at a distance, by formulating a force law that depended not just on the distance but on the velocity of the charged particles. Continuing with this approach, Gauss and
Weber devised a fairly successful theory of electro-dynamics by the 1840s.
Beginning in the 1850’s, Maxwell elaborated ideas of Faraday to give a completely different account of electrodynamics, based on the concept of continuous fields of force. He showed that the
inverse-square force laws of Michell and Coulomb can equally well be expressed in terms of such fields. In integral form, the flux of the field over any closed surface equals the charge enclosed
within that surface. In differential form the same field law can be expressed by stating that the divergence of the field at any point equals the charge density at that point. Hence at any point free
of charge the divergence of the field is zero. (Incidentally, from the absence of magnetic charges, i.e., mono-poles, it follows that the divergence of the magnetic field vanishes everywhere.)
Maxwell also expressed the dynamical relations of Ampere and Faraday in terms of fields associated with moving magnets and electric charges. In all he arrived at a set of four partial differential
equations, representing the laws of Michell, Coulomb, Ampere, and Faraday. All the information encoded in these four equations had been derived directly from experiment.
Maxwell then added another term to Ampere’s law, which he called the displacement current. The magnitude of this term was far too small to have been perceptible in the experiments performed by Ampere
and Faraday, so its inclusion by Maxwell was motivated purely by theoretical considerations. Recall that Ampere’s original law, converted to differential form in terms of the magnetic field B, was
where j is the current density, understood to represent the flow of electric charge. Maxwell decided that this equation was incomplete, and the right hand side needed to be augmented by an additional
term which he called the displacement current. His first published explanation of this term was really just a plausibility argument from a crude mechanical analogy, but in context it strikes me as a
post facto rationalization rather than as a thought process that would have motivated the inclusion of the term in the first place. His explanation evolved over the years, as his ideas about suitable
mechanisms changed, but in essence he argued that the current density j at a given location does not actually represent the total current flow at that location (even though that is essentially its
definition). According to Maxwell, a dielectric medium can be considered to consist of couples of positive and negative charges, and an electric field E pulls these charges in opposite directions,
stretching the links between them until they achieve some kind of equilibrium. If the strength of the field is increased, the charges are pulled further apart, so during periods when the electric
field is changing there is movement of the electric charge elements of the dielectric medium. This movement of charge is what Maxwell calls the displacement current, proportional to ∂E/∂t, which he
adds to Ampere’s original formula to give (in suitable units)
However, Maxwell’s rationalization of this extra term is questionable in at least two respects. First, it’s reasonable to ask why the displacement current is not already included as part of the total
current density j at the given point. By definition, j is supposed to represent the flow of electric charge at a given location and time. Since Maxwell conceives of the displacement current as
literally a flow of electric charge, one could argue that it should already be included in j, especially since the experimental results did not indicate the need for any additional term. Second,
after introducing the concept of displacement current in dielectric media (where the existence of coupled electric charges is somewhat plausible), Maxwell goes on to apply the extra term to the
vacuum, where the presence of coupled electric charges (being pulled apart and held in equilibrium by a stationary electric field) is questionable. He certainly could not point to any evidence of
such disembodied charges existing in the vacuum. It’s true that some aspects of modern quantum field theory can be expressed in terms of pairs of oppositely charged virtual particles in the vacuum,
flashing in and out of existence within the limits of the uncertainty principle, but surely virtual particles were not what Maxwell had in mind when he conceived of his tangible mechanistic models of
the luminiferous ether. Without the uncertainty relations such particles would violate conservation of charge, a principle which Maxwell surely accepted. This principle can be expressed in
differential form as
Now, recall that Coulomb’s law is [], and if we take the partial of this with respect to time (and make use of the fact that partial differentiation is commutative), we get
Adding this to the previous equation gives
Thus the combination of charge conservation and Coulomb’s law implies that the divergence of equation (2) vanishes, whereas the divergence of equation (1) does not vanish. This immediately shows that
equation (2) must be correct, i.e., we must add ∂E/∂t to Ampere’s law, purely for mathematical consistency, because the left hand sides of equations (1) and (2) are the curl of the magnetic field,
and it’s easy to show that the divergence of the curl of any vector field is identically zero. Making use again of the commutivity of partial differentiation for an arbitrary vector field B, we have
It is sometimes argued that Maxwell’s addition of ∂E/∂t to Ampere’s law arose from his mechanistic model of the luminiferous ether, leading to the idea of a displacement current, and that this
implies some kind of validity for that model. However, a review of his Treatise suggests a different interpretation. In Article 110, after discussing the hypothetical states of stress in a fluid
dielectric, he says
It must be carefully borne in mind that we have made only one step in the theory of the action of the medium. We have supposed it to be in a state of stress, but we have not in any way accounted for
this stress, or explained how it is maintained. This step, however, seems to me to be an important one, as it explains, by the action of the consecutive parts of the medium, phenomena which were
formerly supposed to be explicable only by direct action at a distance.
Thus Maxwell’s main concern was to formulate electrodynamics in such a way that it did not rely on action at a distance. The essence of his approach, following Faraday, was to treat the fields of
force as having “consecutive parts”, and to regard the force as being communicated from one part to adjacent parts over time. In other words, his main commitment was to the idea of local action, not
to the idea of a material mechanism for this action. Certainly a material mechanism would satisfy local action, but he was beginning to realize that local action need not imply a material mechanism.
This is shown clearly when he then continues the discussion in Article 111 with an important admission
I have not been able to make the next step, namely, to account by mechanical considerations for these stresses in the dielectric. I therefore leave the theory at this point, merely stating what are
the other parts of the phenomenon of induction in dielectrics…
When induction is transmitted through a dielectric, there is in the first place a displacement of electricity in the direction of the induction… Any increase of this displacement is equivalent,
during the time of increase, to a current of positive electricity… and any diminution of the displacement is equivalent to a current in the opposite direction…
Thus the introduction of the displacement current (in Maxwell’s final presentation of his results) is prefaced by an explicit admission that he was not able to show how it would arise “from
mechanical considerations for the stresses in the dielectric”. This applies even for a fluid dielectric, to say nothing of the vacuum. Admittedly he still refers to a displacement of electricity
(i.e., a displacement of electric charge), but he follows this by saying “any increase of this displacement… is equivalent to a current”. This reveals Maxwell’s ambiguity, because if the displacement
is actually a movement of electric charge, then it’s not just equivalent to a current, it is a current. Overall it seems fair to say that, even in his final presentation of the subject, Maxwell is
unclear as to the nature of the displacement current.
One finds in the literature three basic justifications for introducing the “displacement current” term to Ampere’s law. First, it is sometimes claimed that it can be justified simply on the grounds
of symmetry, i.e., since Faraday’s law indicates that a changing magnetic field is associated with an electric field, we would expect by symmetry that a changing electric field should be associated
with a magnetic field. However, the glaring asymmetry due to the absence of magnetic monopoles tends to undermine the cogency of this argument. The second justification, found especially in
historical treatments, is Maxwell’s heuristic rationale based on the idea of a dielectric medium consisting of charge couples that are pulled apart by an electric field. Lastly, the most common
justification is consistency with Coulomb’s law and charge conservation, noting that the divergence of the curl of B must vanish. Thus we begin with Ampere’s hypothesis that the curl of B equals j,
but then we note that the divergence of j does not vanish, whereas the vector j + ∂E/∂t does have vanishing divergence (due to Coulomb’s law and the conservation of charge), so we add this term to
complete the field equations of electromagnetism in a mathematically and physically self-consistent way.
It’s interesting how similar this is to the process by which Einstein arrived at the final field equations of general relativity. The simplest hypothesis involving only the metric coefficients and
their first and second derivatives, is that the Ricci tensor R[mn] equals the stress energy tensor T[mn], but then we notice that the divergence of T[mn] does not vanish as it should in order to
satisfy local conservation of mass-energy. However, the tensor T[mn] - (1/2)g[mn]T does have vanishing divergence (due to Bianchi’s identity), so we include the “trace” term -(1/2)g[mn]T to give the
complete and mathematically consistent field equations of general relativity
which can also be written in the equivalent form
Just as the inclusion of the “displacement current” in Ampere’s formula was the key to a Maxwell’s self-consistent field theory of electrodynamics, so the inclusion of the “trace stress-energy” in
the expression for the Ricci tensor was the key to Einstein’s self-consistent field theory of gravitation. In both cases, the extra term was added in order to give a divergenceless field.
Incidentally, to the three common justifications for the displacement current discussed above, we might add a fourth, namely, the fact that the inclusion of the term ∂E/∂t in Ampere’s equation leads
to transverse electromagnetic waves propagating in a vacuum at the speed of light. Of course, this is ordinarily presented (triumphantly) as a consequence of the added term, rather than as a
justification or motivation for it. However, someone as mathematically astute as Maxwell could hardly have failed to notice that the standard wave equation would result from the known system of
electromagnetic equations if only Ampere’s law contained a term of the form ∂E/∂t. Indeed Faraday (Maxwell’s primary source and inspiration) had speculated that the electromagnetic ether and the
luminiferous ether might well turn out to be the same thing, suggesting that light actually is a propagating electromagnetic disturbance. Also, Weber had shown that a speed on the order of the speed
of light is given by a simple combination of electromagnetic constants, and many other people (including Riemann) had pursued the same idea. The objective of explaining the wave properties of light
was certainly “in the air” at that time. Is it conceivable that Maxwell actually reverse-engineered the displacement current precisely so that the equations of electromagnetism would support
transverse waves at the speed of light in a vacuum? If so, he would have been consistent with a long tradition, dating back to the ancient Greeks, of arriving at results analytically but presenting
them synthetically.
Einstein commented on this question in a letter to Michele Besso in 1918. He was chiding Besso for having suggested (in a previous letter) that, in view of Einstein’s theory of relativity,
“speculation had proved itself superior to empiricism”. Einstein disavowed this suggestion, pointing out the empirical bases for all the important developments in theoretical physics, including the
special and general theories of relativity. He concluded
No genuinely useful and profound theory has ever really been found purely speculatively. The closest case would be Maxwell’s hypothesis for displacement current. But there it involved accounting for
the fact of the propagation of light (& open circuits).
In other words, although the displacement current itself had never been directly detected, the hypothesis of such a current could be very directly connected to empirical phenomena. Likewise the
atomic hypothesis and the kinetic theory of gases arose (according to Einstein) from the empirical equivalence of work and heat. Similarly in the case of general relativity he cites the empirical
equivalence of inertial and gravitational mass. Ironically, Einstein was later seen as having succumbed to the very notion for which he had chided Besso. He seemed to encourage this impression
himself when he said (in the Herbert Spenser lecture of 1933)
The creative principle resides in mathematics. In a certain sense, therefore, I hold true that pure thought can grasp reality, as the ancients dreamed.
Regarding Einstein’s long search for a field theory that would unify the gravitational equations with Maxwell’s electromagnetic equations, his old friend Max Born wrote
He had achieved his greatest success by relying on just one empirical fact known to every school boy. Yet now he tried to do without any empirical facts, by pure thinking. He believed in the power of
reason to guess the laws according to which God has built the world.
Return to MathPages Main Menu | {"url":"http://www.mathpages.com/home/kmath103/kmath103.htm","timestamp":"2014-04-21T15:24:50Z","content_type":null,"content_length":"30005","record_id":"<urn:uuid:2f572c80-3769-44f3-bbf3-4921d2871aa9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Dependance
January 21st 2010, 06:48 PM
Linear Dependance
The problem is this,
Show that the set S is L.D. by example, find a nontrivial linear combonation of the vectors in S (whose sum is the zero vector)
$S= {(3,3,1,2),(3,1,4,1),(-3,7,-16,3)}$
This looks to me like a system of 4 equations in 3 unknowns, but can't see how to do it except for trial and error (trying different linear combonations...)
Any help greatly appriciated!
January 21st 2010, 07:00 PM
The problem is this,
Show that the set S is L.D. by example, find a nontrivial linear combonation of the vectors in S (whose sum is the zero vector)
$S= {(3,3,1,2),(3,1,4,1),(-3,7,-16,3)}$
This looks to me like a system of 4 equations in 3 unknowns, but can't see how to do it except for trial and error (trying different linear combonations...)
Any help greatly appriciated!
You're looking for scalars $a,b,c$ s.t. $a(3,3,1,2)+b(3,1,4,1)+c(-3,7,-16,3)=(0,0,0,0)\Longrightarrow \left\{\begin {array}{l}3a+3b-3c=0\\3a+b+7c=0\\a+4b-16c=0\\2a+b+3c=0\end{array}\right.$.
Well, now just solve this system and you're done...one of the many possible solutions is $a=-4\,,\,b=5\,,\,c=1$. Can you find the general solution?
January 21st 2010, 11:36 PM
Method 2: Form the matrix whose rows are the given vectors, and reduce to echelon form using the elementary row operations. You will find that the last row of this matrix is a zero row. Since the
echelon matrix has a zero row, the vectors are dependent. Try it!
January 21st 2010, 11:47 PM
--> Since the echelon matrix has a zero row, the vectors are dependent.
So, 0a+0b+0c=0 in a row implies linear dependance?
I'm not sure I understand that part.
Certainly, I see that if the system has parametric solutions (thus, infinetly many non-0 solutions) is thus LD by definition...
Thank you for the help!
January 22nd 2010, 12:44 AM
Well, the homogeneous system must only have the trivial solution, if its row vectors are linearly independent. Now a matrix containing a zero row can never be invertible (your matrix is not
square either). And a matrix who is not invertible, implies that the system has nontrivial solutions, and hence the vectors are linearly dependent. Furthermore, since it contains a zero row it
will no longer be row equivalent to the identity matrix, this is the other condition.
Likewise, if the echelon matrix has no zero rows, the vectors are independent. | {"url":"http://mathhelpforum.com/advanced-algebra/124870-linear-dependance-print.html","timestamp":"2014-04-21T06:31:38Z","content_type":null,"content_length":"7884","record_id":"<urn:uuid:01cc29d5-7ff0-4652-8b6f-060deded4939>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intersections -- Poetry with Mathematics
A two-line poem by Chilean poet, Pablo Neruda (1904-73), found in my bilingual edition of Extravagaria, reminded me of the poetic nature of several of the opening expressions of Euclid's geometry.
Both of these follow:
This poem by Howard Nemrov (1920-1991) uses scientific terminology in ways that seem especially deft: Two Pair More money's lost, old gamblers understand On two pair than on any other hand;
An irrational sonnet has 14 lines, just as the traditional sonnet, but differs in its stanza-division and rhyme: there are five stanzas--containing 3, 1, 4, 1 and 5 lines, respectively (these being
the first five digits of the irrational number pi), and a rhyme scheme of AAB C BAAB C CDCCD. This form was devised by Oulipo member Jacques Bens (1931-2001) in 1963. (Previous postings concerning
the Oulipo occurred on March 25 and August 5.)
In applications of mathematics, as in other scientific research, it is important to distinguish between the precision of measurements (how closely they agree with each other) and their accuracy (how
closely measured values agree with the correct value). One of my favorite poets, Miroslav Holub (1923-98), also a research scientist (immunologist), has captured this dilemma with irony in his "Brief
Reflection on Accuracy," translated from Czech by Ewald Osers.
My new poetry book, Red Has No Reason, is now available (from Plain View Press or amazon.com). Several of the poems mention math--and one of them comments on the nature of mathematics. Ideas for "A
Taste of Mathematics" (below) came from a mathematics conference in San Antonio, TX (January 1993) where it was announced that the billionth digit in the decimal expansion of π is 9. Recently an
amazing new calculation record of 5 trillion digits (claimed by Alexander J. Yee and Shigeru Kondo) has been announced.
Back in the 1980's when I began taking examples of poetry into my mathematics classrooms at Bloomsburg University, I think that I justified doing so by considering poetry as an application of
mathematics. For example, Linda Pastan applies algebra to give meaning to her poem of the same title. Here are the opening lines.
Game theory (with origins in the 1930s) was initially developed to analyze competitive decisions in which one individual does better at another's expense--"zero sum" games--and this term has become a
part of everyday vocabulary; here we find it in a poem by Christopher Okigbo (1932-1967), a Nigerian poet.
Poems from three women illustrate a range of emotional content in the mathematics classroom: Rita Dove's "Geometry" captures the excitement of a new mathematical discovery. Sue VanHattum's "Desire in
a Math Class" tells of undercurrents of emotion beneath the surface in a formal classroom setting. Marion Deutsche Cohen's untitled poem [I stand up there and dance] offers a glimpse of what may go
on in a teacher's mind as she performs for her class.
Philip Wexler plays with the terminology of calculus in this poem: The Calculus of Ants on a Worm Swarming tiny bodies nibble away, no limits,
Today's post explores poetic structures called snowballs developed by the Oulipo (see also March 25 posting) and known to many through the writings of Scientific American columnist Martin Gardner
(1914-2010). TIME Magazine's issue for January 10, 1977 had an article entitled "Science: Perverbs and Snowballs" that celebrated both Gardner and the inventive structures of the Oulipo. Oulipian
Harry Mathews' "Liminal Poem" (to the right) is a snowball (growing and then melting) dedicated to Gardner. The lines in Mathew's poem increase or decrease by one letter from line to line. Below
left, a poem by John Newman illustrates the growth-only snowball. | {"url":"http://poetrywithmathematics.blogspot.com/2010_08_01_archive.html","timestamp":"2014-04-16T18:56:27Z","content_type":null,"content_length":"113893","record_id":"<urn:uuid:fcc16a46-4df3-4efb-ba02-4b98bc0fcc9d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
transfromations of an exponential function??
September 24th 2009, 06:56 PM
transfromations of an exponential function??
Consider the graph of y = ex. (a) Find the equation of the graph that
results from reflecting about the line y = 8.
y = 1
(b) Find the equation of the graph that results from reflecting about
the line x = 2.
y =_____________________________
How would i solve this, and can u show the steps?
September 25th 2009, 04:00 AM
Hello Sneaky
Suppose that the point $(x_1, y_1)$ is transformed into the point $(x_2, y_2)$, by a reflection in the line $y = 8$. Then, since the reflection line is horizontal, the x-coordinate doesn't
change, and so:
$x_2 = x_1$ (1)
The fundamental property of a reflection is that the distances of a point and its reflection from the mirror-line are equal. Let's assume (and it won't make any difference if it's the other way
around) that $y_1 < 8$ and $y_2 > 8$. Then these distances are $(8-y_1)$ and $(y_2-8)$, respectively. So
$8-y_1 = y_2-8$
$\Rightarrow y_2 = 16-y_1$ (2)
So, since $y_1 = e^{x_1}$, the equation relating $y_2$ and $x_2$, when we use (1) and (2), is:
$y_2 = 16 - e^{x_2}$
and so the equation of the reflected graph is
$y = 16-e^x$
(b) Find the equation of the graph that results from reflecting about
the line x = 2.
y =_____________________________
How would i solve this, and can u show the steps?
Do this in the same way. Assume that $(x_1,y_1)\rightarrow (x_2,y_2)$. Then, since the mirror-line is vertical
and this time it's the horizontal distances from the mirror-line that are equal. So
$2-x_1 = x_2-2$
$\Rightarrow x_1 = 4-x_2$
And, once again, $y_1=e^{x_1}$, so how are $y_2$ and $x_2$ related? This will then give you the equation of the transformed graph.
Can you complete this now? | {"url":"http://mathhelpforum.com/pre-calculus/104174-transfromations-exponential-function-print.html","timestamp":"2014-04-16T20:57:52Z","content_type":null,"content_length":"10041","record_id":"<urn:uuid:725bccc2-1771-4fb7-89b8-184bf25f1a84>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simple 2D collisions - Java-Gaming.org
Hello all! I'm trying to write a java program that, as simply as possible, simulates Brownian motion. In essence, there will be a bunch of circles with different velocity and mass, and I need to know
what happens when they hit each other. Detecting the collision is simple, since they are all circles and you can just use the distance formula to see if they're touching, but as for exactly what they
do once they touch, I'm somewhat lost.
I know this has been done before, but I can't find any code for it. Also, I'd like to avoid using a clunky physics library, as there could be hundreds or circles at once and I'd like to keep the
program running smoothly.
Thank you! | {"url":"http://www.java-gaming.org/index.php?topic=25943.msg225644","timestamp":"2014-04-19T10:43:08Z","content_type":null,"content_length":"110653","record_id":"<urn:uuid:d18fefa7-56c8-455d-a910-a9dafb5e4222>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
It’s Amanda’s birthday and Shane is baking her a cake. He takes the cake out of the 350◦ F oven into the room with temperature of 65◦ F . The cooling of the cake is modeled by the following equation:
T (t) = T_r + (T_0 − T_r )e^kt , where T (t) is the temperature of the cake t minutes after it has been taken out of the oven, Tr is the temperature of the cake’s present surroundings and T0 is the
cake’s initial temperature. In 3 minutes, the cake cools off down to 325◦ F . (a) Write a model for the temperature of the cake. Find the constant k. (b) What is the temperature of the cake after 1
• one year ago
• one year ago
Best Response
You've already chosen the best response.
plug in all the given information to solve for k T(t) = 325 T_0 = 350 T_r = 65 t = 3 \[325 = 65 +(350-65)e^{3k}\]
Best Response
You've already chosen the best response.
a negative answer for k (-.031), would be correct?
Best Response
You've already chosen the best response.
and for part b) would if look like this: T(t) = 65 + (350-65)e^(60k) ?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/500f45d8e4b009397c6767f5","timestamp":"2014-04-16T04:30:01Z","content_type":null,"content_length":"33001","record_id":"<urn:uuid:a6c0af1e-174b-4ee5-9bd3-75c387d351bf>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00594-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the limit? - WyzAnt Answers
Find the limit?
Find the limit of (sin(x+h)-sin(x))/h as h approaches 0.
Answer: cos x
Please show all your work.
Tutors, please
to answer this question.
Use the trig identity sin(x+h) = sin(x)cos(h) + cos(x)sin(h), so that
(sin(x+h)-sin(x))/h = (sin(x)(cos(h) - 1) + cos(x)sin(h))/h.
Now take the limit
lim[h→0] (sin(x)(cos(h) - 1) + cos(x)sin(h))/h = sin(x) lim[h→0] ((cos(h) - 1)/h) + cos(x) lim[h→0] (sin(h)/h).
From basic trigonometry it is known that
lim[h→0] ((cos(h) - 1)/h) = 0 and lim[h→0] (sin(h)/h) = 1, so that
lim[h→0] (sin(x)(cos(h) - 1) + cos(x)sin(h))/h = cos(x).
Thank you, Kenneth. This is the trigonometric proof that's needed in this problem.
Your approach to the problem is also very good too!
lim (sin(x+h)-sin(x))/h
since sin(x+h) = sin(x) cos(h) + cos(x) sin(h)
then we have
lim (sin(x) cos(h) + cos(x) sin(h) - sin(x))/h
grouping 1st and 3rd term, we have
lim ((cos(h) - 1) sin(x) + cos(x) sin(h))/h
Since h not equal 0, a limit exist for the above function.
When h→0 cos(h)=1 and sin(h)=h, the above function becomes
lim ((1-1)sin(x) + cos(x)(h))/h
lim (0 + cos(x)h)/h
lim cos(x)h/h
equation used in this exercise:
sin(a+b) = sin(a)cos(b) + cos(a)sin(b)
lim cos(x) = 1 ; lim sin(x) = x
x→0 x→0
applying equations and recognizing pattern of problem is key to solving this kind of math questions, which has been known for some hundreds of year!!!
I bet there is other ways to solve it!!
lim[x→0]sin(x) = 0, not x. However, lim[x→0](sin(x)/x) = 1.
sin(x) = x - x^3/3! + x^5/5! − x^7/7! + . . .
As x → 0 don't the cube and higher degree terms go to zero much faster than x?
So why can't you say as x → 0, sin(x) → x?
Don't we use the approximation sin(x) ≈ x near x = 0?
The limit of a function f(x) is defined to be a number, not another function. We can't use an approximation in a proof.
(sin(x) - x) = - x^3/3! + x^5/5! − x^7/7! + . . .
As x → 0 the right side → 0, so the value of sin(x) - x → 0 and
sin(x) → x. It's asymptotic.
No, all this means is
lim[x→0]sin(x) = lim[x→0]x = 0.
Please review the ε-δ-definition of limit.
Also, using the Taylor series of sin(x) in this problem is circular, because to get to the Taylor series you need to take the derivative of sin(x), which is exactly what you are supposed to find in
this problem. | {"url":"http://www.wyzant.com/resources/answers/25697/find_the_limit","timestamp":"2014-04-18T18:42:04Z","content_type":null,"content_length":"53774","record_id":"<urn:uuid:2f019921-bfe2-4427-8d69-b34522c89eaa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00478-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tangent of Curve Calculaton
December 13th 2012, 03:47 AM
Tangent of Curve Calculaton
I have four X Points(x0,x1,x2,x3) and corresponding Y Points(y0,y1,y2,y3), A Curve can be connected using these points, I want to find tangent of this curve. How do I get this or what is method I
suppose to use.
Thanks in Advance,
Pandi K
December 13th 2012, 04:14 AM
Re: Tangent of Curve Calculaton
What do you mean by find the tangent?
I'll assume you mean gradient and start with finding the gradient of the tangent.
The gradient of the tangent can be found with $\frac{y2-y1}{x2-x1}$, where y2 is the biggest y-coordinate you have ( which is 3 in your case ) and y1 is the smallest y coordinate you have. Same
thing applies for X2 and X1. So sub all values in, you should end up with $\frac{3-0}{3-0}$ and therefore the gradient of the tangent is 1.
December 13th 2012, 06:03 AM
Re: Tangent of Curve Calculaton
THE "tangent to a curve" makes no sense. A curve may have a different tangent at any point. And, of course, because there are an infinite number of functions, even "differentiable" functions or
even polynomials, that pass through four given points, there is no way of knowing which is meant or at which point you want the tangent. That's why Tutu gives you the slope of the secant line
through the two endpoints. However, he makes the error of assuming that by " $x_0, x_1, x_2, x_3$" and " $y_0, y_1, y_2, y_3$" you mean that x=0 and y= 0; x= 1, y= 1; x= 2, y= 2; x=3, y= 3 so
that your points all lie on the straight line y= x.
Given four distinct points, $(x_0, y_0)$, $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$, the simplest thing to do, although not the only answer, is to find the unique cubic function that passes
through those points: that is, find $y= ax^3+ bx^2+ cx+ d$ so that $y_0= ax_0^3+ bx_0^2+ cx_0+ d$, $y_1= ax_1^3+ bx_1^2+ cx_1+ d$, $y_2= ax_2^3+ bx_2^2+ cx_2+ d$, and $y_3= ax_3^3+ bx_3^2+ cx_3+
d$. Solve those four equations for a, b, c, and d. Then the slope of the tangent line at any point at x is given by $y'= 3ax^2+ 2bx+ c$. That allows you to find the tangent line at that
particular point to that particular curve. | {"url":"http://mathhelpforum.com/calculus/209726-tangent-curve-calculaton-print.html","timestamp":"2014-04-20T19:51:55Z","content_type":null,"content_length":"8197","record_id":"<urn:uuid:e9804dfa-1c30-4279-b002-57a217ee5e5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex Analysis (i need help,immediately)
October 13th 2009, 12:06 PM #1
Oct 2009
Complex Analysis (i need help,immediately)
Please help me with them problems:
1) if z^3=1, show that (1-z)(1-z^2)(1-z^4)(1-z^5)=9, zEC
2) if cos(x)+cos(y)+cos(t)=0, sin(x)+sin(y)+sin(t)=0 show that cos(3x)+cos(3y)+cos(3t)=3cos(x+y+t)
3)show that, the roots the equations (1+z)^(2n) +(1-z)^(2n)=0, nEN, zEC are given by the relation z=itan((2κ+1)π)/4n), κ=0,1,...,n-1
Thank you
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-math-topics/107807-complex-analysis-i-need-help-immediately.html","timestamp":"2014-04-21T14:44:20Z","content_type":null,"content_length":"29553","record_id":"<urn:uuid:1a3e3f8a-5916-46ca-ae03-0986760deb38>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
El Cajon Statistics Tutor
Find an El Cajon Statistics Tutor
Hello! My name is David. While at Azusa Pacific University, I was the school's official statistics tutor.
7 Subjects: including statistics, economics, Microsoft Excel, prealgebra
Hello! My name is Eric, and I hold a Bachelor's degree in Mathematics and Cognitive Science from the University of California - San Diego. I began tutoring math in high school, volunteering to
assist an Algebra 1 class for 4 hours per week.
14 Subjects: including statistics, calculus, physics, geometry
...In high school, chemistry really clicked for me and it inspired me to go into chemical engineering. I use all the concepts that you will see in a high school and entry level college chemistry
course quite regularly at both school and work. I have experience with both the AP Chemistry course as well the SAT II subject tests as I managed to attain a perfect score on each of them.
19 Subjects: including statistics, chemistry, calculus, physics
...I can tutor a variety of subjects from basic elementary math to calculus, basic natural sciences to upper division chemistry, as well as up to Semester 4 of university Japanese. I started out
majoring Chemistry at Harvey Mudd College where I was taught not only a wide breadth of subjects in math...
13 Subjects: including statistics, chemistry, calculus, geometry
...I have extensive knowledge of the command line, Linux applications, and server maintenance. I can teach shell scripting, job scheduling, system maintenance as well. I was first exposed to
MATLAB programming in my Masters in Complexity Science, where course work and research projects required it.
26 Subjects: including statistics, physics, calculus, algebra 1 | {"url":"http://www.purplemath.com/el_cajon_ca_statistics_tutors.php","timestamp":"2014-04-18T00:34:17Z","content_type":null,"content_length":"23831","record_id":"<urn:uuid:79fae214-8aa2-41a4-94ff-d29df4445292>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
Labor Productivity Myth
According to a recent announcement by the US Labor Department, workers productivity in the non-farm sector increased at an annual rate of 5.3% in the second quarter after rising by 1.9% in the
previous quarter. Year-on-year productivity increased by 5.1%, the highest such gain since the third quarter of 1983 when productivity was up 5.3%.
After the release of these data, the Dow Jones Industrial Average jumped by 1%. Players in the stock market interpreted the surge in productivity as another indication that the US economy is becoming
healthier and wealthier.
There is nothing inherently wrong with this conclusion. After all, a rise in productivity would indicate that workers are churning out a greater amount of goods and services per hour. The trouble is
that there are serious doubts as to whether productivity figures describe the facts of reality.
To calculate productivity, statisticians look at total output produced and the number of workers' hours that went into production of this output. In short: Productivity = Total output / number of
To calculate a total, several data sets must be added together. To be added, analytical rigor requires that they have some unit in common. But the "non-farm business sector" includes a huge diversity
of products and services; it is not possible to simply add these up and arrive at a total. There is not unit of measurement common to refrigerators, cars, and shirts that make it possible to derive
total output. Since the total real output cannot be meaningfully defined obviously it cannot be quantified.
The statisticians' technique of employing total monetary expenditure adjusted for prices simply won't do. What is a price? It is the rate of exchange between goods established in a transaction
between two individuals at a particular place and at a particular point in time.
The price, or the rate of exchange of one good in terms of another, is the amount of the other good divided by the amount of the first good. In the money economy, price will be the amount of money
divided by the amount of the first good.
Suppose two transactions were conducted. In the first transaction one TV set is exchanged for $1000. In the second transaction one shirt is exchanged for $40. The price or the rate of exchange in the
first transaction is $1000/1TV set. The price in the second transaction is $40/1 shirt. In order to calculate the average price we must add these two ratios and divide them by 2; however, it is
conceptually meaningless to add $1000/1TV set to $40/1shirt.
It is interesting to note that in the commodity markets prices are quoted as Dollars/barrel of oil, Dollars/ounce of gold, Dollars/tonne of copper, etc. Obviously it wouldn't make much sense to
establish an average of these prices. Likewise it doesn't make much sense to establish an average of the exchange rates Dollar/Sterling, Dollar/Yen, etc.
On this Rothbard wrote, "Thus, any concept of average price level involves adding or multiplying quantities of completely different units of goods, such as butter, hats, sugar, etc., and is therefore
meaningless and illegitimate. Even pounds of sugar and pounds of butter cannot be added together, because they are two different goods and their valuation is completely different. (Man, Economy, and
State, p. 734).
The use of a fixed weight price index seems to offer a solution that bypasses the problem of a direct calculation of an average price. By means of this index, it is held, we could establish changes
in the overall purchasing power of money, which in turn will permit us to ascertain changes in real output. Thus if total money outlay increased by 10% and the purchasing power of money fell by 5%
one could say that real outlay grew by 5%. The following example illustrates the essence of a fixed weight price index.
In period 1, Tom bought 100 hamburgers for $2 each. He also bought 5 shirts at $20 each. His total outlay in period 1 is $2*100 + $20*5 = $300. Observe that hamburgers carry a weight of 0.67 in the
total outlay while shirts carry a weight of 0.33.
In period 2, hamburgers are exchanged for $3, an increase of 50%, shirts are exchanged for $25 an increase of 25%. By applying unchanged weights, i.e. an unchanged pattern of consumption, we will
find that the purchasing power of Tom's money fell by 41.7%. (50%*0.67 + 25%*0.33 = 41.7%)
Now, if we were to assume that Tom's pattern of consumption represents an average consumer then we could say that the overall purchasing power of money fell by 41.7%.
It was observed that people's spending increased from $100 million in period 1 to $140 million in period 2 i.e. a 40% increase. By applying the information that the purchasing power of money fell by
41.7% we can establish that in real terms spending stood at $98.8 million in period 2 a fall of 1.2% from period 1.
Every ten years government statisticians conduct extensive surveys to establish a pattern of spending of a "typical" or an "average" consumer. The obtained weights in turn serve to establish changes
in the average price and hence in the purchasing power of money. Once changes in the purchasing power of money are established one could make an estimate of changes in total real output and of labor
The assumption that weights remain constant over a prolonged period of time is, however, not applicable in the real world. This assumption implies an individual with frozen preferences i.e. a robot.
According to Mises in the world of frozen preferences the idea that money's purchasing power could change is contradictory. (Human Action, p. 222).
Moreover, according to Rothbard, "There are only individual buyers, and each buyer has bought a different proportion and type of goods. If one person purchases a TV set, and another goes to the
movies, each activity is the result of different value scales, and each has different effects on the various commodities. There is no 'average person' who goes partly to the movies and buys part of a
TV set. There is therefore no 'average housewife' buying some given proportion of a totality of goods. Goods are not bought in their totality against money, but only by individuals in individual
transactions, and therefore there can be no scientific method of combining them. (Man, Economy, and State, p. 740)
Since the fixed weight price index has nothing to do with reality it means that the overall purchasing power of money cannot be established. Indeed it cannot be established, even conceptually. Thus
when $1 is exchanged for 1 loaf of bread we can say that the purchasing power of $1 is 1 loaf of bread. If $1 is exchanged for 2 tomatoes then this also means that the purchasing power of $1 is 2
However, it is not possible to establish total purchasing power of money since we cannot add up 2 tomatoes to 1 loaf of bread. In short, we can only establish the purchasing power of money with
respect to a particular good in a transaction at a given point in time and at a given place.
The view that a variable weight price index could bring more realism and hence permit the estimation of the purchasing power of money also misses the point. In the world of a fixed weight price index
the change in prices is entirely attributed to changes in the purchasing power of money.
This is not so with respect to the variable weight index. Changes in the variable weight index imply that prices are driven by monetary and non-monetary factors. The influence of these factors on
prices is, however, intertwined and cannot be separated. Consequently it is not possible to isolate changes in the purchasing power of money from changes in this price index. Without this knowledge
it is not possible to calculate changes in real spending.
According to Rothbard, "All sorts of index numbers have been spawned in a vain attempt to surmount these difficulties: quantity weights have been chosen that vary for each year covered; arithmetical,
geometrical, and harmonic averages have been taken at variable and fixed weights; "ideal" formulas have been explored - all with no realization of the futility of these endeavors. No such index
number, no attempt to separate and measure prices and quantities, can be valid." (Man, Economy, and State, p. 744).
Also according to Mises, "In the field of praxeology and economics no sense can be given to the notion of measurement. In the hypothetical state of rigid conditions there are no changes to be
measured. In the actual world of change there are no fixed points, dimensions, or relations which could serve as a standard." (Human Action, p. 222)
We can thus conclude that the various price deflators that government statisticians compute are arbitrary numbers. If the so-called deflators are meaningless, so is the real output statistic, which
is employed in the calculation of workers productivity.
Furthermore, the entire concept of total labour productivity is dubious. Let us assume that in period 1 it was observed that the electronic sector produced 10 TV sets per hour of labour. It was also
observed that the clothing industry produced 100 shirts per hour of labor. In period 2 it was found that one-hour of labor in the electronic sector produced 8 TV sets while in the clothing sector 120
Based on this information it is not possible to say anything about total labor productivity. All that we can say is that productivity fell in the electronic sector and increased in the clothing
sector. Even if in both sectors productivity were to increase it is not possible to establish the numerical increase of total labor productivity.
So what are we to make out of the pronouncement that labor productivity increased by 5.1% in the second quarter? All that we could say is that this percentage has nothing to do with productivity
growth. It is the result of monetary spending adjusted by a meaningless deflator.
As a rule the more money created by the central bank and the banking sector, the larger the monetary spending will be. This in turn means that, the rate of growth of what government calls "total real
output," will closely mirror rises in money supply. Essentially, the more money pumped, the greater the "total output" will be. But this is not real production but only a statistical illusion.
The "strong" second quarter "labor productivity" is most likely the result of last year's aggressive monetary pumping by the Fed. Year-on-year in December, the money base grew by 15.3%. Since early
this year the pace of monetary pumping has fallen quite sharply, by July it stood at 6.8%. This raises the likelihood that "labor productivity" will weaken sharply in the months ahead.
The fallacy is in thinking that these ups and downs in official data have anything at all to do with discerning real economic activity. | {"url":"http://mises.org/daily/510","timestamp":"2014-04-17T01:59:13Z","content_type":null,"content_length":"41406","record_id":"<urn:uuid:29766ea4-ed30-46f1-83b2-8f5eab8f5ce2>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantitative Comparison Questions
Questions of this type ask you to compare two quantities – Quantity A and Quantity B – and then
determine which of the following statements describes the comparison:
Quantity A is greater.
Quantity B is greater.
The two quantities are equal.
The relationship cannot be determined from the information given.
Tips for Answering
Become familiar with the answer choices. Quantitative Comparison questions always have the same
answer choices, so get to know them, especially the last answer choice, "The relationship cannot be
determined from the information given." Never select this last choice if it is clear that the values of the
two quantities can be determined by computation. Also, if you determine that one quantity is greater
than the other, make sure you carefully select the corresponding answer choice so as not to reverse the
first two answer choices.
Avoid unnecessary computations. Don't waste time performing needless computations in order to
compare the two quantities. Simplify, transform or estimate one or both of the given quantities only as
much as is necessary to compare them.
Remember that geometric figures are not necessarily drawn to scale. If any aspect of a given geometric
figure is not fully determined, try to redraw the figure, keeping those aspects that are completely
determined by the given information fixed but changing the aspects of the figure that are not
determined. Examine the results. What variations are possible in the relative lengths of line segments or
measures of angles?
Plug in numbers. If one or both of the quantities are algebraic expressions, you can substitute easy
numbers for the variables and compare the resulting quantities in your analysis. Consider all kinds of
appropriate numbers before you give an answer: e.g., zero, positive and negative numbers, small and
large numbers, fractions and decimals. If you see that Quantity A is greater than Quantity B in one case
and Quantity B is greater than Quantity A in another case, choose "The relationship cannot be
determined from the information given."
Simplify the comparison. If both quantities are algebraic or arithmetic expressions and you cannot easily
see a relationship between them, you can try to simplify the comparison. Try a step-by-step
simplification that is similar to the steps involved when you solve the equation for x, or that is similar to
the steps involved when you determine that the inequality is equivalent to the simpler inequality Begin
by setting up a comparison involving the two quantities, as follows:
Quantity A ? Quantity B
where ? is a "placeholder" that could represent the relationship greater than (>), less than (<) or equal
to (=) or could represent the fact that the relationship cannot be determined from the information
given. Then try to simplify the comparison, step-by-step, until you can determine a relationship between
simplified quantities. For example, you may conclude after the last step that ? represents equal to (=).
Based on this conclusion, you may be able to compare Quantities A and B. To understand this strategy
more fully, see sample questions 6–9.
Quantitative Comparison Sample Questions
Introduction Sample Questions
Directions: Compare Quantity A and Quantity B, using additional information centered above the two
quantities if such information is given, and select one of the following four answer choices:
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
A symbol that appears more than once in a question has the same meaning throughout the question.
Quantity A Quantity B
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
Since 12 is greater than 8, Quantity A is greater than Quantity B. Thus, the correct answer is choice A,
Quantity A is greater.
Lionel is younger than Maria.
Quantity A Quantity B
Twice Lionel's age Maria's age
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
If Lionel's age is 6 years and Maria's age is 10 years, then Quantity A is greater, but if Lionel's age is 4
years and Maria's age is 10 years, then Quantity B is greater. Thus, the relationship cannot be
determined. The correct answer is choice D, the relationship cannot be determined from the
information given.
Quantity A Quantity B
54% of 360 150
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
Without doing the exact computation, you can see that 54 percent of 360 is greater than of 360, which
is 180, and 180 is greater than Quantity B, 150. Thus, the correct answer is choice A, Quantity A is
Figure 1
Quantity A Quantity B
PS SR
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
From Figure 1, you know that PQR is a triangle and that point S is between points P and R, so and You
are also given that However, this information is not sufficient to compare PS and SR. Furthermore,
because the figure is not necessarily drawn to scale, you cannot determine the relative sizes of PS and
SR visually from the figure, though they may appear to be equal. The position of S can vary along side PR
anywhere between P and R. Below are two possible variations of Figure 1, each of which is drawn
consistent with the information Figure 2Figure 3
Note that Quantity A is greater in Figure 2 and Quantity B is greater in Figure 3. Thus, the correct answer
is choice D, the relationship cannot be determined from the information given.
Quantity A Quantity B
x y
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
If then so in this case, but if then so in that case, Thus, the correct answer is choice D, the
relationship cannot be determined from the information given.
Note that plugging numbers into expressions may not be conclusive. However, it is conclusive if you get
different results after plugging in different numbers: the conclusion is that the relationship cannot be
determined from the information given. It is also conclusive if there are only a small number of possible
numbers to plug in and all of them yield the same result, say, that Quantity B is greater.
Now suppose there are an infinite number of possible numbers to plug in. If you plug many of them in
and each time the result is, for example, that Quantity A is greater, you still cannot conclude that
Quantity A is greater for every possible number that could be plugged in. Further analysis would be
necessary and should focus on whether Quantity A is greater for all possible numbers or whether there
are numbers for which Quantity A is not greater.
The following sample questions focus on simplifying the comparison.
Quantity A Quantity B
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
Set up the initial comparison:
Then simplify:
Step 1: Multiply both sides by 5 to get
Step 2: Subtract 3y from both sides to get
Step 3: Divide both sides by 2 to get
The comparison is now simplified as much as possible. In order to compare 1 and y, note that you are
given the information (above Quantities A and B). It follows from that or so that in the comparison
the placeholder represents less than (<):
However, the problem asks for a comparison between Quantity A and Quantity B, not a comparison
between 1 and y. To go from the comparison between 1 and y to a comparison between Quantities A
and B, start with the last comparison, and carefully consider each simplification step in reverse order to
determine what each comparison implies about the preceding comparison, all the way back to the
comparison between Quantities A and B, if possible. Since step 3 was "divide both sides by 2,"
multiplying both sides of the comparison by 2 implies the preceding comparison thus reversing step 3.
Each simplification step can be reversed as follows:
Reverse step 3: multiply both sides by 2.
Reverse step 2: add 3y to both sides.
Reverse step 1: divide both sides by 5.
When each step is reversed, the relationship remains less than (<), so Quantity A is less than Quantity B.
Thus, the correct answer is choice B, Quantity B is greater.
While some simplification steps like subtracting 3 from both sides or dividing both sides by 10 are always
reversible, it is important to note that some steps, like squaring both sides, may not be reversible.
Also, note that when you simplify an inequality, the steps of multiplying or dividing both sides by a
negative number change the direction of the inequality; for example, if then So the relationship in the
final, simplified inequality may be the opposite of the relationship between Quantities A and B. This is
another reason to consider the impact of each step carefully.
Quantity A Quantity B
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
Set up the initial comparison:
Then simplify:
Step 1: Multiply both sides by 2 to get
Step 2: Add to both sides to get
Step 3: Simplify the right-hand side using the fact that to get
The resulting relationship is equal to (=). In reverse order, each simplification step implies equal to in the
preceding comparison. So Quantities A and B are also equal. Thus, the correct answer is choice C, the
two quantities are equal.
Quantity A Quantity B
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
Set up the initial comparison:
Then simplify by noting that the quadratic polynomial can be factored:
Step 1: Subtract 2x from both sides to get
Step 2: Factor the left-hand side to get
The left-hand side of the comparison is the square of a number. Since the square of a number is always
greater than or equal to 0, and 0 is greater than the simplified comparison is the inequality and the
resulting relationship is greater than (>). In reverse order, each simplification step implies the inequality
greater than (>) in the preceding comparison. Therefore, Quantity A is greater than Quantity B. The
correct answer is choice A, Quantity A is greater.
Quantity A Quantity B
(A) Quantity A is greater.
(B) Quantity B is greater.
(C) The two quantities are equal.
(D) The relationship cannot be determined from the information given.
Set up the initial comparison:
Then simplify:
Step 1: Subtract 2w from both sides and add 4 to both sides to get
Step 2: Divide both sides by 5 to get
The comparison cannot be simplified any further. Although you are given that you still don't know how
w compares to or 1.8. For example, if then but if then In other words, the relationship between w and
cannot be determined. Note that each of these simplification steps is reversible, so in reverse order,
each simplification step implies that the relationship cannot be determined in the preceding
comparison. Thus, the relationship between Quantities A and B cannot be determined. The correct
answer is choice D, the relationship cannot be determined from the information given.
The strategy of simplifying the comparison works most efficiently when you note that a simplification
step is reversible while actually taking the step. Here are some common steps that are always reversible:
Adding any number or expression to both sides of a comparison
Subtracting any number or expression from both sides
Multiplying both sides by any nonzero number or expression
Dividing both sides by any nonzero number or expression
Remember that if the relationship is an inequality, multiplying or dividing both sides by any negative
number or expression will yield the opposite inequality. Be aware that some common operations like
squaring both sides are generally not reversible and may require further analysis using other
information given in the question in order to justify reversing such steps. | {"url":"http://www.docstoc.com/docs/146260632/Quantitative-Comparison-Questions","timestamp":"2014-04-19T12:16:13Z","content_type":null,"content_length":"69985","record_id":"<urn:uuid:cf092a17-1a60-4306-a1b7-332fff6960c5>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
Free Plane, Analytic, and Solid Geometry Ebooks
I have been searching for Solid Geometry ebooks online and I found some good old plane geometry ebooks as well as analytic geometry ebooks. They are all FREE and available for download.
If you know some related books that are free for download, please use the comment box below.
You might also want to visit the Math and Multimedia All for Free page for more ebooks. | {"url":"http://mathandmultimedia.com/2013/12/29/free-solid-geometry-ebooks/","timestamp":"2014-04-21T07:11:10Z","content_type":null,"content_length":"333947","record_id":"<urn:uuid:9fb76076-d169-4f4f-a96c-69ff6666b315>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |