content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Text retrieval: Theory and practice
Results 1 - 10 of 37
- ACM Computing Surveys , 1999
"... We survey the current techniques to cope with the problem of string matching allowing errors. This is becoming a more and more relevant issue for many fast growing areas such as information
retrieval and computational biology. We focus on online searching and mostly on edit distance, explaining t ..."
Cited by 404 (38 self)
Add to MetaCart
We survey the current techniques to cope with the problem of string matching allowing errors. This is becoming a more and more relevant issue for many fast growing areas such as information retrieval
and computational biology. We focus on online searching and mostly on edit distance, explaining the problem and its relevance, its statistical behavior, its history and current developments, and the
central ideas of the algorithms and their complexities. We present a number of experiments to compare the performance of the different algorithms and show which are the best choices according to each
case. We conclude with some future work directions and open problems. 1
"... We introduce a family of simple and fast algorithms for solving the classical string matching problem, string matching with classes of symbols, don't care symbols and complement symbols, and
multiple patterns. In addition we solve the same problems allowing up to k mismatches. Among the features of ..."
Cited by 229 (15 self)
Add to MetaCart
We introduce a family of simple and fast algorithms for solving the classical string matching problem, string matching with classes of symbols, don't care symbols and complement symbols, and multiple
patterns. In addition we solve the same problems allowing up to k mismatches. Among the features of these algorithms are that they don't need to buffer the input, they are real time algorithms (for
constant size patterns), and they are suitable to be implemented in hardware. 1 Introduction String searching is a very important component of many problems, including text editing, bibliographic
retrieval, and symbol manipulation. Recent surveys of string searching can be found in [17, 4]. The string matching problem consists of finding all occurrences of a pattern of length m in a text of
length n. We generalize the problem allowing "don't care" symbols, the complement of a symbol, and any finite class of symbols. We solve this problem for one or more patterns, with or without
mismatches. Fo...
- Algorithmica , 1999
"... We present a new algorithm for on-line approximate string matching. The algorithm is based on the simulation of a non-deterministic finite automaton built from the pattern and using the text as
input. This simulation uses bit operations on a RAM machine with word length w = \Omega\Gamma137 n) bits, ..."
Cited by 72 (24 self)
Add to MetaCart
We present a new algorithm for on-line approximate string matching. The algorithm is based on the simulation of a non-deterministic finite automaton built from the pattern and using the text as
input. This simulation uses bit operations on a RAM machine with word length w = \Omega\Gamma137 n) bits, where n is the text size. This is essentially similar to the model used in Wu and Manber's
work, although we improve the search time by packing the automaton states differently. The running time achieved is O(n) for small patterns (i.e. whenever mk = O(log n)), where m is the pattern
length and k ! m the number of allowed errors. This is in contrast with the result of Wu and Manber, which is O(kn) for m = O(log n). Longer patterns can be processed by partitioning the automaton
into many machine words, at O(mk=w n) search cost. We allow generalizations in the pattern, such as classes of characters, gaps and others, at essentially the same search cost. We then explore other
novel techniques t...
- ACM JOURNAL OF EXPERIMENTAL ALGORITHMICS (JEA , 1998
"... ... In this paper we merge bit-parallelism and suffix automata, so that a nondeterministic suffix automaton is simulated using bit-parallelism. The resulting algorithm, called BNDM, obtains the
best from both worlds. It is much simpler to implement than BDM and nearly as simple as Shift-Or. It inher ..."
Cited by 61 (11 self)
Add to MetaCart
... In this paper we merge bit-parallelism and suffix automata, so that a nondeterministic suffix automaton is simulated using bit-parallelism. The resulting algorithm, called BNDM, obtains the best
from both worlds. It is much simpler to implement than BDM and nearly as simple as Shift-Or. It inherits from Shift-Or the ability to handle flexible patterns and from BDM the ability to skip
characters. BNDM is 30%-40% faster than BDM and up to 7 times faster than Shift-Or. When compared to the fastest existing algorithms on exact patterns (which belong to the BM family), BNDM is from
20% slower to 3 times faster, depending on the alphabet size. With respect to flexible pattern searching, BNDM is by far the fastest technique to deal with classes of characters and is competitive to
search allowing errors. In particular, BNDM seems very adequate for computational biology applications, since it is the fastest algorithm to search on DNA sequences and flexible searching is an
important problem in that
- IEEE Data Engineering Bulletin , 2000
"... Indexing for approximate text searching is a novel problem receiving much attention because of its applications in signal processing, computational biology and text retrieval, to name a few. We
classify most indexing methods in a taxonomy that helps understand their essential features. We show that ..."
Cited by 56 (10 self)
Add to MetaCart
Indexing for approximate text searching is a novel problem receiving much attention because of its applications in signal processing, computational biology and text retrieval, to name a few. We
classify most indexing methods in a taxonomy that helps understand their essential features. We show that the existing methods, rather than completely different as they are regarded, form a range of
solutions whose optimum is usually somewhere in between.
"... We present a new indexing method for the approximate string matching problem. The method is based on a suffix array combined with a partitioning of the pattern. We analyze the resulting
algorithm and show that the average retrieval time is Ç Ò � ÐÓ � Ò,forsome�� that depends on the error fraction t ..."
Cited by 55 (10 self)
Add to MetaCart
We present a new indexing method for the approximate string matching problem. The method is based on a suffix array combined with a partitioning of the pattern. We analyze the resulting algorithm and
show that the average retrieval time is Ç Ò � ÐÓ � Ò,forsome�� that depends on the error fraction tolerated « and the alphabet size �. Itisshownthat �� for approximately « � � � Ô �,where � � � �
����. Thespace required is four times the text size, which is quite moderate for this problem. We experimentally show that this index can outperform by far all the existing alternatives for indexed
approximate searching. These are also the first experiments that compare the different existing schemes.
, 1998
"... . We address the problem of string matching on Ziv-Lempel compressed text. The goal is to search a pattern in a text without uncompressing it. This is a highly relevant issue to keep compressed
text databases where efficient searching is still possible. We develop a general technique for string matc ..."
Cited by 42 (8 self)
Add to MetaCart
. We address the problem of string matching on Ziv-Lempel compressed text. The goal is to search a pattern in a text without uncompressing it. This is a highly relevant issue to keep compressed text
databases where efficient searching is still possible. We develop a general technique for string matching when the text comes as a sequence of blocks. This abstracts the essential features of
Ziv-Lempel compression. We then apply the scheme to each particular type of compression. We present the first algorithm to find all the matches of a pattern in a text compressed using LZ77. When we
apply our scheme to LZ78, we obtain a much more efficient search algorithm, which is faster than uncompressing the text and then searching on it. Finally, we propose a new hybrid compression scheme
which is between LZ77 and LZ78, being in practice as good to compress as LZ77 and as fast to search in as LZ78. 1 Introduction String matching is one of the most pervasive problems in computer
science, with appli...
, 1998
"... . We present a new algorithm for string matching. The algorithm, called BNDM, is the bit-parallel simulation of a known (but recent) algorithm called BDM. BDM skips characters using a "suffix
automaton " which is made deterministic in the preprocessing. BNDM, instead, simulates the nondeterministic ..."
Cited by 40 (5 self)
Add to MetaCart
. We present a new algorithm for string matching. The algorithm, called BNDM, is the bit-parallel simulation of a known (but recent) algorithm called BDM. BDM skips characters using a "suffix
automaton " which is made deterministic in the preprocessing. BNDM, instead, simulates the nondeterministic version using bit-parallelism. This algorithm is 20%-25% faster than BDM, 2-3 times faster
than other bitparallel algorithms, and 10%-40% faster than all the Boyer-Moore family. This makes it the fastest algorithm in all cases except for very short or very long patterns (e.g. on English
text it is the fastest between 5 and 110 characters). Moreover, the algorithm is very simple, allowing to easily implement other variants of BDM which are extremely complex in their original
formulation. We show that, as other bit-parallel algorithms, BNDM can be extended to handle classes of characters in the pattern and in the text, multiple patterns and to allow errors in the pattern
or in the text, combin...
, 2000
"... . We present a new index for approximate string matching. The index collects text q-samples, i.e. disjoint text substrings of length q, at fixed intervals and stores their positions. At search
time, part of the text is filtered out by noticing that any occurrence of the pattern must be reflected ..."
Cited by 38 (11 self)
Add to MetaCart
. We present a new index for approximate string matching. The index collects text q-samples, i.e. disjoint text substrings of length q, at fixed intervals and stores their positions. At search time,
part of the text is filtered out by noticing that any occurrence of the pattern must be reflected in the presence of some text q-samples that match approximately inside the pattern. We show
experimentally that the parameterization mechanism of the related filtration scheme provides a compromise between the space requirement of the index and the error level for which the filtration is
still efficient. 1
- Software Practice and Experience (SPE , 2000
"... We present nrgrep ("nondeterministic reverse grep"), a new pattern matching tool designed for efficient search of complex patterns. Unlike previous tools of the grep family, such as agrep and
Gnu grep, nrgrep is based on a single and uniform concept: the bit-parallel simulation of a nondeterminis ..."
Cited by 37 (7 self)
Add to MetaCart
We present nrgrep ("nondeterministic reverse grep"), a new pattern matching tool designed for efficient search of complex patterns. Unlike previous tools of the grep family, such as agrep and Gnu
grep, nrgrep is based on a single and uniform concept: the bit-parallel simulation of a nondeterministic suffix automaton. As a result, nrgrep can find from simple patterns to regular expressions,
exactly or allowing errors in the matches, with an efficiency that degrades smoothly as the complexity of the searched pattern increases. Another concept fully integrated into nrgrep and that
contributes to this smoothness is the selection of adequate subpatterns for fast scanning, which is also absent in many current tools. We show that the efficiency of nrgrep is similar to that of the
fastest existing string matching tools for the simplest patterns, and by far unpaired for more complex patterns. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=109693","timestamp":"2014-04-24T10:03:59Z","content_type":null,"content_length":"38375","record_id":"<urn:uuid:98992979-493e-4a91-8b66-9f4f815dfcf2>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why didn’t Baumgartner burn up on re-entry?Why didn't Baumgartner burn up on re-entry?
By now everyone’s heard of Felix Baumgartner and his record-breaking leap out of a balloon some 24 miles over the New Mexico desert. While the “official” definition of outer space is generally
considered to start at either 60 miles or 100 kilometers, Felix’s leap of some 39,045 meters is in many respect a drop from outer space. He was above essentially all of the atmosphere, the daylight
sky above him was a black starry void, and he had to wear a spacesuit to breathe.
So why didn’t he burn up on re-entry? The physics-savvy among you may scoff at such a question, but it’s one that a lot of people had and it deserves a serious answer. It’s also a great chance to
clarify the difference between “in space” and “in orbit”, which are two related but very different concepts.
So, what’s an orbit? The best explanation probably involves Newton’s cannon. If you fire a cannonball, it will sail through the air until eventually gravity brings it crashing to the ground. If you
load in even more powder, it will sail farther still. The farther it goes, the more the horizon will curve away under it.
Fire the canon hard enough, and the fall of the cannonball due to gravity will be exactly counterbalanced by the fact that the horizon is dropping away from under it.
This is an orbit. You can reach very high altitudes – even into space – without going into orbit. Orbit is fundamentally about sideways motion, and while it’s a nice way to stay in space around a
massive planet, you don’t have to be in orbit to be in space. You can just hitch a balloon ride or whatever other method you prefer to take you straight up.
To get into orbit you need to be going sideways pretty fast. As it happens, the velocity for a circular orbit is
$v = \sqrt{\frac{G M}{r}}$
Where G is the gravitational constant, M is the mass of the earth, and r is the radius of the orbit. For Baumgartner’s jump, r is essentially equal to the radius of the earth. (The earth is about
6400 kilometers in radius, so the extra 40 make little difference.) Plug in, and you find that orbital velocity near the earth is some 7900 m/s.
Kinetic energy is
$E_k = \frac{1}{2}mv^2$
Plugging in orbital velocity, that’s a solid 31 million joules per kilogram. The potential energy due to gravity is just*
$E_p = m g h$
This is about 380,000 joules per kilogram, a much smaller number.
And fundamentally this is the reason why you can jump from a balloon without burning up. Coming from orbit, you have to dissipate both your kinetic and potential energy, and your kinetic energy is
greater by a factor of 100. Coming from a balloon, you only have to dissipate potential energy because you’re not moving at orbital speeds.
Of course there were a million other ways for the jump to have gone horribly wrong, and the Red Bull team deserves great congratulations for pulling it off. I’m don’t usually pay a lot of attention
to commercial stunts, but I tip my hat to this one.
*(g is effectively the same in low orbit as it is on the ground. If the earth were the size of a basketball, the space station would orbit perhaps a centimeter above it)
1. #1 Bob
October 16, 2012
so if the space station was somehow stationary above the earth and you stepped out the door you would fall like Baumgartner? How far/high would you have to be before this did not happen?
2. #2 Andrew Belt October 17, 2012
Yes. You’d have to be an infinite distance away from the earth for you to feel no gravitational force from it, although at long distances it would be unnoticeable and you may feel stronger
gravitational forces from nearby objects in different directions.
4. #4 John H October 17, 2012
Bob – of course there are a lot of variables to consider in answering that, but a fall from 100 Km would yield a velocity approaching 5,000 Kph (point mass, ignoring drag for most of the fall),
and aircraft like the SR-71 get hot enough to burn at those velocities. So you’d probably be significantly heated by a fall from where “space” is usually considered to start.
5. #5 Paul October 17, 2012
Even if one has heat shielding, falling straight down into the atmosphere from a sufficient height will be fatal, either because one hits the ground, or because one decelerates fast enough to
avoid that, but then is killed by that deceleration.
Reentering from orbit, the trajectory is nearly flat, so the deceleration occurs over a span of thousands of miles. A slight amount of lift enables the vehicle to prolong the entry, reducing the
peak deceleration even more.
6. #6 Omega Centauri October 17, 2012
More interestingly if ALL the dissapated kinetic energy went into internal body heat: assume he is made of water 4.2 Joules per degree C, and he would heat up by ninety centigrade! Clearly most
of the heat was conducted away. I suspect some was dissapated by fluid motions (and shock waves) at some distance from his spacesuit/body.
Some questions that might be interesting to know:
At what altitude did he reach max velocity (i.e. drag equals 1g)?
What was his maximum aerodynamic drag? And at what altitude?
At how altitude decelerating did he drop back through the sound barrier?
Did he reach dense air quickly enough he coulda made it without an oxygen supply (i.e. the air in his lungs and bloodstream would be enough for a minute or two)?
7. [...] Und eine Mini-Doku ist auch schon fertig. [15:25 MESZ] Baumgartners Abschied von Roswell, warum er nicht verglühte und was das alles dem Sponsor gebracht hat – plus ein abgehobenes Essay
und ein Cartoon. [...]
8. #8 Sean T October 18, 2012
There’s really no such thing as a space station that is “stationary with respect to the earth.” Such a space station would immediately fall to the earth due to the gravtiational attraction. A
more realistic situation is a space station that is stationary with respect to a given point on the surface of the earth. We actually have satellites that exhibit this condition. They are known
as geosynchronous satellites and are very important for applications like communications and GPS.
Now, a space station in geosynchronous orbit must have a component of “sideways” velocity the same as any other object in orbit. In this case, the orbital velocity simply matches the speed of the
earth’s rotation, keeping the same point on the earth’s surface directly under the station. If you were on the station, you would share this velocity. Stepping off of the station does not change
your velocity, since no external force acts on you upon your departure from the station. Therefore, since your velocity remains unchanged, you would also be in geosynchronous orbit, and you would
not fall to the earth.
9. #9 scott
October 18, 2012
with respect to Bob’s questions and the comments, lets not forget geosynchronous orbits. If you orbit the earth at the altitude of 22,000 miles, then your orbital period (time it takes to circle
the earth one time) is 24 hours. As a result, you are stationary with respect to the earth’s surface. Also, you are still subject to gravity, in the same way that any orbiting body is subject to
gravity. But you don’t fall to earth.
10. #10 jazzman October 18, 2012
I’ve read the article a few times now and other than a number of very interesting formulas, the author never explicitly answers the question. In direct layman’s terms and without any math…..why
didn’t Baumgartner burn up during his fall?
11. #11 Sean T October 18, 2012
Basically, he didn’t burn up because he wasn’t moving fast enough. A spacecraft returning from orbit requires heat shielding because it’s moving much faster than Baumgartner was. Baumgartner’s
speed came solely from falling, that is from the gravitational attraction of the earth. A spacecraft’s speed comes from falling plus its orbital velocity.
12. #12 VikingExplorer October 18, 2012
jazzman, it’s because he wasn’t going fast enough. He went up, had zero speed, then started falling. If he had gone further up, or if he had been coming from space with a large velocity, he would
have burned up. Thrusters could be used to slow down enough to avoid burning up. Another answer pointed out that burning up isn’t the only option. We can dive into a pool from a diving board, but
jumping off a high bridge will kill us on impact because the deceleration is too great. Similarly, at too great a speed, an object can simply tear apart, an we can be killed from the deceleration
of hitting the atmosphere.
13. #13 Paul October 18, 2012
The issue of near-vertical reentry has application to the safety of proposed orbital elevators (very strong tethers that extend up to geostationary altitude, on which a vehicle would climb). If
the “car” fell off the elevator, there is a range of altitudes over which the passengers would die, even if the car had shielding that could prevent them from burning up on reentry.
For safety, the elevator car would need a rocket system. Close to Earth, the rocket would fire as it approached the atmosphere, slowing it to a survivable speed. If the car fell off at higher
altitude, the rocket would be fired sideways, giving the car enough orbital angular momentum that it could reenter at a grazing angle (or even stay in orbit).
(I leave for the reader the problem of finding the altitude at which the first mode transitions to the second.)
14. #14 Steve October 18, 2012
If instead of 24 miles up he’d jumped at 3900 miles up (one Earth radius) he’d have hit the atmosphere at 17,700 mph (7900m/sec) which is orbital reentry speed. And then he WOULD have burned up.
15. #15 chuck mcclure
Bay Area
October 22, 2012
Why don’t space vehicles fire there rockets prior to entering the atomospere in order to stop their orbital velocity? This way they could avoid the dependence on heat shielding materials such as
the ones that failed on the Columbia Space shuttle.
16. #16 jean david
October 23, 2012
(even if you are already in free fall in orbit) If you jump out from ISS, you have to slow down your speed to get down to earth towards earth. you need some rocket jets to do that. Any object on
earth that is getting speed up, must move up ! To move down (fall) you have to decrease ! beamgartner falls down because air friction at this altitude slows him down. higher, you must use jet
brakes !
17. #17 dom October 28, 2012
matt I have a question about the atmospheres and flight not dissimilar from your Iron Man article. | {"url":"http://scienceblogs.com/builtonfacts/2012/10/16/why-didnt-baumgartner-burn-up-on-re-entry/","timestamp":"2014-04-18T10:39:32Z","content_type":null,"content_length":"65248","record_id":"<urn:uuid:b8678f06-aab7-4393-a297-2a81e8943869>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
SCALBN(3) BSD Programmer's Manual SCALBN(3)
scalbn, scalbnf - exponent using FLT_RADIX
#include <math.h>
scalbn(double x, int n);
scalbnf(float x, int n);
The scalbn() and scalbnf() functions compute x * r^n, where r is the ra-
dix of the machine's floating point arithmetic, defined by the FLT_RADIX
constant in #include <float.h>
The rationale is efficiency; r^n is not computed explicitly.
As described above, upon successful completion, the described functions
return the exponent computed using FLT_RADIX. Otherwise the following may
1. When the result would cause an overflow, a range error occurs
and +-HUGE_VAL, +-HUGE_VALF, or +-HUGE_VALL is returned ac-
cording to the sign of x and the return type of the
corresponding function.
2. When the correct value would cause an underflow and it is not
representable, a range error occurs and either 0.0 or an
implementation-defined value is returned. When an underflow
occurs but the correct value is representable, a range error
occurs but the correct value is returned.
3. If x is +-0 or +-Inf, x is returned. Likewise, if n is zero, x
is returned. If x is NaN, NaN is returned.
exp(3), frexp(3), ldexp(3), math(3)
The described functions conform to ISO/IEC 9899:1999 ("ISO C99").
MirOS BSD #10-current February 9, 2014 1
Generated on 2014-04-02 20:57:59 by $MirOS: src/scripts/roff2htm,v 1.79 2014/02/10 00:36:11 tg Exp $
These manual pages and other documentation are copyrighted by their respective writers; their source is available at our CVSweb, AnonCVS, and other mirrors. The rest is Copyright © 2002‒2014 The
MirOS Project, Germany.
This product includes material provided by Thorsten Glaser.
This manual page’s HTML representation is supposed to be valid XHTML/1.1; if not, please send a bug report – diffs preferred. | {"url":"http://www.mirbsd.org/htman/i386/man3/scalbn.htm","timestamp":"2014-04-20T05:53:00Z","content_type":null,"content_length":"4653","record_id":"<urn:uuid:b66383d7-d6f2-445b-a138-4060d474ad58>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00534-ip-10-147-4-33.ec2.internal.warc.gz"} |
Foundations of Combinatorics with Applications
Foundations of Combinatorics with Applications covers the classic combinatorial topics: counting and listing, graphs, recursion, and generating functions. But in each of these areas, the text brings
in new topics and methods that have been developed in this generation. For example, in the section on generating functions, they introduce the rules of sum and product that allow us to “go directly
from a combinatorial construction to a generating function expression” without getting bogged down in possibly messy algebraic manipulation. In the graphs section, Bender and Williamson explain how a
considerable body of research has built up around planar graphs and they go on to explain some research highlights, like finding algorithms to figure out if a graph is planar. This discussion about
research in a textbook connects readers to the idea that mathematics is always new, exciting, and expanding and in a sense, the authors are inviting readers to enter into the professional research
The text also focuses on the “interaction between computer science and mathematics.” While other combinatorics texts may only use computer science as motivation for problems, this text dives more
deeply into the connection. For example, the text includes brief programs and examples of code, calculations on limits of speed, cost, and storage, and a whole chapter of sorting algorithms.
While they bring in new, fresh themes, Bender and Williamson do a fine job with the classic topics too. One of my favorite parts of combinatorics is the equivalent ways of looking at the same idea.
They introduce the Catalan numbers and then explain a few different ways to interpret them: an election between two candidates, a computer science stack, and the triangulation of an n-gon. Bender and
Williamson also introduce and solve a problem (like how many ways there are to seat n people on Ferris wheel with one person in each seat) using one method and then later solve this same problem
using more and more efficient methods. I like the section on recursions because the authors focus not just on solving recursions, but they also explain methods of how to think recursively.
Bender and Williamson’s text is rigorous. They assert that the text could be used in a “challenging lower level course”, in an upper division course, or in a beginning graduate course. I would not
recommend using this text in a lower level course, even a challenging one. Every mathematics textbook author must strike a balance between theorems, proofs, and examples. This text leans toward
introducing theorems and then proving them. While there are many examples in the text, especially in the counting and listing section, there are sections sorely lacking examples (like the graphs
section) and sometimes, the examples are not really examples, in that they are not problems for the reader to solve. I don’t think the text is appropriate for a student’s first course in
combinatorics because there are not enough concrete examples for the student to gain a steady footing in the subject. Upper division and graduate students that are already familiar with combinatorics
would be able to use this book.
The homework exercises in the text are great, thought I wish some of these exercises were fleshed out examples included in the main part of the text. Bender and Williamson do provide detailed answers
to odd-numbered problems, even proofs! The exercises are interesting and well-thought out. For example, they have an exercise on recursions where proofs are given and the reader must figure out what
is wrong with the proof.
Overall, the text is refreshing in its combination of classic and new topics and provides a rigorous investigation into the interaction between combinatorics and computer science.
Kara Shane Colley studied physics at Dartmouth College, math at the University of Albany, and math education at Teachers College. She has taught math and physics to middle school, high school, and
community college students in the U.S., the Marshall Islands, and England. Currently, she is volunteering aboard the Halfmoon, a replica of Henry Hudson’s 17th century ship, docked in Albany, NY.
Contact her at karashanecolley@yahoo.com. | {"url":"http://www.maa.org/publications/maa-reviews/foundations-of-combinatorics-with-applications","timestamp":"2014-04-18T00:27:51Z","content_type":null,"content_length":"122855","record_id":"<urn:uuid:2bdc9128-5054-41d3-9dec-7c8fa3037e20>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00225-ip-10-147-4-33.ec2.internal.warc.gz"} |
Java Q&A: How Do I Correctly Implement the <i>equals()</i> Method?
Tal is a researcher in IBM's Haifa Research Labs in Israel. He can be contacted at tal@forum2.org.
The Java equals() method, which is defined in java.lang.Object, is used for instance equality testing (as opposed to reference equality, which is tested using the == operator). Consider, for example,
these two assignments:
Date d1 = new Date(2001, 10, 27);
Date d2 = new Date(2001, 10, 27);
In this case, d1 == d2 returns False (since == tests for reference equality, and the two variables are references to different objects). However, d1.equals(d2) returns True.
The default implementation of equals() is based on the == operator: Two objects are equal if and only if they are the same object. Naturally, most classes should define their own alternative
implementation of this important method.
However, implementing equals() correctly is not straightforward. The equals() method has a contract that says the equality relation must meet these demands:
• It must be reflexive. For any reference x, x.equals(x) must return True.
• It must be symmetric. For any two nonnull references x and y, x.equals(y) should return the exact same value as y.equals(x).
• It must be transitive. For any three references x, y, and z, if x.equals(y) and y.equals(z) are True, then x.equals(z) must also return True.
• It should be consistent. For any two references x and y, x.equals(y) should return the same value if called repeatedly (unless, of course, either x or y were changed between the repeated
invocations of equals()).
• For any nonnull reference x, x.equals(null) should return False.
This doesn't sound complicated: The first three items are the natural mathematical properties of equality, and the last two are trivial programmatic requirements. It looks like any implementation
based on simple field-by-field comparison would do the trick. For example, in Listing One the class Point represents a point in two-dimensional space, with a suggested implementation for the equals()
method. At first glance, it looks as though Listing One meets all five demands placed by the contract:
• It is reflexive, since whenever the parameter o is actually this (which is what happens when one invokes it using x.equals(x)), the fields match and the result is True.
• It seems symmetric. If some Point object p1 finds its fields are equal to those of some other Point object p2, then p2 would also find that its own fields are equal to those of p1. For example,
after the two assignments:
Point p1 = new Point(1, 2);
Point p2 = new Point(1, 2);
both p1.equals(p2) and p2.equals(p1) return True. If, on the other hand, p2 is different than p1, both calls return False.
• It seems transitive, for the same reasons.
• It is clearly consistent.
• Any call of the form x.equals(null) returns False, thanks to the test at the beginning of the code: If the parameter is not an instance of the class Point, the method returns False immediately.
Since, in particular, null is not an instance of Point (nor indeed of any other class), the condition is met.
However, this is a naïve implementation. As Joshua Bloch shows in his book Effective Java Programming Language Guide (Addison-Wesley, 2001), things get much more complex when inheritance is involved.
Bloch presents the class ColorPoint (Listing Two), which extends Point and adds an aspect (namely, a new field). If ColorPoint implements equals() similarly to its superclass Point, symmetry is
violated. Again, the implementation seems straightforward and correct. The problem arises when two objects are involved, each of a different class:
ColorPoint p1 = new ColorPoint(1, 2, Color.RED);
Point p2 = new Point(1, 2);
Now, p2.equals(p1) returns True, since the two fields p2's equals() method compares, x and y, are indeed equal. Yet p1.equals(p2) returns False because p2 is not an instance of the ColorPoint class.
It is important to understand that an incorrect implementation of equals(), like that just presented, would cause problems in many unexpected places; for example, when the objects are used in various
collection classes (that is, in their containment tests). And you have just seen that this simple implementation does not provide symmetry.
Listing Three, an alternative implementation of equals(), does meet the symmetry requirement. While at first it might seem a better solution, Bloch shows that it is broken, too. Symmetry is indeed
preserved. p1 and p2 (from the earlier example) would both provide the same answer when asked if one equals the other. However, this implementation violates the demand for "transitivity." To see how,
add a third reference, p3:
ColorPoint p3 = new ColorPoint(1, 2, Col or.BLUE);
In this case, p1.equals(p2) returns True, since p1 realizes p2 is not a ColorPoint and performs a color-blind comparison. p2.equals(p3) also returns True, since p2, being a simple Point, compares
only the x and y fields and finds them to be equal. Transitivity demands that if a=b and b=c, then a=c as well. But in this case, even though p1.equals(p2) and p2.equals(p3), the call p1.equals(p3)
returns False.
One way to avoid the problem is to ignore any fields added in subclasses. This way, ColorPoint inherits the implementation of equals() provided by Point, and doesn't override it. This solution does
meet all the contract demands for equals(). However, it is hardly a useful equality test; for example, p1.equals(p3) returns True, even though each point has a different color.
Bloch claims that "It turns out that this is a fundamental problem of equivalence relations in object-oriented languages. There is simply no way to extend an instantiable class and add an aspect
while preserving the equals contract." He suggests that programmers use composition rather than inheritance to work around this problem. Taking this approach, the ColorPoint class would not extend
Point, but rather include a field of that type, like Listing Four.
Is this the only solution? Not really. The Point class can be extended, adding an aspect, while preserving the equals() contract. The basic idea is this: For two objects to be equal, both must agree
that they are equal. To prevent endless recursion during the mutual verification, you define a protected helper method, blindlyEquals(), which compares fields blindly. The equals() method then
verifies that both objects agree that they are blindly equal to each other; see Listing Five. Note how the implementation of blindlyEquals() is simply the original implementation of equals().
However, blindlyEquals() is not bound by the equals() contract. By itself, it does not provide a symmetric comparison, but it does provide equals() with the services it needs to fully meet the
contract demands.
In subclasses, you override blindlyEquals() only, leaving equals() unchanged. Listing Six, therefore, is a proper implementation of the class ColorPoint. Again, the implementation of blindlyEquals()
is the original, nonsymmetric attempt to implement equals(). The equals() method itself is inherited from Point, and not overridden.
It is easy to see that this new implementation is both symmetric and transitive, as well as meeting all other demands placed by the equals() contract. In particular, when using the three objects
defined in the previous examples:
• p2.blindlyEquals(p1) returns True, but p1.blindlyEquals(p2) returns False. Since equals() checks both ways, both p1.equals(p2) and p2.equals(p1) return False.
• Since p1.equals(p2) returns False (and p2.equals(p3) returns False as well), the transitivity demand does not hold in this case (ab and bc means you do not know in advance if a=c or not).
It can be mathematically proven that symmetry and transitivity always hold with this implementation. The symmetry part is easy: For any two references x and y, x.equals(y) and y.equals(x) execute the
same code (calling both x.blindlyEquals(y) and y.blindlyEquals(x), although in a different order). Transitivity can be proven using reductio ad absurdum. And of course, the other three contract
demands reflexivity, consistency, and returning False when tested on null are also met.
The technique presented here can be applied to any object hierarchy you define in Java. That equals() itself is never overridden means it would have been best if this implementation was part of the
standard java.lang.Object() class, along with a default implementation of blindlyEquals(), which could be easily overridden by each subclass. However, since this change in the Java standard libraries
is not likely to occur anytime soon, we will have to be content with manually including it in programs.
In short, whenever you define a new class, a definition of blindlyEquals() must be included as a nonsymmetric comparison operation, and an implementation of equals() (as presented here) should be
added. Then, all subclasses of this newly defined class need only override blindlyEquals() to provide a complete, contract-abiding equals() comparison.
The method presented here can be used in any object-oriented language, and does not rely on run-time type information (other than the instanceof operator, which is required for any implementation of
equals()). It does incur a price on performance, but a relatively minor one: The equality test is repeated twice, but only if the two objects are indeed equal. A few simple modifications can reduce
the cost significantly. For an additional discussion, please visit http://www.forum2.org/tal/equals.html.
class Point {
private int x;
private int y;
// (obvious constructor omitted...)
public boolean equals(Object o) {
if (!(o instanceof Point))
return false;
Point p = (Point)o;
return (p.x == this.x && p.y == this.y);
class ColorPoint extends Point {
private Color c;
// (obvious constructor omitted...)
public boolean equals(Object o) {
if (!(o instanceof ColorPoint))
return false;
ColorPoint cp = (ColorPoint)o;
return (super.equals(cp) && cp.color == this.color);
class ColorPoint extends Point {
private Color c;
// (obvious constructor omitted...)
public boolean equals(Object o) {
if (!(o instanceof Point))
return false;
// if o is a normal Point, do a color-blind comparison:
if (!(o instanceof ColorPoint))
return o.equals(this);
// o is a ColorPoint; do a full comparison:
ColorPoint cp = (ColorPoint)o;
return (super.equals(cp) && cp.color == this.color);
class ColorPoint {
private Point point;
private Color color;
// ...etc.
class Point {
private int x;
private int y;
protected boolean blindlyEquals(Object o) {
if (!(o instanceof Point))
return false;
Point p = (Point)o;
return (p.x == this.x && p.y == this.y);
public boolean equals(Object o) {
return (this.blindlyEquals(o) && o.blindlyEquals(this));
class ColorPoint extends Point {
private Color c;
protected boolean blindlyEquals(Object o) {
if (!(o instanceof ColorPoint))
return false;
ColorPoint cp = (ColorPoint)o;
return (super.blindlyEquals(cp) &&
cp.color == this.color); | {"url":"http://www.drdobbs.com/jvm/java-qa-how-do-i-correctly-implement-th/184405053","timestamp":"2014-04-19T04:47:53Z","content_type":null,"content_length":"103726","record_id":"<urn:uuid:9e2160e2-60f3-4c67-9200-c1c7452ff7c8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
POV-Ray: Documentation: 2.5.11.14 Fractal Patterns
Fractal patterns supported in POV-Ray:
• The Mandelbrot set with exponents up to 33.(The formula for these is: z(n+1) = z(n)^p + c, where p is the correspondent exponent.)
• The equivalent Julia sets.
• The magnet1 and magnet2 fractals (which are derived from some magnetic renormalization transformations; see the fractint help for more details).
Both 'Mandelbrot' and 'Julia' versions of them are supported.
For the Mandelbrot and Julia sets, higher exponents will be slower for two reasons:
1. For the exponents 2,3 and 4 an optimized algorithm is used. Higher exponents use a generic algorithm for raising a complex number to an integer exponent, and this is a bit slower than an
optimized version for a certain exponent.
2. The higher the exponent, the slower it will be. This is because the amount of operations needed to raise a complex number to an integer exponent is directly proportional to the exponent. This
means that exponent 10 will be (very) roughly twice as slow as exponent 5.
mandel ITERATIONS [, BUMP_SIZE]
[exponent EXPONENT]
[exterior EXTERIOR_TYPE, FACTOR]
[interior INTERIOR_TYPE, FACTOR]
julia COMPLEX, ITERATIONS [, BUMP_SIZE]
[exponent EXPONENT]
[exterior EXTERIOR_TYPE, FACTOR]
[interior INTERIOR_TYPE, FACTOR]
magnet MAGNET_TYPE mandel ITERATIONS [, BUMP_SIZE]
[exterior EXTERIOR_TYPE, FACTOR]
[interior INTERIOR_TYPE, FACTOR]
magnet MAGNET_TYPE julia COMPLEX, ITERATIONS [, BUMP_SIZE]
[exterior EXTERIOR_TYPE, FACTOR]
[interior INTERIOR_TYPE, FACTOR]
ITERATIONS is the number of times to iterate the algorithm.
COMPLEX is a 2D vector denoting a complex number.
MAGNET_TYPE is either 1 or 2.
exponent is an integer between 2 and 33. If not given, the default is 2.
interior and exterior specify special coloring algorithms. You can specify one of them or both at the same time. They only work with the fractal patterns.
EXTERIOR_TYPE and INTERIOR_TYPE are integer values between 0 and 6 (inclusive). When not specified, the default value of INTERIOR_TYPE is 0 and for EXTERIOR_TYPE 1.
FACTOR is a float. The return value of the pattern is multiplied by FACTOR before returning it. This can be used to scale the value range of the pattern when using interior and exterior coloring
(this is often needed to get the desired effect). The default value of FACTOR is 1.
The different values of EXTERIOR_TYPE and INTERIOR_TYPE have the following meaning:
• 0 : Returns just 1
• 1 : For exterior: The number of iterations until bailout divided by ITERATIONS.
Note: this is not scaled by FACTOR (since it is internally scaled by 1/ITERATIONS instead).
For interior: The absolute value of the smallest point in the orbit of the calculated point
• 2 : Real part of the last point in the orbit
• 3 : Imaginary part of the last point in the orbit
• 4 : Squared real part of the last point in the orbit
• 5 : Squared imaginary part of the last point in the orbit
• 6 : Absolute value of the last point in the orbit
box {
<-2, -2, 0>, <2, 2, 0.1>
pigment {
julia <0.353, 0.288>, 30
interior 1, 1
color_map {
[0 rgb 0]
[0.2 rgb x]
[0.4 rgb x+y]
[1 rgb 1]
[1 rgb 0] | {"url":"http://www.povray.org/documentation/view/3.6.1/377/","timestamp":"2014-04-21T02:38:01Z","content_type":null,"content_length":"13619","record_id":"<urn:uuid:4ba7c214-dbe6-4a59-9f24-8bcdd62d7acb>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
A quadratically convergent O( p nL)-iteration algorithm for linear programming
Results 1 - 10 of 32
, 2000
"... The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have
become quite sophisticated, while extensions to more general classes of problems, such as convex quadrati ..."
Cited by 463 (16 self)
Add to MetaCart
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have
become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached
varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite
programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
, 1992
"... CONTENTS 1 Introduction 1 2 The Basics of Predictor-Corrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 Piecewise-Linear Methods 34 6 Complexity 41 7 Available Software
44 References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful ..."
Cited by 70 (6 self)
Add to MetaCart
CONTENTS 1 Introduction 1 2 The Basics of Predictor-Corrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 Piecewise-Linear Methods 34 6 Complexity 41 7 Available Software 44
References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated
works as those of Poincar'e (1881--1886), Klein (1882-- 1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy
invariance of degree. The use of deformations to solve nonlinear systems of equations Partially supported by the National Science Foundation via grant # DMS-9104058 y Preprint, Colorado State
University, August 2 E. Allgower and K. Georg may be traced back at least to Lahaye (1934). The classical embedding methods were the
- MATH. PROGRAMMING , 2004
"... We present polynomial-time interior-point algorithms for solving the Fisher and Arrow-Debreu competitive market equilibrium problems with linear utilities and n players. Both of them have the
arithmetic operation complexity bound of O(n 4 log(1/ɛ)) for computing an ɛ-equilibrium solution. If the p ..."
Cited by 36 (7 self)
Add to MetaCart
We present polynomial-time interior-point algorithms for solving the Fisher and Arrow-Debreu competitive market equilibrium problems with linear utilities and n players. Both of them have the
arithmetic operation complexity bound of O(n 4 log(1/ɛ)) for computing an ɛ-equilibrium solution. If the problem data are rational numbers and their bit-length is L, then the bound to generate an
exact solution is O(n 4 L) which is in line with the best complexity bound for linear programming of the same dimension and size. This is a significant improvement over the previously best bound O(n
8 log(1/ɛ)) for approximating the two problems using other methods. The key ingredient to derive these results is to show that these problems admit convex optimization formulations, efficient barrier
functions and fast rounding techniques. We also present a continuous path leading to the set of the Arrow-Debreu equilibrium, similar to the central path developed for linear programming
interior-point methods. This path is derived from the weighted logarithmic utility and barrier functions and the Brouwer fixed-point theorem. The defining equations are bilinear and possess some
primal-dual structure for the application of the Newton-based path-following method.
- Journal of Optimization Theory and Applications , 1993
"... . We show that recently developed interior point methods for quadratic programming and linear complementarity problems can be put to use in solving discrete-time optimal control problems, with
general pointwise constraints on states and controls. We describe interior point algorithms for a discrete ..."
Cited by 31 (5 self)
Add to MetaCart
. We show that recently developed interior point methods for quadratic programming and linear complementarity problems can be put to use in solving discrete-time optimal control problems, with
general pointwise constraints on states and controls. We describe interior point algorithms for a discrete time linear-quadratic regulator problem with mixed state/control constraints, and show how
it can be efficiently incorporated into an inexact sequential quadratic programming algorithm for nonlinear problems. The key to the efficiency of the interior-point method is the narrow-banded
structure of the coefficient matrix which is factorized at each iteration. Key words. interior point algorithms, optimal control, banded linear systems. 1. Introduction. The problem of optimal
control of an initial value ordinary differential equation, with Bolza objectives and mixed constraints, is min x;u Z T 0 L(x(t); u(t); t) dt + OE f (x(T )); x(t) = f(x(t); u(t); t); x(0) = x init ;
(1.1) g(x(t); u(...
, 1994
"... The literature on interior point algorithms shows impressive results related to the speed of convergence of the objective values, but very little is known about the convergence of the iterate
sequences. This paper studies the horizontal linear complementarity problem, and derives general convergence ..."
Cited by 23 (4 self)
Add to MetaCart
The literature on interior point algorithms shows impressive results related to the speed of convergence of the objective values, but very little is known about the convergence of the iterate
sequences. This paper studies the horizontal linear complementarity problem, and derives general convergence properties for algorithms based on Newton iterations. This problem provides a simple and
general framework for most existing primal-dual interior point methods. The conclusion is that most of the published algorithms of this kind generate convergent sequences. In many cases (whenever the
convergence is not too fast in a certain sense), the sequences converge to the analytic center of the optimal face.
- Optimization Methods and Software , 2003
"... Polynomial cutting plane methods based on the logarithmic barrier function and on the volumetric center are surveyed. These algorithms construct a linear programming relaxation of the feasible
region, find an appropriate approximate center of the region, and call a separation oracle at this approxim ..."
Cited by 15 (8 self)
Add to MetaCart
Polynomial cutting plane methods based on the logarithmic barrier function and on the volumetric center are surveyed. These algorithms construct a linear programming relaxation of the feasible
region, find an appropriate approximate center of the region, and call a separation oracle at this approximate center to determine whether additional constraints should be added to the relaxation.
Typically, these cutting plane methods can be developed so as to exhibit polynomial convergence. The volumetric cutting plane algorithm achieves the theoretical minimum number of calls to a
separation oracle. Long-step versions of the algorithms for solving convex optimization problems are presented. 1
, 1991
"... ... In this paper, we survey the various theoretical and practical issues related to degeneracy in IPM's for linear programming. We survey results which for the most part already appeared in the
literature. Roughly speaking, we shall deal with four topics: the effect of degeneracy on the convergence ..."
Cited by 11 (1 self)
Add to MetaCart
... In this paper, we survey the various theoretical and practical issues related to degeneracy in IPM's for linear programming. We survey results which for the most part already appeared in the
literature. Roughly speaking, we shall deal with four topics: the effect of degeneracy on the convergence of IPM's, on the trajectories followed by the algorithms, the effect of degeneracy in
numerical performance, and on finding basic solutions.
- Department of Mathematics, The University of Iowa, Iowa City, IA , 1995
"... A new algorithm for solving linear complementarity problems with sufficient matrices is proposed. If the problem has a solution the algorithm is superlinearly convergent from any positive
starting points, even for degenerate problems. Each iteration requires only one matrix factorization and at most ..."
Cited by 10 (9 self)
Add to MetaCart
A new algorithm for solving linear complementarity problems with sufficient matrices is proposed. If the problem has a solution the algorithm is superlinearly convergent from any positive starting
points, even for degenerate problems. Each iteration requires only one matrix factorization and at most two backsolves. Only one backsolve is necessary if the problem is known to be nondegenerate.
The algorithm generates points in a large neighborhood of the central path and has the lowest iteration complexity obtained so far in the literature. Moreover, the iteration sequence converges
superlinearly to a maximal solution with the same Q-order as the complementarity sequence. Key Words: linear complementarity problems, sufficient matrices, P -matrices, path-following,
infeasible-interior-point algorithm, polynomiality, superlinear convergence. Abbreviated Title: A method for LCP. Department of Mathematics, University of Iowa, Iowa City, IA 52242, USA. The work of
this author was supporte...
- MATH. PROGRAM., SER. A 91: 99–115 (2001) , 2001
"... ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1061985","timestamp":"2014-04-20T17:10:35Z","content_type":null,"content_length":"36640","record_id":"<urn:uuid:e246f114-338c-4222-9735-a95595336e2f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |
Investigating Z
, 2002
"... We show how a theory of specification refinement and program development can be constructed as a conservative extension of our existing logic for Z. The resulting system can be set up as a
development method for Z, or as a generalisation of a refinement calculus (with a novel semantics). In addition ..."
Cited by 2 (1 self)
Add to MetaCart
We show how a theory of specification refinement and program development can be constructed as a conservative extension of our existing logic for Z. The resulting system can be set up as a
development method for Z, or as a generalisation of a refinement calculus (with a novel semantics). In addition to the technical development we illustrate how the theory can be used in practice. 1.
, 2002
"... In this paper we analyse total correctness operation refinement on a partial relation semantics for specification. In particular we show that three theories: a relational completion approach, a
prooftheoretic approach and a functional models approach, are all equivalent. This result holds whether or ..."
Cited by 2 (0 self)
Add to MetaCart
In this paper we analyse total correctness operation refinement on a partial relation semantics for specification. In particular we show that three theories: a relational completion approach, a
prooftheoretic approach and a functional models approach, are all equivalent. This result holds whether or not preconditions are taken to be minimal or fixed conditions for establishing the
postcondition. Keyword: Specification Language; Specification Logic; Refinement; 1
, 1766
"... The language µ-Charts is one of many Statechart-like languages, a family of visual languages that are used for designing reactive systems. We introduce a logic for reasoning about and
constructing refinements for µ-charts. The logic itself is interesting and important because it allows reasoning abo ..."
Cited by 1 (0 self)
Add to MetaCart
The language µ-Charts is one of many Statechart-like languages, a family of visual languages that are used for designing reactive systems. We introduce a logic for reasoning about and constructing
refinements for µ-charts. The logic itself is interesting and important because it allows reasoning about µ-charts in terms of partial relations rather than the more traditional traces approach. The
method of derivation of the logic is also worthy of report. A Z-based model for the language µ-Charts is constructed and the existing logic and refinement calculus of Z is used as the basis for the
logic of µ-Charts. As well as describing the logic we introduce some of the ways such a logic can be specifications into concrete realisations of reactive systems. A refinement theory for
Statechart-like languages is an important contribution because it allows us to formally investigate and reason about properties of the object language µ-Charts. In particular, we can conjecture and
, 2001
"... This report describes a deep embedding of the logic ZC [HR00] in Isabelle /HOL. The development is based on a general theory of de Bruijn terms. Wellformed terms, propositions and judgements are
represented as inductive sets. The embedding is used to prove elementary properties of ZC such as uniquen ..."
Cited by 1 (1 self)
Add to MetaCart
This report describes a deep embedding of the logic ZC [HR00] in Isabelle /HOL. The development is based on a general theory of de Bruijn terms. Wellformed terms, propositions and judgements are
represented as inductive sets. The embedding is used to prove elementary properties of ZC such as uniqueness of types, type inhabitation and that elements of judgements are wellformed propositions 1
De Bruijn Terms The representation of logical syntax in Isabelle/HOL will be based on a polymorphic datatype dbterm of de Bruijn terms. This development follows the example of A. Gordon [Gor94] who
constructed a similar theory for the HOL system. The datatype dbterm is independent of ZC and can be used as a foundation for deep embeddings in general. For other HOL representations of terms see
[Owe95] and [Von95].
- in W. Grieskamp, T. Santen & B. Stoddart, eds, ‘Integrated Formal Methods 2000: Proceedings of the 2nd , 2000
"... . In this paper we show, by a series of examples, how the - chart formalism can be translated into Z. We give reasons for why this an interesting and sensible thing to do and what it might be
used for. 1 Introduction In this paper we show, by a series of examples, how the -chart formalism (as gi ..."
Cited by 1 (0 self)
Add to MetaCart
. In this paper we show, by a series of examples, how the - chart formalism can be translated into Z. We give reasons for why this an interesting and sensible thing to do and what it might be used
for. 1 Introduction In this paper we show, by a series of examples, how the -chart formalism (as given in [9]) can be translated into Z. We also discuss why this is a useful and interesting thing to
do and give some examples of work that might be done in the future in this area which combines Z and -charts. It might seem obvious that we should simply express the denotational semantics given in
[9] directly in Z and then do our proofs. After all, the semantics is given in set theory and so Z would be adequate for the task. However, our aim is to produce versions of -charts that are
recognisably Z models, i.e. using the usual state and operation schema constructs and some schema calculus in natural ways|-chart states and transitions appear as Z state and operation schemas
, 2003
"... In this note, we revise the preconditions as \ ring conditions" approach to operation re nement and data re nement for Z. 1 ..."
Add to MetaCart
In this note, we revise the preconditions as \ ring conditions" approach to operation re nement and data re nement for Z. 1
"... We introduce a logic for reasoning about and constructing refinements for \mu-Charts, a rational simplification and reconstruction of Statecharts. The method of derivation of the logic is that a
semantics for the language is constructed in Z and the existing logic and refinement calculus of Z is the ..."
Add to MetaCart
We introduce a logic for reasoning about and constructing refinements for \mu-Charts, a rational simplification and reconstruction of Statecharts. The method of derivation of the logic is that a
semantics for the language is constructed in Z and the existing logic and refinement calculus of Z is then used to induce the logic and refinement calculus of \mu-Charts, proceeding by a series of
definitions and conservative extensions and hence generating a sound logic for \mu-Charts, given that the soundness of the Z logic has already been established. | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.39.9479","timestamp":"2014-04-18T01:44:19Z","content_type":null,"content_length":"26905","record_id":"<urn:uuid:f0bd85b8-cfc4-4b25-a451-57a92d3f84e3>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Classification with Naive Bayes
by Josh Patterson, Independent Consultant at Patterson Consulting on May 02, 2011
A Deep Dive into Classification with Naive Bayes. Along the way we take a look at some basics from Ian Witten's Data Mining book and dig into the algorithm....
A Deep Dive into Classification with Naive Bayes. Along the way we take a look at some basics from Ian Witten's Data Mining book and dig into the algorithm.
Presented on Wed Apr 27 2011 at SeaHUG in Seattle, WA.
Total Views
Views on SlideShare
Embed Views
Usage Rights
© All Rights Reserved | {"url":"http://www.slideshare.net/jpatanooga/sea-hug-navebayes24042011v5","timestamp":"2014-04-21T15:07:29Z","content_type":null,"content_length":"163536","record_id":"<urn:uuid:263c4b48-3c1f-4755-8e27-6ddd55b8feeb>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Working out the level of homogeneity
March 25th 2013, 10:49 AM #1
Mar 2013
Working out the level of homogeneity
I know that q= 120L^1 1/2 K - (L^3/√K) has homogenous degree of 2 1/2 but I'm unsure how that has been worked out. I think its the square root thats throwing me, could someone show me how its 2 1
Thanks in advance.
Re: Working out the level of homogeneity
In this case, homogeneous means that the sum of the exponents of L and K for the first term is the same as the sum of the exponents of L and K for the second term.
For the first term, L has exponent $1\,\frac{1}{2}$ and K has exponent 1, so the sum is $2\,\frac{1}{2}$. For the second term, L has exponent 3 and K has exponent $-\frac{1}{2}$ (since $K^{-\frac
{1}{2}}=\frac{1}{\sqrt{K}}$), so the sum is again $2\,\frac{1}{2}$.
- Hollywood
Re: Working out the level of homogeneity
Thank you, for some reason I was adding those terms together.
March 25th 2013, 05:43 PM #2
Super Member
Mar 2010
March 26th 2013, 05:07 AM #3
Mar 2013 | {"url":"http://mathhelpforum.com/calculus/215534-working-out-level-homogeneity.html","timestamp":"2014-04-16T09:03:11Z","content_type":null,"content_length":"34815","record_id":"<urn:uuid:00209240-a923-42d6-ae01-b0cbf2e49ad7>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Game on a Finite Projective Plane
up vote 6 down vote favorite
Two players Oh and Ex alternately choose points of a finite projective plane.
The first player (if any) to make a line in his/her chosen points is the winner.
Using the Erdos-Selfridge theorem, we can see that the game is a draw if the order of the projective plane is 5 or greater. The game is a trivial Oh win if the order is 2. Does Oh win if the order is
3 or 4?
add comment
1 Answer
active oldest votes
See "Tic-Tac-Toe on a Finite Plane", Maureen T. Carroll and Steven T. Dougherty, Mathematics Magazine, Vol. 77, No. 4 (Oct., 2004), pp. 260-274. (Preprint here: http://
up vote 8 academic.scranton.edu/faculty/carrollm1/tictac.pdf) The second player can force a draw on the 3-by-3 and 4-by-4 projective planes.
down vote
Thank you very much for the reference, which completely answers the question. – Martin Erickson Jun 28 '10 at 15:31
add comment
Not the answer you're looking for? Browse other questions tagged combinatorial-game-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/29602/a-game-on-a-finite-projective-plane/29610","timestamp":"2014-04-19T10:05:27Z","content_type":null,"content_length":"50105","record_id":"<urn:uuid:ef186843-48e2-46c9-88f7-0ae7adb27f0a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
if f(x)=7x + 5 and g(x)= x^2 , What is (g + f)(x) . I don't understand this question because what about like terms? Can I answer it w/o them?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
You don't need to worry about like-terms.
Best Response
You've already chosen the best response.
(g + f)(x) =x^2 + 7x + 5 Hence, you can do without like terms
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/505f7fcee4b0583d5cd20b2f","timestamp":"2014-04-17T22:09:19Z","content_type":null,"content_length":"30086","record_id":"<urn:uuid:15cdb9aa-279a-418c-8198-50ac2481a1fd>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: replacing with mean
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: replacing with mean
From Steven Samuels <sjsamuels@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: replacing with mean
Date Thu, 2 Dec 2010 09:40:30 -0500
Fabio Zona: Why do you want to impute the missing data? For all the purposes that I can think of, mean replacement is an approach to avoid. While it reproduces means, it distorts most other
properties of the observed and unknown complete data, including standard deviations, correlations, and regression estimates. Consider one of Stata's other imputation programs: -mi- in Stata 11; -mim-
or -ice- from SSC.
Steven J. Samuels
18 Cantine's Island
Saugerties NY 12477
Voice: 845-246-0774
Fax: 206-202-4783
On Dec 2, 2010, at 3:42 AM, Nick Cox wrote:
No; it is not necessary as you could calculate the means in Mata. But Michael's suggestion will typically be easier to work with.
-compress- usually gives extra memory painlessly.
Fabio Zona
...one more thing.... is it necessary to generate a new variable of the mean? This consumes memory in stata..
Michael N. Mitchell
Will this do the trick?
egen missrev = mean(revenues), by(industry)
replace revenues = missrev if missing(revenues)
On 2010-12-01 10.31 PM, Fabio Zona wrote:
I have a set of industries, with a different number of firms in each industry; per each firm I have a value, say it be Revenues
Industry Firm Revenues
A 1 100
A 2 150
A 3 missing1
A 4 120
B 5 80
B 6 130
B 7 missing2
I need to replace the missing value of Revenues with the mean of the Revenues within the same industries (For example, missing1 for firm 3, needs to be replaced with the mean of the values 100,
150, 120, that is, with the mean of the revenues of other firms 1, 2 and 4 which belong to the same industry to which firm 3 belongs).
I need to do this hundreds of time.
How can I do it easily?
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2010-12/msg00078.html","timestamp":"2014-04-20T03:22:07Z","content_type":null,"content_length":"10648","record_id":"<urn:uuid:1499e184-1b70-4d4c-9c49-fd3a03903ff1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutors
La Canada Flintridge, CA 91011
Experienced, Credentialed Teacher: ESL, English, German, Piano
ESL: With over 20 years' experience teaching high school students from various language backgrounds, most entering ninth grade with no English at all, to quickly gain the needed proficiency to
complete graduation and college entrance requirements on time, I have...
Offering 10+ subjects including algebra 1 | {"url":"http://www.wyzant.com/geo_Temple_City_Algebra_tutors.aspx?d=20&pagesize=5&pagenum=4","timestamp":"2014-04-19T01:24:26Z","content_type":null,"content_length":"60133","record_id":"<urn:uuid:059f2bba-c7d9-49e8-ae8d-e0502a76e75e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00423-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics: Vector Components and Unit Vectors Video | MindBites
Physics: Vector Components and Unit Vectors
About this Lesson
• Type: Video Tutorial
• Length: 12:05
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 130 MB
• Posted: 07/01/2009
This lesson is part of the following series:
Physics (147 lessons, $198.00)
Physics: Preliminaries (8 lessons, $11.88)
Physics: Vectors (2 lessons, $4.95)
This lesson was selected from a broader, comprehensive course, Physics I. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/
product/physics. The full course covers kinematics, dynamics, energy, momentum, the physics of extended objects, gravity, fluids, relativity, oscillatory motion, waves, and more. The course features
two renowned professors: Steven Pollock, an associate professor of Physics at he University of Colorado at Boulder and Ephraim Fischbach, a professor of physics at Purdue University.
Steven Pollock earned a Bachelor of Science in physics from the Massachusetts Institute of Technology and a Ph.D. from Stanford University. Prof. Pollock wears two research hats: he studies
theoretical nuclear physics, and does physics education research. Currently, his research activities focus on questions of replication and sustainability of reformed teaching techniques in (very)
large introductory courses. He received an Alfred P. Sloan Research Fellowship in 1994 and a Boulder Faculty Assembly (CU campus-wide) Teaching Excellence Award in 1998. He is the author of two
Teaching Company video courses: “Particle Physics for Non-Physicists: a Tour of the Microcosmos” and “The Great Ideas of Classical Physics”. Prof. Pollock regularly gives public presentations in
which he brings physics alive at conferences, seminars, colloquia, and for community audiences.
Ephraim Fischbach earned a B.A. in physics from Columbia University and a Ph.D. from the University of Pennsylvania. In Thinkwell Physics I, he delivers the "Physics in Action" video lectures and
demonstrates numerous laboratory techniques and real-world applications. As part of his mission to encourage an interest in physics wherever he goes, Prof. Fischbach coordinates Physics on the Road,
an Outreach/Funfest program. He is the author or coauthor of more than 180 publications including a recent book, “The Search for Non-Newtonian Gravity”, and was made a Fellow of the American Physical
Society in 2001. He also serves as a referee for a number of journals including “Physical Review” and “Physical Review Letters”.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
This lesson has not been reviewed.
Please purchase the lesson to review.
This lesson has not been reviewed.
Please purchase the lesson to review.
We use vectors whenever we have to talk about something physical that has a magnitude and a direction associated with it. I always like to think about velocity as kind of a typical example. Velocity
is a vector. It has a magnitude, 55 miles per hour for instance, and a direction, northeast. When you have a vector as a physical quantity, you can do things with it. You can add vectors, you can
scale vectors, and we've seen how you can do that graphically on a piece of paper, representing the vectors as arrows.
Now, if you want to add to vectors, or worse yet, subtract them, on a piece of paper, it can be a little bit tricky. You have to draw them very carefully and to scale and I'm a lousy artist, I hate
to do that. So, it seems like there should be some way of manipulating vectors symbolically, mathematically in a kind of more elegant way. The key to doing that is thinking about the vector in a
slightly different way. So, here's a vector, which is a classic vector. It's got a length, it's got a direction - you could characterize that direction as this angle, for instance, and that's a
perfectly valid way of describing this vector.
But when I look at it, I can think about it in another way. I sort of see that this vector has an "acrossness" to it. It's sort of got this much across, and then it's got an "upness" to it. It's
going about this much up. In fact, it's not about, it's exact. I can think about he vector in terms of its length and the angle, or I can think about the vector in terms of how far across does it go
and how far up does it go? I still need two numbers, but they're different numbers and it's just another way of thinking about the vector.
Let me try to draw a little picture of this. Here's a vector A, so now I'm drawing it in the horizontal plane, one way to describe it is to give its magnitude and the angle. Another way, the new way
would be to tell how much across is that vector, and how much up is it? You might think of how much across it as the projection of the arrow in this direction. Sometimes, I think about shining a
light on this arrow and then I just look at the shadow of it. The shadow is the projection in this direction. If you want to label things, you'd probably want to pick a coordinate system. For
instance, we decide to call right the X axis and up, the Y axis. We can pick any coordinate system we want. This is a pretty typical one. X is horizontal. So this piece of the arrow, we call A[X].
It's just a natural notation. It's the X of the vector A. It's now a number. It's the length of this dashed line. And we can project the arrow in the Y direction. That's this piece. How much of this
vector is going up? And we would call that, naturally, A[Y]. And if you look at this picture, by the nature of these projections, this is a right angle.
Supposing that we had characterized the vector in the first place by angle and magnitude A. Look at the picture, think of a little trigonometry. A[X] is the adjacent side in a right triangle. A[X] is
equal to the magnitude of A times the cosine of the angle. Hypotenuse times cosine is the adjacent side. And, similarly, A[Y] is equal to Asin. I'm just getting that from trigonometry. I'm looking at
the picture. Think about what these formulas are telling us. If you know the magnitude and the angle, then these formulas immediately tell you numerically what is A[X] and what is A[Y]. So you can go
from one language to another mathematically very easily. If I tell you A in , you just work out A[X] and A[Y] on a calculator. You can go backwards, too.
Supposing that in the first place, I had told you A[X] and A[Y] . If you look at the picture that uniquely identifies the vector and you can easily figure out, for instance, what's the length of the
vector. It's a right triangle, You use the Pythagorean theorem. A is the square root of A[X][][][][] plus A[Y][][][][]. And, similarly, is the arced tangent of , opposite over adjacent. So, this is a
neat way of describing vectors. And there's kind of a related way of thinking about the vector once we have introduced this A[X] and A[Y]. So let's just take a step back and let me look at our vector
A again. Here is it's X and Y components. And now, instead of just thinking of this as a number, let me instead view it as a vector. It's always true that any vector A can always be thought of as a
horizontal vector plus a vertical vector. In fact, it's unique. There's only one horizontal vector, this green one that I've laid down, and there's only one vertical vector. The vector A is the
vector sum of this green vector and that blue one. They're tip to tail. Graphically, you can see that the vector sum of green plus blue is equal to A.
Let me try to write this down mathematically. In order to do that, I'm going to introduce a convention, which is very commonly used and very convenient, so first of all, remember we have a coordinate
system. This is the X direction, that's the Y direction. Let me draw a unit vector. That's a vector of length exactly 1. The unit vector in the X direction, we're going to give a name. You might
think we should call it X or something, but for some reason, lots of people call it i. And they put a hat on top of it to indicate that, very clearly, it's a vector with length 1. The hat is a
special kind of vector. It's a unit vector. is the unit vector in the X direction; is the name that we give, just a convention, for the unit vector in the Y direction. So, let's look back at our
vector A. I am arguing that it is a sum of two vectors. It's A[X] , that's the magnitude in the I direction, and we're adding A[Y, ]that's the magnitude of this arrow, in the J direction. Let me just
say it again. Look at this expression - AX. That's a vector. It's a number times a vector. We've seen that last time. It represents a vector. The length of that vector is A[X] this just has unit
length, and the direction is the direction of . That's the green one. So, this whole term here represents that green arrow. This whole term represents that blue arrow. There you go. The vector A can
always be written in this way. It's component A[X] X plus it's component A[X][][][][]. That's a neat way of writing down a formula which describes vector.
So, supposing you've got this formula, A=A[X][][][][]+ A[Y][][][][]. It's just handed to you. And you want to do the various things that we've been doing with vectors. What would that mean? For
example, supposing that you wanted to multiply A by two. That was scaler multiplication, it's 2A[X][][][][] +2A[Y][][][][]. How did I get that? I'm trying to double this vector. If you double a
vector, you double it's X component and you double its Y component. Nice and easy. What if you wanted to find -A. It's -A[X][][][][] -A[Y][][][][]. What if you wanted to find some arbitrary constant
K times A. It's just KA[X][][][][]+ KA[Y][][][][]. So mathematically, scaling a vector is very easy if you've got the components.
What about if you want to add vectors? We know how to do it graphically. It's the tip to tail method. If we've got the vector in components, it's really very easy to add A+B. If you just stop and
think about it for a second, the vector A+B is just going to be A[X] +B[X], that's going to be the X component of the sum, and A[Y]+B[Y] and that's going to be the Y or component. That's it. It's a
formula if you know A and B, these are just numbers. You add them up. It's really quite straightforward now to add vectors. I don't even have a picture here anymore of A and B. It really doesn't
matter. I don't have to look at them in order figure out the sum. So, that's nice for somebody like me who can't draw vectors very well. If you want to subtract A minus B, it will be the same story,
it will be A[X]-B[X][][][][]+A[Y]-B[Y] .
So, let me do one example. Supposing that I give you some specific vectors, A and B and here they are. Here's A, there's B. Now, the old way of describing A and B was magnitude and direction.
Magnitude 3, thirty degrees from vertical. B is magnitude 5, 37 degrees from vertical. I don't think this picture is to scale, but never mind. That's the point here. All we need is a sketch and we
can figure out the sum A+B. Well, we know how to do that graphically. They're already tip to tale, there it is. It's this dashed sum vector. The sum is A+B. It goes from the beginning to the end.
But, if I really wanted to know that sum, what does that mean? It means I really need to know the magnitude and the angle or the components. If I haven't drawn this to scale, it would be awfully hard
to figure out, but I'm all set up. If you just stare at this picture, all I need to do is figure out the numbers A[X], A[Y], B[X], B[Y] and we've got the formulas and we've got the sum. Let's just
take a look at what they are. A[X] , look at that little right triangle that we've got. A[X] is the A times the sine of 30 degrees. Look at the picture and convince yourself that it really is the
sine. A[X] is opposite from that angle. I just had a formula shortly ago, where I said A[X] is Acos. So how come I'm using sine? Well, because my angle there is defined in a funny way. In the
picture, I've defined with respect to the vertical, whereas in my formula before, I had a different which respect to the horizontal. So, watch out. Look at the picture, use what's correct. In this
case, it is Asin, calculate the numbers. AY, check the picture. A[Y] is down. The A vector is down and across. Down is in the negative Y direction, so the A[Y] component is negative. There's the B[X]
and B[Y]; those are really straightforward. That B triangle is set up just the way we had set up before.
So, if you want to find the sum, the sum in the X direction is just A[X]+ B[X]. It's four plus one point five, the sum in the Y direction is just B[Y] + A[Y], 3 - 2.6 is 0.4 and we're done. We've got
the X and Y components and the sum. If you want the magnitude, use the Pythagorean theorem.
So, what we've learned is, if you want to manipulate vectors, there's two different ways to think about them. You can think about them as having a magnitude and a direction. That's kind of a nice
conceptual way. You draw pictures. You can also just think of the vector as having an X component and a Y component. That's really convenient when you're trying to calculate.
Vector Components and Unit Vectors Page [2 of 2]
Get it Now and Start Learning
Embed this video on your site
Copy and paste the following snippet:
Link to this page
Copy and paste the following snippet: | {"url":"http://www.mindbites.com/lesson/4487-physics-vector-components-and-unit-vectors","timestamp":"2014-04-21T09:47:29Z","content_type":null,"content_length":"62840","record_id":"<urn:uuid:180e3d7d-f5a2-4081-9df0-ac65c6224953>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
Seat Pleasant, MD Math Tutor
Find a Seat Pleasant, MD Math Tutor
...I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through
problems with students since that is the best way to learn.Have studied and scored high marks in econometric...
14 Subjects: including geometry, linear algebra, probability, STATA
...I have more than 10 years of experience in teaching math, physics, and engineering courses to science and non-science students at UMCP, Virginia Tech, and in Switzerland. I am a dedicated
teacher and I always took the extra effort to spend time with students outside regular class hours to help them learn. It is from these individual sessions that I find the students learn the
16 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have taught people to use my software and written user manuals. I have written and delivered scientific papers, and collaborated with others on same. I have also taught very entry level ice
skating and sea kayaking on a voluntary basis.
10 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...As an Adult Basic Education teacher, one of the subjects I teach is math. I have extensive knowledge in concepts such as fractions, decimals, percents, ratios and proportions, and word
problems. Math can be a very difficult subject for some, so I break down those concepts to where the student can understand in basic terms.
4 Subjects: including prealgebra, Spanish, ESL/ESOL, elementary math
...I am very good at assimilating information and determining the main points and the important details, I am very capable of teaching these skills to students. I have a Bachelor's degree in
Theatre Arts with 17 years of experience in Theatre I have worked with 4 children with autism as well as man...
23 Subjects: including algebra 1, prealgebra, English, reading
Related Seat Pleasant, MD Tutors
Seat Pleasant, MD Accounting Tutors
Seat Pleasant, MD ACT Tutors
Seat Pleasant, MD Algebra Tutors
Seat Pleasant, MD Algebra 2 Tutors
Seat Pleasant, MD Calculus Tutors
Seat Pleasant, MD Geometry Tutors
Seat Pleasant, MD Math Tutors
Seat Pleasant, MD Prealgebra Tutors
Seat Pleasant, MD Precalculus Tutors
Seat Pleasant, MD SAT Tutors
Seat Pleasant, MD SAT Math Tutors
Seat Pleasant, MD Science Tutors
Seat Pleasant, MD Statistics Tutors
Seat Pleasant, MD Trigonometry Tutors
Nearby Cities With Math Tutor
Berwyn Heights, MD Math Tutors
Bladensburg, MD Math Tutors
Brentwood, MD Math Tutors
Capitol Heights Math Tutors
Cheverly, MD Math Tutors
Colmar Manor, MD Math Tutors
Cottage City, MD Math Tutors
District Heights Math Tutors
Edmonston, MD Math Tutors
Fairmount Heights, MD Math Tutors
Glenarden, MD Math Tutors
Landover Hills, MD Math Tutors
Mount Rainier Math Tutors
North Brentwood, MD Math Tutors
Suitland Math Tutors | {"url":"http://www.purplemath.com/seat_pleasant_md_math_tutors.php","timestamp":"2014-04-19T17:10:05Z","content_type":null,"content_length":"24362","record_id":"<urn:uuid:423c5659-e664-461d-85b2-b45120997d1f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
First, we'll learn how to make a coefficient matrix. We'll start with the following examples:
x + 4y = 6
2x – 3y = 9
This is all fine and dandy, we know all about coefficients. The first equation has a -4, while the second equations has a 2 and a -3. But what's the coefficient in front of the first x? That's right;
it's a 1. Therefore, our equations are equivalent to:
1x + 4y = 6
2x – 3y = 9
To create the coefficient matrix we make a matrix like this:
That's all a matrix really is: a grid of numbers inside brackets. When it's a coefficient matrix, the coefficients are the entries. The rows of a matrix are the horizontal number groups, so in this
coefficient matrix the rows are
1 4 and 2 -3; 1 4 is row one, and 2 -3 is row two.
The columns of a matrix are the vertical number groups, so in this coefficient matrix the columns are
1 2 and 4 -3; 1 2 is column one and 4 -3 is column two.
Now we get to find us some determinants.
Determinants ditch the brackets and use vertical lines instead. Determinants are NOT open-minded. They only associate with square matrices. (Insecurity, no doubt.)
To find the determinant from our coefficient matrix above
You go like this:
(1)(-3) – (2)(4) = -5
Why? Because you multiply down the first diagonal
And up the other one
And put a minus between them. That's it. Crisscross minus applesauce.
Looking for a formula, you math lovers? Okay.
The value of the determinant for the above matrix is
ps – rq
Fun with determinants doesn't end there. Check out Cramer's Rule next.
Create a coefficient matrix from these equations:
3x – 5y = 4
-2x + y = 5
What are the rows and columns in the matrix:
Find D, the determinant of the coefficient matrix from Example One above:
Create a coefficient matrix:
3x + y = 0-x – 5y = 4
Create a coefficient matrix:
3x – 4y = -7x + y = 9
Create a coefficient matrix:
2x + 2y = 116x – y = 3 | {"url":"http://www.shmoop.com/matrices/beginning-operations-help.html","timestamp":"2014-04-21T12:09:36Z","content_type":null,"content_length":"47134","record_id":"<urn:uuid:2128d9d0-1da2-44ab-9fd0-e4a42f1378a4>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
How algebra ruined my chances of getting a college education
Algebra was responsible for the first F I ever got.
While I was never a straight-A student, I wasn’t a screw-up either. But tell that to Mexican-immigrant parents who dropped out of school after first grade and took pride in seeing their offspring get
the education they never had. I’ll never forget that dreadful parent-teacher conference after that seventh grade F, or the silence in our minivan on the way home. There was no congratulatory feast at
Shakey’s Pizza that night.
I’ve never passed algebra. In high school pre-algebra, I weaseled my way to a C by teaching my teacher how to play “Angel Baby” on the acoustic guitar. Geometry came next, and I passed with no
trouble. In my senior year. my problem with algebra was shared by many other students and posed a threat to the record of my “California Distinguished” high school. So the administrators decided to
count Accounting 1 as an algebra equivalent. I passed that with a B+.
When I entered Pasadena City College, in the hope of transferring to a four-year institution, the placement exam put me two classes below the California transfer requirement class of Statistics 50. I
wasn’t the only one with this problem; 80 percent of my fellow incoming students also place into below-college-level math, according PCC’s research office.
Four and half years later, I still haven’t passed. OK, let me be honest – I failed algebra seven times. I started to question my character, my brain, my capabilities, and even my values. How was I
able to write a cover story for Saveur magazine in 2011 but unable to pass a class that involved mixing numbers with letters?
Ready to break down in desperation, I made an appointment with my academic counselor. He pointed me to an experimental class called “Exploring Topics in Mathematics.” It promised to jumpstart me into
Statistics 50—into the next-to-last class I’d need to complete in order to transfer. I immediately enrolled.
The class teacher was Professor Jay Cho, a slender, soft-spoken 41-year-old from Korea who had received his Masters in mathematics from UC Irvine. “You can just call me Cho for short, or, as some of
my past students called me, notorious C-H-O,” he told us. Cho taught us how to complete Linear Equation problems by relating it to blood alcohol levels when you drink and drive. He used the
almost-daily tardiness of the Goth girl to teach us relative frequency approximation of probability. It was a modern-day version of Stand and Deliver.
Twenty of the 35 of us passed the class. That was a far higher rate than normal for the level of math we were being taught.
But my story doesn’t end there. I still had to get through Statistics 50, also taught by Cho. And how did that go?
I failed. Cho felt I just hadn’t devoted enough time to studying. I wish the explanation were that simple.
According to the current regulations of California, I am ineligible for college, and I shouldn’t even have a high school diploma. The thinking in such policymaking is that 1) despite all the failed
attempts, all of us who fail algebra are secretly able to pass it, if we just push ourselves a little harder. And 2) that higher education would be wasted on someone who can’t pass algebra.
But are those assumptions valid? I don’t think an inability to solve quadratic equations should bring me or so many of my classmates to the brink of high-school dropout status.
Now I’ve finally dropped out, and I’m supporting myself through writing. Many students who are jobless and trying to get a college education after 20 years out of school are likewise stymied by math.
I would love to learn more about art, philosophy, literature, and history in a college setting. But math requirements will prevent that.
Should they?
Javier Cabral is responsible for TheGlutster.com (formerly Teenage Glutster), a food, booze, music, and general desmadre blog. He is an in-house writer for Sonic Trace KCRW and freelances for Saveur
Magazine and Grub Street LA. He wrote this for Zocalo Public Square.
50 Comments»
• PatRick said:
The previous two commenters don't seem to understand what Cabral is saying. The statement that Cabral doesn't support having any standards at all is laughable if it weren't meant with all
seriousness. It's at best a misunderstanding of Cabral's statement and at worst a willful twisting of it. Cabral is asking why his seeming inability to pass algebra should bar him from pursuing
studies in other fields, unrelated to mathematics. While you, Mazer101, must get a kick out of mocking other people's inabilities, I can't help but think that you entirely miss the point that
Cabral is making, namely that he is asking why his lack of aptitude in algebra would make him ineligible to receive ANY type of college degree. It's a better question than you're giving him
credit for, and it makes both of you look quite ridiculous to assert that Cabral is either lazy or lacking in values for not being able to pass algebra. As the fact that he is currently
self-employed and has indeed found ways of getting outside this issue so clearly demonstrate, he is neither lazy nor without values. Moreover, you address neither of his questions that he asks in
this article. You mock the first one and ignore the second one.
• floridagirl said:
I too have been plagued by college algebra, I've failed (or had to drop before I failed) I think 5 times now, my degree is in criminology. It's the last class I need to get my BSAS lol. Well our
governor in Florida stated that they are changing some requirements (like math) for certain degrees. I also think it's silly to not allow someone to further their education if they lack ability
in one area. I received my Assoicate's Degree, graduating with honors due to my high GPA but once I transferred into a university my many F's in this stupid class have dropped it down and ruined
all of my hard work. Atleast here, we have many professors from other countries which makes learning it that much harder. Hang in there, some day when they change the requirements, I'm sure more
people will pursue high education. I also excel in writing.
• Johnny Mills said:
Seems like if you understand the concepts, that's literacy. If you can't solve the problems well, don't get a job involving math. Not sure why someone should be punished or have their options
further limited if they have other talents…
• Jeffrey said:
Don't give up Nick, think out of the box. Go to a college that does not have as much of a higher math requirement, specifically out of state. (example- I live in Massachusetts where the math
standards are difficult but went to a community college, where they have extra support for math, where tutoring is available and cheaper) I also have an LD, however, I went to a community college
and passed Accounting 101, 102, which had 4 credit labs with a B and an A, and were not easy because of the courses were transfer classes to a university.
I also barely passed with a C a Business Financial Math Class that had stats, algebra, and financial functions in it. In Massachusetts there are more requirements for math that the 11 credits
that I passed. Therefore, I transferred to a college that had a business program that did not require as much math that this. Nick, I would try to go to a local Community College and try to pass
Algebra, with some private tutoring which is very cheap. Good Luck.
• Kristine Hood said:
I am NOW a retired Licensed Masters Social Worker, Certified Social Worker and a member of the Academy of Certified Social Workers. I have additional certifications in School Social Work, and
Medical Social Work (where I spent MOST of my working career). I am also a published author on the subject of Tourette's Syndrome. I had the above problems, with very little parental and academic
support from school staff. While in GRADUATE SCHOOL, I took "School Testing". We tested each other and my fellow student quickly and correctly diagnosed me with "Discalcula". A learning
disability. I finally understood that hard work CANNOT overcome certain brain processing problems without skilled teaching. The relief was astounding, and with skilled professional support I was
able to learn enough Algebra when applied to my subject matter for a required Research and Statistics Class. Please go to the Special Education Department at your university and ask for testing
and professional followup. Someone there will UNDERSTAND and NOT BLAME. The world NEEDS your talents. Don't let this stop you. I used my significant RIGHT BRAIN superiority to help thousands of
people in my career. You can, too.
• James said:
Javiar's story is not at all unique and I believe it represents a fundamental problem with our educational system as a whole. The last two decades have really put an emphasis in higher math
skills in order to produce more STEM college bound graduates. The problem with this approach is that while learning higher math(meaning higher math beyond arithmetic) is that a certain percentage
of students have learning disabilities which inhibit the long term retention and understanding of certain math courses such as Algebra and Statistics.
People with learning disabilities are not at all lazy or stupid, but have a disability which prevents them from doing certain things such as understanding higher applications of math. What our
educational system should be doing is to make sure these students do not fall through the cracks and end up as high school or college drop outs. If a person has a documented learning disability
and traditional methods of assistance such as providing extra time on tests in a private area does not result in helping the learning disabled student to successfully complete a math class, then
it would seem reasonable that students such as Javier should be permitted some sort of waiver for the math requirement.
Unless the student is majoring in math, engineering, or any science related major, providing such a waiver would make the most sense in order to properly accommodate the learning disabled.
Otherwise, our educational system is part of the problem and by essentially forcing the learning disabled out of school because they couldn't pass Algebra/Statistics/Calculus, etc is creating the
problem of allowing what are otherwise talented and intelligent people to drop out and end up with low paying jobs and student loans that they these students may not ever be able to pay back.
I knew someone, an older student who like Javiar had the same problem with math. As a undergraduate student in her 50's, she petitioned her school's university to take Algebra outside of her
college because she had failed this class 5 times. She ended up paying someone to complete the Algebra class which while cheating, allowed her to transfer the credits to her university and
finally receive a degree.
Students like the person I just described are those who have to resort to unconventional means to combat this problem that our educational system needs to address. My friend, who was Business
major went out to become a very successful in her field of accounting, never using the Algebra which haunted her throughout her college career.
It's time that our colleges, universities, and K-12 system make a fundamental change to education away from the "one size fits all" approach and create a system that is more realistic which would
guide those whose strengths are not in math, towards other fields that do not require such high math skills. Not everyone is cut out to be a doctor, scientist, or engineer. Some of us become
business people like my friend who can be very successful without having to learn Algebra/Calculus, and Statistics. We need as a society to be more aware of how our schools are disenfranchising
the learning disabled in order to improve this societal problem.
Lastly I will ask this question to anyone who disagrees. If students were required to take a semester of some sort of organized sport such as running or volleyball, would it be fair to not allow
someone who is paraplegic an alternative to complete their degree problem? Doing otherwise doesn't make much sense, but that is exactly what our high schools, colleges, and universities are doing
to the learning disabled!
• kjacksonemti said:
As a fellow Higher Math Cripple, I definitely feel the author's pain.
Both my parents were teachers. When a student they felt was other wise intelligent failed at something they considered a simple subject (and to teachers, they're all simple subjects) then that
student was just 'undermotivated'. So when I started completely and utterly bombing algebra in 7th grade, they just figured the problem was laziness. I was first encouraged, then threatened, then
punished and finally herded into a little homemade cubicle constructed out of two dressers and a desk. Every weeknight I was put in there because I just wasn't "able to concentrate". Of course,
making my studying conditions easier for me to 'concentrate' in was basically just educatorese for 'lock the lazy ass in a dungeon until he gets motivated enough to pass'. Not only did I still
fail to understand algebra, but I began to hate it like cancer.
And it only got worse from there.
So High School rolls around and I'm still unable to understand algebra. My high school math teacher? The most patient human being ever. Seriously. Mrs. Whittington was a saint. because she and my
parents were close friends, I spent hours each week being personally tutored by the very woman whose class I took every day. Did it help? Not one single iota. My teacher never wanted to admit
that a person couldn't be taught algebra, but at least she didn't assume that my problems with the subject were due to laziness. Unfortunately, my parents and just about everybody else in the
education industry (yes, it's an industry) disagreed.
It always amazed me that the standard response on the part of teachers when confronted by a student who fails abominably at mathematics generally involves words like 'unmotivated' and
'underachiever'. And that's when they're feeling diplomatic. But nobody seems to address the fact that we have millions of children in this country who, despite the best and worst efforts of
virtually every adult around them, have never and will never grasp the finer points of algebra or any other advanced form of math. Are they all lazy and stupid? Am I? I'm an EMT-Intermediate with
17 years of field experience and I'm currently taking Paramedic classes. Does anybody here have any idea how complicated the human heart is? I do. I might not be the brightest crayon in the box,
but if I can wrap my head around things like 3rd degree blocks, intrinsic firing rates and the effects of norepinephrine and acetylcholine, then I'm probably not stupid. Certainly not so stupid
that my intelligence is a fundamental blockade to my understanding of algebra. And I'm pretty sure I didn't intentionally annihilate my chances at a college degree by failing at math out of a
desire to spite my parents. I certainly didn't enjoy years of dreading the prospect of any and all forms of math every single day during the school year.
Something, somewhere in our educational system is badly broken. I have no idea what or where it is, but it's definitely not working properly if a significant portion of students flunk algebra
every year. And what makes it worse is the attitude on the part of educators that blaming the students is the answer. The students want to pass more than the teachers want them to, I assure you.
• For all and sundry said:
Your blog is a great one. What really impresses me is that you are correctly mentioned that there are thousands of tools that are available to create a website or launch one but what matters is
that you fit the fat one, the one that gives you all that is actually needed.
• George Hilton said:
There are two categories of students in this world. One are those who are very sharp and quick learning in Math and English. In the other category some lazy students are included who always hide
themselves from difficult subjects. You may be one of them who is hating algebra. Now you can get help from http://www.youtube.com/watch?v=qCSEQ7f_Vfw video which is serving you to learn any kind
of knowledge and inspiring you towards education.
• kjacksonemti said:
I don't mean to be insulting here and I mean this in the least damaging way possible, but you are an asshole.
People who have problems with higher math aren't lazy any more than dyslexics are. People who have problems with higher math need help, not criticism. And this is precisely the problem with the
state of education in America. Had I received some sort of individual help from somebody who understood that my brain just doesn't work like others… had my parents, teachers both, understood that
not all students can be shoved into an identical mold and later kicked out like assembly line automatons… had anybody involved with my education recognized that I was working as hard as possible
instead of just sitting on my ass out of sloth or spite, I might have ended up in a different place than I am currently: trying to figure out algebra on my own terms so I can pass Paramedic and
not be a lethal threat to my patients in the process.
The problem here is not the lazy student who habitually avoids a difficult subject. The problem here is the lazy educator who is quite happy blaming children for not succeeding in an assembly
line educational system.
• James said:
George Hilton,
"There are two categories of students in this world. One are those who are very sharp and quick learning in Math and English. In the other category some lazy students are included who always hide
themselves from difficult subjects. "
You sir live in an imaginary black and white world. Clearly we live in a world where there are many types of students. Yes some are lazy, some are motivated, and some are there because their
parents essentially told them that they had to go to school "or else." Yet what you fail to grasp is that there is a type of student that has disabilities that prevent him or her from fully
grasping a certain subjects such as what the author of this article describes. Students with math disabilities are the farthest thing from being lazy.
"You may be one of them who is hating algebra."
This is not an issue of hating algebra, unfortunately students with learning disabilities with respect to math and language are simply unable to fully grasp many of the concepts required to
engage the subject matter. Again, these students with disabilities are not lazy/unmotivated people. Often what they lack in the capacity to fully grasp in math, they excel in other areas of their
academic life.
Don't be so quick to judge and stereotype people and realize that the world is not a black and white, two dimensional place.
• Tanveer Moundy said:
Different people remember their day when they were students and have some problems in math which really depressed them. Algebra was the most hated in this regard. You describe how to build
students interest towards algebra and math. editing paragraphs paragraph editing is also stand with you to help your kids as well as make perfection in their educational concepts.
• Carl said:
Math teaches you how to think.
If you are not passing you are not thinking. It is not mixing up numbers and letters. It is understanding, and using, concepts through definitions, axioms and theorems.
• Dina said:
It's a good thing for you that your high school and college didn't put the same requirements for creative writing as they do for algebra, or you'd be considered one of those you claim "doesn't
have any standards at all" because of your "subpar" abilities in that area. My son is a genius at creative writing and has been since he was 5 years old. He would write full scripts and
scenarios, but can't grasp algebra. He has always been considered a gifted child. Apparently his brain is more advanced in one area than another, as is yours. But for some reason whoever created
the high school and college standards that only appreciate Algebraic abilities, didn't have the intelligence to realize that not all professions require Algebraic knowledge, and not all geniuses
have that knowledge as well. It doesn't mean they are lazy or disabled. It just means that they are more talented in one area than another. Just like most people are more physically adept in one
area than others.
• AimlessInLA said:
But here's the thing–Javier did say he passed geometry, though what he means by "passed" isn't entirely clear. For some reason many high school students who struggle with pre-algebra and algebra
manage to do fairly well in geometry. It was so in my case, though I don't know why that was. It's been a very long time, but as well as I can recall, the handful of axioms and all the theorems
we had to work through all seemed to hang together a lot more cohesively than the content of my first year algebra had–both times I had taken that! I had to work for that 'B' in geometry, but
throughout the process I always understood exactly what I was trying to accomplish in a way that hadn't been true for any other kind of math.
Years after college and grad school, I found an old college algebra text that had belonged to my father. The striking difference in contrast to a first-year high school textbook was that it
emphasized proofs a great deal more; it was like geometry, only with partly abstract equations rather than diagrams of lines, circles, and angles. In some ways, it now seemed even easier than
geometry, because with algebra you usually don't have to keep switching back and forth between a diagram and an explanation. By no means have I mastered everything in my dad's algebra book, but I
have studied enough of it to change my whole perception of the subject. I now enjoy picking at mathematical puzzles and often find I can solve them; ditto for following the proofs of standard
area and volume formulas, or even deriving them as I did for the sphere without having ever heard of Cavalieri's Principle. (The chapters on mathematical induction and series in my dad's book
were instrumental here.)
So what, exactly, had been my problem with algebra, and every other type of math I'd had since the third grade? My perception is that the algebra curriculum placed greater emphasis on solving
*instances*, or problems, while in geometry it was more about the underlying principles. In hindsight I'm almost certainly wrong in this view and I'm sure my algebra and pre-algebra teachers
tried to teach it to me in the same way that geometry is taught. But for whatever reason I just didn't grasp that part of it, and there was no help for me back then, because even I wouldn't have
been able to explain why I was having so much trouble with the topic..
• Rosemary said:
There are many reasons why a person may not be able to understand mathematical concepts. While I do not have dyscalculia I do have the inability to process algebraic formulas. After taking
algebra 3 times with 3 different teachers, in-person tutors and paying for online tutors I was about ready to quit college and give up. Instead I went to see a neuropsychologist and had specific
testing done to find out why I could pass every other class but that one. Turns out a brain injury from when I was younger caused permanent damage to the area of my brain that processes spacial
memory. No one will every be able to teach it to me and no amount of applying myself will help. With the permission from the head of the psychology department I was able to get into statistics
with out taking algebra and the college district is accepting that class as my math credit to complete my degrees in sociology and psychology. Its very frustrating to keep taking the same class
with no results! Not only was it a waist of time for me but an added expense as well. Just because someone cannot complete algebra formulas does not mean that they do not understand math and it
certainly should never hold someone back from continued education.
• Jay said:
If the student was lazy, he would not be able to do well in any of the other classes. Me personally, I did well in all of my college courses except for my advanced Algebra class. I passed Algebra
2 in high school, but when I tested for placement in CSU college, I was placed in a remedial based math class. I passed one part and failed the second one which is the last class before college
level. In my case, I did well throughout the whole semester until the final exam and went from a 83% TO A 65%. It has nothing to do with a student being lazy or not putting in enough effort. Some
students just don't have the same learning abilities in Math as they do in others.
• drains said:
really educational blog – learn something new every day by reading blogs like this
• http://www.youtube.Com/ said:
You actually make it seem so easy with your presentation but I
find this topic to be really something that I think I would never understand.
It seems too complex and extremely broad for me. I’m looking forward for your next post, I’ll try to get the hang of it!
• Ruben said:
It seems impossible but maybe so. If you are really good and confident at the subject you are it might be that your just bad at math. The mind is a curious thing and works so many ways. See your
just really not a math person. All the subjects you just posted proves it. Their is always that one subject someone is really not good at. But their are many jobs for you to get other than your
looking for. For example like a Author or a Geologist or maybe a famous Speech Political writer. The world is your oyster and trust me there are many jobs that do not need math.
• Jimmy said:
Bucks County Community College in PA, NYU (Gallatin) in NY, Harvard Extension School, University Of Ottawa KS are ALL undergraduate degree programmes in the US that DO NOT REQUIRE MATH for
undergraduate degree conferral, and all except NYU have online degree completion options. Info is current as of February 2014.
• chatroulette said:
Super post, bon blog j'adore ! merci
• chat en ligne said:
C'est vraiment super que vous publiez des articles tel que celui ci et très bon site !
• chatroulette mobile said:
c'est exactement l'article que je cherchais merci !
• Rencontres gratuites said:
Bravo pour votre blog et bonne continuation !
• dial cam said:
Bonjour !
Votre blog est top et merci de publier des articles aussi intéressant !
• tchat adultes said:
Super article j'adore !
• cam hot said:
Très bonne article je recommande votre blog !
• dial hot said:
Bonsoir, merci beaucoup de faire tourner des articles aussi passionnant que celui-ci !
• live show coquin said:
Félicitation pour votre site et bonne continuation !
• mims23 said:
Unfortunately, not only might a student have dyscalculia, but it could well be an inherited trait. In my case, my father, I, & my son have all been stymied by our shared inability to learn
algebra. My father (born in 1911) dropped out of school after 8th grade, unable to stand the humiliation of devolving from a child who'd skipped 2nd grade to one who couldn't apparently function
at normal levels, due to algebra failure. He ultimately retired from a management position in those simpler times. I (born in 1946), struggled through the required College Algebra to a
then-acceptable grade of D, graduating cum laude with a BA in English. I became a teacher & later held professional positions in state government. I am a member of MENSA, scored in the 92nd
percentile on the verbal part of the GRE, & am quite capable in non-mathematical fields. My son (born in 1985) has a slew of learning disabilities but through diligent effort forged his way past
all of them except dyscalculia. With today's higher math requirements, he was never able to get beyond remedial college algebra classes, earning a terminal AA degree in Network Administration
(this major accepted business math as the required math class.) Actress-model Brooke Shields (born in 1965) got mucho publicity in the '80's when she graduated from Princeton without being
required to take any college math at all. I believe that in the UK & many other European countries, college students focus on their majors & do not have to take core curricula unrelated to their
main field of study – why does the US insist that all comers must be math -proficient or be forever barred from four year, liberal arts degrees?
• math tuition said:
I wanted to thank you for this excellent read!!
I certainly loved every little bit of it. I have got you bookmarked to
look at new things you post…
• clash of clans hacked apk said:
I am regular visitor, how are you everybody? This paragraph posted
at this web page is genuinely fastidious.
• Dragoncityhackapk.Blogspot.Com said:
At this time it sounds like Expression Engine is the top blogging platform out
there right now. (from what I’ve read) Is that what
you are using on your blog?
• neal said:
It is possible to get a degree without Algebra by taking Liberal Arts Math. If the school you are in doesn't offer it, find a different one. If worst come to worst, get a non-accredited BA and
then transfer into an accredited MA that doesn't require math. I personally know several people who have gone this route. There is a way around the math.
• Sofa Set said:
hello!,I love your writing so much! percentage we keep up a correspondence extra about your post on AOL? I require an expert in this area to unravel my problem. Maybe that is you! Taking a look
forward to see you.
• zainab said:
• zainab said:
• Samoulson said:
Looking for the best essay writing company on the web and your site is very handy to me. I hope this site helps me a lot. Please, click here buy an essay to get more idea about the writing
• Amber said:
To those of you who "get" Algebra- congratulations! While I have excelled in every other class, college Algebra is getting the best of me. It's not for lack of trying… countless hours and dollars
spent on self-help books and private tutors- to no avail. More than anything, I WANT to be able to understand what I'm doing, but understanding escapes me. If Algebra is such a basic, elementary
subject to master, then perhaps you would be so kind as to invest your time and seemingly superior mind to help someone like myself. No? I didn't think so. Those of you who have never struggled
to grasp Algebraic concepts have most likely never known the feelings of hopelessness and extreme frustration that make up the personal hell I'm living in right now. You may be a mathematical
genius, but THAT is one thing you will never understand. Based on your condescending tone, you must be a fine piece of work…
• faizaa said:
• Psychology tutor said:
Thank you for one more essential write-up. Where else could anybody get that kind of details in these a finish way of writing? I have a business presentation incoming One week, and I'm around the
look for this kind of details.
• Jane said:
Ruben, you said that "there are many jobs that do not need math". That is true BUT…
the school system REQUIRES all sorts of math to get any advanced degrees. The school system REQUIRES high school algebra 1 and 2 and trigonometry, even if you plan to major in English in college!
I have interviewed two dozen professional scientists and engineers who were gainfully employed in the field of their studies, and only one of them admitted that he used all the math he was
required to study to earn his degree. The others used charts, graphs, computer programs, calculators, and slide rules to figure out exact numbers when needed.
The requirement for math is a hoax! It is a way to keep us in school longer and a way for the schools and tutors to make more money. You might as well require everyone to design, build and
maintain their own car and engine before they are allowed to drive one.
• Jane said:
Dear Mazer101:
We hope that someday you will be up against a situation in which your continued survival depends on your ability to develop a skill which you are completely unable to grasp, because that is the
situation which Mr. Cabral describes.
It is fine that mathematics and calculus and differential equations and all the rest are required for majors in science and engineering programs' It is not fine when they are required to even get
into college in the first place, especially for liberal arts and humanities which are not based on mathematics. It is very bad when today college degrees are required for jobs that many of us
could have done straight out of junior high.
It used to be that the few people who got through the eight grades in the little one room school house took a couple of years at a teacher's college and turned around and taught the same little
one room school house themselves. Today you are required to have a degree to push a broom. Today you are required to have a degree in Early Childhood Education to babysit infants and toddlers!
Just as people's bodies are greatly varied in size, shape, and strengths, so too are our minds. Just because you could do algebra when you were six years old does not automatically exclude
everyone else of a more normal intelligence from the right to earning a relevant degree and earning a living.
Have you ever tried to live on a minimum wage job? It is not possible without some real material assistance. 85% of the people who receive food stamps and other aid are employed.
Rather than belittling someone who could not learn arithmetic or math or algebra, how about finding effective ways to help them learn, or better yet, how about making our requirements for non
science and non engineering degrees more realistic and humane?
I bet you do not do your own plumbing repairs. I bet you don't tune up your own car. I bet you do not clean your own heating ducts and chimneys each year. You let professionals with the proper
equipment do those jobs. Why should they be required to pass algebra? Why should they be required to have a BA in anything at all?
• Avani said:
Very difference one look is here about the writing concerning issue. custom dissertation That is really very handful to me and as a author I am suggest to all of your to visit this help writing a
research paper that help you to get more idea about the writing service.
• tita said:
me 2 my first c d and f was in algebra 1 hon but im in middle school so i still have a chance i guess
• Healthy Drinks said:
fantastic points altogether, you simply received a new reader. What may you suggest about your put up that you made some days ago? Any sure?
• debi said:
Great statements. I can do basic math easily in most cases.. but not without effort. basic algebra also .. But once i get into difficult equations and higher stats.. Im a goner!!! Im a musical
genius and writer. I also have a high ability for reason…philosophy… But I think the part of my brain with numbers takes extreme effort to computate results. Im in college at age 50 and am in a
stats class.. Its difficult and killing me….
• Penelope said:
Javier, you may not be able to get into a public university in California, but you should look at private options as private colleges can be more flexible in terms of entrance requirements and
general education requirements. Before you instantly think that this would be too expensive, most colleges offer generous financial aid packages. Applicants who are first-generation Latino
college students are in fact heavily recruited by private colleges since they add to campus diversity. You could also look into being tested for a learning disability as some other commenters
have described. If you are diagnosed with a learning disability you fall under the Americans with Disabilities Act and universities must legally accommodate your documented disability.
• It's me! said:
I have a similar problem. I am a college sophomore and I tested into the Intermediate Algebra class which is only a prerequisite for college level algebra. I have taken that very course twice and
missed the required score by just a few points each time. I am an English major and I'm finishing my German minor and I cannot pass math no matter how much time I focus on it. I've studied
formulas, done practice exams, and asked my peers for help and nothing helps me to pass the tests.
Apparently, I'm smart enough in my German and English courses to be a whole semester ahead of other students my age, but I can't get the basic math credit. I can't do basic math in my head
without flipping it all around. I don't retain formulas long enough to match them to test problems. Am I stupid? No, I'm not stupid. I can write all kinds of essays and I can do it in two
languages. I've explored all kinds of things in my field of study, but I can't get a math credit to graduate college. What's sad is that a lot of people seem to think that people like me
shouldn't be in college at all because we aren't "smart" enough for it.
Leave your response!
Add your comment below, or trackback from your own site. You can also subscribe to these comments via RSS.
Be nice. Keep it clean. Stay on topic. No spam.
You can use these tags:
<a href="" title=""> <abbr title=""> <acronym title=""> <b> <blockquote cite=""> <cite> <code> <del datetime=""> <em> <i> <q cite=""> <strike> <strong>
This is a Gravatar-enabled weblog. To get your own globally-recognized-avatar, please register at Gravatar. | {"url":"http://blogs.kcrw.com/whichwayla/2013/05/how-algebra-ruined-my-chances-of-getting-a-college-education","timestamp":"2014-04-20T15:53:24Z","content_type":null,"content_length":"122060","record_id":"<urn:uuid:3467927b-e078-4ac9-9ddb-fb710c3ee275>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
How do I solve and graph this inequality? 12x + 4 ≥ 16 or 3x – 5 ≤ –14
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50cf926de4b0031882dcee88","timestamp":"2014-04-20T08:36:18Z","content_type":null,"content_length":"69836","record_id":"<urn:uuid:945ba2bc-bb3c-439f-a845-f1d4e5d20ce2>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00096-ip-10-147-4-33.ec2.internal.warc.gz"} |
[R] Force regression line to a 1:1 relationship
David Winsemius dwinsemius at comcast.net
Tue Sep 13 22:27:04 CEST 2011
On Sep 13, 2011, at 11:56 AM, David Winsemius wrote:
> On Sep 13, 2011, at 11:44 AM, David Winsemius wrote:
>> On Sep 13, 2011, at 9:43 AM, RCulloch wrote:
>>> Dear John,
>>> Thank you for that, and for explaining why the abline() command
>>> wont/dosen't
>>> work. The approach is based on reviewers comments that I am a tad
>>> sceptical
>>> about myself but yet curious enough to test their
>>> suggestion......I don't
>>> think it is very straightforward to explain; however, it involves
>>> using the
>>> residuals of the lm() and plotting them against a covariate to
>>> assess
>>> whether or not the deviation from the 1:1 relationship is in someway
>>> influenced by the other covariate.
Is the reviewer perhaps saying this will display departures from a
"linear" or "straight-line" relationship? If so, then I agree entirely
with the reviewer.
>>> I hope that shines a small amount of
>>> light on this rather unorthodox approach?!
>> Plotting the residuals against a covariate is a standard way to
>> assess the assumption that the residuals are distributed normally
>> around each continuous regressor
I've been corrected offline on this point by another "reviewer", one
who I consider highly reputable. The regression assumption is that
residuals are normal around the "true" relationship, but since we
only have the predicted relationship, the usual second-best is to look
plot( fitted(fit), resid(fit))
Furthermore normality is generally not important. (I did know that.)
>> and have no non-linear relationship around each continuous regressor
That point is still valid.
> Forgot to include homoschedasticity:
> ...and have a reasonably constant standard deviation across the
> range of the regressor...
Also should be plotting against fitted() rather than regressors. _My_
external reviewer is of the opinion : "constant variance -- which,
again, usually is of no importance for estimation anyway unless the
heteroscedacity is huge-", but I think opinions about what constitutes
"huge" or "too much" variance may vary.
>> . It is not to assess a "1:1 relationship", whatever that is. I
>> think we would need to see a complete quotation of the reviewer's
>> comments before deciding who is confused in this interchange.
>> --
David Winsemius, MD
West Hartford, CT
More information about the R-help mailing list | {"url":"https://stat.ethz.ch/pipermail/r-help/2011-September/289787.html","timestamp":"2014-04-19T12:07:04Z","content_type":null,"content_length":"5780","record_id":"<urn:uuid:3cf52d77-4d8e-4145-8ab9-decafb558413>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] 388: Goedel's Second Again/definitive?
Harvey Friedman friedman at math.ohio-state.edu
Thu Jan 7 11:06:06 EST 2010
I have written about Goedel's Second in these FOM postings:
NOW it appears that I have identified a much clearer and much simpler
key sufficiency property on the formalizations of Con that ensure the
unprovability of Con.
Here is the key idea:
###For Goedel's Second Incompleteness Theorem, it is sufficient that
the formalization of consistency support the Goedel Completeness
We will use L = single sorted first order predicate calculus with
equality. Infinitely many constant, relation, and function symbols are
Let K be a set of sentences in L. We define the notion
*a sufficient formalization of Con(K) in S*
where S is a set of sentences in L. This is a sentence psi in L(S)
such that there is a definition in L(S) of a structure (D,...) in
L(K), such that for every element rho of K, S proves
**psi implies '(D,E,...) satisfies rho'**
Here '(D,...) satisfies rho' is the sentence of L(S) defined
straightforwardly. Note that this definition is quite easy to make
fully rigorous - e.g., by induction on rho. The Intensionality Issues
that plague the usual statements of Goedel's Second Incompleteness
Theorem are NOT PRESENT HERE.
NOTE: Obviously if psi is a sufficient formalization of Con(K) in S,
and S proves psi implies gamma, then gamma is also a sufficient
formalization of Con(K) in S.
We now want to consider "the usual formalizations of Con(K) in
arithmetic". For this purpose, the natural system to use for S is EFA
= exponential function arithmetic = ISigma0(exp), which is based on
0,S,+,dot,exp,<=, with the usual axioms for successor, defining
equations for +,dot,exp, definition of <=, and induction for all
bounded formulas. EFA is the weakest natural system of arithmetic
which supports worry free arithmetization of syntax. In bounded
formulas in L(EFA), the quantifiers are bounded as follows.
(forall n <= t)
(therexists n <= t)
where t is a term of L(EFA) in which n does not appear. Note that
**the usual formalizations of Con(K) in L(EFA)**
makes good sense. We caution the reader that some important issues
will arise when K is infinite. It is well known that EFA is finitely
SEMIFORMAL THEOREM. Let K be a finite set of sentences in L. The usual
formalizations of Con(K) in L(EFA) are Pi01 sufficient formalizations
of Con(K) in EFA.
Here a Pi01 sentence in L(EFA) is a sentence in L(EFA) that begins
with one or more universal quantifiers, and is followed by a bounded
There are usual formalizations of Con(K) when K is infinite, PROVIDED
that K is recursively enumerable. These usual formalizations can be
based on any algorithm for generating the elements of K.
SEMIFORMAL THEOREM. Let K be a recursively enumerable set of sentences
in L. The usual formalizations of Con(K) in L(EFA) are Pi01 sufficient
formalizations of Con(K) in EFA.
Here a Pi01 sentence in L(EFA) is a sentence in L(EFA) that begins
with 0 or more universal quantifiers, which are followed by a founded
For which K are there (Pi01) sufficient formalizations of Con(K) in
EFA? In Q? All K, because 1 = 0 is a (Pi01) sufficient formalization
of Con(K) in EFA. The following is more informative.
THEOREM. Let K be a set of sentences in L. The following are equivalent.
i. There is a Pi01 sufficient formalization of Con(K) in Q which is
consistent with Q.
ii. There is a Pi01 sufficient formalization of Con(K) in EFA that is
consistent with EFA.
iii. There is a Pi01 sufficient formalization of Con(K) in Q that is
iv. There is a Pi01 sufficient formalization of Con(K) in EFA that is
v. K is a subset of a consistent recursively enumerable set of
sentences in L.
We caution the reader that this notion of sufficient formalization can
be very weak:
THEOREM. Let K be a set of sentences in L. Then any sentence provable
in K is a sufficient formalization of Con(K) in K.
We remind the reader that the usual formalizations of Con(T) in
arithmetic involve arithmetizing finite sequences of nonnegative
integers. Accordingly, we now define EFA* to be EFA + "for all n,
there is a sequence of integers of length n starting with 2, where
each noninitial term is the base 2 exponential of the previous term".
EFA. Let T be a consistent set of sentences in L that implies EFA*. T
does not prove any Pi01 sufficient formalization of Con(T) in EFA.
We state another form, which is of mainly technical interest.
Q. Let T be a consistent set of sentences in L that implies EFA. T
does not prove any Pi01 sufficient formalization of Con(T) in Q.
Here Q is Raphael Robinson's Q. But for any T, the usual
formalizations of Con(T) in arithmetic definitely use exponentiation,
so we take the position that aren't any usual formalizations of Con(T)
in Q, or even PFA = polynomial function arithmetic = bounded
arithmetic = ISigma0. (It should be noted that there are usual
formalizations of Con(T) designed CAREFULLY BY SPECIALISTS in order to
"work in Q", and all of those are sufficient formalizations of Con(T)
in Q).
Goedel used arithmetized consistency statements. It is more natural
and direct to use sequence theoretic consistency statements. This has
many advantages.
We will use a very natural and very convenient system for the
formalization of syntax of L. We will call it SEQSYN (for sequence
SEQSYN is a two sorted system with equality for each sort. It is
convenient (although not necessary) to use undefined terms. There is a
very good and standard way of dealing with logic with undefined terms.
This is discussed, with references to the literature, in
H. Friedman, The Inevitably of Logical Strength: strict reverse
mathematics, in: Logic Colloquium '06, ASL, p. , 2009.
In summary, two terms are equal (written =) if and only if they are
both defined and have the same value. two terms are partially equal
(written ~=) if and only if either they are equal or both are
undefined. If a term is defined then all of its subterms are defined.
The two sorts in SEQSYN are Z (for integers, including positive and
negative integers and 0), and FSEQ (for finite sequences of integers,
including the empty sequence). We have variables over Z and variables
over FSEQ (we use Greek letters). We use ring operations 0,1,+,-,dot,
the order <=, and = between integers. We use lth (for length of a
finite sequence, which returns a nonnegative integer), val(alpha,n)
(for the n-th term of the finite sequence alpha, which may be
undefined), and = between finite sequences. The nonlogical axioms of
SEQSYN are
i. The discrete ordered commutative ring axioms.
ii. Every alpha has a largest term.
iii. lth(alpha) >= 0.
iv. val(alpha,n) is defined if and only if 1 <= n <= lth(alpha).
v. alpha = beta if and only if for all n, (val(alpha,n) ~= val(beta,n)).
vi. Induction on the nonnegative integers for all bounded formulas.
vii. Let n >= 0 be given and assume that for all 1 <= i <= n, there is
a unique m such that phi(i,m). There exists a sequence alpha of length
n such that for all 1 <= i <= n, val(alpha,i) = m iff phi(i,m). Here
phi is a bounded formula in L(SEQSYN) in which alpha does not appear.
It remains to define the bounded formulas. We require that the integer
quantifiers be bounded in this way:
(forall n of magnitude at most t)
(therexists n of magnitude at most t)
where t is an integer term in which n does not appear.
We now come to a crucial point. We require that the sequence
quantifiers be bounded in this way:
(forall alpha whose length and terms are of magnitude at most t)
(therexists alpha whose length and terms are of magnitude at most t)
where t is an integer term in which alpha does not appear.
THEOREM. SEQSYN is mutually interpretable with Q and with PFA. SEQSYN
is interpretable in EFA but not vice versa.
So in the above sense, we claim that the usual string theoretic
formalizations of consistency carry a weaker commitment than the usual
arithmetic formalizations of consistency.
Note that SEQSYN does not have exponentiation, yet SEQSYN clearly
supports the usual string theoretic formalization of consistency.
SEMIFORMAL THEOREM. Let K be a recursively enumerable set of sentences
in L. The usual formalizations of Con(K) in L(SEQSYN) are Pi01
sufficient formalizations of Con(K) in SEQSYN.
Here a Pi01 sentence in L(SEQSYN) is a sentence in L(SEQSYN) that
begins with zero or more universal quantifiers of either sort, which
is followed by a bounded formula in the above sense.
THEOREM. Let K be a set of sentences in L. The following are equivalent.
i. There is a consistent sufficient formalization of Con(K) in SEQSYN
that is consistent with SEQSYN.
ii. There is a consistent sufficient formalization of Con(K) in SEQSYN
that is true.
iii. K is a subset of a consistent recursively enumerable set of
sentences in L.
We take EXP to be the following sentence in L(SEQSYN).
There exists a sequence alpha of length n >= 1 whose first term is 2,
where every noninitial term is twice the previous term.
THEOREM. SEQSYN + EXP and EFA are mutually interpretable. They are
both finitely axiomatizable.
SEQSYN. Let T be a consistent set of sentences in L that implies
SEQSYN + EXP. T does not prove any Pi01 sufficient formalization of
Con(T) in SEQSYN.
Let K be a finite set of sentences in epsilon,=. By the direct set
theoretic formalization of Con(K), we mean the following sentence in
set theory (epsilon,=):
there exists D,R, where R is a set of ordered pairs from D, such that
(D,E,R) satisfies each element of K.
Let RST (rudimentary set theory) be the following convenient set
theory in epsilon,=:
a. Extensionality.
b. Pairing.
c. Union.
d. Cartesian product.
e. Separation for bounded formulas.
It can be shown that RST is finitely axiomatizable.
CONSISTENCY. Let K be a consistent finite set of sentences in
epsilon,=, which implies RST. K does not prove the direct set
theoretic formalization of Con(K).
COROLLARY. Let K be a consistent set of sentences in epsilon,=, which
implies RST. There is some consequence phi of K such that K does not
prove the direct set theoretic formalization of Con(phi).
It does not appear that we can obtain Goedel's Second Incompleteness
Theorem for PA and fragments, in any reasonable form, from Goedel's
Second Incompleteness Theorem for Direct Set Theoretic Consistency.
I use http://www.math.ohio-state.edu/~friedman/ for downloadable
manuscripts. This is the 388th in a series of self contained numbered
postings to FOM covering a wide range of topics in f.o.m. The list of
previous numbered postings #1-349 can be found at http://www.cs.nyu.edu/pipermail/fom/2009-August/014004.html
in the FOM archives.
350: one dimensional set series 7/23/09 12:11AM
351: Mapping Theorems/Mahlo/Subtle 8/6/09 10:59PM
352: Mapping Theorems/simpler 8/7/09 10:06PM
353: Function Generation 1 8/9/09 12:09PM
354: Mahlo Cardinals in HIGH SCHOOL 1 8/9/09 6:37PM
355: Mahlo Cardinals in HIGH SCHOOL 2 8/10/09 6:18PM
356: Simplified HIGH SCHOOL and Mapping Theorem 8/14/09 9:31AM
357: HIGH SCHOOL Games/Update 8/20/09 10:42AM
358: clearer statements of HIGH SCHOOL Games 8/23/09 2:42AM
359: finite two person HIGH SCHOOL games 8/24/09 1:28PM
360: Finite Linear/Limited Memory Games 8/31/09 5:43PM
361: Finite Promise Games 9/2/09 7:04AM
362: Simplest Order Invariant Game 9/7/09 11:08AM
363: Greedy Function Games/Largest Cardinals 1
364: Anticipation Function Games/Largest Cardinals/Simplified 9/7/09
365: Free Reductions and Large Cardinals 1 9/24/09 1:06PM
366: Free Reductions and Large Cardinals/polished 9/28/09 2:19PM
367: Upper Shift Fixed Points and Large Cardinals 10/4/09 2:44PM
368: Upper Shift Fixed Point and Large Cardinals/correction 10/6/09
369. Fixed Points and Large Cardinals/restatement 10/29/09 2:23PM
370: Upper Shift Fixed Points, Sequences, Games, and Large Cardinals
11/19/09 12:14PM
371: Vector Reduction and Large Cardinals 11/21/09 1:34AM
372: Maximal Lower Chains, Vector Reduction, and Large Cardinals
11/26/09 5:05AM
373: Upper Shifts, Greedy Chains, Vector Reduction, and Large
Cardinals 12/7/09 9:17AM
374: Upper Shift Greedy Chain Games 12/12/09 5:56AM
375: Upper Shift Clique Games and Large Cardinals 1
376: The Upper Shift Greedy Clique Theorem, and Large Cardinals
12/24/09 2:23PM
377: The Polynomial Shift Theorem 12/25/09 2:39PM
378: Upper Shift Clique Sequences and Large Cardinals 12/25/09 2:41PM
379: Greedy Sets and Huge Cardinals 1
380: More Polynomial Shift Theorems 12/28/09 7:06AM
381: Trigonometric Shift Theorem 12/29/09 11:25AM
382: Upper Shift Greedy Cliques and Large Cardinals 12/30/09 2:51AM
383: Upper Shift Greedy Clique Sequences and Large Cardinals 1
12/30/09 3:25PM
384: THe Polynomial Shift Translation Theorem/CORRECTION 12/31/09
385: Shifts and Extreme Greedy Clique Sequences 1/1/10 7:35PM
386: Terrifically and Extremely Long Finite Sequences 1/1/10 7:35PM
387: Better Polynomial Shift Translation/typos 1/6/10 10:41PM
Harvey Friedman
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2010-January/014290.html","timestamp":"2014-04-20T01:11:06Z","content_type":null,"content_length":"18469","record_id":"<urn:uuid:b5d7b071-2208-4a5e-a4c4-92e06e78c552>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating Arc Flash Energy Levels
While not a major topic of consideration when designing and maintaining facilities in past years, code-enforcement bodies have become increasingly aware of the danger of arc flash incidents
associated with working on live electrical gear. As a result, facility managers and their consulting engineers are paying more attention to arc flash analysis. While the National Electrical Code
(NEC) requires
While not a major topic of consideration when designing and maintaining facilities in past years, code-enforcement bodies have become increasingly aware of the danger of arc flash incidents
associated with working on live electrical gear. As a result, facility managers and their consulting engineers are paying more attention to arc flash analysis.
While the National Electrical Code (NEC) requires only a generic label to warn of the possibility of arc flash danger on all equipment that is subject to arc flash hazards, NFPA's “Standard for
Electrical Safety in the Workplace” (NFPA 70E-2004) goes a step further, requiring that, “A flash hazard analysis be done in order to protect personnel from the possibility of being injured by an arc
flash.” This standard also requires that the analysis include a calculation of the flash protection boundary of each piece of equipment and determination of the required personal protective equipment
(PPE) at the appropriate working distance for each piece of equipment.
Both NFPA 70E and IEEE 1584, “Guide for Performing Arc-Flash Hazard Calculations,” prescribe their own methods of calculation for determining the available arc flash energy at a particular piece of
equipment. However, for both methods, typical working distances are given in Table 3 of IEEE 1584-2002.
Calculating arc flash energy
There are two distinct mathematical methods of calculating the available arc flash energy present at a specific piece of equipment — both of which are detailed in Annex D of NFPA 70E.
The first method, commonly referred to as the NFPA 70E equation for an arc flash in a cubic box, is:
E[MB] = 1038.7 D[B]^-1.4738 × t[A] [0.0093 F^2 0.3453 F + 5.9673]
Where E[MB] is the arc flash energy, D[B] is the working distance (from Table 3 of IEEE 1584), t[A] is the duration of the arc, and F is the short-circuit (or fault) current. This equation uses
inches for distance measurements, and gives results directly in calories per centimeter squared (cal/cm^2).
The cubic box is the commonly used equation (as opposed to the open air equation) because this equation most accurately approximates the behavior of an arc flash event within a standard piece of
electrical equipment (switchgear, fused switch, etc.). However, this equation is only valid over the limited range of conditions where the short-circuit current (F) is between 16kA and 50kA.
A broader range of short-circuit values is effectively modeled by using the IEEE 1584 set of equations:
log [E[n]] = k[1] + k[2] + 1.081 log [I[a]] + 0.0011 G and E = 4.184 C[f] × E[n] (t ÷ 0.2) × (610^x ÷ D^x)
Where E is the arc flash energy, E[n] is the normalized arc flash energy, I[a] is the arcing current, C[f] is the calculation factor, t is the duration of the arc, D is the distance from the arc to
the person (from IEEE 1584, Table 3), X is the distance X-factor, k[1] and k[2] are constants, and G is the conductor air gap.
A more rigorous method of calculating arc flash energy, this set of equations uses metric units, gives results in Joules per centimeter squared (J/cm^2), and is valid over the range of 0.208kV
through 15kV — and a short-circuit current of 0.7kA through 106kA.
Arc flash energy vs. fault current
In examining either of these equations, it becomes clear that the energy of an arc flash is dependent upon the length of the flash, the available short-circuit current (or bolted-fault current), and
inversely (and exponentially) related to one's distance from the origin of the flash.
Figure 1 (click here to see Fig. 1) shows the relationship between arc flash energy and bolted fault capacity. Note that this relationship is quite linear for the IEEE 1584 equations and somewhat
logarithmic for the NFPA 70E equations. Further examination also indicates that over the range of valid values (above 16kA for NFPA 70E), the IEEE 1584 method is calculated more aggressively, and the
NFPA 70E method is more conservative.
Figure 2 shows the relationship between arc flash energy and the duration of the arcing event, which is linear for both methods of calculation. However, it should be noted that the IEEE 1584 method
is once again more aggressive than its NFPA 70E counterpart.
It's important to note that when using either set of equations, the “working distance” is considered to be fixed, based upon the voltage and type of equipment on which the technician will be working.
Therefore, the dependent variables will be the available short-circuit current and the duration of the arc. The short-circuit current will be determined by the configuration of the electrical system,
and the fault duration will be determined by the trip properties and settings of the upstream protective device. However, it's usually assumed that the outer limit for the arcing time is no more than
2 seconds. Although this is not a hard and fast rule, it accounts for the likelihood that the arcing material in an arcing field will likely be either burned off or expelled by the force of the
blast. In any case, this would extinguish the arc event.
Figure 3 shows typical trip characteristics for a 400A, molded-case circuit breaker and a 2,000A, LSI solid-state breaker. Inspection of these trip characteristics shows that for certain regions of
the curve, where the short-time region meets the instantaneous region, for example, there are great swings in the operating time of the circuit breakers. This point is at approximately 16kA for the
2,000A breaker and 2.8kA for the 400A breaker.
Figure 4 (click here to see Fig. 4) shows the calculated arc flash energy for the 2,000A breaker, using both IEEE 1584 and NFPA 70E calculation methods. As you can see from the graph, the available
arc flash energy peaks at two points: 7.5kA and 16kA. As you might expect, these points coordinate with the “break” points of the trip curve where there is transition from the long-time to short-time
trip region and at the transition from the instantaneous to short-time trip region. These peaks are not replicated on the NFPA 70E curve because that equation is not valid in the associated trip
range for the 2,000A breaker. However, where larger currents are used, these peaks will occur using the NFPA 70E equation as well.
Figure 5 (click here to see Fig. 5) shows the calculated arc flash energy for the 400A breaker, using both IEEE 1584 and NFPA 70E calculation methods. For this size of breaker, the NFPA 70E equation
results are nearly useless, as the NFPA 70E equation becomes valid at 16kA (40 times the trip rating of the breaker). However, the IEEE 1584 results indicate a large spike at 3kA, which is in the
area of transition between the instantaneous and short-time trip regions.
Based on these figures, it's clear that calculating arc flash energy does not necessarily mesh with the “conservative” approach of calculating the worst possible case of short-circuit capacity (i.e.,
the highest value of short-circuit capacity). In many cases, a lower short-circuit capacity may produce a much larger arc flash energy level. For this reason, IEEE recommends that the arc flash
energy be calculated both at the calculated arcing current and again at 85% of the arcing current (also known as the second-arcing current) — and that the higher arc flash energy value be used. Note
that the arcing current is based upon the bolted-fault available short-circuit capacity, and is calculated for systems below 1kV as:
log [I[a]] = K + 0.662 log [I[bf]] + 0.0966 V + 0.000526 G + 0.5588 V log [I[bf]] 0.00304 G log [I[bf]] Or for systems over 1kV as: log [I[a]] = 0.00402 + 0.983 log [I[bf]]
Where K is a constant, V is the voltage of the system, G is the air gap between the conductors, and I[bf] is the bolted-fault available short-circuit capacity.
After calculating the arc flash energy level, an appropriate selection of PPE is made using Table130.7(C)(11) from Table NFPA 70E.
What are the “tables” in NFPA 70E?
In addition to using the calculated methods for determining the arc flash energy and required personal protective equipment (PPE) described above, there is a third method of determining what type of
protection to use when working on live equipment. This method is the direct application of Table 130.7(C)(9)(a) from NFPA 70E. This table is intended to be a conservative application of the
principles of Annex D, without the need for performing a complete arc flash evaluation and study. However, this table is based on several assumptions:
1. For panelboards and switchboards rated 600V and less, the available fault current is 25kA, and the clearing time of the protective device is 0.03 seconds (two cycles);
2. For 600V class motor control centers and other such equipment, the available fault current is 65kA, and the clearing time of the protective device is 0.03 seconds (two cycles);
3. For insertion or removal of starter buckets in motor control centers as noted above, the available fault current is 65kA, and the clearing time of the protective device is 0.33 seconds (20
cycles); and/or
4. For 600V class switchgear containing fused switches or power circuit breakers, the available fault current is 65kA, and the clearing time of the protective device is 1 second (60 cycles).
Although this table is a useful guide where there is no other information available, it is clear that the assumptions listed above are not necessarily applicable to the systems you may be analyzing.
As such, they will often lead to overclassification of protective gear for work on a specific piece of equipment. However, it is of even more concern that there is a potential for underclassification
of the protective gear recommended for work on a specific piece of equipment.
In a recent arc flash study of a facility that encompassed more than 700 pieces of equipment, it was found that in 7% of cases, the application of Table 130.7(C)(9)(a) without an appropriate system
study, would lead to the use of insufficient protective gear when working on live equipment. In addition, it was found that the PPE requirements of Table 130.7(C)(9)(a) were in excess of those
determined through calculation 62% of the time. Therefore, although the tables are of benefit when working at a location where there is no existing arc flash study, they are no substitute for a
complete and up-to-date arc flash study.
A useful method
When computer-based arc flash evaluations are run, the calculations of the software should not be blindly accepted. In many cases, the NFPA 70E equations are applied outside of their applicable
range, and the results are given with only a footnote to indicate that the calculation is “out of range” (i.e., invalid). Therefore, it is important that the software output be thoroughly reviewed
and applied as is appropriate to the necessary systems.
Also, it is important that both the NFPA 70E and IEEE 1584 equations be used correctly. By using both sets of equations, a conservative set of values can be assembled. It is good practice to
calculate the flash energy using both methods, and to use the valid value (where only one set of equations is valid) or to use the more conservative value (where both sets of equations are valid).
Some facility managers prefer to compare the NFPA 70E and/or IEEE 1584 calculated values to Table 130.7(C)(9)(a) from NFPA 70E and to use the most conservative flash protection equipment
recommendation. However, this is not a requirement of NFPA 70E, and there is debate as to whether overprotection in the form of flash suits is appropriate, as it is cumbersome for the worker.
A consolidated procedure for performing an arc flash analysis
The following steps outline a suggested procedure for performing an arc flash analysis of a facility. Although this is not the only possible procedure, it does cover a wide range of conditions, and
takes a conservative approach to the application of both IEEE 1584 and NFPA 70E equations.
1. Assemble existing as-built documentation — Start by collecting all of the relevant, existing as-built documentation. Include power floor plans showing locations of electrical equipment, and
single-line diagrams indicating the overcurrent protective devices and cable sizes for all relevant areas.
2. Field verify existing conditions — Because conditions may have changed since the original installation, it's critical that the existing conditions be field verified to ensure that the arc flash
analysis is performed using accurate breaker settings and field conditions.
3. Obtain the available utility fault current (range) and X/R ratio — The serving utility should be able to provide this information. Note that the utility normally guarantees a range of
short-circuit current, and that the highest arc flash energy value may occur anywhere within the range of available short-circuit current values.
4. Perform a short-circuit analysis — The short-circuit analysis must be performed to obtain the available bolted-fault and arc fault currents at each point in the system.
5. Perform an NFPA 70E arc flash analysis — Perform the arc flash analysis using the NFPA 70E equations and parameters.
6. Perform an IEEE 1584 arc flash analysis — Perform the arc flash analysis using the IEEE 1584 equations and parameters.
7. Repeat steps 4 through 6 for the entire range of possible values — The short-circuit and arc flash studies must be repeated over the entire range of valid utility fault capacities. The studies
may be performed in a range of fault increments to ensure that the highest arc flash energy value is captured at each component. For example, a system that has a fault value ranging from 3,000A
through 12,000A may be run in 1,000A increments.
8. Repeat steps 4 through 7 for all likely operating conditions — The report must be run for all likely operating conditions for the facility, including normal operation, load shedding modes,
parallel operation, tie breakers open and closed, and operating on standby power. It is important that all configurations be fully evaluated to ensure that the worst-case scenario is developed
for each piece of equipment.
9. Eliminate invalid data — Export the arc flash reports to spreadsheet software, and delete invalid values because they do not fall within the range of valid conditions for the equation set used
(i.e. NFPA 70E values calculated from short-circuit currents that are not between 16kA and 50kA as required).
10. Sort the worst-case values for each component or bus — Using a spreadsheet, the remaining valid values may be sorted and the worst-case extracted for each component within the system. This value
will be considered the available arc flash energy at its associated point in the system.
11. Assemble the comprehensive report — The final report should indicate the available short-circuit current used at each component, the available arc flash energy, the category of PPE required to
safely work on the equipment, and which set of equations was used to determine the available energy. The report should also include the working distance used in the calculation, and the flash
protection boundary (generally the threshold where the available energy exceeds 1.2 cal/cm2). In addition, any information that your specific client requires (duration of arc, closing time of
breaker, equipment required for safe operation, etc.) should be included in the report.
All of this information, along with the building's single-line and short-circuit capacity diagrams, should be collected in a coherent fashion and presented to the client in a binder for their use.
This method is straightforward, and keeps the report's focus on safety. Most importantly, it reminds the engineer to evaluate his equations independently, to ensure that the report contains
information that is accurate, valid, and useful. With proper preparation, arc flash analysis is a tool that can prevent needless accidents and injuries in the facility for which it is prepared — the
intent of NFPA 70E's requirement.
Medich is a senior electrical project engineer at Ballinger in Philadelphia.
Discuss this Article 3
Gordan Kinkela (not verified)
on Jul 12, 2012
My first contact with Calculating Arc Flash Energy Levels. Very interesting and useful.
• Login or register to post comments
Findling Associates (not verified)
on Aug 13, 2013
This article was also my first experience in calculating arc flash energy levels - excellent presentation.
• Login or register to post comments
on Jan 28, 2014
Very thorough article. The only activity I would add is to verify the condition of maintenance of the equipment and confirm that the equipment is installed in conformance with manufacturers
requirements and applicable codes and standards. The study is invalid if the OCPD does not function as intended. We have encountered CT's installed with incorrect polarity and some which were open
circuit, etc.
• Login or register to post comments | {"url":"http://ecmweb.com/content/calculating-arc-flash-energy-levels","timestamp":"2014-04-19T10:22:55Z","content_type":null,"content_length":"143397","record_id":"<urn:uuid:45b41510-0331-42d8-a261-7190524d116f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00390-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Posts by Liz
Total # Posts: 992
free-body diagram help
A 20.0 kg block is being pushed forward on a flat surface with a force of magnitude 45.0 N against a frictional force of 13.0 N To draw this free-body diagram, would I only have the forces mentioned
in the question? or would i include the -9.8m/s^2 force of gravity? Also, how ...
Energy question
Your friend comes across a good deal to purchase a gold ring. She asks you for advice for you to test the ring. The ring has a mass of 4.54 g. When you heat the ring with 94.8 J of energy, its
temperature rises from 23.0 degrees celsius to 47.5 degrees celsius. Would you advis...
Energy question
how could you determine the efficiency with which the mechanical energy of a pendulum is conserved?
word problem
If equal amounts of heat are added to equal masses of silver and copper, both at the same initial temperature, which will reach the higher final temperature? Explain your answer. The specific heat
capacity of silver is 0.235 J/g degree Celsius and that of copper is 0.385 J/g d...
Energy question
thanks a lot! you gave me some good ideas! and i agree with you about the fluorescent light bulbs. They don't last as long as the box says
Energy question
How would a $10.00 compact fluorescent light bulb (15 W) be an overall money saver compared to an incandescent light bulb (60 W) that costs $0.50?
Energy multiple choice question
Which of the following samples has the highest specific heat capacity? a) A 100 kg sample of lead requires 13 000 J to increase its temperature by 1 K b) A 50 g sample of water releases 1050 J as it
cools by 5 degree Celsius c) A 2 kg sample of copper receives 2340 J of therma...
Physics Energy
How would a $10.00 compact fluorescent light bulb (15 W) be an overall money saver compared to an incandescent light bulb (60 W) that costs $0.50?
Show that the acceleration is 7.5 m/s2 for a ball that starts from rest and rolls down a ramp and gains a speed of 30 m/s in 4 s
Sodium metal adopts a body-centered cubic structure with a density of 0.97g/cm3. A. Use this information and Avogadro's number to estimate the atomic radius of sodium. B. If it didn't react so
vigorously, sodium could float on water. Use the answer from part a to estim...
May you please edit this essay? Holden Caulfield in Catcher in the Rye , seems to be obsessed with mortality and youthful innocence, both of which are embodied in James Castle and Allie Caulfield.
Holden demonstrates these infatuations in his conversation with Phoeb...
how to calculate the amount of heat in calories absorbed when 50.0 g of water at 20 degrees C spreads over your skin and warms the body temperature, 37 degrees C
A ball is fastened to one end of a 30cm string, and the other end is held fixed to a support. the ball whirls in a horizontal circle. Find the speed of the ball in its circular path if the string
makes an angle of 30 degrees to the vertical.
A room contains 48 kg of air. How many KWh of energy are necessary to heat the air in the house from 7oC to 28oC? The heat capacity of air is 1.03 J/goC
Differential Equations
Solve dy/dx = y/(2y-x) using rational substitution (using v=y/x) with the initial condition y(0)=1. I solved this problem and found C (the constant) to be undefined. That is not the correct answer.
Ohhh I see what I did wrong! Thanks a lot again!
I was wondering if my answers are correct. 1. A 50kg person is taking a ride on an elevator travelling up at a steady speed of 2.5m/s. Find the time fo the elevator trip if the elevator does 4900 J
of work on the person. My answer: First I used the formula f=mg to find the for...
thanks a lot!
I was wondering if my answers are correct. Thanks in advance. 1. A boy pushes down on a car rolling horizontally down the road. The boy pushes down vertically with 10N of force as the car rolls 3.0m
horizontally. What work is done? my answer: 30 N m (or would it be -30N...
Linear Algebra!
Write in parametric form the equation of the line that is perpendicular to r=(2i + 4j) + (i - 2j)t and goes through (1,0)
write and row reduce the augmented matrix to find the general solution: 2x - z = 2 6x + 5y + 3z = 7 2x - y = 4
write and row reduce the augmented matrix to find the general solution: x - 2y + 13 = 0 y - 4x = 17
math-advanced functions
on my exam review, i have this question for composition of functions Given f(x)=3x^2+x-1, g(x)=2cos(x), determine the values of x when f(g(x))=1 for 0≤x≤2π.
Determine the equation of g(x) that results from translating the function g(x)f(x)=x^2 +3 upward 10 units 1) g(x)=(x+13)^2 2) g(x)= (x+15)^2 3) g(x)= (x+4)^2 -11 4) g(x)= (x+4)^2 +11
Math. I really don't get this question
three circles are manually tangent externally. their centers form a triangle whose sides are of length 8, 9, 13. find the total area of the three circles
Fundamental Theorm of Calculus
Find the average value of f(x)=3x^2-2 x on the interval [3,6]
Fundamental Theorm of Calculus
Use a definite integral to find area of the region under the curve y=7-4x^2 and above the x-axis. Thanks in advance!
A test tube in a centrifuge is pivoted so that it swings out horizontally as the machine builds up speed. If the bottom of the tube is 165.0 mm from the central spin axis, and if the machine hits
53500 rev/min, what would be the centripetal force exerted on a giant amoeba of m...
A bicyclist traveling at 8 m/s rides around an unbanked curve. The coefficient of friction (is this static or kinetic friction?) between the tires and the road is 0.42. What is the radius of the
shortest turn that the bicyclist can safely make?
what is the solution of 3x^+27=0
prove or disprove cos(x-y)=cos(x)-cos(y)
You place a mirror on the ground 6 feet from the lamppost. you move back 3 feet and see the top of the lamppost in the mirror. What is the height of the lamppost?
American Government
Legislation whose tangible benefits are targeted solely at a particular legislator's
Calculus-Aproximate Areas
Estimate the area under the graph of f(x)=sin(pix) from x=0 to x=1 using the areas of 3 rectangles of equal width, with heights of the rectangles determined by the height of the curve a a) left
endpoint: b) right endpoint:
Calculus-Aproximate Areas
I tried that, and it was incorrect
Calculus-Aproximate Areas
Estimate the area under the graph of f(x)=sin(pix) from x=0 to x=1 using the areas of 3 rectangles of equal width, with heights of the rectangles determined by the height of the curve at a) left
endpoint: b) right endpoint:
Calculus-Approximate areas
Estimate the area under the graph of f(x)= x^2 + 3 x from x=1 to x=10 using the areas of 3 rectangles of equal width, with heights of the rectangles determined by the height of the curve at a) left
endpoints: b) right endpoints:
Calculus-Newton Method Approximation
Use Newton's method to approximate the positive value of x which satisfies x=2.3cosx Let x0=1 be the initial approximation. Find the next two approximations, x_1 and x2, to four decimal places each.
Calculus-Applied Optimization Problem
The manager of a large apartment complex knows from experience that 100 units will be occupied if the rent is 425 dollars per month. A market survey suggests that, on average, one additional unit
will remain vacant for each 9 dollar increase in rent. Similarly, one additional ...
How do I work out:if life expectancy of male is 78.9 and female is 83.6. Life expectancy is % lower than female. What is formula please
A research balloon of total mass 225 kg is descending vertically with a downward acceleration of 1.4 m/s2. How much ballast must be thrown from the car to give the balloon an upward acceleration
equal to 3.6 m/s2, presuming that the upward lift of the balloon does not change.
relative maxima and relative minima 2x^2+ 4000/x +10
cinquain poem
thanks for your help and advice
cinquain poem
For social studies we had to make our own 4 topic of cinquain poems. I asked some people and not many knew this type of poem, but one person said it all seem good except for the last lines. Do you
think I did the format corret? Thanks for your help 1.Homesteaders Brave, Patien...
-10 + 6g = 110
I have fewer ones than tens. the value of my tens is 20 what two numbers can I be
In a vacuum, two particles have charges of q1 and q2, where q1 = +3.8C. They are separated by a distance of 0.37 m, and particle 1 experiences an attractive force of 4.7 N. What is the value of q2,
with its sign? Two spherical objects are separated by a distance of 1.45 ×...
political culture refers to
Calculus 2
A tank is full of water. Find the work W required to pump the water out of the spout. (Use 9.8 for g and 3.14 for π. If you enter your answer in scientific notation, round the decimal value to two
decimal places. Use equivalent rounding if you do not enter your answer in ...
Calculus 2
Use the method of cylindrical shells to find the volume V generated by rotating the region bounded by the given curves about x = 4. y = 3 x^4 y = 0 x = 2
Franklin High
4x - y = 2, 9x - 2y = 8 find solution by substitution method
Calculus help!
Does the limit of xy/(x^2 + y^2) exist as (x,y)--->(0,0)? Why or why not??
Calculus help!
Three forces with magnitudes 75 pound, 100 lb, and 125 pound act on an object at angles of 30 degrees, -45 degrees, and 120 degrees respectively. Find the direction and magnitude of the resultant
Describe the surface and give traces: x = y^2 - 4z^2
Calculus-PLZ help!
Given u=3i-2j+k,v=2i-4j-3k, w=-i+2j+2k, 1 Find a unit vector normal to the plane containing v and w. 2 Find the volume of the parallelepiped formed by u, v, and w. 3 Are any of these vectors
parallel? Orthogonal? Why or why not?
Find the center (h,k) and radius
Find the center (h,k) and radius r of the circle and then use these to (a) graph the circle and (b) find the intercepts, if any. 3x^2+36x+3y^2=0
Describe the following surface and give traces if available. x^(2)-4z^(2)=4
You have a ten question multiple choice test in your next class and you did not study at all. Each question has four choices and only one correct answer. You are going to guess on each question. What
is the probability that you score at least a 20% on the test.
245 = .25x not sure how to do this since I have only learned with ratios
Chemistry II
Arrange the following 0.10M solutions in order of increasing pH: a. Nacl b. NH4CL C. HCL D. NaCH3CO2 e. KOH
College chemistry
The pH of blood is 7.4 and that of saliva is 6.4. what is the difference in pH between these two solutions? How much more Hydronium ion (H3O+) is in the saliva than in the blood. I need a detailed
answer with the formulas if you can. I have a hard time with chemestry
Thank you :) this helped a lot!
Can you explain it better we JUST started this section and im having a hard time with knowing what to put where when it comes to the solutions. what is the acid and what is the base
The pK3 of formic acid is 3.75 A)what is the pH of a buffer in which formic acid and sodium formate have equimolar concentration? B)what is the pH of a solution in which the sodium formate is 10M and
the formic acid is 1M?
What is the probability that 10 of 100 apples will be < 130 g? mean=173 std deviation=34?
For an Indian Youth Environment camp essay why should someone be chosen to go?
calculate the yield of CO2 if 30.g of C2H6 reacts with 14.0g of O2. Show work
Find the arc length of the curve described by the parametric equation over the given interval: x=t^(2) + 1 y=2t - 3 0<t<1
Calculus help!
On what intervals of t is the curve described by the given parametric equation concave up? Concave down? x=t^2; y=t^(3) + 3t I am a bit confused on how to solve this...any help/explanations are
welcome!! (& greatly appreciated!)
Find the arc length of the curve described by the parametric equation over the given interval: x=t^(2) + 1 y=2t - 3 --> 0<t<1
Convert the polar equation to rectangular coordinates: r = 2-cos theta ...and find all points of vertical and horizontal tangency.
Find the arc length of the curve described by the parametric equation over the given interval: x=t^(2) + 1 y=2t - 3 ...0 < t < 1
On what intervals of t is the curve described by the given parametric equation concave up? Concave down? x=t^2; y=t^(3) + 3t I am a bit confused on how to solve this...any help/explanations are
welcome!! (& greatly appreciated!)
Convert the curve to an equation in rectangular coordinates: x=(t+2); y=-2sqrt(t)
Parametric Equations
Find the length of the curve over the given interval: x=t+1 y=ln cos(t) for t=0 ---> t=pi/4
Find the length of the curve over the given interval: x=t+1 y=ln cos(t) for t=0 ---> t=pi/4
Convert to a rectangular equation by eliminating the parameter: x=sin(theta) y=(cos theta)^2
The Boeing 757-200 ER airliner carries 200 passengers and has doors with a height of 72 inches. Heights of men are normally distributed with a mean of 69 inches and a standard deviation of 2.8
inches. b) assume that half of the 200 passengers are men, what doorway height satis...
Math- Ms Sue can you help
so the volleyball team had a standard deviation 2.39 and a mean of 68.2 ( I messed up when giving you the numbers, there should have been a comma between 69 &70 not a period-giving 10 numbers) The
dance team I got 3.05 standard deviation and a mean of 63 So the should the answ...
Math- Ms Sue can you help
so the volleyball team had a standard deviation 2.39 and a mean of 68.2 ( I messed up when giving you the numbers, there should have been a comma between 69 &70 not a period-giving 10 numbers) The
dance team I got 3.05 standard deviation and a mean of 63 So the should the answ...
I didn't do the standard deviation for each one because I didn't know how, can you help?, if not do you have something ( a link to forward me to) so I can review standard deviations. I do appreciate
your help.
Math- Ms Sue can you help
Here is the height, in inches, of 10 randomly selected members of the girls' dance team and 10 randomly selected of the girls' volleyball team Based on the interquartile ranges of the two sets of
data, which is a reasonable conclusion concerning heights of the players ...
math- Reiny can you help
can you help with the above question
math- please help
Here is the height, in inches, of 10 randomly selected members of the girls' dance team and 10 randomly selected of the girls' volleyball team Based on the interquartile ranges of the two sets of
data, which is a reasonable conclusion concerning heights of the players...
Here is the height, in inches, of 10 randomly selected members of the girls' dance team and 10 randomly selected of the girls' volleyball team Based on the interquartile ranges of the two sets of
data, which is a reasonable conclusion concerning heights of the players ...
7th grade math
A restaurant donates 2 cans of soup to charity for every 9 bowls of soup it sells. The number of cans of soup donated by the restaurant is proportional to the total number of bowls of soup sold.
Which equation represents the relationship between b, the number of bowls of soup ...
Public Speaking
For communication to take place, there has to be: A. transmission of the message. B. medium. C. sharing of meaning. D. absence of noise. is it A is incorrect cause that's wat i had i think is B Which
is NOT a benefit of studying public speaking? A. Creates good first impre...
Public Speaking
Which is NOT a benefit of studying public speaking? A. Creates good first impression on others B. Communicates competence C. Proves our expertise D. Develops our ability to communicate ideas and
message clearly and with impact correct ans: c
does the sum 3/(2^(n)-1) converge or diverge? n=1--->infinity
Is the series 2/ln(n) convergent or divergent? n=3 to n=infinity
Is the series 1/(n(n+1)) convergent? n=2 to n=infinity
Find the base of a triangle whose area is 60in squared and whose height is 6 inches I did 1/2 bh=A 60=3b so b=60/3 b=20 then check so A=1/2bh A=1/2(20)6 A=1/2(120) A=60in squared is this correct?
world history
Any ideas to build a hippodrome for a school project
Physics...PLZ!! =)
A conducting rod whose length is 25 cm is placed on a U-shaped metal wire that has a resistance R of 8 Ù. The wire and the rod are in the plane of the paper. A constant magnetic field of strength 0.4
T is applied perpendicular and into the paper. An applied force moves ...
A 200-loop coil of cross sectional area 8.5 lies in the plane of the paper. Directed out of the plane of the paper is a magnetic field of 0.06 T. The field out of the paper decreases to 0.02 T in 12
milliseconds. (a) What is the magnitude of the change in magnetic flux enclos...
A 50-cm wire placed in an East-West direction is moved horizontally to the North with a speed of 2 m/s. The Horizontal and Vertical components of the earth's magnetic field at that location are 25 ìT
and 50ìT respectively. What is the emf between the ends of ...
If 5.30mL of 0.1000 M NaOH solution is needed to just neutralize excess acid after 20.00 mL of 0.1000 M HCl was added to 1.00 g of an antacid, how many moles of acid can the antacid counteract per
You must be doing the same worksheet we are doing right now. :) Why can't I do first grade math?
Ha ha ha Steve!! That's about as far as I got too! It was supposed to end up $400. No idea how though.
Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Liz","timestamp":"2014-04-16T21:54:25Z","content_type":null,"content_length":"30065","record_id":"<urn:uuid:93d7e3d2-ae48-4630-9903-be44d1df4b72>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
On Cayley-transform methods for the discretization of Lie-group equations
On Cayley-transform methods for the discretization of Lie-group equations (1999)
Download Links
Other Repositories/Bibliography
by Arieh Iserles
Venue: FOUND. COMPUT. MATH
Citations: 16 - 4 self
author = {Arieh Iserles},
title = {On Cayley-transform methods for the discretization of Lie-group equations},
institution = {FOUND. COMPUT. MATH},
year = {1999}
In this paper we develop in a systematic manner the theory of time-stepping methods based on the Cayley transform. Such methods can be applied to discretise differential equations that evolve in some
Lie groups, in particular in the orthogonal group and the symplectic group. Unlike many other Lie-group solvers, they do not require the evaluation of matrix exponentials. Similarly to the theory of
Magnus expansions in (Iserles & Nørsett 1999), we identify terms in a Cayley expansion with rooted trees, which can be constructed recursively. Each such term is an integral over a polytope but all
such integrals can be evaluated to high order by using special quadrature formulae similar to the construction in (Iserles & Nrsett 1999). Truncated Cayley expansions (with exact integrals) need not
be time-symmetric, hence the method does not display the usual advantages associated with time symmetry, e.g. even order of approximation. However, time symmetry (with its attendant benefits) is
attained when exact integrals are replaced by certain quadrature formulae. 1 Quadratic Lie groups The theme of this paper is geometric integration: numerical discretization of differential equations
that respects their underlying geometry. It is increasingly recognised by numerical analysts and users of computational methods alike that geometric integration often represents a highly efficient
and precise means toward obtaining a numerical solution, whilst retaining important qualitative attributes of the differential system (Budd & Iserles 1999). Large number of differential equations
with a wide range of practical applications evolve on Lie groups G = fA 2 GLn (R) : AJA where GLn (R) is the group of all n \Theta n nonsingular real matrices and J 2 GLn (R) is given. (We refer the
reader to (Cart...
341 Lie groups, Lie algebras, and their representations - Varadarajan - 1974
167 Special Functions - Rainville - 1960
152 On the exponential solution of differential equations for a linear operator - Magnus - 1954
132 Equivalence, Invariants and Symmetry - Olver - 1995
71 Runge-Kutta Methods on Lie Groups - Munthe-Kaas - 1996
52 High order Runge–Kutta methods on manifolds - Munthe-Kaas - 1999
51 Computations in a free Lie algebra - Munthe-Kaas, Owren - 1999
49 Numerical solution of isospectral flows - Calvo, Iserles, et al. - 1997
44 Vleck, Unitary integrators and applications to continuous orthonormalization techniques - Dieci, Russell, et al. - 1994
43 Conserving algorithms for the dynamics of Hamiltonian systems of Lie groups - Lewis, Simo - 1994
33 Geometric integration: numerical solution of differential equations on manifolds - Budd, Iserles - 1999
32 Approximating the exponential from a Lie algebra to a Lie group - Celledoni, Iserles - 2000
29 Collocation and relaxed collocation for the Fer and the Magnus expansions - Zanna - 1999
27 Die symbolische Exponentialformel in der Gruppentheorie. Berichte des Sächsischen Akad. der Wissensch. 58 - Hausdorff - 1906
18 The Cayley transform in the numerical solution of unitary differential systems - Diele, Lopez, et al. - 1998
15 A list of matrix flows with applications - Chu - 1990
12 Time-symmetry and high-order Magnus methods - ISERLES, NØRSETT, et al.
11 R'esolution del l'equation matricielle U = pU par produit infini d'exponentielles matricielles - Fer - 1958
11 On the solution of linear differential equations in Lie groups - Nrsett - 1999
9 On the construction of geometric integrators in the RKMK class - Eng - 1998
8 Quadrature methods based on the Cayley transform - Marthinsen, Owren
4 2000), ‘On the dimension of certain graded Lie algebras arising in geometric integration of differential equations - Iserles, Zanna
2 Volume-preserving algorithmms for source-free dynamical systems - Kang - 1995 | {"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6377","timestamp":"2014-04-25T03:59:59Z","content_type":null,"content_length":"30889","record_id":"<urn:uuid:5bb3dc23-6a7f-4bd8-b7eb-17931aecd73c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: February 2003 [00427]
[Date Index] [Thread Index] [Author Index]
Re: integrat trig radical
• To: mathgroup at smc.vnet.net
• Subject: [mg39593] Re: integrat trig radical
• From: "David W. Cantrell" <DWCantrell at sigmaxi.org>
• Date: Tue, 25 Feb 2003 02:56:24 -0500 (EST)
• References: <b3ck48$7vs$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Friedrich Laher <mathefritz at schmieder-laher.de> wrote:
> is there any mathematical reason for mathematica, not 1st internally
> using
> Cos[x/2] = Sqrt[(1 + Cos[x])/2]
> before
> integrating Sqrt[1 + Cos[x]] ?
Sure! Cos[x/2] = Sqrt[(1 + Cos[x])/2] is false for some values of x. For
example, for Pi < x < 3*Pi, Cos[x/2] is negative while Sqrt[(1 + Cos[x])/2]
is positive.
> It
> even refuses to answer True
> to
> Integrate[Sqrt[1 + Cos[x]],x] == 2*Sqrt[2]*Sin[x/2]
I'm glad it refuses!
But your problem raises a more interesting question:
Since Sqrt[1 + Cos[x]] is continuous for all x, it must have a continuous
antiderivative. Yet the antiderivative provided by Mathematica is not
continuous everywhere. So why doesn't Mathematica give us a continuous
antiderivative here? Well, it should be noted that there are cases when,
although we know that a continuous antiderivative exists, we also know that
it cannot be written in closed form in terms of familiar functions.
HOWEVER, this problem is not such a case. Here is my answer for a
continuous antiderivative for Sqrt[1 + Cos[x]]:
(-1)^Floor[(x+Pi)/(2*Pi)]*2*Sqrt[2]*Sin[x/2] + 4*Sqrt[2]*Floor[(x+Pi)/(2*Pi)]
David Cantrell | {"url":"http://forums.wolfram.com/mathgroup/archive/2003/Feb/msg00427.html","timestamp":"2014-04-18T13:11:17Z","content_type":null,"content_length":"35379","record_id":"<urn:uuid:8cfc63b7-629d-497e-bd48-10ab381d2ba4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Violin Plot with Matplotlib
One of the things I sorely missed from matplotlib for a very long time, was a
violin plot
implementation. Many a time, I thought about implementing one myself, but never found the time.
Today, browsing through Matplotlib's documentation, I found the recently added
function. Finally it seemed to have become a piece of cake to implement a violin plot. I Googled for violin plot and Python, to no avail. So I decided to write it myself.
Violin Plots are very similar to Box and whiskers plots, however they offer a more detailed view of a dataset's variability. It's frequently a good idea to combine them on the same plot. So here is
what I came up with:
# -*- coding: utf-8 -*-
from matplotlib.pyplot import figure, show
from scipy.stats import gaussian_kde
from numpy.random import normal
from numpy import arange
def violin_plot(ax,data,pos, bp=False):
create violin plots on an axis
dist = max(pos)-min(pos)
w = min(0.15*max(dist,1.0),0.5)
for d,p in zip(data,pos):
k = gaussian_kde(d) #calculates the kernel density
m = k.dataset.min() #lower bound of violin
M = k.dataset.max() #upper bound of violin
x = arange(m,M,(M-m)/100.) # support for violin
v = k.evaluate(x) #violin profile (density curve)
v = v/v.max()*w #scaling the violin to the available space
if bp:
if __name__=="__main__":
pos = range(5)
data = [normal(size=100) for i in pos]
ax = fig.add_subplot(111)
The next step now is to contribute this plot to Matplotlib, but before I do that, I'd like to get some comments on this particular implementation. Moreover, I don't know if it'd be acceptable for
Matplotlib to add Scipy as a dependency. But since re-implementing kernel density estimation for a simple plot would be overkill, maybe the destiny of this implementation will be to live on as an
example for others to adapt and use.
WARNING: This code requires maplotlib 0.99 (maybe 0.99.1rc1) to work because of the
8 comments:
I guess you can import scipy from the violin function, and/or add an optional parameter that would accept the kde.
Definitely post a patch to mpl please.
Cool! I hadn't heard of violin plots before. A beautiful way to summarize some data.
@Ondrej: I will certainly post a patch to MPL, maybe this weekend. maybe I can make the import within a try/except clause and warn the user he needs scipy in order to make violin plots.
Thanks Flavio! My group loves violin plots, but had been relying on R. We'll be sure to use this in the future.
Hi Flavio,
I am very excited to find your post on violin plots, as I am trying to integrate them into my work. I tried to run your post script and get the following error...
"File "numpy.pxd", line 30, in scipy.stats.vonmises_cython (scipy\stats\vonmises_cython.c:2939)
ValueError: numpy.dtype does not appear to be the correct type object"
I am using NumPy version 1.4.0rc1.
@Lou: I think this is an issue with your numpy or scipy installation. The script work nice for me.
Thanks for your quick reply. What version of numpy and scipy are you using? Thanks again, Lou
Is thera any news about what is happening with the violin inclusion | {"url":"http://pyinsci.blogspot.no/2009/09/violin-plot-with-matplotlib.html","timestamp":"2014-04-16T07:51:33Z","content_type":null,"content_length":"106448","record_id":"<urn:uuid:0dcb1c71-3c0e-4a1b-a4e7-e675bd00d432>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
Perturbation of Cholesky decomposition for matrix inversion
up vote 0 down vote favorite
I am looking for a computationally cheap way to compute $x$ such that $$(L L^T + \mu^2 I)x = y$$ where $L \in \mathbb{R}^{n \times n}$ is a lower triangular definite positive matrix (with some very
small eigenvalues), $y \in \mathbb{R}^n$ and $\mu \in \mathbb{R}$ are known. If it is necessary, I can assume that $$\mu \ll 1$$ But $\mu^2$ is larger than the smallest eigenvalue of $LL^T$.
Basically, I would like to make the most of my knowledge of the Cholesky decomposition $L L^T$. Eventually, I hope to be able to compute $x$ in $\mathcal{O}(n^2)$. Approximate approaches are also
I have seen here that this does not seem to be doable in a more general situation, but I hope the smallness of $\mu$ may help...
Any idea, reference or warning? Thanks for your help.
linear-algebra matrix-theory numerical-linear-algebra matrix-inverse
add comment
1 Answer
active oldest votes
Let $A=LL^T,\lambda=\mu^2,f:X\rightarrow X^{-1}$. That follows is an approximate approach in $O(n^2)$ that is valid only if $\lambda$ is small with respect to $\inf(spectrum(A))$. $Df_A(H)=
-A^{-1}HA^{-1},D^2f_A(H,K)=A^{-1}KA^{-1}HA^{-1}+A^{-1}HA^{-1}KA^{-1}$. Thus, according to the Taylor formula, $(A+\lambda I)^{-1}\approx A^{-1}-\lambda A^{-2}+\lambda^2 A^{-3}$. Do not
up vote 1 calculate $A^{-2},A^{-3}$ but solve $LL^Tx_0=y,LL^TLL^Tx_1=y,LL^TLL^TLL^Tx_2=y$, that is $LL^Tx_0=y,LL^Tx_1=x_0,LL^Tx_2=x_1$.
down vote
Unfortunately, the role of $\lambda = \mu^2$ is to regularize the problem: $\lambda$ is larger (resp. smaller) than the smallest (resp. largest) eigenvalue of $A$. So I don't think we can
apply this Taylor expansion... Thanks for your help, I edited the question accordingly. – Mathieu Galtier Jan 8 at 11:46
Hi Mathieu, to regularize the problem, addind a scalar matrix, is a bad method. It is much better to do as follows: use this robust Cholesky factorization: math.berkeley.edu/~cinnawu/
hss.pdf – loup blanc Jan 8 at 17:21
Unfortunately (for you), according your comment, your question is an absolute non-sense. Indeed, if your problem is ill-conditioned, then you know the matrix $L$ with a very poor
precision ; then what is the interest to regularize the problem in these conditions ? – loup blanc Jan 8 at 22:25
I don't think we mean the same thing by regularization. I am doing this in the framework of an adaptive algo where I can compute exactly $L$ in $\mathcal{O}(n^2)$. The problem is that it
has eigenvalues very close to $0$. Therefore, when I want to invert it I get extremely large values for $x$. I am willing to loose a bit of precision in this inversion if it gives me
smaller values for $x$. The regularization needs to be done, not for the Cholesky decomposition, but for the inversion. Thanks for your time. – Mathieu Galtier Jan 9 at 9:56
add comment
Not the answer you're looking for? Browse other questions tagged linear-algebra matrix-theory numerical-linear-algebra matrix-inverse or ask your own question. | {"url":"http://mathoverflow.net/questions/153926/perturbation-of-cholesky-decomposition-for-matrix-inversion","timestamp":"2014-04-20T14:11:46Z","content_type":null,"content_length":"57396","record_id":"<urn:uuid:f9a47a7b-7a5a-42c9-8e4a-f76d7a1498a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00652-ip-10-147-4-33.ec2.internal.warc.gz"} |
Word Problem
Weekly students will be given the opportunity to solve mathematical word problems. Once they solve the problem they come up with a rule. This rule is a result of looking for a pattern. Where there is
a pattern, there is a rule. The following are problems students have solved or on which they will be working:
1. Eight adults and 2 children try to cross a river in a canoe. The canoe will only hold at one time, 2 children or 1 child and 1 adult. How many trips will it take to get everyone across the river?
How many trips would it take with 10 adults? 100 adults? (always just 2 children) 8/23
2. A canoe has eleven seats. There are ten fishermen in the canoe, five at each end with the middle seat vacant. They decide to switch ends. They cannot move the canoe. They can only move to an empty
seat next to them or around one person to an empty seat or the canoe will tip over. Can they make the switch? If so, how many moves will it take to get the fishermen to the opposite end of the canoe?
What about a canoe with 21 seats? 51 seats? 101 seats? 8/24
3. The Mangoes Problem: One night the King couldn't sleep, so he went down into the Royal kitchen, where he found a bowl full of mangoes. Being hungry, he took 1/6 of the mangoes. Later that same
night, the Queen was hungry and couldn't sleep. She, too, found the mangoes and took 1/5 of what the King had left. Still later, the first Prince awoke, went to the kitchen, and ate 1/4 of the
remaining mangoes. Even later, his brother, the second Prince, ate 1/3 of what was then left. Fnally, the third Prince ate 1/2 of what was left, leaving only three mangoes for the servants. How many
mangoes were originally in the bowl? 8/31
4. Sailors and Coconuts: Three sailors were marooned on a deserted island that was also inhabited by a band of monkeys. The sailors worked all day to collect coconuts but were too tired that night to
count them. They agreed to divide them equally the next morning. Duing the night, one sailor woke up and decided to get his share. He found that he could make three equal piles, with one coconut left
over, which he threw to the monkeys. Thereupon, he had his own share, and left the remainder in a single pile. Later that night, the second sailor awoke and, likewise, decided to get his share of
coconuts. He also was able to make three equal piles, with one coconut left over, which he threw to the monkeys. Somewhat later, the third sailor awoke and did exactly the same thing with the
remaining coconuts. In the morning, all three sailors noticed that the pile was considerably smaller, but each thought that he knew why and said nothing. When they then divided the remaing coconuts
equally, each sailor received seven and one left over, which they threw to the monkeys. How many coconuts were in the original pile? 8/31--9/4
5. How can you carry exactly four gallons of water from a river if only a three-gallon jug and a five-gallon jug are available? 9/24-9/28
To solve this problem, students worked in groups with tubs of water and "faux" buckets, aka, two different size cups, and poured back and forth until they came up with a solution. Actually the
students came up with two different ways to solve the task! Students then wrote about how they solved the task.
6. Desert Crossing: You live in a desert oasis and grow miniature watermelons that are worth a geat deal of money, if you can get them to the market 15 kilometers away across the desert. Your harvest
this year is 45 melons, but you have no way to get them to the market, except to carry them acros the desert. You have a backpack that holds up to 15 melons, the maximum number that you can carry at
a time. To walk across the desert, you need a certain amount tof fluid and nourishment that is supplied by the melons you carry. For each kilometer you walk (in either direction), on melon must be
eaten. Your challenge is to find a way to get as many melons as possible to market. 10/5 | {"url":"http://muskogee.schoolwires.net/Page/1330","timestamp":"2014-04-21T15:38:10Z","content_type":null,"content_length":"79273","record_id":"<urn:uuid:209f3a41-013f-4c92-b705-c0dfc0f0ca32>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: imputing continuous values when respondents select categories,
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: imputing continuous values when respondents select categories, e.g., income category
From Clive Nicholas <clivelists@googlemail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: imputing continuous values when respondents select categories, e.g., income category
Date Tue, 28 Apr 2009 13:37:35 -0400
Richard Williams replied:
> I am not sure what you mean by "implicate" but I suppose that most or all of
> what is true of intreg is also true for xtintreg. Also, I wonder about your
> statement that the data are in no way censored. I think the lower and upper
> bounds are supposed to be regarded as negative and positive infinity, even
> if in practice the observed range is much more limited than that.
> FYI, I have a hypothetical example where intreg works spectacularly well:
> http://www.nd.edu/~rwilliam/xsoc73994/intreg3.pdf
> However, as noted in the handout, everything is set up so intreg's
> assumptions are met. A simulation analysis examining how well it worked
> when assumptions were violated would no doubt be useful.
Apologies for the delay, and I cannot access your PDF at present.
Here, 'implicate' meant 'also apply to'. Tobit models, so far as I
understand them, are suitable for dependent variables which are
'left-censored', which normally means that it contains a
disproportional number of zeros, much of which is information not
fully realized. My argument was that my data for -xtintreg- doesn't
fit that description, and that it was my understanding that it was
more suitable for ordinal-level dependent variables. I could be wrong;
I often am.
Clive Nicholas
[Please DO NOT mail me personally here, but at
<clivenicholas@hotmail.com>. Please respond to contributions I make in
a list thread here. Thanks!]
"My colleagues in the social sciences talk a great deal about
methodology. I prefer to call it style." -- Freeman J. Dyson.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-04/msg01208.html","timestamp":"2014-04-19T19:49:36Z","content_type":null,"content_length":"8139","record_id":"<urn:uuid:c219ac9d-25bd-46ba-809c-8ec02662fcf6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Villa Rica, PR
Find a Villa Rica, PR Precalculus Tutor
...I have a bachelor's degree in Applied Math from Brown University, and a Masters and PhD in Cognitive Psychology, also from Brown University. And now, in semi-retirement, I find immense joy
tutoring students who need an extra boost. I tutor students from elementary school through Algebra II and Trigonometry.
8 Subjects: including precalculus, statistics, trigonometry, algebra 2
...I am also available for teaching Spanish, as well as almost any subject for lower grades. I am fun and outgoing and I like to make learning fun! In high school, I took a course preparing for
educational fields.
40 Subjects: including precalculus, English, reading, Spanish
I am Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT
and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
13 Subjects: including precalculus, statistics, geometry, SAT math
...The ACT reading section can be a confusing section if a student is not prepared for the format and patterns in the questions. I begin by learning about what the student likes to read and then
comparing those preferences to the types of reading on the ACT. We then identify strategies and tricks ...
17 Subjects: including precalculus, chemistry, writing, physics
...I also became even more proficient with Microsoft Excel, Word, and PowerPoint. So, if there are questions or invaluable tips that I can help someone with, I can also provide direction with
these Microsoft Office programs. I recently tested my own math skills by taking and passing the GACE Content Assessment for Mathematics (022 -023) for teacher certification in the State of
21 Subjects: including precalculus, calculus, statistics, geometry
Related Villa Rica, PR Tutors
Villa Rica, PR Accounting Tutors
Villa Rica, PR ACT Tutors
Villa Rica, PR Algebra Tutors
Villa Rica, PR Algebra 2 Tutors
Villa Rica, PR Calculus Tutors
Villa Rica, PR Geometry Tutors
Villa Rica, PR Math Tutors
Villa Rica, PR Prealgebra Tutors
Villa Rica, PR Precalculus Tutors
Villa Rica, PR SAT Tutors
Villa Rica, PR SAT Math Tutors
Villa Rica, PR Science Tutors
Villa Rica, PR Statistics Tutors
Villa Rica, PR Trigonometry Tutors
Nearby Cities With precalculus Tutor
Acworth, GA precalculus Tutors
Austell precalculus Tutors
Carrollton, GA precalculus Tutors
Dallas, GA precalculus Tutors
Douglasville precalculus Tutors
Fayetteville, GA precalculus Tutors
Forest Park, GA precalculus Tutors
Hiram, GA precalculus Tutors
Newnan precalculus Tutors
Oxford, AL precalculus Tutors
Powder Springs, GA precalculus Tutors
Temple, GA precalculus Tutors
Tyrone, GA precalculus Tutors
Union City, GA precalculus Tutors
Villa Rica precalculus Tutors | {"url":"http://www.purplemath.com/Villa_Rica_PR_Precalculus_tutors.php","timestamp":"2014-04-17T01:14:17Z","content_type":null,"content_length":"24402","record_id":"<urn:uuid:ce356476-2c69-40f8-8601-575206ce878f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00117-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US20040090441 - Method and device for translating two-dimensional data of a discrete wavelet transform system
[0053] A method for translating two-dimensional data in accordance with the present invention has a high speed for reading the two-dimension data and does not generate too much unnecessary data
during translating the two dimensional data. In addition, the two-dimensional data translated by the method to a good quality of translated result.
[0054] With reference to FIGS. 1A to 1C, the two-dimensional data (10), such as an image, is transformed to one-dimensional data by a stairway scan way. The two-dimensional data is composed of lines
and columns, wherein each line and column respectively have two end pixels (not numbered).
[0055] The lines of the two-dimensional data is first translated to a first one-dimensional data by a stairway scan way, wherein the two adjacent end pixels of the adjacent lines are connected
together to make the two adjacent lines a serial of data. The serial of data is a one-dimensional data having a first and a last end pixels (not numbered). Further, the first and last end pixels of
the first one-dimensional data are respectively extended to one boundary extension data (20) by a boundary extension process to be a first one-dimensional data input sequence. Therefore, in the
memory each of the first and last rows of the two-dimensional data (10) is extended to one extension data (20). With further reference to FIGS. 1D and 1F, the columns of the two-dimensional data are
also translated to a second one-dimensional data by the stairway scan way and the boundary extension process. The forgoing first and second one-dimensional data can be respectively executed by the
DWT. The boundary extension process is the symmetric extension.
[0056] Based on the forgoing description, two-dimensional data is only translated to the first and second one-dimensional data by the stairway scan way. That is, the translating one-dimensional data
procedure does not generate too much unnecessary data. In order to increase a compressing rate and a translated quality, the present invention further includes a border extension. That is, with
reference to FIG. 2, the lines of the two-dimensional data are processed by the border extension before executing the stairway scan way. Each even row of the two-dimensional data (10) has the two end
pixels. Two extension pixels (70) are respectively extended from each end pixel of each even row. The lines of the two-dimensional data with extension pixels (70) are further translated to the
one-dimensional data by the stairway scan way and then are processed by boundary extension process and lifting scheme to translate to a one-dimensional data of input sequence for the DWT. With
reference to FIG. 4A, the lines of the two-dimensional data (10) with the extension pixels are translated to the one-dimensional data by the stairway scan way and boundary extension process. With
reference to FIG. 4B, the columns of the two-dimensional data (10) are also first processed by the border extension to generate the extension pixels and then is further translated to the
one-dimensional data.
[0057] To further describe details of border extension, the Integer 5/3 is introduced as follow:
[0058] With reference to FIGS. 2 and 3, the second line of the forgoing two-dimensional data (10) is an example to show that the second line (not numbered) is processed to have extended pixel(s) and
to connect to the first line and third line by the border extension and the stairway scan way. Further, the second line is processed by the lifting scheme. The first and last pixels of the second
line are respectively extended to one first and last extended pixels, wherein a value of each extended pixel are changed according to values of the first pixel or last pixel, such as
[0059] (A) If the first extended pixel is extended from the first pixel of the even line (second line) of the two-dimensional data, the value S[i ]of the first extended pixel can be defined by three
different ways.
[0060] (1) The value “s[i]” of the first extended pixel is equal to the value s[i−1 ]or s[i+1 ]of the first pixel, as follows S[i]=S[i−1 ]or S[i]=S[i+1 ]
[0061] (2) The value “s[i]” of the first extended pixel is calculated to closed to 0 by the lifting scheme so that a formula is developed by the lifting scheme. The formula is S[i]=⅓└½(s[i−2]+S[i+2])
−(S[i−1]+S[i+1])┘, wherein S[i−1 ]and s[i+1 ]are the adjacent pixels of the S[i].
[0062] (3) The value “s[i]” of the first extended pixel is constant, such as s[i]=128 or S[i]1=0.
[0063] (B) If the last extended pixel is extended from the last pixel of the even line (second line) of the two-dimensional data, the value s[j ]of the last extended pixel can be defined by three
different ways.
[0064] (1) The value s[j ]of the last extended pixel is equal to the value of the adjacent pixels (S[j]=S[j−1 ]or S[j]=S[j+1]).
[0065] (2) The value “S[j]” of the first extended pixel is calculated to closed to 0 by the lifting scheme so that a formula is developed by the lifting scheme. The formula is S[j]=⅓└½(S[j−2]+S[j+2])
−(S[j−1]+S[j+1])┘, wherein S[j−1 ]and S[j+1 ]are the adjacent pixels of the s[j].
[0066] (3) The value “s[j]” of the first extended pixel is constant, such as s[j] =128 or s [j]=0.
[0067] With reference FIG. 5, a device for embodying the above forgoing method for translating two dimensional data of a two-dimensional DWT system has a controller and address generator (30), two
one-dimensional (1-D) DWT converters (31, 32), two memories (33, 34). Each of the DWT converter (31, 32) has two input terminals, two output terminals (not numbered) and a wavelet translation. Two
output terminals of the each 1-DD WT converter (31, 32) are respectively connected to the two memories through a selector (S) and one input terminal is connected to the controller and address
generator (30). Each memory (33, 34) to store a half of two-dimensional data is connected between the input terminal (not numbered) and the controller and address generator (30). Therefore, a size of
each memory (33, 34) has at least the half of the two-dimensional data.
[0068] Two memories respectively stored two portions of the two-dimensional data, so that two portions are executed to in the device at the time. That is two portion of the two-dimensional data are
respectively input to the corresponded the DWT converter (31, 32) to execute the Wavelet Transform controlled by the controller and address generator (30). The DWT with the translating method in the
device has the steps of
[0069] (1) Initial step.
[0070] (2) Row operating.
[0071] (3) Column operating
[0072] (4) Ending.
[0073] In the first step, an image is composed of N×N pixels. The image is cut into two portions each of which is composed of
[0074] pixels. Each portion is stored in middle addresses of the memory, as a gray area shown in the FIG. 6. If one address can store one pixel, the addresses for storing one portion of the image has
[0075] size. The rest addresses of the memory is used to store extension pixels during the border extension and the boundary extension process. Therefore, the rest addresses of the memory has 2×N
[0076] In the row operating step, the two 1-D DWT converters respectively get the data in serial sequence from the corresponded memories to calculate, not row by row. During getting the serial
sequence, the extension pixels are generated and added to the row to be calculated to output low frequency sequence and high frequency sequence by the 1-D DWT converters. The output sequences from
the two 1-D DWT converters are alternative stored into two portions of the memories. That is, when the two 1-D DWT converters are finished calculating process, the all low frequency sequences are
stored in one memory and the high frequency sequences are stored in the other memory.
[0077] In the columns operating step, two 1-D DWT converters get the all columns of the high frequency sequence or low frequency sequence in serial from the corresponded memories, not column by
column. During getting the serial sequence, extension pixels are generated and added to the serial sequence. The output sequences from the two D DWT converters are alternative stored into different
memories (denoted by the light and the dark color), as shown in FIG. 7C. That is, when the 1-D DWT converter finished calculating, all low frequency output sequences are stored in one memory and the
high frequency sequences are stored in the other memory.
[0078] In the ending step, until the third step finishing the image is translated one time by the translating method for a two dimensional DWT system. If the transformed image needs to further be
transformed by another time, the second to third steps are executed.
[0079] The above device is also to implement DWT with a conventional two-dimensional DWT system. Because in the conventional boundary extension process, each row or column has extended pixels and
then input to be executed translated by the DWT. That is, the steps of the executing Wavelet Transform do not change, only some detail steps change, especially the getting a serial sequence way uses
row by row or column by column instead of the stairway scan way. At as others changes are described as follow:
[0080] In the first step, an image is composed of N×N pixels. The image is cut into two portions each of which is composed of
[0081] pixels. Each portion is stored in middle addresses of the memory, as a gray area shown in the FIG. 7A. If one address can store one pixel, the addresses for storing one portion of the image
[0082] size. Number of the extension pixels is defined to α, the reset addresses of the memory has (N×α)+2×α^2 size and are used to prepare to store extension pixels of the image. Therefore, total
size of the memory is
[0083] In the row operating step, the two 1-D DWT converters get the one-dimensional data in row by row from the corresponded memories to calculate at the same time. When each row is got from the
memory, the extension pixels are generated and added to the row to be calculated to output low frequency sequence and high frequency sequence by the 1-D DWT converters. The output sequences from the
each 1-D DWT converters are alternative stored into two memories (denoted by the light and the dark color), as shown in FIG. 7B. That is, when the two 1-D DWT converters are finished calculating
process, the all low frequency sequences are stored in one memory and the high frequency sequences are stored in the other memory.
[0084] In the columns operating step, two 1-D DWT converters get the all columns of the high frequency sequence or low frequency sequence in column by column from the corresponded memories. During
getting each column, the extension pixels are generated and added to each column by the conventional boundary extension process. The output sequences from the each 1-D DWT converters are alternative
stored into two memories, (denoted by the light and the dark color), as shown in FIG. 7C. That is, when the 1-D DWT converter finished calculating, all low frequency sequences are stored in one
memory and the high frequency sequences are stored in the other memory.
[0085] In the ending step, until the third step finishing the image is translated one time by the wavelet transform. If the translated image needs to further be translated by another time, the second
to third steps are executed.
[0086] Based on the above description, the present invention proposed a translating method for DWT to translate two-dimensional. The image having two-dimensional data can be the input data for the
wavelet transform. That is, the hardware not only does not use large size memory to support the Wavelet transform with the boundary extension process, but also the calculating speed is fast, too.
Besides, the Wavelet transform with the boundary extension process in accordance with the present invention is suitable to the JPEG2000 standard.
[0087] It is to be understood, however, that even though numerous characteristics and advantages of the present invention have been set forth in the foregoing description, together with details of
the structure and function of the invention, the disclosure is illustrative only, and changes may be made in detail, especially in matters of shape, size, and arrangement of parts within the
principles of the invention to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.
[0040]FIGS. 1A to 1D are transforming flow chart for translating a two dimensional data to one dimensional data;
[0041]FIG. 2 is a diagram of an image having extended pixels generated from a border extension;
[0042]FIG. 3 is a process diagram for generating the FIG. 2;
[0043]FIGS. 4A and 4B are diagrams of an image with extended pixel processed by a stairway scan way;
[0044]FIG. 5 is a block diagram of a translating device for translating method in accordance with the present invention;
[0045]FIG. 6, is an arrangement of disposition in memory of data generated from the FIG. 5;
[0046]FIGS. 7A, 7B, and 7C are arrangements of disposition in memory of data generated from the conventional Wavelet Transform;
[0047]FIG. 8 is a block diagram of a conventional lifting scheme for a wavelet transform;
[0048]FIG. 9 is a detailed block diagram of FIG. 8;
[0049]FIG. 10 is a block diagram of Integer 5/3 wavelet filter;
[0050]FIG. 11 is a block diagram of CDF 9/7 wavelet filter;
[0051]FIGS. 12A and 12B are two Symmetric extension for even and odd sequence; and
[0052]FIGS. 13A, 13B and 13C are diagrams of an image processed by the conventional signal line or signal column scanning way with conventional boundary extension process.
[0001] 1. Field of the Invention
[0002] The present invention relates to a method and a device for translating two-dimensional data of a discrete wavelet transform (DWT) system, more specifically to a translating two-dimensional
data method for a DWT system that provides a more effective translating process for data compression.
[0003] 2. Description of Related Art
[0004] The JPEG Committee proposed static image compression in 1988. Encoding technology to compress data often uses DCT (discrete cosine transform). The discrete cosine transform (DCT) is a
conventional transform technology used in image compression system. In order to increase compressing rate of the image, more significant signals of the image is lost in the DCT so that JPEG Committee
replaced DCT by DWT, which has less loss of the significant signals in the same condition with the DCT and has a good transforming quality.
[0005] The DWT has a variety of filters and the JPEG Committee suggests two of the filters to use, one is Integer 5/3 and the other is Daubechines 9/7 (CDF 9/7). The Integer 5/3 and Daubechines 9/7
(CDF 9/7) filters respectively have one fixed length. There are two kinds of implementing methods of the Integer 5/3 and Daubechines 9/7 (CDF 9/7) filters, one method is a sub-bank transform and the
other is a lifting scheme. Implementing the sub-bank transform requires more electronic elements and more memory requirement because the sub-bank layout circuit is more complex. The lifting scheme
was proposed in 1996. The lifting scheme built an orthogonal wavelet to quickly translate data by a small translation. Implementing the lifting scheme requires fewer electronic elements and less
memory requirement and is easier than implementing the sub-bank transform. Thus JPEG2000 suggested that the lifting scheme is used to implement the wavelet translation.
[0006] With reference to FIG. 8, a conventional embodiment of the lifting scheme has an input sequence x[k], a low frequency output sequence “y[low]” and a high frequency output sequence “y[high]”.
[0007] With further reference to FIG. 9, the lifting scheme has the following steps:
[0008] 1. Splitting step to split the input sequence into two portions, y[0] ^{0}[n] and y[1] ^{0}[n]. One portion y[0] ^{0}[n] defines an even number set of the input sequence and the other portion
y[0] ^{0}[n] defines an odd number set of the input sequence. The two portions, y[0] ^{0}[n] and y[1] ^{0}[n], of the input sequence are respectively described in a formula as follows:
y[0] ^{0}[n]=x[2n]
y[1] ^{0}[n]=x[2n+1]
[0009] 2. Predicting step to calculate a second odd number set by the first even number set. Specifically, each odd number is calculated as follows:
[0010] (a) Averaging the two adjacent even numbers; and
[0011] (b) Adding the average and a first odd number to obtain a second odd number.
[0012] The foregoing calculation can be mathematically described as follows:
y[0] ^{1}[n]=y[0] ^{0}[n]
[0013] 3. Recalculating the even number set based on the second odd number set. That is, each even number is calculated as follows:
[0014] (a) Averaging the two adjacent second odd numbers; and
[0015] (b) Adding the average and a first even number to obtain a second even number.
[0016] The foregoing calculation can be mathematically described as follows:
y[1] ^{1}[n]=y[1] ^{0}[n]
[0017] 4. Repeating the forgoing steps 2 and step 3. Number for repeating is based on an implemented wavelet filter. The repeating number is assumed to m.
[0018] 5. Normalization step to complete a low frequency and a high frequency sequence y[low], y[high ]of the lifting scheme. Two different numbers K[0 ]and K[1 ]are respectively multiply the m'th
even number set and the m'th odd number set as follows:
y [low] =y [0] ^{m} {n}×K [0 ]
y[high] =y [1] ^{m} {n}×K [1 ]
[0019] With reference to FIG. 10, a wavelet Integer 5/3 filter is an example used to implement the foregoing steps. First both of the two different numbers K[0 ]and K, are defined to 1 in the
normalization step. The step 2 and step 3 are only executed once. The low frequency and high frequency output sequences y[low], y[high ]are respectively calculated as
[0020] With reference to FIG. 11, a wavelet CDF 9/7 filter is other example to implement the foregoing steps. The predicting step and the updating step need to be executed twice to obtain the low
frequency and high frequency output sequences y[low], y[high]. The low frequency and the high frequency output sequences are described by the Z translation in the digital signal process (DSP) as
[0021] λ[1](z)=−1.586134342(1+z)
[0022] λ[2](z)=−0.052980118(1+z^−1)
[0023] λ[3](z)=0.882911075(1+z)
[0024] λ[4](Z)=0.443506852(1+z^−1);
[0025] where k[0]=k, k[1]=1/k, and k=1.149604398.
[0026] The foregoing description describes how one input sequence is translated to the low frequency and the high frequency output sequences by wavelet translating with the lifting scheme. However,
the quality of two-dimensional data must be further considered when translating two-dimensional data by wavelet translation. Next, the DWT for translating the two-dimensional data, such as an image,
is further introduced. In the image compression technology, an original image is first translated by DWT and then is further compressed and encoded to a compressed data. When the compressed data is
returned to the original image, the compressed data is reversal calculated to obtain a two-dimension data whose boundary differs from the original image's. Therefore, a boundary extension process is
executed before the DWT to ensure that quality of boundary of the original image.
[0027] One kind of the boundary extension process called a symmetric extension is used in JPEG2000. The symmetric extension has two different process methods. With reference to FIG. 12A, a data
stream having odd bit numbers is processed by the one symmetric extension. Two extended data streams respectively are mirror images of the data stream and are appended before a first bit and after a
last bit of the data stream. The number of bits in the extended data stream is defined based on the length of the filter of wavelet technology. In FIG. 12A, the length of filter is defined to four
bits long, so that the bit number of the extended data stream is four. With reference to FIG. 12B, a data stream having even bit numbers is processed by the other symmetric extension. Two extended
data streams respectively are also mirror images and extend from two centers, a first bit and a last bit of the data stream. The number of bits in the extended data stream is defined based on the
length of the filter of the wavelet technology.
[0028] With reference to FIG. 13A, a first image (50) is a two-dimensional data composed of rows and columns. Each row or each column of an example first image (50) is composed of 8 pixels. Thus the
first image (50) is composed of 8×8 pixels.
[0029] The first image (50) is translated by wavelet translation in the following steps. First, the first image is processed by the second symmetric extension, wherein the length of the filter of the
wavelet technology is four bits long.
[0030] 1. Extending two extended data streams (60) each with four mirror reflecting pixels respectively from a first pixel and a last pixel of each row to generate a second image (not numbered) which
is composed of 16×8 pixels, as shown in FIG. 13B.
[0031] 2. Translating the second image by inputting each row with two extended data streams until the last row to the lifting scheme.
[0032] 3. Extending two data streams each with four mirror reflecting pixels respectively from a first pixel and a last pixel of each column to generate a third image (not numbered) which is composed
of 16×16 pixels, as shown in FIG. 13C.
[0033] 4. Translating the third image by inputting each column with two extended data streams until the last column to the Lifting scheme.
[0034] The above translating process with the symmetric extension provides a good translated result to compress image without boundary effect. A one-dimensional data is requested in the Lifting
scheme so that each row and each column have to be process by the symmetric extension. Thus, lots of memory is needed in the translating process, which causes the overall calculating speed to be
slow. Furthermore, implementing a circuit to perform the translation also requires more electronic devices.
[0035] A conventional data compression system basically has a DWT unit and an Entropy coding unit. The original image is translated by the DWT unit and then is coded to a compressed data, which is
stored in small memory to be easy transmitted. When returning the compressed data to the image, the compressed data is input to an inverse data compression system including an Inverse Entropy coding
unit and an Inverse DWT unit to obtain a reconstruct image. In general, if the data compression system has a compressing quality, the reconstruct image is very similar to the original image. If the
data compression system has high compressing rate, a size of the reconstruct image is smaller than the original image's.
[0036] Therefore, the present invention provides a method for translating two-dimensional data having a high translating speed without complex circuit layout, a high compressing rate and good
compressing quality to mitigate or obviate the aforementioned problems.
[0037] An objective of the present invention is to provide a high speed two-dimensional data translating method with a border extension to generate a good translated result.
[0038] Another objective of the present invention is to provide a translating device based on the forgoing method. The translating device needs less memory requirement and the translating device is
easy implemented.
[0039] Other objectives, advantages and novel features of the invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings. | {"url":"http://www.google.com/patents/US20040090441?dq=6,757,710","timestamp":"2014-04-24T08:28:12Z","content_type":null,"content_length":"76216","record_id":"<urn:uuid:0f92eac0-ea8d-4a81-97a3-db1696499859>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
Journal Article Detail Page
written by David Hestenes
The connection between physics teaching and research at its deepest level can be illuminated by physics education research (PER). For students and scientists alike, what they know and
learn about physics is profoundly shaped by the conceptual tools at their command. Physicists employ a miscellaneous assortment of mathematical tools in ways that contribute to a
fragmentation of knowledge. We can do better! Research on the design and use of mathematical systems provides a guide for designing a unified mathematical language for the whole of physics
that facilitates learning and enhances physical insight. This research has produced a comprehensive language called geometric algebra, which I introduce with emphasis on how it simplifies
and integrates classical and quantum physics. Introducing research-based reform into a conservative physics curriculum is a challenge for the emerging PER community.
Subjects ADS Supplements Resource Types
- Collection
General Physics
- Reference Material
- Physics Education Research
= Research study
PER-Central Type Intended Users Ratings
- Researchers
- PER Literature
- Educators
Access Rights:
Available by subscription
© 2003 American Journal of Physics
Additional information is available.
algebra, geometry, mathematics, physics, teaching
Record Creator:
Metadata instance created July 13, 2005 by Lyle Barbato
Record Updated:
September 27, 2007 by Rebecca Barbato
Last Update
when Cataloged:
February 1, 2003
ComPADRE is beta testing Citation Styles!
<a href="http://www.compadre.org/PER/items/detail.cfm?ID=2693">Hestenes, David. "Oersted Medal Lecture 2002: Reforming the mathematical language of physics." Am. J. Phys. 71, no. 2,
(February 1, 2003): 104-121.</a>
D. Hestenes, Am. J. Phys. 71 (2), 104 (2003), WWW Document, (http://dx.doi.org/10.1119/1.1522700).
D. Hestenes, Oersted Medal Lecture 2002: Reforming the mathematical language of physics Am. J. Phys. 71 (2), 104 (2003), <http://dx.doi.org/10.1119/1.1522700>.
Hestenes, D. (2003, February 1). Oersted Medal Lecture 2002: Reforming the mathematical language of physics. Am. J. Phys., 71(2), 104-121. Retrieved April 16, 2014, from http://dx.doi.org/
10.1119/1.1522700 Similar
Hestenes, David. "Oersted Medal Lecture 2002: Reforming the mathematical language of physics." Am. J. Phys. 71, no. 2, (February 1, 2003): 104-121, http://dx.doi.org/10.1119/1.1522700
(accessed 16 April 2014).
Hestenes, David. "Oersted Medal Lecture 2002: Reforming the mathematical language of physics." Am. J. Phys. 71.2 (2003): 104-121. 16 Apr. 2014 <http://dx.doi.org/10.1119/1.1522700>.
@article{ Author = "David Hestenes", Title = {Oersted Medal Lecture 2002: Reforming the mathematical language of physics}, Journal = {Am. J. Phys.}, Volume = {71}, Number = {2}, Pages =
{104-121}, Month = {February}, Year = {2003} }
%A David Hestenes
%T Oersted Medal Lecture 2002: Reforming the mathematical language of physics
%J Am. J. Phys.
%V 71
%N 2
%D February 1, 2003
%P 104-121
%U http://dx.doi.org/10.1119/1.1522700
%O text/html
%0 Journal Article
%A Hestenes, David
%D February 1, 2003
%T Oersted Medal Lecture 2002: Reforming the mathematical language of physics
%J Am. J. Phys.
%V 71
%N 2
%P 104-121
%8 February 1, 2003
%U http://dx.doi.org/10.1119/1.1522700
: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the
Citation Source Information
area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The AJP/PRST-PER presented is based on the AIP Style with the addition of journal article titles and conference proceeding article titles.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ. | {"url":"http://www.compadre.org/PER/items/detail.cfm?ID=2693","timestamp":"2014-04-17T01:08:55Z","content_type":null,"content_length":"34585","record_id":"<urn:uuid:b25d80ab-f0b8-468b-9dd1-b54df887a0cf>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
NCERT Solutions for Class 12th Maths Chapter 10 - Vector Algebra
National Council of Educational Research and Training (NCERT) Book Solutions for class 12th Subject: Maths Chapter: Chapter 10 – Vector Algebra
Class 12th Maths Chapter 10 Vector Algebra NCERT Solution is given below.
Click Here to view All Chapters Solutions for Class 12th Maths
Stay Updated. Get All Information in Your Inbox. Enter your e-Mail below: | {"url":"http://schools.aglasem.com/?p=4115","timestamp":"2014-04-16T15:59:17Z","content_type":null,"content_length":"54850","record_id":"<urn:uuid:1ba5acee-06e5-4c28-b622-486d15274e12>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
CPTR 125
CPTR 324 Automata, Formal Languages, and Computablility
January 7, 2008
Instructor: Dr. Eileen M. Peluso, D307 Academic Center, Extension 4135
Email: pelusoem@lycoming.edu
Office hours: to be announced . . . on my web page www.lycoming.edu/~pelusoem.
Objective: The goal of this course is to introduce students to the notion of computability and the theoretical limits of computation. By investigating various types of "machines" (also referred to
as automata) and their equivalent formal language counterparts, the students will discover the computational power and limitations associated with each. Specifically, the students will develop a
working knowledge of finite state machines, push-down automata, and Turing machines, the latter being computationally equivalent to the modern day computer.
Text: Michael Sipser, Introduction to the Theory of Computation, 2^nd edition, Thomson Course Technology, 2006.
Other course materials: JFLAP automaton simulator, freely downloadable from www.jflap.org.
· Weekly Take-home Quizzes: 50%
· Exams(2): 30% (tentatively scheduled for Wednesday, February 13^th and Wednesday, March 19^th)
· Comprehensive Final: 20%
Grade scale: If you earn the following average, you will receive at least the grade indicated.
· 90.0 or above A
· 85.0 – 89.9 A-
· 80.0 – 84.9 B+
· 75.0 – 79.9 B
· 70.0 – 74.9 B-
· 65.0 – 69.9 C+
· 60.0 – 64.9 C
· 55.0 – 59.9 C-
· 50.0 – 54.9 D
· below 50.0 F
1. Students will not be excused from exams unless
· they are ill and have been to the infirmary or have seen a doctor, or
· they have an emergency situation and have received exemption from the dean.
It is wise to contact me before missing an exam or quiz. Any tests missed will result in a grade of zero unless arrangements for a make-up are made within 48 hours.
2. Class attendance is important and expected. If in some emergency circumstance (such as illness and inclement weather) you are not able to attend class, inform the instructor as soon as possible.
It is the student's responsibility to obtain details about the missed work, announcements and any information disseminated during the missed classes.
3. Take-home quizzes will be given weekly on Fridays, due the following Friday unless stated otherwise. Each will consist of approximately 3 to 5 questions/problems that are designed to solidify
your understanding of the material presented in the text and lectures.
4. Academic Dishonesty: Discussions with other students about take-home quizzes are encouraged, however completing take-home quizzes as a group activity is not allowed. Obviously, you should never
have in your possession or have access to (in paper or electronic form) a copy of someone else's take-home quiz. As a general rule of thumb: The difference between sharing ideas and plagiarism will
be determined by the instructor as follows: if you cannot discuss, expound upon, and justify what you have submitted, then you have plagiarized. | {"url":"http://lycofs01.lycoming.edu/~pelusoem/cptr324/syllabus.html","timestamp":"2014-04-18T10:56:47Z","content_type":null,"content_length":"21314","record_id":"<urn:uuid:deadeb64-6950-4040-8384-a221af558e90>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00621-ip-10-147-4-33.ec2.internal.warc.gz"} |
From Algebraic Riccati equations to unilateral quadratic matrix equations: old and new algorithms
when quoting this document, please refer to the following
URN: urn:nbn:de:0030-drops-13987
URL: http://drops.dagstuhl.de/opus/volltexte/2008/1398/ Bini, Dario A. ; Meini, Beatrice ; Poloni, Federico
From Algebraic Riccati equations to unilateral quadratic matrix equations: old and new algorithms
The problem of reducing an algebraic Riccati equation $XCX-AX-XD+B=0$ to a unilateral quadratic matrix equation (UQME) of the kind $PX^2+QX+R$ is analyzed. New reductions are introduced which enable
one to prove some theoretical and computational properties. In particular we show that the structure preserving doubling algorithm of B.D.O. Anderson [Internat. J. Control, 1978] is nothing else but
the cyclic reduction algorithm applied to a suitable UQME. A new algorithm obtained by complementing our reductions with the shrink-and-shift tech- nique of Ramaswami is presented. Finally, faster
algorithms which require some non-singularity conditions, are designed. The non-singularity re- striction is relaxed by introducing a suitable similarity transformation of the Hamiltonian.
BibTeX - Entry
author = {Dario A. Bini and Beatrice Meini and Federico Poloni},
title = {From Algebraic Riccati equations to unilateral quadratic matrix equations: old and new algorithms},
booktitle = {Numerical Methods for Structured Markov Chains},
year = {2008},
editor = {Dario Bini and Beatrice Meini and Vaidyanathan Ramaswami and Marie-Ange Remiche and Peter Taylor},
number = {07461},
series = {Dagstuhl Seminar Proceedings},
ISSN = {1862-4405},
publisher = {Internationales Begegnungs- und Forschungszentrum f{\"u}r Informatik (IBFI), Schloss Dagstuhl, Germany},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2008/1398},
annote = {Keywords: Algebraic Riccati Equation, Matrix Equation, Cyclic Reduction, Structured doubling algorithm}
Keywords: Algebraic Riccati Equation, Matrix Equation, Cyclic Reduction, Structured doubling algorithm
Seminar: 07461 - Numerical Methods for Structured Markov Chains
Issue date: 2008
Date of publication: 07.04.2008
DROPS-Home | Fulltext Search | Imprint | {"url":"http://drops.dagstuhl.de/opus/volltexte/2008/1398/","timestamp":"2014-04-17T15:29:27Z","content_type":null,"content_length":"8190","record_id":"<urn:uuid:b427aa54-2d86-4cc9-ab40-ffffb065ea14>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/tainted/asked/1","timestamp":"2014-04-18T03:22:58Z","content_type":null,"content_length":"117539","record_id":"<urn:uuid:9c1de388-7b7d-4e97-a514-3d654e8d42c3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Applications of Homological Algebra Introduction to Perverse Sheaves Spring 2007 P. Achar
Summary: Applications of Homological Algebra Introduction to Perverse Sheaves
Spring 2007 P. Achar
Problem Set 6
March 1, 2007
1. Let F·
and G·
be complexes of sheaves. Show that Hom(F·
, G·
) and F·
(graded tensor product)
are well-defined complexes (that is, that d2
= 0). In class, we defined R Hom and L
as functors only
of the second variable: for a fixed F·
(ShX), we have
R Hom(F·
, -) : D+
(ShX) D+
Source: Achar, Pramod - Department of Mathematics, Louisiana State University
Collections: Mathematics
Summary: Applications of Homological Algebra Introduction to Perverse Sheaves Spring 2007 P. Achar Problem Set 6 March 1, 2007 1. Let F· and G· be complexes of sheaves. Show that Hom(F· , G· ) and F·
G· (graded tensor product) are well-defined complexes (that is, that d2 = 0). In class, we defined R Hom and L as functors only of the second variable: for a fixed F· C- (ShX), we have R Hom(F· , -)
: D+ (ShX) D+
Source: Achar, Pramod - Department of Mathematics, Louisiana State University | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/191/4786154.html","timestamp":"2014-04-17T18:55:56Z","content_type":null,"content_length":"7516","record_id":"<urn:uuid:4fe1fbcd-b3d7-464e-b90b-88d27e22a86f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
Construction of rational points and pythagorean triples
September 25th 2010, 01:05 PM #1
Above is a unit circle, $x^2+y^2=1$.
1. First (trivial) solution is $x=1, \text{ }y=0$.
2. For second (non-trivial) solution, QR has a rational slope $y=t(x+1)$ and we plug this $\displaystyle{ x^2+y^2=x^2+t^2(x+1)^2=x^2(1+t^2)+x(2t^2)+(t^2-1)=1 }$.
This gives (non-trivial) solution of $x=\frac{1-t^2}{1+t^2}, \text{ } y=\frac{2t}{1+t^2}$.
Simple enough. Now the exercise.
The parameter $t$ in the pair $\left( \frac{1-t^2}{1+t^2}, \text{ } \frac{2t}{1+t^2} \right)$ runs through all rational numbers if $t=\frac{q}{p}$ and $p, \text{ } q$ run through all pairs of
Deduce that if $\text{ }(a,b,c)$ is any Pythagorean triple then
$\displaystyle{ \frac{a}{c}=\frac{p^2-q^2}{p^2+q^2}, \text{ } \frac{b}{c}=\frac{2pq}{p^2+q^2} }$
for some integers $p$ and $q$.
I'm having problem with a minus in $\frac{a}{c}$ part:
$\displaystyle{ \frac{a^2}{c^2}=x^2\Rightarrow \frac{a}{c}=x=\frac{1-t^2}{1+t^2}=\frac{1-(\frac{p}{q})^2}{1+(\frac{p}{q})^2}=\frac{q^2-p^2}{q^2+p^2}eq \frac{p^2-q^2}{q^2+p^2} }$
How would I use the previous exercise to prove that Euclid's generates all triples?
In the quoted text of the exercise t is given as $t = \frac qp$.
You used $t = \frac pq$.
What about the second one?
September 25th 2010, 11:16 PM #2
September 25th 2010, 11:24 PM #3 | {"url":"http://mathhelpforum.com/geometry/157401-construction-rational-points-pythagorean-triples.html","timestamp":"2014-04-20T23:29:07Z","content_type":null,"content_length":"42544","record_id":"<urn:uuid:dd6f236f-77fe-4247-abb9-4b3b8dde0cb5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00441-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Build an Accelerometer
Building an accelerometer can be very difficult; accuracy and practicality are big issues. The simplest of devices consist of a clear tube with two rubber bands in it attached to a weight. The issue
with this however is accuracy; a rubber band stretches very easily and because your average rubber bands have fairly high spring constants and therefore require large masses to be able to see any
acceleration (upwards of 200 grams). These are nice however to take to an amusement park and measure the g's being pulled on different rides.
The construction of this first, very basic design which can be made from household items, is as follows:
Materials: tennis ball tube or paper towel tube or clear pvc piping, two rubber bands, scissors, a bolt (heavy enough to stretch the rubber bands when hung), duct tape, two paper clips.
1. If you're not using a clear tube, cut a slit in the side of the tube so that you can see the device.
2. Cut the rubber bands into strands and tie one end of each band to the bolt or other weight.
3. Fasten the ends of the rubber bands (with the bolt at the center) to the ends of the tube; the bands should be just barely loose so that the bolt can move around in the tube.
Calibration: With the device in a vertical position, mark where on the tube the bolt comes down to: this mark is one g (9.81m/s^2). You can then flip it over and make a negative one g mark on the
other side. Halfway between these two marks will be the zero g mark (which should be where the bolt is when the device is held horizontally), and multiple g marks can be measured out based on
negative one and one. To be more accurate, hang multiple bolts of the same weight onto the current weight: 2 weights = 2g's, 3 weights = 3g's and so on.
Use: This device can be taken in a car, on an amusement park ride or anything where you'll experience acceleration. A more accurate accelerometer is, however, described below.
A spring-mass system is just that, a mass on a spring. This is the next best thing to use for an accelerometer if you have the patience to build it and time to calibrate it. So, here it goes:
Theoretical Explanation: When accelerated, the force of the acceleration will stretch out the spring due to the mass, giving the device a different reading. Because of Newton’s Second Law, F = ma
which can be transformed into a = F / m, the acceleration of the device as a whole is proportional to the force exerted on the spring and inversely so to the mass attached to the spring. In addition,
because the distance the spring extends is known by the equation F = kx as x, with a known spring constant k, each displacement value can be attributed to a certain acceleration value based on the
following proof…
F = m * a
F = k * x
m * a = k * x
a = (k * x) / m
This produces a linear correlation between acceleration and displacement based on a given mass and spring as a result of Newton’s Second Law. The use and application of this correlation is explained
in detail further below. The method of reading accelerations can be a paper clip, laser pointer, or other pointing device attached to the cart/weight pointing down to a scale running next to the
Materials:9 inch 2x4 piece of wood (or any length really), model train track (9 inches), un-built model train kit, a small plastic box (1 cubic inch approximately-- must fit on train wheels), two
C-115 Century Corp Springs (at your local hardware store), poker chips (or some other form of weight which is incremental) two paper clips, power tools, glue (epoxy).
Calibration: Experimental data may be attained by tipping the device at readings multiple of one centimeter. The angle can then be recorded at each point and trig functions should be used to solve
for the component of gravity pulling the box down the track at each respective measurement. Different masses on ones cart can be used to allow for a wide range of acceleration while maintaining
accuracy by calculating the correlation for each of the 3 sets of data values. Spring constants should be attained using the following equations and the known values, for each data pair...
F = m * a
F = k * x
m * a = k * x
k = (m * a) / x
The averages for the spring constants of each data set should be calculated. Using this value as k in the known equations...
F = m * a
F = k * x
m * a = k * x
a = (k * x) / m
...a calibration equation can then be determined for each data set using displacement (x) as the independent variable and acceleration (a) as the dependent variable. The device can now be used in
multiple modes each with different acceleration ranges, also to be calculated based on the maximum and minimum displacement values along with the known displacement/distance coefficient. A
calibration chart/table for the spring C-115 is on the node C-115. The information on this page was compiled and calculated completely by me. I do not claim 100% accuracy considering a regression
equation was used, it is however at least 95% accurate if not better.
I left a good amount of description out and wasn't highly specific in the construction to allow for you to make the accelerometer how you wish, the concept is fairly simple... Wouldn't want to make
it too easy. Message me with any questions or concerns you have, happy building! | {"url":"http://everything2.com/title/How+to+Build+an+Accelerometer","timestamp":"2014-04-20T01:15:41Z","content_type":null,"content_length":"37207","record_id":"<urn:uuid:80d9456f-ae24-48f7-b833-9b6849efbc91>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graphing the Ecliptic
I need to graph the Ecliptic at a Radius, R, around the earth.
It intersects the equator axis at the equinoxes.
The Angle from the equinoxes line to the maximum Z displacement is the angle of the Obliquity.
Could someone give me some ideas on how to graph the thing
and how to exactly how to calculate the Longitude of the Ascending Node for each of the | {"url":"http://www.physicsforums.com/showthread.php?t=697491","timestamp":"2014-04-19T19:45:56Z","content_type":null,"content_length":"19309","record_id":"<urn:uuid:a3d35cec-2551-43fa-8a3e-5c9671620fef>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the Theory of the PTIME Degrees of the Recursive Sets
, 1997
"... We prove that the theory of EXPTIME degrees with respect to polynomial time Turing and many-one reducibility is undecidable. To do so we use a coding method based on ideal lattices of Boolean
algebras whichwas introduced in #7#. The method can be applied in fact to all hyper-polynomial time clas ..."
Cited by 5 (4 self)
Add to MetaCart
We prove that the theory of EXPTIME degrees with respect to polynomial time Turing and many-one reducibility is undecidable. To do so we use a coding method based on ideal lattices of Boolean
algebras whichwas introduced in #7#. The method can be applied in fact to all hyper-polynomial time classes. 1 Introduction If h is a time constructible function which dominates all polynomials,
then, by the methods of the deterministic time hierarchy theorem, DTIME#h# properly contains P . Therefore, a polynomial time reducibility like polynomial time many#one or Turing reducibility induces
a nontrivial degree structure on DTIME#h#, which is an uppersemilattice with least element 0. By the methods of Ladner ##6#, also see #4#, Chapter I.7#, this degree structure is dense. This was so
far the only fact known to hold in general for all such structures. Here we prove that all those degree structures are # Partially supported by the New Zealand Marsden Fund for Basic Science under
grant VIC-50...
"... A computably enumerable boolean algebra B is effectively dense if for each x 2 B we can effectively determine an F (x) x such that x 6= 0 implies 0 ! F (x) ! x. We give an interpretation of true
arithmetic in the theory of the lattice of computably enumerable ideals of such a boolean algebra. A ..."
Cited by 4 (3 self)
Add to MetaCart
A computably enumerable boolean algebra B is effectively dense if for each x 2 B we can effectively determine an F (x) x such that x 6= 0 implies 0 ! F (x) ! x. We give an interpretation of true
arithmetic in the theory of the lattice of computably enumerable ideals of such a boolean algebra. As an application, we also obtain an interpretation of true arithmetic in all theories of intervals
of E (the lattice of computably enumerable sets under inclusion) which are not boolean algebras. We derive a similar result for theories of certain initial segments "low down" of subrecursive degree
structures. 1 Introduction We describe a uniform method to interpret Th(N; +; \Theta) in the theories of a wide variety of seemingly well-behaved structures. These structures stem from formal logic,
complexity theory and computability theory. In many cases, they are closely related to dense distributive lattices. The results can be summarized by saying that, in spite of the structure's
"... Abstract. Sacks [Sa1966a] asks if the metarecursivley enumerable degrees are elementarily equivalent to the r.e. degrees. In unpublished work, Slaman and Shore proved that they are not. This
paper provides a simpler proof of that result and characterizes the degree of the theory as O (ω) or, equival ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. Sacks [Sa1966a] asks if the metarecursivley enumerable degrees are elementarily equivalent to the r.e. degrees. In unpublished work, Slaman and Shore proved that they are not. This paper
provides a simpler proof of that result and characterizes the degree of the theory as O (ω) or, equivalently, that of the truth set of L ω CK
- Memoirs of the American Mathematical Society
"... Abstract. When attempting to generalize recursion theory to admissible ordinals, it may seem as if all classical priority constructions can be lifted to any admissible ordinal satisfying a
sufficiently strong fragment of the replacement scheme. We show, however, that this is not always the case. In ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. When attempting to generalize recursion theory to admissible ordinals, it may seem as if all classical priority constructions can be lifted to any admissible ordinal satisfying a
sufficiently strong fragment of the replacement scheme. We show, however, that this is not always the case. In fact, there are some constructions which make an essential use of the notion of
finiteness which cannot be replaced by the generalized notion of α-finiteness. As examples we discuss both codings of models of arithmetic into the recursively enumerable degrees, and
non-distributive lattice embeddings into these degrees. We show that if an admissible ordinal α is effectively close to ω (where this closeness can be measured by size or by cofinality) then such
constructions may be performed in the α-r.e. degrees, but otherwise they fail. The results of these constructions can be expressed in the first-order language of partially ordered sets, and so these
results also show that there are natural elementary differences between the structures of α-r.e. degrees for various classes of admissible ordinals α. Together with coding work which shows that for
some α, the theory of the α-r.e. degrees is complicated, we get that for every admissible ordinal
"... We exhibit a structural difference between the truth-table degrees of the sets which are truth-table above 0 # and the PTIME-Turing degrees of all sets. ..."
Cited by 1 (0 self)
Add to MetaCart
We exhibit a structural difference between the truth-table degrees of the sets which are truth-table above 0 # and the PTIME-Turing degrees of all sets. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=265758","timestamp":"2014-04-21T02:22:13Z","content_type":null,"content_length":"22899","record_id":"<urn:uuid:ec422537-5f75-49ce-9590-311ef545df2b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Chicago Statistics Tutor
...My experience tutoring and teaching mathematics, English and the physical sciences at the college level qualify me to do so. My experience as a trainer and business manager further bolster my
ability to help students to succeed with PRAXIS. I have a systematic and comprehensive methodology for PRAXIS tutoring that has proven highly successful.
49 Subjects: including statistics, reading, writing, English
...Beyond this academic instruction I have worked with students since I was in college on the side helping them optimize their own study habits and techniques for both their classwork but also
their approach to test prep. Often times to master a substantial amount of material in a limited amount of...
38 Subjects: including statistics, Spanish, geometry, reading
...I have a both an undergraduate & graduate degree in mathematics having completed all of the coursework for a doctorate in mathematics I have achieved a B or above in my linear algebra courses
I've taken at the undergraduate & graduate level. I have been successful tutoring my classmates in linea...
19 Subjects: including statistics, calculus, ASVAB, geometry
I have a passion for teaching and a great liking for math, which gives me pleasure and rewards. I believe that math is a subject that we use every day. I do my best to associate math concepts to
real life situation.
7 Subjects: including statistics, algebra 1, algebra 2, precalculus
...I have my master's degree in mechanical engineering. Physics, chemistry, anatomy, and biology were all part of my curriculum to get where I am today. Overall, I received a 30 on my ACTs, with a
27 in science.
20 Subjects: including statistics, calculus, physics, geometry | {"url":"http://www.purplemath.com/East_Chicago_Statistics_tutors.php","timestamp":"2014-04-19T10:14:13Z","content_type":null,"content_length":"24089","record_id":"<urn:uuid:054c2c3d-38af-47bb-9cf2-ee6fe9c9443c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
C++ Algorithms
Display all entries for C++ Algorithms on one page, or view entries individually:
accumulate sum up a range of elements
adjacent_difference compute the differences between adjacent elements in a range
adjacent_find finds two items that are adjacent to eachother
binary_search determine if an element exists in a certain range
copy copy some range of elements to a new location
copy_backward copy a range of elements in backwards order
copy_n copy N elements
count return the number of elements matching a given value
count_if return the number of elements for which a predicate is true
equal determine if two sets of elements are the same
equal_range search for a range of elements that are all equal to a certain element
fill assign a range of elements a certain value
fill_n assign a value to some number of elements
find find a value in a given range
find_end find the last sequence of elements in a certain range
find_first_of search for any one of a set of elements
find_if find the first element for which a certain predicate is true
for_each apply a function to a range of elements
generate saves the result of a function in a range
generate_n saves the result of N applications of a function
includes returns true if one set is a subset of another
inner_product compute the inner product of two ranges of elements
inplace_merge merge two ordered ranges in-place
is_heap returns true if a given range is a heap
is_sorted returns true if a range is sorted in ascending order
iter_swap swaps the elements pointed to by two iterators
lexicographical_compare returns true if one range is lexicographically less than another
lexicographical_compare_3way determines if one range is lexicographically less than or greater than another
lower_bound search for the first place that a value can be inserted while preserving order
make_heap creates a heap out of a range of elements
max returns the larger of two elements
max_element returns the largest element in a range
merge merge two sorted ranges
min returns the smaller of two elements
min_element returns the smallest element in a range
mismatch finds the first position where two ranges differ
next_permutation generates the next greater lexicographic permutation of a range of elements
nth_element put one element in its sorted location and make sure that no elements to its left are greater than any elements to its right
partial_sort sort the first N elements of a range
partial_sort_copy copy and partially sort a range of elements
partial_sum compute the partial sum of a range of elements
partition divide a range of elements into two groups
pop_heap remove the largest element from a heap
prev_permutation generates the next smaller lexicographic permutation of a range of elements
push_heap add an element to a heap
random_sample randomly copy elements from one range to another
random_sample_n sample N random elements from a range
random_shuffle randomly re-order elements in some range
remove remove elements equal to certain value
remove_copy copy a range of elements omitting those that match a certian value
remove_copy_if create a copy of a range of elements, omitting any for which a predicate is true
remove_if remove all elements for which a predicate is true
replace replace every occurrence of some value in a range with another value
replace_copy copy a range, replacing certain elements with new ones
replace_copy_if copy a range of elements, replacing those for which a predicate is true
replace_if change the values of elements for which a predicate is true
reverse reverse elements in some range
reverse_copy create a copy of a range that is reversed
rotate move the elements in some range to the left by some amount
rotate_copy copy and rotate a range of elements
search search for a range of elements
search_n search for N consecutive copies of an element in some range
set_difference computes the difference between two sets
set_intersection computes the intersection of two sets
set_symmetric_difference computes the symmetric difference between two sets
set_union computes the union of two sets
sort sort a range into ascending order
sort_heap turns a heap into a sorted range of elements
stable_partition divide elements into two groups while preserving their relative order
stable_sort sort a range of elements while preserving order between equal elements
swap swap the values of two objects
swap_ranges swaps two ranges of elements
transform applies a function to a range of elements
unique remove consecutive duplicate elements in a range
unique_copy create a copy of some range of elements that contains no consecutive duplicates
upper_bound searches for the last possible location to insert an element into an ordered range | {"url":"http://idlebox.net/2008/apidocs/cppreference-20080420.zip/cppalgorithm/index.html","timestamp":"2014-04-16T07:39:18Z","content_type":null,"content_length":"19031","record_id":"<urn:uuid:e8db0849-6e63-4c3f-bf55-bfaf9ef71bb6>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fort Collins Algebra 2 Tutor
Find a Fort Collins Algebra 2 Tutor
...Following my graduation from UCSD (BS in Physiology and Neuroscience) with top honors of Summa Cum Laude (top 1% with a GPA of 3.948), I gained extensive knowledge in the fields of biology and
math. During my undergraduate years in college I mastered math and biology classes receiving more A+ th...
26 Subjects: including algebra 2, reading, geometry, biology
...While there, I worked on the Future Truck competition - a hybrid vehicle competition between teams from various universities. As the team leader for the Electrical team, I taught and guided
other students who were working on the electrical and communications portions of the vehicle. I have worked professionally as a test engineer, software engineer, and systems engineer.
20 Subjects: including algebra 2, physics, calculus, algebra 1
...I have also become very interested in Economics since taking a few entry level Economics classes in my first two years of college. I have always been very fond of numbers beginning when I was
a little kid, and anything with numbers has come easily to me. I have only done tutoring when helping o...
11 Subjects: including algebra 2, reading, writing, grammar
My pursuit of teaching chemistry stems from my desire to see young men and women grow in character, integrity, and knowledge. As a previous instructor of chemistry at a community college I have
been blessed with the opportunity to influence the lives of these students by using chemistry as an avenu...
14 Subjects: including algebra 2, chemistry, physics, biology
I have always liked math and it is even more enjoyable for me when I can help someone else understand it. I have a BS in Electrical Engineering and through earning that I have understood how
important it is to have a good foundation in the basics of mathematics. I've seen a lot of my fellow colleg...
10 Subjects: including algebra 2, algebra 1, precalculus, elementary math
Nearby Cities With algebra 2 Tutor
Arvada, CO algebra 2 Tutors
Aurora, CO algebra 2 Tutors
Boulder, CO algebra 2 Tutors
Denver algebra 2 Tutors
Evans, CO algebra 2 Tutors
Greeley, CO algebra 2 Tutors
Lakewood, CO algebra 2 Tutors
Laporte, CO algebra 2 Tutors
Longmont algebra 2 Tutors
Loveland, CO algebra 2 Tutors
Northglenn, CO algebra 2 Tutors
Severance, CO algebra 2 Tutors
Thornton, CO algebra 2 Tutors
Westminster, CO algebra 2 Tutors
Windsor, CO algebra 2 Tutors | {"url":"http://www.purplemath.com/Fort_Collins_algebra_2_tutors.php","timestamp":"2014-04-20T02:29:10Z","content_type":null,"content_length":"24279","record_id":"<urn:uuid:894abe47-c5f7-46c2-b46d-6333b198b75d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Are proofs in mathematics based on sufficient?
Irving ianellis at iupui.edu
Mon Jul 19 22:43:21 EDT 2010
Monroe Eskew wrote that if Euclid had given correct informal proofs from
Hilbert's axioms, but still failed to specify any formal inference
rules, Russell would not have found any fault with the Elements. His
criticisms of I.1 and I.4 seem to be mathematical rather than
meta-mathematical points: Euclid did not have any continuity axioms,
and the superposition technique simply does not follow from anything
If I understand this aright, my response is that Russell
very explicitly denied that Euclid's "logical
excellence is transcendent". The claim on behalf of
Euclid's Logical excellence", Russell very plainly
assert, "vanishes on a close inspection," and the
reasons Russell lists for asserting this is that
Euclid's "definitions do not always define, his axioms
are not always indemonstrable, and his demonstrations
require many axioms of which he is quite unconscious,"
on the basis of which, I would suggest, his complaints
about Euclid have to do with the logical structure and
logical felicity of his demonstrations.
Turning from the specific examples which Russell gave, I
will readily admit that, at the time Russell wrote
to his conception of what is at stake philosophically, his
piece ("The Teaching of Euclid"), which was published in
1902, Russell had not yet attained the definitive position
of logicism with which we are familiar from his 1903
Principles of Mathematics. Russell began working towards
logicism in late 1900, but did not reach his definitive
position until late 1903, which led him to do last-minute
rewrites of parts of PoM in the galleys but after the
manuscript had already been delivered to the press. I
would contend, on the other hand, that there was enough
of logicism in Russell's foundational thinking at the
time he composed "The Teaching of Euclid" to suggest
that the distinction between "logical" and "mathematical"
were for him such that any difference between a logical
error and a mathematical error was essentially nil, in
particular as regards the questions of definitions that
define or do not define, the Independence vs. dependence of
axioms, and the completeness or incompleteness of axioms
for deriving from the axioms the mathematics that the set
of axioms, together with inference rules, are intended to
I had earlier attempt to send my response to the issues
which Prof. Eskew raised regarding computation/axiomatic system/
formal deduction and the question of the nature of proof, but
there may have been a transmission glitch of some sort, so I
will make another attempt shortly.
Irving H. Anellis
Visiting Research Associate
Peirce Edition, Institute for American Thought
902 W. New York St.
Indiana University-Purdue University at Indianapolis
Indianapolis, IN 46202-5159
URL: http://www.irvinganellis.info
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2010-July/014931.html","timestamp":"2014-04-19T17:07:11Z","content_type":null,"content_length":"5398","record_id":"<urn:uuid:5c553771-6f79-4736-887e-a6364af55970>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00372-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quadtrees and Octrees
Written by Mike James Follow
Thursday, 09 December 2010 @Iprogrammerinfo
Fast sorting
OK so now you understand the quadtree – what use is it?
The answer is that the different levels of the tree provide different resolutions of detail and this can be useful in many different ways.
For example, if you have a two-dimensional data point, x,y, then this will fall in exactly one node at each level of the tree. You can use this fact to perform a fast search of a
data set.
First sort the data into a quadtree and then when you want to see if a particular data value x,y is present you simply move down the tree via the correct node at each level until
you reach the bottom and find the point or discover it isn’t present.
This is the two-dimensional equivalent of the binary search algorithm used to search a one-dimensional list.
Such algorithms have some surprising applications for example if you want to make Conway's Game of Life (The Meaning of Life ) go faster then use a quadtree.
The Octree
Once you have understood the quadtree, the octree is almost obvious.
In three dimensions the square is replaced by a cube and the division into four is replaced by a division into eight sub-cubes – hence oct–tree, since oct = eight.
An octree division divides each cube into eight sub-cubes.
Thus the octree is just a generalisation of the quadtree to 3D.
Each node corresponds to a single cube and has exactly eight sub-nodes.
Notice that all of the sub-node cubes are contained within the parent cube. As in the case of the quadtree, the octree can be used to find a data point, but this time a
three-dimensional point, very quickly.
As before you can flatten out the octree and draw it as a standard tree structure.
The octree branches very rapidly and it doesn’t take very many levels to generate lots of nodes but apart from this implementing it as a data structure isn't difficult. What tends
to be difficult is managing the geometry needed to divide the cubes and store the details at each node.
As in the case of the quadtree the octree is often built up dynamically as data become available. In this case dynamic construction often results in an unbalanced tree with areas of
space being covered more finely than others.
As already mentioned octrees are useful when you have to search a 3D space. In particular they are often used in efficient collision detection but they also occur in algorithms that
work in more abstract 3D space. For example the color space is generally considered to be 3D with dimensions Red, Green and Blue. Hence yo can use an octree to organise a the use of
color by an image and perform color reduction or quantization.
Related reading:
The custom color reduction project that prompted this theory article will be posted shortly
If you would like to be informed about new articles on I Programmer you can either follow us on Twitter, on Facebook , on Digg or you can subscribe to our weekly newsletter.
Last Updated ( Thursday, 30 December 2010 )
RSS feed of all content
Copyright © 2014 i-programmer.info. All Rights Reserved. | {"url":"http://www.i-programmer.info/programming/theory/1679-quadtrees-and-octrees.html?start=1","timestamp":"2014-04-16T17:37:09Z","content_type":null,"content_length":"37378","record_id":"<urn:uuid:560b0333-5036-4c81-b1c1-cb06fa477666>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
Where does the game-theoretic characterization of PH come from?
up vote 1 down vote favorite
I have read in a few places that $\mathbf{PH}$ can be interpreted in terms of the complexity of determining the winner in two-player games. I would like to know a) the original reference for this
result and/or b) a concise explanation of it that requires little to no background in complexity theory (e.g., less than Goldreich's book).
computational-complexity reference-request
add comment
3 Answers
active oldest votes
The answer to part (a) of your question is this reference:
A. Meyer and L. Stockmeyer. The equivalence problem for regular expressions with squaring requires exponential space. In Proceedings of the 13th IEEE Symposium on Switching
and Automata Theory, pages 125-129, 1972. [pdf]
up vote 6 down vote What an amazing paper this was! Two later papers that discuss refinements of the result include these:
C. Wrathall. Complete sets and the polynomial-time hierarchy. Theoretical Computer Science 3:23-33, 1977.
A. Chandra, D. Kozen, and L. Stockmeyer. Alternation. Journal of the ACM 28(1):114-133, 1981.
That's great. I'd stumbled across the CKS paper but couldn't easily ID the antecedent papers. – Steve Huntsman Aug 27 '10 at 18:14
add comment
Whenever you have quantifier alternation, you can think of it as a sort of game: one player picks what happens at each universal quantifier, and the other player picks the values at each
existential quantifier. The existential player wins if the inner formula at the end is true, the universal player wins if it is false. The whole formula will be true exactly when the
existential player has a winning strategy.
Different complexity classes correspond to different types of formulas. For example, any language in NP can be represented as the strings y so the $\exists x. \phi(x,y)$, where the length of
up vote $x$ is polynomial in the length of $y$ and $\phi$ can be computed in polynomial time. So NP corresponds to the games where the existential player can make a move that wins immediately.
4 down Working up the polynomial hierarchy, you get games (determined by the input) where the existential player always wins in 2 moves, 3 moves, etc., (where each move still has to be polynomial in
vote size). And a language in the whole polynomial hierarchy is the collection of games where there is some fixed n, so that the game is always won by the existential player in n moves. In
contrast, PSPACE is the games where the number of move can vary as long as it is polynomial in the length of y. (And the different formulas have to arise in some reasonably uniform way). This
is how I usually understand why PH is contained in PSPACE, but then, I am a logician.
Thanks, this is a nice way of putting it. – Steve Huntsman Aug 27 '10 at 18:14
add comment
I'll take a shot at explaining this. The canonical problem in PH is a problem of this sort: $\exists x_1 \forall x_2 \ldots \exists x_k f(x_1,x_2,\ldots,x_k)$. (The last quantifier is
exists or forall depending on whether k is odd or even. Let's take it to be exists for this example.)
The idea is to imagine two players, lets call them Eve (Player 1) and Adam (Player 2). (The names Eve and Adam were chosen so that Eve corresponds to the exists operator, and Adam
corresponds to the forAll operator.)
Imagine that $x_1,x_3,\ldots$ describe Eve's moves in this two player game, and $x_2,x_4,\ldots$ describe Adam's moves. The function $f(x_1,x_2,\ldots,x_k)$ evaluates if these moves lead to
up vote 2 a win for Eve ($f=1$) or a loss ($f=0$). There is no possibility of a draw.
down vote
Now the idea is simple. The value of the boolean expression $\exists x_1 \forall x_2 \ldots \exists x_k f(x_1,x_2,\ldots,x_k)$ exactly tells us if Eve has a winning strategy or not. In
words, the boolean expression says this: "Is there a first move ($x_1$) that Eve can play so that no matter what Adam plays in his second move ($x_2$), there exists a move for Eve ($x_3$),
so that no matter what Adam plays ........ there exists a $k^{th}$ move for Eve ($x_k$) so that she wins (i.e., $f(x_1,x_2,\ldots,x_k) = 1$)."
So the Boolean expression exactly expresses whether Eve has a winning strategy in this two player game.
add comment
Not the answer you're looking for? Browse other questions tagged computational-complexity reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/36903/where-does-the-game-theoretic-characterization-of-ph-come-from/36910","timestamp":"2014-04-21T00:06:39Z","content_type":null,"content_length":"60710","record_id":"<urn:uuid:198c92c4-0cf9-4946-80e3-07398bd2bc35>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pikesville Math Tutor
Find a Pikesville Math Tutor
...Being a chameleon, I adapt to each learner, discovering what makes him/her excited, and base my lessons on his/her interests. Humor is everything in making lessons fun, and I appreciate quirky,
off-the-wall kids. I have taught for 25 years, and love tutoring.I majored in Latin and my specialty is poetry, especially Republican.
33 Subjects: including prealgebra, English, reading, GRE
...In Mathematics, my tutoring experience includes: - Arithmetic - Pre-algebra - Algebra I & II - Plane & Analytic Geometry - Trigonometry - Probability & Statistics - Number Theory - Calculus -
Differential Equations -- Ordinary and Partial - Real & Complex Analysis - Numerical Analysis In the Sc...
39 Subjects: including calculus, ACT Math, SAT math, probability
I taught for 38 years before I retired. I have worked with students with all abilities and really enjoyed each student. In one year I worked with students from students whose IQ was higher than
mine and students who were less able.
8 Subjects: including algebra 1, vocabulary, grammar, prealgebra
...In my 1st grade placement with dyslexic students,I assisted with students one-on-one in phonics instruction. Over the years, I have gathered numerous lesson plans, worksheets, and materials for
phonics instruction. I am confident, as an early literacy educator, I can help your child with phonics :] Studying is not something you do JUST the night before you take the test.
10 Subjects: including prealgebra, reading, grammar, elementary (k-6th)
...My name is Christina and I am currently the global music specialist and general music teacher at a charter school in Baltimore City. I have earned both my B.M. in Music Education and Master's
in Teaching for K-12. I have been teaching for 2 years.
19 Subjects: including algebra 1, SAT math, prealgebra, reading | {"url":"http://www.purplemath.com/Pikesville_Math_tutors.php","timestamp":"2014-04-19T02:15:13Z","content_type":null,"content_length":"23767","record_id":"<urn:uuid:734aa840-8156-485f-8914-48fb4b6328d0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Todo Telling Time
Todo Telling Time is available on Apple and Amazon App Stores.
Learning to tell time is an important life skill so we created Todo Telling Time to provide playful learning opportunities for children in kindergarten through second grade to learn all aspects of
time telling through fun, interactive mini games. With this app, children will learn to tell time to the hour and minute, calendar concepts, digital time, and the components of a daily schedule.
Telling Time also addresses secondary factors necessary for mastering time telling. These include practice with ordering numbers around a clock face, counting by 5s, elapsed time, and estimates of
time. Each game is unique and engaging, ensuring that children continue to have fun as they learn the necessary skill of telling time.
Todo Telling Time contains six multi-level mini games.
Days and Weeks
Children solve puzzles by arranging days and weeks in order.
Ferris Wheel
Watch as the ferris wheel cars drop to the ground and put them back in the correct order. This game helps children become familiar with the orientation of the numbers on a clock face, hours and
minutes both included as they progress through levels!
Level Descriptions
Level 1 Place four numbers of the 1-hour clock in their respective places on a clock face (e.g. 1-2-3-4-5-6-7-8…)
Level 2 Place the numbers 1-12 (the hours of a clock) around a circle in the correct order
Level 3 Place four numbers of the 5-minute increment clock in their respective places on a clock face (e.g. 5-10-15-20-25…)
Level 4 Place all 5-minute increment numbers of a clock around a clock face in the correct order
Level 5 Place four 5-minute increment numbers around a clock face beginning with the number 2 (e.g. 2-7-12-17-22-27-32-37…)
Time to brush your teeth, eat lunch, play with pup! Set the analog clock to the correct time and watch a fun animation that corresponds with that time of the day!
Level Descriptions
Level 1 Adjust the hour-hand of an analog clock to match its numerical equivalent
Level 2 Adjust the minute-hand of an analog clock to match its numerical equivalent in 15-minute increments
Level 3 Adjust the hour-hand of an analog clock to represent a given time in future up to four hours ahead (e.g. It’s 1:00. Let’s see what happens in four hours).
Level 4 Adjust the minute-hand of an analog clock to match its numerical equivalent in 5-minute increments
Level 5 Adjust the hour and minute-hands of an analog clock to represent a given time in the past or future (e.g. Set the clock to 30 minutes past 9:30).
Train Time
What time will the train leave the station? Use the number tiles to set the appropriate time.
Level Descriptions
Level 1 Place a number in the hour-component of a digital clock to correspond with a given analog clock.
Level 2 Place numbers in the hour and minute-components (in 15-minute increments) of a digital clock to correspond with a given analog clock.
Level 3 Place individual digits in a digital clock to correspond with a given analog clock (given five options).
Level 4 Place individual digits in a digital clock to correspond with a given analog clock in increments of five minutes (given all ten number options).
Level 5 Place individual digits in a digital clock to correspond with a given analog clock in increments of one minute (given all ten number options).
Time Quiz
Each level contains a different focus as children put their time telling skills to the test in this multiple choice game.
Level Descriptions
Level 1 Approximate what time of day certain daily activities occur
Level 2 Approximate the length of various daily activities
Level 3 Determine the quantity of time between two given times.
Level 4 Determine how many hours have elapsed between two analog clocks
Level 5 Determine how many hours and minutes–in 5-minute increments–have elapsed between two analog clocks
Children will build short-term memory and sequencing skills as they watch and re-create the numbers in the order they are flashed in this fun cuckoo clock game.
Level Descriptions
Level 1 Repeat a sequence of three numbers on a clock face after seeing the sequence
Level 2 Repeat a sequence of three numbers on a clock face after seeing the numbers only.
Level 3 Repeat a sequence of three numbers on a clock face after seeing the sequence (without numbers showing)
Level 4 Repeat a sequence of three numbers on a clock face after seeing the numbers only (and no numbers showing on the clock face)
Common Core State Standards
In the Common Core State Standards for Mathematics the topic of telling time is introduced in the 1st grade with telling time to the hour & half-hour and culminating in 3rd grade with telling time to
the nearest minute. We noted that the topic is spread over 3 years of school and not a major focus in any grade. For students who struggle or take longer to learn this concept, there is a real risk
of the class moving on before he or she has mastered reading the clock. To address this risk, we created Todo Telling Time with multiple game levels to provide just the right amount of challenge and
support so that students who struggle to learn this concept can master it at their own pace.
Students with Learning Differences
“All means all” is an expression we embrace at LocoMotive Labs and is the inspiration behind our choice of Todo. Todo, meaning “all” in Spanish, is our way of communicating the inclusion of children
with learning differences in the apps’ design and development. We define learning differences to include deficits in auditory and visual processing, short-term memory, language delay or
under-developed fine motor skills. Students with formal diagnosis can include, but are not limited to students with LD, ASD, ADD/ADHD, Dyspraxia, and Down syndrome. | {"url":"http://locomotivelabs.com/todo-math/todo-telling-time/","timestamp":"2014-04-18T00:53:26Z","content_type":null,"content_length":"36908","record_id":"<urn:uuid:254108e3-5dc2-4741-95fe-6bb3d53edca1>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math and English: free math work sheets with word problems and word stories written in understandable language, simple vocabulary and recognizable situations
Free and Printable Math Word Problems
Word problems make math meaningful and involve number sense, creativity and conceptual understanding. Our math word stories are great for remedial maths or for tutoring purposes.
Grade 2 Word Problems
Addition and Subtraction
- Numbers up to 1,000
Addition and Subtraction
- Numbers up to 1,000
Multiplication tables up to 10
Multiplication tables up to 10
- Basic Multiplication Tables
Numbers up to 1,000
- Units of Measurement
Numbers up to 1,000
- Units of Measurement
Mixed Topics
- Numbers up to 1,000
Mixed Topics
- Numbers up to 1,000
Numbers up to 1,000
Numbers up to 1,000
Numbers up to 1,000
To the nearest 5 minutes
To the nearest 5 minutes
Volume in Liters
Volume in Liters
Basic Division math problems
- Solve these math worded problems by reading, analyzing and solving the division problems. Find the number sentences and show your working. Great grade 2 remedial or extra practice math worksheet.
Basic Division math problems
- Solve these math worded problems by reading, analyzing and solving the division problems. Find the number sentences and show your working. Great grade 2 remedial or extra practice math worksheet.
Here some examples of our word problem worksheets:
Read here
why math word problems can be very challenging for both native English and ESL students.
Our worksheets with word problems are made by ESL math teachers and used in the classroom. We believe that math stories are very difficult for second language learners. The students have to put
effort in understanding both the language and mathematical components.
Our worksheets use a very basic and simple vocabulary with words that most students will understand. The idea behind this is that the kids will focus on the math and problem solving rather than
digging through their dictionaries.
Children who are not native speakers of English can come from many different backgrounds and we believe that also culturally biased math problems can lead to misunderstanding and that the math
problems will become unnecessary difficult.
Please feel free to print our math worksheets and use them in your classes. The learning materials can be downloaded in pdf form. Please don't hesitate to contat us if you have any suggestions or
questions about our math materials.
Want to know about our new material? Follow us on Facebook. | {"url":"http://www.mathinenglish.com/menuWordProblems.php","timestamp":"2014-04-20T17:44:19Z","content_type":null,"content_length":"27833","record_id":"<urn:uuid:53127852-f747-419b-928a-d517b41bc939>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
New Orleans Saints - blackandgold.com - City of Los Angeles High School Math Proficiency Exam
WhoDat!656 02-24-2012 06:21 PM
City of Los Angeles High School Math Proficiency Exam
NAME ____________________
GANG NAME _________________
TAG ____________________
HOOD ____________________
1). Little Johnny has an AK 47 with a 30 round clip. He usually misses 6 out of every 10 shots and he uses 13 rounds per drive-by shooting. How many drive-by shootings can Little Johnny attempt
before he has to reload?
2). Jose has 2 ounces of cocaine. If he sells an 8 ball to Antonio for $320 and 2 grams to Juan for $85 per gram, what is the street value of the rest of his hold?
3). Rufus pimps 3 hos. If the price is $85 per trick, how many tricks per day must each ho turn to support Rufus’s $800 per day crack habit?
4). Jerome wants to cut the pound of cocaine he bought for $40,000 to make 20% profit. How many ounces will he need?
5). Willie gets $200 for a stolen BMW, $150 for stealing a Corvette, and $100 for a 4x4. If he steals 1 BMW, 2 Corvettes and 3 4x4’s, how many more Corvettes must he have to steal to have $900?
6). Raoul got 6 years for murder. He also got $10,000 for the hit. If his common-law wife spends $100 per month, how much money will be left when he gets out?
Extra credit bonus: How much more time will he get for killing the ho that spent his money?
7). If an average can of spray paint covers 22 square feet and the average letter is 3 square feet, how many letters can be sprayed with 3 eight ounce cans of spray paint with 20% paint free?
8). Hector knocked up 3 girls in the gang. There are 27 girls in his gang. What is the exact percentage of girls Hector knocked up?
9). Bernie is a lookout for the gang. Bernie has a boa constrictor that eats 3 small rats per week at a cost of $5 per rat. If Bernie makes $700 a week as a lookout, how many weeks can he feed the
boa on one week’s income?
10). Billy steals Joe’s skateboard. As Billy skates away at 35 mph, Joe loads his 357 Magnum. If it takes Joe 20 seconds to load his magnum, how far away will Billy be when he gets whacked?
saintfan 02-24-2012 06:28 PM
Well, I didn't really 'get' Algebra until I was in industrial electronics and HAD to know the answer else I get killed.
Sometimes you gotta come at stuff from a different angle. This probably isn't real, but it oughta be.
All times are GMT -5. The time now is 02:02 AM.
Copyright 1997 - 2013 - BlackandGold.com | {"url":"http://blackandgold.com/ee/41473-city-los-angeles-high-school-math-proficiency-exam-print.html","timestamp":"2014-04-17T06:56:39Z","content_type":null,"content_length":"6698","record_id":"<urn:uuid:9b14867e-eb77-41f7-a70c-6e2136714b76>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Bending of Beams - Deflection - Stress - Strain
1. The problem statement, all variables and given/known data
A beam of lengh is supported in a clamp system. distance between clamp is 1m. Loads applied at 250mm and 750mm. strain guage fitted at 175mm with 100ohm resistance. Weight is applied at points a and
2. Relevant equations
Dont know?
3. The attempt at a solution
I am in need of the formulas please. I need to find out the theory for deflection, stress and strain and any other information i can extract. | {"url":"http://www.physicsforums.com/showpost.php?p=1932005&postcount=1","timestamp":"2014-04-20T03:27:09Z","content_type":null,"content_length":"8956","record_id":"<urn:uuid:03db91c8-81f8-447f-80db-af8810af89b2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: The Church Thesis
V. Sazonov V.Sazonov at doc.mmu.ac.uk
Tue Jan 30 08:12:08 EST 2001
Comments of Alasdair Urquhart on CT seem to me
quite reasonable. I would only say that whatever
proof of CT we consider it will be just a proof of
equivalence of two mathematical notions (like
holomorphic functions = analitic ones.) "Eventually"
such things as CT are devoted to relate intuition
and formalization.
On the other hand, let us consider a version of
Bounded Arithmetic. Let me recall that exponential
is not provably total there. We can easily formalize
the notion of Turing machine in BA. No exponential
is needed for this. If to do everything sufficiently
carefully, then we can construct universal TM and
prove in BA all the usual properties of UTM such as
s-m-n property and therefore to prove the ordinary
fixed point (i.e., actually, recursion theorem).
Cf. my paper
"An equivalence between polynomial constructivity of
Markov's principle and the equality P=NP",
{\em Siberian Advances in Mathematics\/}, 1991, v.1,
N4, pp. 92-121 (This is English translation of a paper
in Russian published in Matematicheskaja logika i
algoritmicheskie problemy, Trudy Instituta matematiki
SO AN SSSR, 12, Novosibirsk, ``Nauka'', Sibirskoe
otdelenie, 1989, pp. 138--165.)
Thus, we have the notion of Turing computability in the
framework of BA, we can associate with it a corresponding
CT (even postulate a formal CT in the framework of an
intuitionistic version of BA), but we cannot say that 2^n,
as we understand it in the ordinary arithmetic, is computable.
It is only *partial recursive*.
I am wandering, why CT is usually considered in an
absolute (Platonistic?) way without relating it to
any formal theory which fixes a framework where we
can speak on any (computable) functions at all.
Even in the framework of PA or ZFC the extent of the
class of (total) computable functions is different.
What can TM do depends on a corresponding theory.
Moreover, if consider a weaker theory than BA, something
like feasible numbers theory (cf. corresponding my paper
available via
then it is more difficult to say without special
investigation what may happen with TM and CT. Say, the
notion of computability may split and we probably will
have several versions of CT.
Finally, let me ask, what is the difference between
CT and any other situation when mathematicians found
a formalization of an informal concept (like continuity
in terms of epsilon-delta or topological space) and
assert a deep satisfaction concerning the relation
between intuition and its formalization? What is so
special here about CT? I think - nothing, expect our
special interest to computability rather than, say,
to continuity notion.
Vladimir Sazonov
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2001-January/004676.html","timestamp":"2014-04-21T05:58:39Z","content_type":null,"content_length":"4993","record_id":"<urn:uuid:1fc175d8-17e1-404a-a0df-83ec4c6b912c>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding the 2nd smallest value in an Array
Oct 6, 2012, 15:54
Finding the 2nd smallest value in an Array
min(array(48, 9, 52, 183));
I can find the smallest value in an Array with the code above which is helped by guido2004 and logic_earth in the former post.
Now I like to find the 2nd smallest value which is 48 in the Array.
I guess I have 2 ways.
The 1st way is that finding the 2nd smallest value directly if there is any code or function in PHP.
The other way is that creates a new Array with the removing or deleting the smallest value which is 9 in the Array.
The new Array will be newArray[48, 52, 183) and the code below will find my target value 48.
min(array(48, 52, 183));
How can I find the 2nd smallest value directly(the 1st way)?
How can I get remove an array value which is 9 for finding the second smallest value(the other way)?
How can I get my target result?
Oct 6, 2012, 16:21
PHP Code:
sort( $array, SORT_NUMERIC );
$smallest = array_shift( $array );
$smallest_2nd = array_shift( $array );
Oct 6, 2012, 17:20
Thank you very much, logic_earth.
Oct 7, 2012, 04:40
use sort() and access the 2nd array item.
PHP Code:
$e = array($a, $b, $c, $d);
sort( $e );
echo $e[1]; // 48 | {"url":"http://www.sitepoint.com/forums/printthread.php?t=891802&pp=25&page=1","timestamp":"2014-04-21T03:49:25Z","content_type":null,"content_length":"7960","record_id":"<urn:uuid:ca1d4e76-f64f-4a4d-b106-576fd7a1d8ec>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00300-ip-10-147-4-33.ec2.internal.warc.gz"} |
GPS and Galileo Satellite Coordinates Computation
If you wish to contribute or participate in the discussions about articles you are invited to join Navipedia as a registered user
GPS and Galileo Satellite Coordinates Computation
From Navipedia
Title GPS and Galileo Satellite Coordinates Computation
Author(s) J. Sanz Subirana, J.M. Juan Zornoza and M. Hernández-Pajares, Technical University of Catalonia, Spain.
Level Intermediate
Year of Publication 2011
Next table 1 provides the GPS or Galileo broadcast ephemeris parameters to compute their satellite coordinates at any observation epoch. These parameters are periodically renewed (typically every 2
hours for GPS and 3 hours for Galileo) and must not be used out of the prescribed time (about four hours), because the extrapolation error grows exponentially beyond its validity period.
The algorithm provided is from the [GPS/SPS-SS, table 2-15] ^[footnotes 1].The Galileo satellites follow a similar scheme
In order to compute satellite coordinates from navigation message, the algorithm provided as follows must be used. An accuracy of about 5 meters (RMS) is achieved for GPS satellites with S/A=0ff and
several tens of meters with S/A=on ^[footnotes 2]:
• Compute the time t[k] from the ephemerides reference epoch t[oe] (t and t[oe] are expressed in seconds in the GPS week):
t[k] = t − t[oe]
If $t_k>302\,400$ sec, subtract $604\,800$ sec from t[k]. If $t_k< -302\,400$ sec, add $604\,800$ sec.
• Compute the mean anomaly for t[k],
$M_k=M_o+\left( \frac{\sqrt{\mu }}{\sqrt{a^3}}+\Delta n\right)t_k$
• Solve (iteratively) the Kepler equation for the eccentricity anomaly E[k]:
M[k] = E[k] − esinE[k]
• Compute the true anomaly v[k]:
$v_k=\arctan \left( \frac{\sqrt{1-e^2}\sin E_k}{\cos E_k-e}\right)$
• Compute the argument of latitude u[k] from the argument of perigee ω, true anomaly v[k] and corrections c[uc] and c[us]:
$u_k=\omega +v_k+c_{uc}\cos 2\left( \omega +v_k\right) +c_{us}\sin 2\left( \omega +v_k\right)$
• Compute the radial distance r[k], considering corrections c[rc] and c[rs]:
$r_k=a\left( 1-e\cos E_k\right) +c_{rc}\cos 2\left( \omega +v_k\right) +c_{rs}\sin 2\left( \omega +v_k\right)$
• Compute the inclination i[k] of the orbital plane from the inclination i[o] at reference time t[oe], and corrections c[ic] and c[is]:
$i_k=i_o+\stackrel{\bullet }{i} t_k+c_{ic}\cos 2\left( \omega +v_k\right) +c_{is}\sin 2\left( \omega +v_k\right)$
• Compute the longitude of the ascending node λ[k] (with respect to Greenwich). This calculation uses the right ascension at the beginning of the current week (Ω[o]), the correction from the
apparent sidereal time variation in Greenwich between the beginning of the week and reference time t[k] = t − toe, and the change in longitude of the ascending node from the reference time t[oe]:
$\lambda _k=\Omega _o+\left( \stackrel{\bullet }{\Omega }-\omega _E\right) t_k-\omega _E t_{oe}$
• Compute the coordinates in TRS frame, applying three rotations (around u[k], i[k] and λ[k]):
$\left[ \begin{array}{c} X_k \\ Y_k \\ Z_k \end{array} \right] ={\mathbf R}_3\left( -\lambda _k\right) {\mathbf R}_1\left( -i_k\right) {\mathbf R}_3\left( -u_k\right) \left [ \begin{array}{c}
r_k \\ 0 \\ 0 \end{array} \right]$
where ${\mathbf R}_1$ and ${\mathbf R_3}$ are the rotation matrices defined in Transformation between Terrestrial Frames.
1. ^ [GPS/SPS-SS], DoD, USA, Global Positioning System Standard Positioning Service Performance Standard. http://www.navcen.uscg.gov/pubs/gps/sigspec/gpssps1.pdf, 1995.
2. ^ Actually, the S/A was mainly applied to the satellite clocks and, apparently, not so often to the ephemeris. | {"url":"http://www.navipedia.net/index.php?title=GPS_and_Galileo_Satellite_Coordinates_Computation&oldid=11246","timestamp":"2014-04-18T05:34:48Z","content_type":null,"content_length":"29800","record_id":"<urn:uuid:4b76a077-df93-4099-9ec0-529e7dd7c7bc>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00414-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pass arguments to a function from each row of a matrix
up vote 0 down vote favorite
I created this function:
nDone<- function (under,strike,ttoe,vol,rf,dy) {
return(pnorm(((log(under/strike) + (rf-dy+(vol^2)/2)*ttoe)/(vol*(ttoe^0.5)))))
[1] 0.6174643
So far that's fine and works. Now I want the function to be applied to each row of a matrix.
b<- c(90,95,100,100,3,2,0.17,0.18,0.05,0.05,0,0)
dim(b) <- c(2,6)
Which gives:
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 90 100 3 0.17 0.05 0
[2,] 95 100 2 0.18 0.05 0
So now I want to pass the elements in each row to the function. I've tried using apply:
And get the following error:
Error in under/strike : 'strike' is missing
I've also tried:
I get the following error:
Error in under/strike : 'strike' is missing
What I want is multiple results of the function. What am I doing wrong here?
Um....this is exactly the same as your last question. – joran Feb 10 '12 at 23:33
Hi Joran, I know it looks the same and I'm certainly not trying to waste anyone's time. In this question now I'm trying to pass each row of a matrix to a function. Again thanks for reviewing. –
user1181337 Feb 10 '12 at 23:43
Yes. @joran's answer (actually a comment) to the previous question should work to answer this one as well. He left a little bit of room for struggle/interpretation on your part. I would suggest
that you take a look at his comment, see if you can understand what to do with it, then (if you can't) come back and ask for clarification there ... – Ben Bolker Feb 10 '12 at 23:59
Another route you could go down would be something like this: nDone(b[,1],b[,2],b[,3],b[,4],b[,5],b[,6]), since nDone is already vectorized. Or convert the matrix to a list with each column being
an element and use the solution from the previous question. – joran Feb 11 '12 at 0:09
add comment
2 Answers
active oldest votes
This should work:
apply(b, 1, function(x)do.call(nDone, as.list(x)))
up vote 4 down vote
accepted What was wrong with your version is that through apply(), your nDone() function was getting the whole row as a single argument, i.e., a vector passed under "strike" and nothing
for the other arguments. The solution is to use do.call().
Thanks Flodel! And thanks to all who posted! – user1181337 Feb 11 '12 at 6:59
add comment
It is worth mentioning that, if you wanted to bind the results of the functions to the original matrix, you could use mdply from plyr
> library(plyr)
> mdply(b, nDone)
up vote 3 down vote X1 X2 X3 X4 X5 X6 V1
1 90 100 3 0.17 0.05 0 0.6174643
2 95 100 2 0.18 0.05 0 0.6249916
add comment
Not the answer you're looking for? Browse other questions tagged r or ask your own question. | {"url":"http://stackoverflow.com/questions/9236306/pass-arguments-to-a-function-from-each-row-of-a-matrix","timestamp":"2014-04-18T02:18:19Z","content_type":null,"content_length":"74009","record_id":"<urn:uuid:ed6923c6-8ad8-4677-aab3-b03f4bf4e12e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
Casting PWS during spirit shell.
2013-05-24, 02:27 PM #1
Stood in the Fire
Join Date
Aug 2012
Casting PWS during spirit shell.
According to my calculations and confirmed by game testing it is possible to increase the absorption stack you build during spirit shell by using PWS, while spirit shell is active. This is very
dependent on your haste.
H = (1+(hasterating)/4.25)*PI, where PI = 1 if you have PI up or 0 otherwise
M = %mastery from tooltip expressed as a fraction of 1
Tn = Spirit shell duration time available for casting non borrowed time spirit shell PoH after casting n PWS
Xn = total number of spirit shell PoH casts when casting n PWS
An = total absorb value of spirit shell + n PWS
BT = 1 if you have borrowed time up before activating spirit shell or 0 otherwise
PoH spirit shell absorb*5 = k
PWS absorb = k*0.42*(1+M)/(1+0.5M)
The decrease in casting time for PoH due to borrowed time is 0.15*2.5/(1.05*H), thus we can assume all casts of PoH have the same cast time and just increase the total casting by by this factor
for each borrowed time proc.
Xn = int[Tn/2.5*(1.05*H)], where int is a function that returns the integer part of a real number
An = k*[Xn + n*0.42*(1+M)/(1+0.5M)]
Thus maximising the absorption buffer you can create during spirit shell requires maximising [Xn + n*0.42*(1+M)/(1+0.5M)]
This assumes that you are pre-shielding i.e. the raid is not currently taking high levels of damage, so you are just interested in the amount of absorb you can put on top of ppl in anticipation
to the damage spike. Under conditions of continuous damage PWS/PoH 1:1 is automatically better HPS than straight PoH spam anyway.
Tn = 15 +(0.15*2.5)/(1.05*H)*BT - n*(1.5-0.15*2.5)/(1.05*H) = 15-(n*(1.5-0.15*2.5) - 0.15*2.5*BT)/(1.05*H)
Xn = int(6*1.05*H +0.15*BT - 0.45*n) = int(6.3*H + 0.15*BT - 0.45*n)
Xn = 6 with n = 1, requires H = 1, but that is with 0 loss of time from latency. For me I need at least 3% haste in order to get 6 spirit shell casts with zero haste so all values need to have 3%
added to them to account for this.
Xn = 6 with n = 2 requires H = 1.0715
That means 7.15% unbuffed haste allows you to cast 2 extra PWS during spirit shell, which means it raises the amount of absorb by 0.9 of a full spirit shell cast at 25% mastery
Xn = 7, with n = 0 requires H = 1.0874
Xn = 6 with n = 3 requires H = 1.143
Xn = 7 with n = 1 requires H = 1.159
Xn = 7 with n = 2 requires H = 1.231
Xn = 8 with n = 0 requires H = 1.247
Thus in general there is an intermediate breakpoint for the absorb with n = 1.
With PI on
H needs to be divdied by 1.2, which means anything with H below 1.2 is baseline.
Xn = 8, n = 0 requires H = 1.039
Xn = 8, n = 1 requires H = 1.098
Hence up to 10% unbuffed haste + a latency factor (this is ~3% for me) allows you to add 7 casts of spirit shell baseline and 8, n=1 casts during spirit shell.
Seems quite obvious considering that there's very few haste amounts where you can fit in exactly X PoH's without any spare time, simply spending that spare time on shields (and said spare time
will then be increased slightly by the borrowed time) is obviously more efficent than to start a PoH cast that'll be finished after SS has expired.
However unless you can fit in two PW:S the difference between doing this and just throwing a shield (or something else, a solace/penance might be more useful) at the end of your SS (and speeding
up your next cast after) should be very slim, so I don't really see any reason to gear specifically for any haste amount based on this, but it might be useful to keep in mind if you end up with
said haste anyways.
In regards to the speciifc breakpoint where you can fit in 2 shields you are already quite close to the next PoH during shell, so wouldn't it simply be more convenient/useful to gear for that
breakpoint instead since that's a far more efficent way to gain roughly the same amount of absorbs. If you just care about HPS without any regard for mana you should just weave PW:S->PoH->PW:S
during shell either way.
To me shadowfiend breakpoints are the only ones worth considering for 10 man, for 25 these breakpoints may have some use but I doubt you'd care more about these than a simple SS/PoH breakpoint. I
usually think you make some great posts, but this just feels like you came up with an idea that you like way too much. To me the actual value of these breakpoints are incredibly slim and I'd
never consider gearing for them.
Last edited by Cookie; 2013-05-24 at 03:41 PM.
I think you missed the points. Stopping your spirit shell cast early and then going to PWS spam, is only slightly less effective that using the correct number of PWS between PoH casts, in terms
of theoretical throughput, but it is completely different functionally:
Knowing that you can add 1 PWS in the middle of your PoH casts means that if you are interrupted for whatever reason and you have to move you can throw a PWS out and not lose a spirit shell cast.
If you are below the breakpoint maybe you can spare 1s to move, but you might not have the time to cast a PWS. The result of NOT knowing where you are makes the difference between zero time lost
or replacing one PoH cast with a PWS.
Adding PWS in the middle of your sequence, also means you have leeway to save someone in danger, who might not be grouped up for spirit shell.
It means more rapture procs
The +2 breakpoint might be close to the extra cast breakpoint, but it is better, because it is less subject to interruption, leads to less wasted time when you active PI and you can spend more of
your budget on crit/mastery.
1:1 PWS-PoH during spirit shell is highly ineffective. You will most likely end up wasting spirit shell uptime.
Whether you choose to gear for these breakpoints or any other breakpoints, is not really the subject of my post. The subject is to tell you what you can cast given how much haste you have.
Knowing that give your haste you can actually hit a PWS mid spirit shell without losing any spirit shell casts is invaluable for me personally, so I make sure to calculate it. Since it changed in
5.3 I recalculated and decided to get a proper formula for it. I posted the results so others can benefit.
I think you missed the points. Stopping your spirit shell cast early and then going to PWS spam, is only slightly less effective that using the correct number of PWS between PoH casts, in terms
of theoretical throughput, but it is completely different functionally:
Knowing that you can add 1 PWS in the middle of your PoH casts means that if you are interrupted for whatever reason and you have to move you can throw a PWS out and not lose a spirit shell cast.
If you are below the breakpoint maybe you can spare 1s to move, but you might not have the time to cast a PWS. The result of NOT knowing where you are makes the difference between zero time lost
or replacing one PoH cast with a PWS.
Normally you just cast an instant regardless if you have to move during shell.
The +2 breakpoint might be close to the extra cast breakpoint, but it is better, because it is less subject to interruption, leads to less wasted time when you active PI and you can spend more of
your budget on crit/mastery.
Sometimes you need to move once, twice, thrice or not at all during shell. Gearing for one of these specific situations isn't something I think is worthwhile, but if so the one assuming zero
movement is the most common occurance.
1:1 PWS-PoH during spirit shell is highly ineffective. You will most likely end up wasting spirit shell uptime.
Hpm wise yes, and so is 2 additional PW:S over an additional PoH.
Whether you choose to gear for these breakpoints or any other breakpoints, is not really the subject of my post. The subject is to tell you what you can cast given how much haste you have.
Knowing that give your haste you can actually hit a PWS mid spirit shell without losing any spirit shell casts is invaluable for me personally, so I make sure to calculate it. Since it changed in
5.3 I recalculated and decided to get a proper formula for it. I posted the results so others can benefit.
Fair enough. I agree that it's useful to know the specific breakpoint and if that was your intention with the post I have nothing to disagree about. I simply got the impression that you advocated
gearing for this specific breakpoint (+2 PW:S) which I really don't think is worthwhile unless you already are incredibly close (if even then).
Normally you just cast an instant regardless if you have to move during shell.
Sometimes you need to move once, twice, thrice or not at all during shell. Gearing for one of these specific situations isn't something I think is worthwhile, but if so the one assuming zero
movement is the most common occurance.
Hpm wise yes, and so is 2 additional PW:S over an additional PoH.
Fair enough. I agree that it's useful to know the specific breakpoint and if that was your intention with the post I have nothing to disagree about. I simply got the impression that you advocated
gearing for this specific breakpoint (+2 PW:S) which I really don't think is worthwhile unless you already are incredibly close (if even then).
If you don't have the right amount of haste then you just shot yourself in the arse by casting an instant. You just wasted a spirit shell cast. Moving out of shit takes less time than a GCD and
you might just make the extra cast. That is why instead of criticizing you should be paying close attention to my post, if you DONT have the haste then it might be better to not cast that instant
as long as you only move a short distance. If you do have the haste, there is no need to bother thinking about that you just cast PWS automatically and you can be sure that you are losing no
Again the point of my post is that you can gear for moving twice and get THE SAME RETURN OR BETTER as gearing for zero PWS casts. Even being able to move once dramatically reduces the time you
will lose from moving twice, since having to move once is far more frequent than having to move twice. Even if you have to move twice at least you lose only one cast not two, which again is much
better. Gearing for these breakpoints either eliminates or reduces your losses from moving. Since it is either slightly lower or higher healing overall what possible reason is there it not gear
for it.
On your last point you made a critical error: 1:1 PWS-PoH outside spirit shell is higher HPS than PoH alone, but during spirit shell it is not because it will lead to you not getting completely
overhealing PoH instead of a spirit shell PoH at the last cast. Adding the 2 PWS during spirit shell leads to effectively zero loss in HPS and it can actually be better HPS depending on your
So basically you are getting almost zero benefit from the extra 800 haste rating you need to get to the 7 casts breakpoint. All you are doing is shooting yourself in the arse by making your
biggest CD extremely inflexible.
Being able to cast a PWS during spirit shell AT ZERO LOSS of spirit shell casts adds an immense amount of flexibility. You can move during spirit shell, you can PWS someone about to die without
losing spirit shell casts, you build an absorb buffer on a group that you can't hit 5 targets in.
Basically getting that extra haste gives you more HPS overall since haste is a great stat and it is much much much better for spirit shell due to the added flexibility. Basically the 1 PWS haste
break point is best for a non-haste build and the 2 PWS haste break point is best for a haste build. The extra cast breakpoint is only best for a full crit build, but anyone going for a full crit
build doesn't really care about optimising their healing.
Last edited by Havoc12; 2013-05-31 at 12:39 PM.
If you don't have the right amount of haste then you just shot yourself in the arse by casting an instant. You just wasted a spirit shell cast. Moving out of shit takes less time than a GCD and
you might just make the extra cast. That is why instead of criticizing you should be paying close attention to my post
Wouldnt the added 15% reduced cast time for the next poh more and less make up for the penalty of casting a shield while having to move anyway, though? Starting to think the differences there are
so minuscule time-wise that latency will bite you in the ass with these calculations, hard.
I already covered a fair bit of the things you mention here in the disc reforging thread, so I'll just add a few things.
If you don't have the right amount of haste then you just shot yourself in the arse by casting an instant. You just wasted a spirit shell cast. Moving out of shit takes less time than a GCD and
you might just make the extra cast. That is why instead of criticizing you should be paying close attention to my post, if you DONT have the haste then it might be better to not cast that instant
as long as you only move a short distance. If you do have the haste, there is no need to bother thinking about that you just cast PWS automatically and you can be sure that you are losing no
Depends on the length of the movement, but if you want to maximize your absorbs like you keep talking about, throwing a PW:S over delaying a cast will be a hps gain regardless of the movement
length (if you then miss a whole PoH for the SS, use more shields in this time). I never disputed that knowing if you can fit in an instant or not without losing a PoH during SS was helpful, I
was saying that it isn't worth gearing for a specific cap since the amount of movement varies depending on the fight.
Again the point of my post is that you can gear for moving twice and get THE SAME RETURN OR BETTER as gearing for zero PWS casts. Even being able to move once dramatically reduces the time you
will lose from moving twice, since having to move once is far more frequent than having to move twice. Even if you have to move twice at least you lose only one cast not two, which again is much
better. Gearing for these breakpoints either eliminates or reduces your losses from moving. Since it is either slightly lower or higher healing overall what possible reason is there it not gear
for it.
Already went over this in the other thread, but why not gear for 3 PW:S+PoHs? 4? Yes, because the increased mana cost. Your choice is arbitary and in most fights you can use SS without moving.
On your last point you made a critical error: 1:1 PWS-PoH outside spirit shell is higher HPS than PoH alone, but during spirit shell it is not because it will lead to you not getting completely
overhealing PoH instead of a spirit shell PoH at the last cast.
Depends on your haste amount, or you could simply replace one PoH/PW:S for essentially the same effect as long as all the PoH's are fit into the SS window.
Adding the 2 PWS during spirit shell leads to effectively zero loss in HPS and it can actually be better HPS depending on your mastery.
Adding 3 PWS during spirit shell leads to effectively zero loss in HPS and it can actually be better HPS depending on your mastery.
Adding 4 PWS during spirit shell leads to effectively zero loss in HPS and it can actually be better HPS depending on your mastery.
etc. The reason why you don't is due to the large mana increase for a negligible hps increase.
So basically you are getting almost zero benefit from the extra 800 haste rating you need to get to the 7 casts breakpoint. All you are doing is shooting yourself in the arse by making your
biggest CD extremely inflexible.
I'm personally not gearing for any SS caps but the gain is quite obvious, mana.
Being able to cast a PWS during spirit shell AT ZERO LOSS of spirit shell casts adds an immense amount of flexibility. You can move during spirit shell, you can PWS someone about to die without
losing spirit shell casts, you build an absorb buffer on a group that you can't hit 5 targets in.
Yes, using other spells during SS than PoH adds flexibility but nullifies any particular haste caps, which is why I don't find them very valuable.
Basically getting that extra haste gives you more HPS overall since haste is a great stat and it is much much much better for spirit shell due to the added flexibility. Basically the 1 PWS haste
break point is best for a non-haste build and the 2 PWS haste break point is best for a haste build. The extra cast breakpoint is only best for a full crit build, but anyone going for a full crit
build doesn't really care about optimising their healing.
Haste is a shitty stat. Yes it's obviously better for SS/PoH than most of our other spells since PoH is a filler and hastes biggest weakness is the fact that it does jack/very little for our cds.
Regarding crit I suppose this should've been talked about in the other thread, but crits additional healing being only DA is definitely an argument for it being stronger than a formula balancing
the two stats indicates (even excluding the dps gain). At some point mastery obviously becomes stronger if you keep stacking crit, but how you should balance them is hardly that obvious.
Regarding overhealing a higher crit amount will generally let your DA shields be refreshed and last longer, it means that you dpsing in low damage situations has some additional benefit (more da
shields up for higher damage periods) and in most (not all) scenarios where they drop without doing anything mastery wouldn't have been that helpful either.
Last edited by Cookie; 2013-05-31 at 01:09 PM.
Depends on the length of the movement, but if you want to maximize your absorbs like you keep talking about, throwing a PW:S over delaying a cast will be a hps gain regardless of the movement
length (if you then miss a whole PoH for the SS, use more shields in this time). I never disputed that knowing if you can fit in an instant or not without losing a PoH during SS was helpful, I
was saying that it isn't worth gearing for a specific cap since the amount of movement varies depending on the fight.
Already went over this in the other thread, but why not gear for 3 PW:S+PoHs? 4? Yes, because the increased mana cost. Your choice is arbitary and in most fights you can use SS without moving.
Depends on your haste amount, or you could simply replace one PoH/PW:S for essentially the same effect as long as all the PoH's are fit into the SS window.
Adding 3 PWS during spirit shell leads to effectively zero loss in HPS and it can actually be better HPS depending on your mastery.
Adding 4 PWS during spirit shell leads to effectively zero loss in HPS and it can actually be better HPS depending on your mastery.
etc. The reason why you don't is due to the large mana increase for a negligible hps increase.
I'm personally not gearing for any SS caps but the gain is quite obvious, mana.
Yes, using other spells during SS than PoH adds flexibility but nullifies any particular haste caps, which is why I don't find them very valuable.
Haste is a shitty stat. Yes it's obviously better for SS/PoH than most of our other spells since PoH is a filler and hastes biggest weakness is the fact that it does jack/very little for our cds.
Regarding crit I suppose this should've been talked about in the other thread, but crits additional healing being only DA is definitely an argument for it being stronger than a formula balancing
the two stats indicates (even excluding the dps gain). At some point mastery obviously becomes stronger if you keep stacking crit, but how you should balance them is hardly that obvious.
Regarding overhealing a higher crit amount will generally let your DA shields be refreshed and last longer, it means that you dpsing in low damage situations has some additional benefit (more da
shields up for higher damage periods) and in most (not all) scenarios where they drop without doing anything mastery wouldn't have been that helpful either.
Casting a PWS and losing a spirit shell cast instead of moving and getting an extra spirit shell cast is a loss of HPS, because when you are setting up spirit shell health bars are mostly full.
Thus any lost spirit shell casts cannot be replaced by PWS casts or PoH casts AFTER spirit shell. Try and model it as much as you want you will see that just moving without casting leads to a
gain in HPS over casting a PWS if you don't have haste above a certain level. Using any other instant is very frequently an HPS loss due to overheal.
There is no 3 and 4 PWS breakpoints. The only breakpoints are 1 PWS, 2 PWS, extra cast. You can potentially squeeze more PWS casts into the sequence, but that is counter productive. You aren't
gaining anything in terms of flexibility and you are no longer able to maintain spirit shell on more than two groups. You really don't want to have less than 6 casts of spirit shell available.
Casting a PWS during spirit shell pays for itself due to increased rapture proc frequency. Even 2 PWS are a neglible mana loss considering you spirit shell at most once a minute.
Haste is a damn good stat. Just mana hungry.
Let us calculate the chance that aegis will be refreshed if someone is hit n times by a heal during the aegis duration
It is simple 1-(1-crit)^n
Lets add some values n = 2 crit = 25% --> 43.75% chance
n = 2 crit = 30% --> 51% chance
n = 2 crit = 35% --> 57.75% chance
n = 2 crit = 40% --> 64% chance
The higher the n value the faster crit loses value. A 60% increase in crit rating is a 46% increase in the refresh rate under conditions when you are PoH spamming 3 groups. Basically the return
is negative, because you are shifting more healing to crits, which means more crits during those times when aegis can be wasted, but the refresh rate does not go up proportionally.
Now lets factor in the chance that someone will be hit by additional spells during spirit shell. We can getting an approximate value by assuming that getting hit a cast has a fixed probability P
then the probability of getting hit by a crit is P*Crit. Thus the chance of refreshing aegis is
for n = 2 crit = 25% P = 10% --> 4.94% chance
n = 2 crit = 30% P = 10% -->5.91% chance
n = 2 crit = 35% P = 10% --> 6.88% chance
I trust that you understand the problem. Basically the only thing that crit really does is reduce your overheal sensitivity, which is of course very good. Mastery increases healing and those two
work best when combined together.
The burst nature of the encounters where most of the healing is concentrated in short bursts means that balanced mastery/crit is always better and that is pretty much inescapable. At the end of
the day though because most of our crit comes from buffs and intellect, even if you gear for as much crit as you can you won't get far enough from the breakpoint to make the difference
---------- Post added 2013-05-31 at 04:58 PM ----------
It doesn't. You need 5% haste on top of the latency threshold to do that and the latency threshold can be quite high. For me it is 3% haste. Basically if you have less than 1k haste rating at a
latency of 40ms or so you will fail to get an extra cast if you cast a PWS when moving, but you actually can spare nearly 1s for moving and still get a cast off.
I tested this a lot and it works exactly once you add the latency threshold.
There is no question that having that small amount of extra haste is immensely beneficial and it is good to get enough haste to fit 1 PWS into spirit shell even if you don't really want haste
(unless of course you are going for full crit). It costs neither mana, nor HPS but it is an immense gain in flexibility.
Even if one wants to go for extra haste there is no reason to hit the extra cast threshold, the two PWS threshold is what one needs to aim for.
Casting a PWS and losing a spirit shell cast instead of moving and getting an extra spirit shell cast is a loss of HPS, because when you are setting up spirit shell health bars are mostly full.
Thus any lost spirit shell casts cannot be replaced by PWS casts or PoH casts AFTER spirit shell. Try and model it as much as you want you will see that just moving without casting leads to a
gain in HPS over casting a PWS if you don't have haste above a certain level. Using any other instant is very frequently an HPS loss due to overheal.
Obviously one PW:S won't replace one PoH SS cast, but two however pretty much will and takes about as much time, or even slightly less with BT used (if this was untrue your entire basis for 6
PoH+2 PW:S being as good as 7 PoH would be incorrect). One PoH SS cast can essentially be replaced by 2 PW:S casts if you need to move, at the cost of mana.
There is no 3 and 4 PWS breakpoints. The only breakpoints are 1 PWS, 2 PWS, extra cast. You can potentially squeeze more PWS casts into the sequence, but that is counter productive. You aren't
gaining anything in terms of flexibility and you are no longer able to maintain spirit shell on more than two groups. You really don't want to have less than 6 casts of spirit shell available.
Yes you are gaining something, the ability to move even more during SS which is the entire premiss for your argument. Why is it worth so much to move one or two globals during SS but nothing to
move three globals? If less than 6 SS is an issue depends on how far away the damage intake is, but the SS that falls off the earliest if you use 5 instead of 6 PoHs falls off about 3 seconds
earlier, that'll rarely be a big deal unless you screwed up your SS timing entirely.
Casting a PWS during spirit shell pays for itself due to increased rapture proc frequency. Even 2 PWS are a neglible mana loss considering you spirit shell at most once a minute.
and for 3 instead of 2 PW:S you pay the additional cost of one PW:S once a minute, not a big deal either.
Haste is a damn good stat. Just mana hungry.
and since that forces you to spend extra stats into spirit to compensate it can hardly be considered such an amazing stat.
Let us calculate the chance that aegis will be refreshed if someone is hit n times by a heal during the aegis duration
It is simple 1-(1-crit)^n
Lets add some values n = 2 crit = 25% --> 43.75% chance
n = 2 crit = 30% --> 51% chance
n = 2 crit = 35% --> 57.75% chance
n = 2 crit = 40% --> 64% chance
The higher the n value the faster crit loses value. A 60% increase in crit rating is a 46% increase in the refresh rate under conditions when you are PoH spamming 3 groups. Basically the return
is negative, because you are shifting more healing to crits, which means more crits during those times when aegis can be wasted, but the refresh rate does not go up proportionally.
The refresh rate goes up slightly slower but the shields being refreshed will also be larger with the high crit rate. Not really sure if this is significant enough to lower it's value that much.
Now lets factor in the chance that someone will be hit by additional spells during spirit shell. We can getting an approximate value by assuming that getting hit a cast has a fixed probability P
then the probability of getting hit by a crit is P*Crit. Thus the chance of refreshing aegis is
for n = 2 crit = 25% P = 10% --> 4.94% chance
n = 2 crit = 30% P = 10% -->5.91% chance
n = 2 crit = 35% P = 10% --> 6.88% chance
I trust that you understand the problem.
Not really following you here, I'm tired with a fever, feel free to clarify what you mean.
Basically the only thing that crit really does is reduce your overheal sensitivity, which is of course very good.Mastery increases healing and those two work best when combined together.
I'm well aware of both these things, that's kinda my point. I don't think that blindly stacking crit and ignoring mastery will yield the maximum hps, but in realistic situations I think more crit
and less mastery than what a forumla showing the theoretical max indicates, exactly because crit has a lower chance of ending up with overhealing.
The burst nature of the encounters where most of the healing is concentrated in short bursts means that balanced mastery/crit is always better and that is pretty much inescapable.
That's kinda why I feel like crit has a slightly higher value, that means that a slightly bigger portion of the atonement spells I'm spamming when a very low amount of damage is going out will
come in handy.
At the end of the day though because most of our crit comes from buffs and intellect, even if you gear for as much crit as you can you won't get far enough from the breakpoint to make the
difference noticeable.
Considering that it's one of our premier output stats I fail to see how it isn't noticable.
Last edited by Cookie; 2013-05-31 at 04:24 PM.
Obviously one PW:S won't replace one PoH SS cast, but two however pretty much will and takes about as much time (if this was untrue your entire basis for 6 PoH+2 PW:S being as good as 7 PoH would
be incorrect). One PoH SS cast can essentially be replaced by 2 PW:S casts if you need to move, at the cost of mana.
Yes you are gaining something, the ability to move even more during SS which is the entire premiss for your argument....
and for 3 PW:S you pay the additional cost of one PW:S once a minute, not a big deal either.
and since that forces you to spend extra stats into spirit to compensate it can hardly be considered such an amazing stat.
The refresh rate goes up slightly slower but the shields being refreshed will also be significantly larger with the high crit rate.
I'm well aware of both these things, that's kinda my point. I don't think that blindly stacking crit and ignoring mastery will yield the maximum hps, but in realistic situations I think more crit
and less mastery than what a forumla showing the theoretical max indicates, exactly because crit has a lower chance of ending up with overhealing.
Considering that it's one of our premier output stats I fail to see how it isn't noticable.
Nope, my premise is that 6 casts + 2 PWS is better than 6 casts and 1s of the spirit shell duration wasted. Not that you can replace 1 PoH with 2 PWS. That does not work. You need extra haste to
do that.
You can add an infinite amount of PWSs and claim that it is just 1 more. 1 PWS is for partially offset by increased rapture. 2 PWS is one more, 3 PWS is 2 more. Every PWS you add makes the mana
loss tougher and tougher to bear.
6 POH with 1% more healing from extra crit/mastery + 0.91 of a PoH from the PWS means that you have the equivalent of 6.97 PoH casts compared to 7 with the extra haste. That does not work out the
same way when you use more than 2 PWS.
However 4 PWS and 4 PoHs with 1% more healing are 6.84 PoH casts compared with 7 casts.
Significantly larger shields also leads to more overcapping and wastage. Having more crit leads overall to bigger wasting of aegis. This is only partially offset by the higher refresh rate, in a
senario where everyone gets hit by one of your heals at least once every 15s. In a situation where the chance of that happening is only a tiny proportion of your shields is refreshed, but the
refresh is roughly proportional to crit.
Either the return from refreshing is negative or rather small. Basically crit hits the wall of negative returns pretty hard and the higher refresh rate does not really do anything. Most of the
aegis you put during a low damage phase when atonement healing is on the tank.
Basically refresh offers no real advantage to crit.
If you have 10k crit rating that is 16% crit. You get [edit] 14-15% from int and the crit buff [/edit]. The benefit of your entire crit rating is 10-14% more HPS at most. The differences from
pretty much any random combination of secondary stats is not that noticeable. It becomes more noticeable if a stat becomes ineffective though. The difference between one combination of mastery
and crit and a different combination is even smaller. Actually the best point for me on 25man is between the high mastery low crit PWS breakpoint and the high crit low mastery balance point for
heals. I still have more crit than mastery though.
Last edited by Havoc12; 2013-05-31 at 06:11 PM.
Nope, my premise is that 6 casts + 2 PWS is better than 6 casts and 1s of the spirit shell duration wasted. Not that you can replace 1 PoH with 2 PWS. That does not work. You need extra haste to
do that.
Wasn't the haste required for 6 PoH+2 PW:S lower than 7 PoHs according to you? If so 2 PW:S with borrowed time used should essentially replace a PoH, assuming that they aren't more than the
number of PoHs (borrowed time gets consumed).
You can add an infinite amount of PWSs and claim that it is just 1 more. 1 PWS is for partially offset by increased rapture. 2 PWS is one more, 3 PWS is 2 more. Every PWS you add makes the mana
loss tougher and tougher to bear.
6 POH with 1% more healing from extra crit/mastery + 0.91 of a PoH from the PWS means that you have the equivalent of 6.97 PoH casts compared to 7 with the extra haste. That does not work out the
same way when you use more than 2 PWS.
It does, it's just other haste points, just as arbitary but you didn't chose to look into them.
However 4 PWS and 4 PoHs with 1% more healing are 6.84 PoH casts compared with 7 casts.
I'll assume that you mean 4 PW:S and 5 PoH's? And it's more mobility, isn't that always better despite the mana cost?
Significantly larger shields also leads to more overcapping and wastage. Having more crit leads overall to bigger wasting of aegis. This is only partially offset by the higher refresh rate, in a
senario where everyone gets hit by one of your heals at least once every 15s. In a situation where the chance of that happening is only a tiny proportion of your shields is refreshed, but the
refresh is roughly proportional to crit.
If the shields are overcapping/expiring frequently there's so little damage going out that the mastery would be practically irrelevant as well.
Either the return from refreshing is negative or rather small. Basically crit hits the wall of negative returns pretty hard and the higher refresh rate does not really do anything. Most of the
aegis you put during a low damage phase when atonement healing is on the tank.
Basically refresh offers no real advantage to crit.
I'm not saying that the gain is huge, I'm saying that crit offers more than mastery/haste in a situation with low damage, both due to the damage and the absorbs. Since mastery and crit are really
close (unless your crit amount is way higher) that's worth taking into account.
If you have 10k crit rating that is 16% crit. You get 19-20% from int and the crit buff. The benefit of your entire crit rating is 10-13% at most. The differences from pretty much any random
combination of secondary stats is not that noticeable. It becomes more noticeable if a stat becomes ineffective though. The difference between one combination of mastery and crit and a different
combination is even smaller. Actually the best point for me on 25man is between the high mastery low crit PWS breakpoint and the high crit low mastery balance point for heals. I still have more
crit than mastery though.
Yes the secondaries are relatively close, doesn't mean that the differences are irreleveant or there wouldn't be a point in discussing it.
Nope, my premise is that 6 casts + 2 PWS is better than 6 casts and 1s of the spirit shell duration wasted. Not that you can replace 1 PoH with 2 PWS. That does not work. You need extra haste to
do that.
You can add an infinite amount of PWSs and claim that it is just 1 more. 1 PWS is for partially offset by increased rapture. 2 PWS is one more, 3 PWS is 2 more. Every PWS you add makes the mana
loss tougher and tougher to bear.
6 POH with 1% more healing from extra crit/mastery + 0.91 of a PoH from the PWS means that you have the equivalent of 6.97 PoH casts compared to 7 with the extra haste. That does not work out the
same way when you use more than 2 PWS.
However 4 PWS and 4 PoHs with 1% more healing are 6.84 PoH casts compared with 7 casts.
Significantly larger shields also leads to more overcapping and wastage. Having more crit leads overall to bigger wasting of aegis. This is only partially offset by the higher refresh rate, in a
senario where everyone gets hit by one of your heals at least once every 15s. In a situation where the chance of that happening is only a tiny proportion of your shields is refreshed, but the
refresh is roughly proportional to crit.
Either the return from refreshing is negative or rather small. Basically crit hits the wall of negative returns pretty hard and the higher refresh rate does not really do anything. Most of the
aegis you put during a low damage phase when atonement healing is on the tank.
Basically refresh offers no real advantage to crit.
If you have 10k crit rating that is 16% crit. You get [edit] 14-15% from int and the crit buff [/edit]. The benefit of your entire crit rating is 10-14% more HPS at most. The differences from
pretty much any random combination of secondary stats is not that noticeable. It becomes more noticeable if a stat becomes ineffective though. The difference between one combination of mastery
and crit and a different combination is even smaller. Actually the best point for me on 25man is between the high mastery low crit PWS breakpoint and the high crit low mastery balance point for
heals. I still have more crit than mastery though.
That depends on how much raid damage is going out. We're healers not dps. Theories on max output is pointless.
Wasn't the haste required for 6 PoH+2 PW:S lower than 7 PoHs according to you? If so 2 PW:S with borrowed time used should essentially replace a PoH, assuming that they aren't more than the
number of PoHs (borrowed time gets consumed).
It does, it's just other haste points, just as arbitary but you didn't chose to look into them.
I'll assume that you mean 4 PW:S and 5 PoH's? And it's more mobility, isn't that always better despite the mana cost?
If the shields are overcapping/expiring frequently there's so little damage going out that the mastery would be practically irrelevant as well.
I'm not saying that the gain is huge, I'm saying that crit offers more than mastery/haste in a situation with low damage, both due to the damage and the absorbs. Since mastery and crit are really
close (unless your crit amount is way higher) that's worth taking into account.
Yes the secondaries are relatively close, doesn't mean that the differences are irreleveant or there wouldn't be a point in discussing it.
5 PoHs with 4 PWS requires H = 1.0556. That is effectively 1:1 PWS:PoH, which means that you will have just 2s to absorb the first PWS cast. Now consider that PWS can crit and that you can also
have spirit shell on the same group on 10man. It also increases the time between spirit shells on different groups to 3.8s and removes your ability to cast it on 3 groups. That sacrifices
flexibility instead of gaining it, as makes timing spirit shell very awkward.
A small amount of mana is definitely worth a lot of flexibility, but a large amount of mana is not worth a small amount of flexibility and a lot more awkwardness
7 PoHs = 13k mana *7.
6 PoHs + 2 PWS = 13k mana * 7 + 13k mana - extra rapture
5 PoHs + 4 PWS = 13k mana *7 + 39k mana - extra rapture
8k mana a minute is worth the added flexibility of 2 PWS casts
34k mana a minute on the other hand is actually pretty tough to swallow, considering you get less flexibility.
Basically you quadruple the mana sacrificed to make your CD less flexible and harder to use.
As I said you never want to sacrifice the 6th spirit shell cast. It just messes up timing and wastes mana for no added benefit. If you need to move 3 and 4 times you have enough haste to do so
and get PWS off even if you are at the 6+2 breakpoint.
There is nothing arbitrary at all about the 6+2 breakpoint and the +1 and +2 breakpoints are always the only ones worth considering. There only question is do I want enough haste to just
neutralize latency on spirit shell or do I want enough haste to get to 6+2 breakpoint. It is not worth considering the 7 casts breakpoint in my view, especially on 10man.
The low damage phases are a smaller part of your healing than the high damage phases and the added crit does not really help that much considering that the vast majority of your absorbed aegis
during low damage phases is from the tanks.
Also consider that absorbs from heals are given by the formula crit*(1+mastery)*(1+0.5*mastery). So mastery also increases the amount of aegis you put out. Lets see how this works out
The rate at which crit rating increases absorbs is (1+mastery)*(1+0.5*mastery). The rate at which mastery increases absorbs is crit*(1.5+mastery)*1.6
So at 25% crit and 28% mastery the rate at which crit increases absorbs is 1.28*1.14 = 1.4592. The rate at which mastery increases absorbs is 0.25*(1.5+0.28)*1.6 =0.712
Thus mastery increases absorbs at roughly half the rate at which crit increases absorbs, however you also forget that PWS is a significant part of your absorbs during low damage phases. The rate
at which mastery increases PWS is (1+crit)*1.6, while the rate at which crit increases PWS is (1+mastery). Using the same values as above you get 1.25*1.6 = 2 for mastery and 1.28 for crit.
So if 30% of your healing during a low damage phase is PWS, 50% is aegis and 3% is your non overhealing heals the rate at which mastery increases your healing would be 2*0.3+0.712*0.5+0.5*0.03 =
0.971, while the rate at which crit increases your healing is 1.28*+1.4592*0.5 = 0.933888
Surprised? You should be. Mastery is nowhere near as bad as you think during low damage phases.
Lets say you cast 1.5 penance, 5 smites and 1.5 holy fires per PWS. Then the amount of raw healing done by atonement is about 4 times that of PWS. With 25% crit rate and 28% mastery that means
aegis will be equal to 1.28 of a PWS, so the values I picked for my example are pretty reasonable. I cast twice as much PWS during low damage phases though.
Basically crit has little benefit for low damage phases over mastery because PWS is an important source of heals and both crit and mastery increase both aegis and pws, but with opposite
Balanced mastery crit is always better than full crit and full mastery no matter what. The balance point is dependent on your casting pattern and healing style, but typically it lies somewhere
between the balance point for heals and for PWS.
Last edited by Havoc12; 2013-06-01 at 01:50 AM.
5 PoHs with 4 PWS requires H = 1.0556. That is effectively 1:1 PWS:PoH, which means that you will have just 2s to absorb the first PWS cast. Now consider that PWS can crit and that you can also
have spirit shell on the same group on 10man. It also increases the time between spirit shells on different groups to 3.8s and removes your ability to cast it on 3 groups. That sacrifices
flexibility instead of gaining it, as makes timing spirit shell very awkward.
A small amount of mana is definitely worth a lot of flexibility, but a large amount of mana is not worth a small amount of flexibility and a lot more awkwardness
7 PoHs = 13k mana *7.
6 PoHs + 2 PWS = 13k mana * 7 + 13k mana - extra rapture
5 PoHs + 4 PWS = 13k mana *7 + 39k mana - extra rapture
8k mana a minute is worth the added flexibility of 2 PWS casts
34k mana a minute on the other hand is actually pretty tough to swallow, considering you get less flexibility.
Basically you quadruple the mana sacrificed to make your CD less flexible and harder to use.
As I said you never want to sacrifice the 6th spirit shell cast. It just messes up timing and wastes mana for no added benefit. If you need to move 3 and 4 times you have enough haste to do so
and get PWS off even if you are at the 6+2 breakpoint.
It's not harder to use or less flexible. Every PW:S past the first (rapture) adds the same amount of movement and the same amount of mana cost during SS, if they cost you a PoH or not just
depends on whatever haste you are going for. If you don't need to move for two globals during the 6+2 "breakpoint" that additional haste has no special value (there's no actual advantage besides
the normal hps gain from haste to fit in 2 PW:S inside the SS window instead of dumping one after). If you need to move more than two globals that additional haste has no special value. The only
time it's worth more than standard haste is if you have to move exactly two globals during SS, doesn't happen that often for me at least. Fitting in one PW:S during SS does have some merit since
it's essentially free, due to rapture, any PW:S beyond that has no additional/special value compared to the next (regardless of it's the second, third or fourth), this is what I was trying to
Regarding the sixth spirit shell cast you've yet to explain why it has some special value, it's an additional SS'd PoH and has no more value than the fifth/seventh. There's no realistic chance of
a SS expiring due to you not refreshing it if you didn't use it way too early. There's no special need for every grp to have the same amount of absorbs (or hell, you could even arrange that by
doing a IF PoH on the grp that just gets one shell). I fail to see why you are so obsessed about it.
The low damage phases are a smaller part of your healing than the high damage phases and the added crit does not really help that much considering that the vast majority of your absorbed aegis
during low damage phases is from the tanks.
Also consider that absorbs from heals are given by the formula crit*(1+mastery)*(1+0.5*mastery). So mastery also increases the amount of aegis you put out. Lets see how this works out
The rate at which crit rating increases absorbs is (1+mastery)*(1+0.5*mastery). The rate at which mastery increases absorbs is crit*(1.5+mastery)*1.6
So at 25% crit and 28% mastery the rate at which crit increases absorbs is 1.28*1.14 = 1.4592. The rate at which mastery increases absorbs is 0.25*(1.5+0.28)*1.6 =0.712
Thus mastery increases absorbs at roughly half the rate at which crit increases absorbs, however you also forget that PWS is a significant part of your absorbs during low damage phases. The rate
at which mastery increases PWS is (1+crit)*1.6, while the rate at which crit increases PWS is (1+mastery). Using the same values as above you get 1.25*1.6 = 2 for mastery and 1.28 for crit.
So if 30% of your healing during a low damage phase is PWS, 50% is aegis and 3% is your non overhealing heals the rate at which mastery increases your healing would be 2*0.3+0.712*0.5+0.5*0.03 =
0.971, while the rate at which crit increases your healing is 1.28*+1.4592*0.5 = 0.933888
Surprised? You should be. Mastery is nowhere near as bad as you think during low damage phases.
Lets say you cast 1.5 penance, 5 smites and 1.5 holy fires per PWS. Then the amount of raw healing done by atonement is about 4 times that of PWS. With 25% crit rate and 28% mastery that means
aegis will be equal to 1.28 of a PWS, so the values I picked for my example are pretty reasonable. I cast twice as much PWS during low damage phases though.
Basically crit has little benefit for low damage phases over mastery because PWS is an important source of heals and both crit and mastery increase both aegis and pws, but with opposite
Balanced mastery crit is always better than full crit and full mastery no matter what. The balance point is dependent on your casting pattern and healing style, but typically it lies somewhere
between the balance point for heals and for PWS.
I fail to see why you'd ever cast PW:S outside of rapture in a low damage situation, even with a lmg proc, assuming that you aren't looking to do some silly padding. The additional damage from
atonement combined with the spot healing vs a shield that (outside of the tank, the rapture'd shield) might not even get consumed (far more likely for a PW:S on a non-tank to not get used up in a
low damage situation than small DA shields, mostly on the tank). If you for some reason have 30% of your healing from PW:S in a low damage situation, sure, crit isn't far ahead (fairly certain
that you mean 20 and not 3% from your actual heals in the calc btw). For me this isn't even close to the case, in a low damage situation I'd never cast PW:S outside of rapture (assuming that I'm
not loading up for a high damage period) and if my mana is at or close to full I wouldn't even use it when rapture is up.
I'm not disputing that a balance of mastery and crit will be superior to purely stacking crit or mastery when it comes to healing output (I've sad this several times). I'm saying that the there's
more things to factor in when you chose to balance them than the balance giving the most raw hps based on your PW:S/other heals usage. The added dps and less overhealing which, assuming that all
of the DA gets consumed, is the case regardless of how you swing it unless you have litterarly zero overhealing from your actual heals does make crit slightly (or if you value the dps highly
significantly) stronger than the raw hps numbers indicate. Regarding the overhealing it's really quite easy to determine this, crit/mastery are relatively equal in hps output, one achives it's
full hps in the situation I suggest (all the aegis gets consumed) one doesn't (not all the healing boosted by master is used). For mastery to still as much healing as crit it'd simply have to be
ahead in raw hps, which isn't the case.
Last edited by Cookie; 2013-06-01 at 02:37 AM.
1.5 penance, 5 smites, 1.5 solace 1 PWS , 1.5*3+6.5*1.5+1.5 = 15.75 seconds. This is a FULL atonement cycle with PWS every 15s mate. It will lead to the kind of distribution I just posted aegis
to PWS will be just about 1:1.
I didn't just make that up I looked at various low damage phases from various fights.
You are just thinking intuitively, but your intuition is wrong. High crit is worse for healing but just adds a tiny about of DPS more.
1.5 penance, 5 smites, 1.5 solace 1 PWS , 1.5*3+6.5*1.5+1.5 = 15.75 seconds. This is a FULL atonement cycle with PWS every 15s mate. It will lead to the kind of distribution I just posted aegis
to PWS will be just about 1:1.
First of all, despite not being a fan of haste I'm stuck at over 4k (playing a fair bit of shadow as well so I'm stuck with a few of those pieces, I guess most discs are closer to ~3k), I also
doubt that anyone who hasn't been farming 25 man hcs for weeks can avoid all the haste. You'll generally also get slightly more than 1,5 penance/PW:S due to ToT which considering how strong
penance is skews the numbers a bit further. This is also not even including that you can sacrifice PW:S usage entirely if your mana is at a good level. Meaning that your FULL cycle is incorrect
for most players.
Secondly. The raw healing from from the rotation you mentioned (1.5 penance, 5 smites, 1.5 solace) is more than four times the raw healing from PW:S, especially if you don't assume a higher
mastery than crit % (which I don't have, but if your rotation has a significantly larger portion of PW:S I guess that makes sense for you).
I didn't just make that up I looked at various low damage phases from various fights.
If I look at random logs in WoL those aren't the ratio's/amounts I find in most low damage phases, nor are they correct for my logs.
You are just thinking intuitively, but your intuition is wrong. High crit is worse for healing but just adds a tiny about of DPS more.
No, I'm thinking logically, there's a difference. You are purposly using math/numbers that confirm your statements. If the dps increase for shifting some (again, I'm not talking about going full
crit/ignoring mastery entirely) mastery into crit is tiny I don't really know what I should call the healing difference, abyssmal might be a good word:P. The additional healing from crit will
essentially never result in overhealing (if it does the damage situation is so low that the additional healing from mastery would as well), while mastery is prone to at least some overhealing in
every situation but the most extreme.
To clarify, they both add roughly the same amount of healing (obviously varies on your spell usage and stats), for one 100% of it is effective healing (generalising, but if da is dropping without
absorbing there's so little damage that your healing is irrelevant, meaning that crit wins out from the added damage) for the other a portion (yes a big part of mastery is PW:S/da/SS, but not all
of it) is going to end up overhealing to some degree in almost every situation. So assuming that your stats/spell usage indicate that the stats add the same amount of healing (which they do, if
you balance the stats like you have/suggest) crit is simply a slightly superior stat to mastery (dps+less overhealing, what's masterys perk?). If we go pure crit and no mastery, mastery adds
significantly more healing than crit, resulting in it being a stronger stat. The truth/correct way to gear is, to me, somewhere in between (and arguably closer to the crit/mastery "balance" that
you suggest).
I'm not saying that I know exactly where/how crit/mastery should be balanced according to this (there's way too many factors to consider, not just spell usage and overhealing, but spell usage/
overhealing for different portions of the fight where healing is more or less important, etc), I'm saying that the balance should be somewhere between those two points.
That being said, I'm honestly more interested in your reasoning why the sixth PoH/second PW:S is so important during SS. I can understand your reasoning regarding crit/mastery (even if I think
your numbers are a bit skewed, at least for me personally) and either way the difference between the amounts is incredibly slim (at my current stats that's shifting an extra ~2k mastery to crit).
However your reasoning regarding SS just makes no sense to me, at all.
Last edited by Cookie; 2013-06-01 at 03:02 PM.
This isn't true. Regardless of which stat you stack, the difference in total average output is trivial. The only difference is the ratio of your raw healing to DA, with high Crit builds obviously
having the it skewed towards DA. Still, high crit builds tend to do better in a practical raid setting due to DA having lower overhealing.
First of all, despite not being a fan of haste I'm stuck at over 4k....
Secondly. The raw healing from from the rotation you mentioned (1.5 penance, 5 smites, 1.5 solace) is more than four times the raw healing from PW:S, especially if you don't assume a higher
mastery than crit % (which I don't have, but if your rotation has a significantly larger portion of PW:S I guess that makes sense for you).
You want to add an extra smite in there and make it 1.7 penance? No problem. Make penance 50k per tick and everything else average 50k. so 1.7*150 = 255, 7.5*50k = 375k, for 630k total. 25% of
that is 157.5. No crit PWS at 100k with 25% crit it becomes 125k. so aegis to PWS of 1.26:1. I am assuming 30% PWS and 50% aegis which is 1.66:1 aegis to PWS and that is pretty high especially
for a low damage phase. I expected an argument like this so I took a pretty high aegis to PWS ratio. Look at some logs and you will see that I am right. PWS is reasonably close to aegis in most
cases and during low damage phases PWS usally comes on top due to almost zero overheal on PWS, but significant overheal on aegis. Some examples:
Picking some 10H top logs at random (so I didn't scan a lot of logs and cherry picked) I am picking high parses and ones that look english. These are probably 5.2 values, so pre-nerf.
Maegera: http://www.worldoflogs.com/reports/r...8&e=2396#Steni
1:1 Aegis PWS. Look at the low damage phases to see that PWS is the top heal. 25% odd crit rate 20% overheal on aegis, 4.5% on PWS 17% on atonement healing.
Same person on council 1:1 aegis:PWS 15.8% overheal on aegis.
Council: http://www.worldoflogs.com/reports/2...e=304#Amabella
1.74:1 Aegis PWS ~40% crit rate 27.5% overheal on aegis, 16% on PWS, 22% on atonement healing.
I dont see this person ranked in the first page on maegera
Another top log on maegera
again 25% crit rate approximately. 20% PWS, 15% aegis (more PWS than aegis) 30% overheal on aegis. 2.4% on PWS
Top disc log on ji'kun
Again roughly 25% crit rate, 20% aegis, 16% PWS 25% overheal on aegis 26% overheal on PWS Atonement overheal 22% average or so
Every log I see I rarely see a anything higher than the 1.66:1 aegis/PWS and on average aegis has higher overheal the higher your crit rate just as I predicted mathematically. More importantly
aegis does not have anywhere near the low overheal rate ppl expect. PWS does, but aegis no. Because aegis is random even with extremely high crit rates, refresh for long periods of time is
Many of the top logs don't use excessive crit rates, which means they are running pretty much balanced crit/mastery.
Only on places like horridon with a massive damage buff do you see high aegis/PWS ratios and even then not universally. People who say high crit rates work better than balanced mastery/crit are
the same like those who were claiming PoH double dipped on crit in cataclysm. It never did, people just couldn't do the calculations correctly.
Basically here is the problem: crit scales nicely with aegis, and moderately well for PWS, while mastery scales extremely well with PWS and about 40-70% what crit does on aegis. On top of that it
helps your heals. The higher your crit rate the better the scaling for mastery on aegis and on PWS.
The system just does not behave like you think. The optimal point is probably between the PWS optimal and the heal optimal. Going above 30% crit rate is not necessarily a great idea.
That being said, I'm honestly more interested in your reasoning why the sixth PoH/second PW:S is so important during SS. I can understand your reasoning regarding crit/mastery (even if I think
your numbers are a bit skewed, at least for me personally) and either way the difference between the amounts is incredibly slim (at my current stats that's shifting an extra ~2k mastery to crit).
However your reasoning regarding SS just makes no sense to me, at all.
It is very important to be able to prolong your spirit shell buffer for as long as possible and the 6th cast is essential for that, the 2 PWS break point is not important at all. I find it very
useful, someone else might not. What is extremely useful is the 1 PWS haste breakpoint, which actually only requires taking enough haste to counteract latency, since the 5% haste buff is enough
to reach that point.
---------- Post added 2013-06-02 at 01:45 AM ----------
This isn't true. Regardless of which stat you stack, the difference in total average output is trivial. The only difference is the ratio of your raw healing to DA, with high Crit builds obviously
having the it skewed towards DA. Still, high crit builds tend to do better in a practical raid setting due to DA having lower overhealing.
Actually it is true. The difference is 3-5%, which is not trivial but it is small. Both mastery AND crit healing decrease the raw healing to DA ratio, because mastery incerases DA much more than
it increases heals. but we don't really care much about that. What matters is your PWS to DA ratio. Mastery has exceptionally high scaling for PWS that just gets bigger and bigger with crit. So
mastery increases your PWS absorbs massively and your DA absorbs about 60% as much for a high crit rate build, while crit increases both aegis and PWS absorbs moderately at low mastery. Because
mastery has such a high scaling for PWS it sneaks quickly ahead of crit in the battle of absorbs. On top of that you are going to get the mastery benefit for many of your regular heals.
If it was just heals yes you would see a slightly better result from crit, but because you also have PWS and spirit shell in the equation, mastery gets ahead very quickly as soon as your crit
rate goes above 25%.
....Every log I see I rarely see a anything higher than the 1.66:1 aegis/PWS
I'm finding that roughly half of the logs I look into (just randomly clicked on some in top 20) have more than a 1.66 aegis/pws total ratio:
http://www.worldoflogs.com/reports/d...?s=4749&e=5009 (low damage period close to 57% DA, 7% PW:S)
http://www.worldoflogs.com/reports/w...?s=3026&e=3400 (close to 38% DA, 5% PW:S)
http://www.worldoflogs.com/reports/s...?s=5751&e=6138 (close to 49% DA, 17% PW:S)
http://www.worldoflogs.com/reports/r...?s=8410&e=8974 (close to 28% DA, 10% PW:S)
http://www.worldoflogs.com/reports/2...s/4/?s=0&e=304 (22% DA, 20% PW:S) high ratio of PW:S!
http://www.worldoflogs.com/reports/0...?s=6827&e=7179 (14% DA, 0% PW:S)
and on average aegis has higher overheal the higher your crit rate just as I predicted mathematically. More importantly aegis does not have anywhere near the low overheal rate ppl expect. PWS
does, but aegis no. Because aegis is random even with extremely high crit rates, refresh for long periods of time is impossible.
If da ends up overhealing additional mastery would be essentially useless as well, and crit wins due to the added damage. PW:S does have a lower overheal ratio than da, but da is way ahead of our
normal heals on most fights (especially non-smart/big ones). It's not about refreshing them infinitely, it's about any da applied having a chance to be useful for (at least) 15 seconds if damage
is coming in (and if it doesn't, well no harm), while the non da/pw:s part of mastery will be wasted a lot of the time.
Many of the top logs don't use excessive crit rates, which means they are running pretty much balanced crit/mastery.
People who say high crit rates work better than balanced mastery/crit are the same like those who were claiming PoH double dipped on crit in cataclysm. It never did, people just couldn't do the
calculations correctly.
No clue about cata, never played during that. There's a difference in a high/excessive crit ratio and balancing mastery/crit at a slightly different level. I've never advocated purely stacking
crit and disregarding mastery entirely, just that if they are entirely "balanced" looking at the raw healing crit has a slightly higher value.
Only on places like horridon with a massive damage buff do you see high aegis/PWS ratios and even then not universally.
Worth noting that there are several fights with damage buffs in this tier (jin'rokh, horridon, tortos, which admittedly does favor mastery, and ji-kun), so it can't be disregarded entirely.
Basically here is the problem: crit scales nicely with aegis, and moderately well for PWS, while mastery scales extremely well with PWS and about 40-70% what crit does on aegis. On top of that it
helps your heals. The higher your crit rate the better the scaling for mastery on aegis and on PWS.
and vice-versa. I'm well aware that a large (but far from all) of masterys benefit comes from da/pw:s, or I would actually be advocating that we stack full crit.
The system just does not behave like you think. The optimal point is probably between the PWS optimal and the heal optimal. Going above 30% crit rate is not necessarily a great idea.
I'm fairly certain that it behaves exactly like I think, I'm simply looking at different numbers/benefits than you. Yes, the optimal point is probably somewhere in between the pw:s optimal and
the heal optimal (although closer to the heal optimal). Crit doesn't really lose a significant value going higher, it's just that mastery gains, which yes makes stacking crit blindly a bad idea.
It is very important to be able to prolong your spirit shell buffer for as long as possible and the 6th cast is essential for that
Are you refering to the actual length the shields applied by SS lasts? If so, the sixth SS adds less than 3 seconds until the SS applications starts dropping (admittedly not from the same grp as
it would with 6). You essentially gain one PoH cast in time to wait for the damage to start going out before your shells starts to drop (and in 5's case the shell dropping at that point would be
a small one, which gets consumed quicker), it's really not that significant.
the 2 PWS break point is not important at all.
Good, then I'm on board.
What is extremely useful is the 1 PWS haste breakpoint, which actually only requires taking enough haste to counteract latency, since the 5% haste buff is enough to reach that point.
Yep, I agree that having enough haste to fit in one PW:S during your SS applications is somewhat handy (your focus seemed to be on 2 PW:S being so important, which I found strange). Considering
that you generally want to/can throw a PW:S right before applying the SS (meaning that rapture usually is on cd during most of it) I don't really think having enough haste to fit another one in
is 'that' important, but yes it has some value (especially since it's well, hard to avoid).
Actually it is true. The difference is 3-5%, which is not trivial but it is small. Both mastery AND crit healing decrease the raw healing to DA ratio, because mastery incerases DA much more than
it increases heals. but we don't really care much about that. What matters is your PWS to DA ratio. Mastery has exceptionally high scaling for PWS that just gets bigger and bigger with crit. So
mastery increases your PWS absorbs massively and your DA absorbs about 60% as much for a high crit rate build, while crit increases both aegis and PWS absorbs moderately at low mastery. Because
mastery has such a high scaling for PWS it sneaks quickly ahead of crit in the battle of absorbs. On top of that you are going to get the mastery benefit for many of your regular heals.
If it was just heals yes you would see a slightly better result from crit, but because you also have PWS and spirit shell in the equation, mastery gets ahead very quickly as soon as your crit
rate goes above 25%.
False. Basing it on http://us.battle.net/wow/en/characte...Siory/advanced and adjusting the stats to account for heavy Crit (37.03% crit/13.68% heal mastery/27.35% shield mastery) and heavy
Mastery (29.62% crit/19.6% heal mastery/39.2% shield mastery), you get these values for:
Crit stacking - [1 * 1.1368] + [0.3703 * 1.1368 * 1.2735] = 1.67
Mastery stacking - [1 * 1.196] + [0.2962 * 1.196 * 1.3962] = 1.69
Mastery results in ~1.2% higher SS values on average.
Crit stacking - [0.6297 * 1.2735] + [0.3703 * 2 * 1.2735] = 1.75
Mastery stacking - [0.7038 * 1.3962] + [0.2962 * 2 * 1.3962] = 1.81
Mastery results in ~3.5% higher PW:S values on average.
Crit stacking - [0.3703 * 1.1368 * 1.2735] = 0.54
Mastery stacking - [0.2962 * 1.196 * 1.3962] = 0.49
Crit results in ~10% higher overall DA values on average.
Crit stacking - 1.1368
Mastery stacking - 1.196
Mastery results in ~5.2% more healing on average.
The difference between stacking both stats is rather negligible, especially when it comes to SS and PW:S. The main difference is whether you want higher raw healing (Mastery) or DAs (Crit).
Ultimately, the overall difference in total output is pretty trivial, and Crit may actually come out ahead once overhealing is factored in. You could do this for various values of Crit/Mastery
stacked builds, and you'd still reach the same conclusion.
2013-05-24, 03:28 PM #2
2013-05-24, 08:57 PM #3
Stood in the Fire
Join Date
Aug 2012
2013-05-25, 01:42 AM #4
2013-05-31, 12:36 PM #5
Stood in the Fire
Join Date
Aug 2012
2013-05-31, 12:54 PM #6
2013-05-31, 01:07 PM #7
2013-05-31, 03:48 PM #8
Stood in the Fire
Join Date
Aug 2012
2013-05-31, 04:15 PM #9
2013-05-31, 04:34 PM #10
Stood in the Fire
Join Date
Aug 2012
2013-05-31, 07:05 PM #11
2013-05-31, 07:26 PM #12
The Patient
Join Date
Dec 2011
2013-06-01, 01:01 AM #13
Stood in the Fire
Join Date
Aug 2012
2013-06-01, 02:08 AM #14
2013-06-01, 08:25 AM #15
Stood in the Fire
Join Date
Aug 2012
2013-06-01, 12:39 PM #16
2013-06-01, 02:04 PM #17
High Overlord
Join Date
May 2013
2013-06-02, 12:38 AM #18
Stood in the Fire
Join Date
Aug 2012
2013-06-02, 01:57 AM #19
2013-06-02, 05:23 AM #20
High Overlord
Join Date
May 2013 | {"url":"http://www.mmo-champion.com/threads/1273144-Discipline-trinkets?goto=nextnewest","timestamp":"2014-04-19T04:48:55Z","content_type":null,"content_length":"212126","record_id":"<urn:uuid:7458e513-e71c-45d9-8687-c76b32a828e4>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: July 2010 [00451]
[Date Index] [Thread Index] [Author Index]
Re: Avoid the use of certain functions
• To: mathgroup at smc.vnet.net
• Subject: [mg111108] Re: Avoid the use of certain functions
• From: Murray Eisenberg <murray at math.umass.edu>
• Date: Tue, 20 Jul 2010 03:45:30 -0400 (EDT)
One should not be so quick to condemn the world when it does not conform
to your own preconceptions and preferences!
There is a good geometric reason that sec is an appropriate name for the
reciprocal of cos rather than of sin. Draw the unit circle, center at
the origin O. Mark off a central angle of theta from the positive
x-axis, and let P be the corresponding point on the circle. Draw the
vertical line L through the point (1,0). Extend the ray OP until it
intersects L at a point Q. Then the length of OQ is sec (theta). That
line segment OQ cuts across the circle.
[See, e.g., http://en.wikipedia.org/wiki/Trigonometric_functions. (I
thought I remembered also some Wolfram Demonstration of this, but I
cannot locate it now.)]
Furthermore, in the usual list of the 6 trig functions
there is a symmetry of pairs around the middle: cot = 1/tan, sec =
1/cos, csc = 1/sin.
Finally, what's wrong with sec and csc? Mathematical expressions are
often simpler when expressed in terms of them, since this avoids
(explicit) use of fractions for the functions.
On 7/19/2010 2:10 AM, AES wrote:
>> From: Sam Takoy [mailto:sam.takoy at yahoo.com]
>> Hi,
>> Is there a way to ask Mathematica to avoid expressing answers in terms
>> of certain functions. For example, I [can't?] stand Sec, Csc, Sech, and Csch
>> and would rather see Sec^-1, etc.
> I'm with you on this one: always hated Sec and Csc (and never
> understood the backward naming of these functions -- why isn't Sec =
> 1/Sin and Csc = 1/Cos?)
> So, I'd like the various Simplifiy and XxxToYyy functions in Mathematica to
> always give precedence to Sin and Cos by default, and avoid using Sec
> and Csc whenever possible, even if this produces some "1 overs" in the
> output expression.
> But I suspect trying to implement this at this point would require more
> complexity than it would be worth.
Murray Eisenberg murray at math.umass.edu
Mathematics & Statistics Dept.
Lederle Graduate Research Tower phone 413 549-1020 (H)
University of Massachusetts 413 545-2859 (W)
710 North Pleasant Street fax 413 545-1801
Amherst, MA 01003-9305 | {"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jul/msg00451.html","timestamp":"2014-04-16T10:20:51Z","content_type":null,"content_length":"27374","record_id":"<urn:uuid:8bf5e8d3-abb1-4b18-9d33-ba27a7e32af9>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
how do you solve for X?
As has been said, use the Chinese remainder theorem (whcih is Euclid's algorithm but dressed up).
Of course if you typed it correctly you're looking for 5x=1 (13) and 37x=5 (13). Are those even compatible? The fact that you've written 37 and not 11 means that either you've mistyped the 13, or
you're not happy with modulo arithmetic, and don't see that you can always replace something with something else congruent mod m if it helps. | {"url":"http://www.physicsforums.com/showthread.php?t=139184","timestamp":"2014-04-19T12:41:41Z","content_type":null,"content_length":"42916","record_id":"<urn:uuid:6d4a73ea-cc25-4e18-b644-1480806dd673>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cosecant: Introduction to the Cosecant Function in Mathematica
Introduction to the Cosecant Function in Mathematica
The following shows how the cosecant function is realized in Mathematica. Examples of evaluating Mathematica functions applied to various numeric and exact expressions that involve the cosecant
function or return it are shown. These involve numeric and symbolic calculations and plots.
Mathematica forms of notations
Following Mathematica's general naming convention, function names in StandardForm are just the capitalized versions of their traditional mathematics names. This shows the cosecant function in
This shows the cosecant function in TraditionalForm.
Additional forms of notations
Mathematica has other popular forms of notations that are used for print and electronic publications. In this particular instance the task is not difficult. However, it must be made to work in
Mathematica's CForm, TeXForm, and FortranForm.
Automatic evaluations and transformations
Evaluation for exact and machine-number values of arguments
For the exact argument , Mathematica returns an exact result.
For a machine‐number argument (numerical argument with a decimal point), a machine number is also returned.
The next inputs calculate 100‐digit approximations at and .
Within a second, it is possible to calculate thousands of digits for the cosecant function. The next input calculates 10000 digits for and analyzes the frequency of the digit in the resulting decimal
Here is a 50‐digit approximation to the cosecant function at the complex argument .
Mathematica automatically evaluates mathematical functions with machine precision, if the arguments of the function are numerical values and include machine‐number elements. In this case only six
digits after the decimal point are shown in the results. The remaining digits are suppressed, but can be displayed using the function InputForm.
Simplification of the argument
Mathematica knows the symmetry and periodicity of the cosecant function. Here are some examples.
Mathematica automatically simplifies the composition of the direct and the inverse cosecant functions into its argument.
Mathematica also automatically simplifies the composition of the direct and any of the inverse trigonometric functions into algebraic functions of the argument.
In cases where the argument has the structure or , and or with integer , the cosecant function can be automatically transformed into trigonometric or hyperbolic cosecant or secant functions.
Simplification of combinations of cosecant functions
Sometimes simple arithmetic operations containing the cosecant function can automatically generate other equal trigonometric functions.
The cosecant function arising as special cases from more general functions
The cosecant function can be treated as a particular case of some more general special functions. For example, appears automatically from Bessel, Struve, Mathieu, Jacobi, hypergeometric, and Meijer
functions for appropriate values of their parameters.
Equivalence transformations using specialized Mathematica functions
General remarks
Almost everybody prefers using instead of . Mathematica automatically transforms the second expression into the first one. The automatic application of transformation rules to mathematical
expressions can result in overly complicated results. Compact expressions like should not be automatically expanded into the more complicated expression . Mathematica has special functions that
produce such expansions. Some are demonstrated in the next section.
The function TrigExpand expands out trigonometric and hyperbolic functions. In more detail, it splits up sums and integer multiples that appear in the arguments of trigonometric and hyperbolic
functions, and then expands out products of trigonometric and hyperbolic functions into sums of powers, using trigonometric and hyperbolic identities where possible. Here are some examples.
The function TrigFactor factors trigonometric and hyperbolic functions. In more detail, it splits up sums and integer multiples that appear in the arguments of trigonometric and hyperbolic functions,
and then factors the resulting polynomials into trigonometric and hyperbolic functions, using trigonometric and hyperbolic identities where possible. Here are some examples.
The function TrigReduce rewrites the products and powers of trigonometric and hyperbolic functions in terms of trigonometric and hyperbolic functions with combined arguments. In more detail, it
typically yields a linear expression involving trigonometric and hyperbolic functions with more complicated arguments. TrigReduce is approximately opposite to TrigExpand and TrigFactor. Here are some
The function TrigToExp converts trigonometric and hyperbolic functions to exponentials. It tries, where possible, to give results that do not involve explicit complex numbers. Here are some examples.
The function ExpToTrig converts exponentials to trigonometric and hyperbolic functions. It is approximately opposite to TrigToExp. Here are some examples.
The function ComplexExpand expands expressions assuming that all the variables are real. The option TargetFunctions can be given as a list of functions from the set {Re, Im, Abs, Arg, Conjugate,
Sign}. ComplexExpand will try to give results in terms of the specified functions. Here are some examples.
The function Simplify performs a sequence of algebraic transformations on the expression, and returns the simplest form it finds. Here are some examples.
Here is a collection of trigonometric identities. Each is written as a logical conjunction.
The function Simplify has the Assumption option. For example, Mathematica treats the periodicity of trigonometric functions for the symbolic integer coefficient of .
Mathematica also knows that the composition of the inverse and direct trigonometric functions produces the value of the internal argument under the corresponding restriction.
FunctionExpand (and Together)
While the cosecant function auto‐evaluates for simple fractions of , for more complicated cases it stays as a cosecant function to avoid the build up of large expressions. Using the function
FunctionExpand, the cosecant function can sometimes be transformed into explicit radicals. Here are some examples.
If the denominator contains squares of integers other than 2, the results always contain complex numbers (meaning that the imaginary number appears unavoidably).
Here the function RootReduce is used to express the previous algebraic numbers as roots of polynomial equations.
The function FunctionExpand also reduces trigonometric expressions with compound arguments or compositions, including trigonometric functions, to simpler ones. Here are some examples.
Applying Simplify to the last expression gives a more compact result.
The function FullSimplify tries a wider range of transformations than Simplify and returns the simplest form it finds. Here are some examples that contrast the results of applying these functions to
the same expressions.
Operations under special Mathematica functions
Series expansions
Calculating the series expansion of a cosecant function to hundreds of terms can be done in seconds.
Mathematica comes with the add‐on package DiscreteMath`RSolve` that allows finding the general terms of the series for many functions. After loading this package, and using the package function
SeriesTerm, the following term of can be evaluated.
This result can be verified by the following process.
Mathematica can evaluate derivatives of the cosecant function of an arbitrary positive integer order.
Finite products
Mathematica can calculate some finite symbolic and nonsymbolic products that contain the cosecant function. Here are two examples.
Indefinite integration
Mathematica can calculate a huge number of doable indefinite integrals that contain the cosecant function. The results can contain special functions. Here are some examples.
Definite integration
Mathematica can calculate wide classes of definite integrals that contain the cosecant function. Here are some examples.
Limit operation
Mathematica can calculate limits that contain the cosecant function. Here are some examples.
Solving equations
The next inputs solve two equations that contain the cosecant function. Because of the multivalued nature of the inverse cosecant function, a printed message indicates that only some of the possible
solutions are returned.
A complete solution of the previous equation can be obtained using the function Reduce.
Solving differential equations
Here is a nonlinear first-order differential equation that is obeyed by the cosecant function.
Mathematica can find the general solution of this differential equation. In doing so, the generically multivariate inverse of a function is encountered, and a message is issued that a solution branch
is potentially missed.
Mathematica has built‐in functions for 2D and 3D graphics. Here are some examples. | {"url":"http://functions.wolfram.com/ElementaryFunctions/Csc/introductions/CscInMathematica/ShowAll.html","timestamp":"2014-04-20T23:47:40Z","content_type":null,"content_length":"121591","record_id":"<urn:uuid:a66d27ed-c1d0-4e0f-a6d0-54fb105bc488>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
Folds over Data.Map
Johan Tibell johan.tibell at gmail.com
Mon Aug 23 05:04:36 EDT 2010
Hi all,
After working on optimizing the folds defined in Data.Map, I'm not sure that
the current definitions are correct. In particular, `fold` is defined as
-- | /O(n)/. Fold the values in the map, such that
-- @'fold' f z == 'Prelude.foldr' f z . 'elems'@.
-- For example,
-- > elems map = fold (:) [] map
-- > let f a len = len + (length a)
-- > fold f 0 (fromList [(5,"a"), (3,"bbb")]) == 4
It's not true that
fold f z == foldr f z . elems
in general. It only holds if `z` is an identity for `f` as `z` is used at
every leaf in the tree.
Using a none identity element for `z` doesn't really work in practice as a
user cannot tell how many times `z` will be used, as it depends on the shape
of tree which in turn depends on how balanced it is at any given point.
Is there a better way to define folds over maps or should we just change the
documentation to say that you most likely want `z` to be an identity?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://www.haskell.org/pipermail/libraries/attachments/20100823/414a714c/attachment.html
More information about the Libraries mailing list | {"url":"http://www.haskell.org/pipermail/libraries/2010-August/014109.html","timestamp":"2014-04-20T01:38:17Z","content_type":null,"content_length":"3492","record_id":"<urn:uuid:bfaf0c64-52d1-42eb-bbe5-566b70b5e9d2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
ACT Math
Northridge, CA 91326
Professor, Math Tutor
...I can help you understand the concepts of probability, conditional probabilities, binomial probabilities, combinations, and permutations. Working together we will maximize your potential for
success. We can take a book full of
ACT Math
tests and we will go over...
Offering 10+ subjects including ACT Math | {"url":"http://www.wyzant.com/Pacoima_ACT_Math_tutors.aspx","timestamp":"2014-04-20T03:58:48Z","content_type":null,"content_length":"59268","record_id":"<urn:uuid:cedc7c11-ad9a-4dc9-889e-458131a633bb>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra I/Working with Numbers/Combining Like Terms
From Wikibooks
Back to Table of Contents Algebra I
(Note to contributors: Please use the ^ symbol to designate exponents when you enter them in the wikibook. I will format them on the student-user interface.--HSTutorials 00:42, 17 July 2006 (UTC)
Algebra is used to make many problems simpler, and that is why a lot of algebra is about finding simple expressions which mean the same thing as harder ones. Variables are given different letters and
symbols in algebra so they can be kept apart, so every time x is used in an expression it means the same thing, and every time y is used it means the same thing, but a different thing to x (of course
this is only in the same expression, different expressions can use the same letters to mean different things). Since the different letters keep the variables apart this means that an expression with
many variables in many places can be made simpler by bringing them together.
Example Problems
Here is an example of variables keeping numbers apart even if we don't know them, and this lets us combine them without changing their value: Albert has some books in his bag, he does not know how
many. Beth also has some books and she does not know how many. Chris does not know how many books he has, but he knows it is the same as Beth. Dora knows she has the same number of books as Albert.
In this example there are 4 lots of books, so we could write the total number of books as:
a + b + c + d
Since we know that Albert and Dora have the same number of books, and Chris and Beth have the same number of books, we could also write:
a + b + a + b
This is the same as writing:
2*a + 2*b
Here we have grouped both a terms and both b terms. We could also go further, since everything is being multiplied by 2, and write:
2(a + b)
This is the simplest way of writing how many books there are. Not only were the variables combined, but so were the constants (in this case the number 2). We can check if they are the same by seeing
what happens when Albert has 2 books and Beth has 5.
a + b + c + d = 2 + 5 + 2 + 5 = 14
2(a + b) = 2(2 + 5) = 2*7 = 14
Practice Games
Here are some links to games that reinforce these skills: [1] [2]
Practice Problems
(Note: put answer in parentheses after each problem you write)
Simplify these into the form x(y + z) where x, y and z are integers or variables.
15a + 19b + 2a - 2b ( 17(a + b) )
a + 9 + 2a ( 3(a + 3) )
12a + 16b ( 4(3a + 4b) )
3a - 3b ( 3(a - b) )
5a + a^2 ( a(5 + a) ) | {"url":"https://simple.wikibooks.org/wiki/Algebra_I/Working_with_Numbers/Combining_Like_Terms","timestamp":"2014-04-16T16:19:25Z","content_type":null,"content_length":"23455","record_id":"<urn:uuid:41a183d9-4bae-4306-ab83-b9384d036709>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
``A Tutorial on Hidden Markov Models
An Erratum for
``A Tutorial on Hidden Markov Models
and Selected Applications in Speech Recognition''
Send bugs to: Ali Rahimi (ali@mit.edu)
Rabiner's excellent tutorial on hidden markov models [1] contains a few subtle mistakes which can result in flawed HMM implementations. This note is intended as a companion to the tutorial and
addresses subtle mistakes which appear the sections on ``scaling'' and ``multiple observations sequences.'' Following is a summary of the terms introduced in the tutorial and corrections to some of
the equations.
Section III.A of Rabiner introduces in eq (r18) the forward variable
and the backward variable
Equation (r37) in section III.C defines the posterior probability of going from state
Finally, in eqs (r38) and (r27), it is observed that
These terms rely on the forward and backward variables, but modern floating point engines do not have the necessary precision to compute
Section V.A introduces the scaled forward variable scaled backward variable
Rabiner's eqs (r91-r92b) for computing
We are looking for a recursion to calculate a variable
The following is a corrected version of the recursion of eqs (r91-r92b)
The proof that this recursion results in the criterion of eq (3) is by induction:
Base case. According to the recursion, we get
which satisfies the condition of eq (3) with
Induction. If
which was what we wanted to show. Note that as a consequence of eq (4) and the definition of 3), we obtain a useful expression for
we also define the term
The following recursion produces desired values of
Note that defining not the same as imposing the requirement
The proof that the recursion produces the desired result is again inductive:
Base case.
which satisfies the condition of the backward scaling with
Induction. If
The last step uses the definition of 6) and produces a result in agreement with the scaling requirement.
We have shown recursions for computing 5,6). The next section uses provides alternative ways of computing
Substituting the scaled variables in the definition for
But 7), and
Therefore eq (10) simplifies to
which is a simple way of computing
These two entities can be used as-is in the Baum-Welch and Viterbi algorithms.
Section V.B of Rabiner explains how to use multiple observations sequences for training. In the M step of Baum-Welch, a new state transition matrix is computed according to eq (r109):
and the observation matrix is updated according to eq (r110):
Onces 2) and (11) into (r109). Equations (13) and (14) are easy to use and should be used for computing the updates. However, for the sake of completeness, equations analogous to (r111) with the
correct substitutions are included here:
There are two salient corrections proposed in the paper: the first corrects Rabiner's notation for computing the scaled variables. The second correction is in the way the HMM parameters are updated
in the M step under multiple observation sequences. This note also provides an inductive proof that the recursions provide the desired results.
If you notice bugs in this note, please inform the author.
L. R. Rabiner.
A tutorial on hidden markov models and selected apllications in speech recognition.
In A. Waibel and K.-F. Lee, editors, Readings in Speech Recognition, pages 267-296. Kaufmann, San Mateo, CA, 1990.
Ali Rahimi 2000-12-30 | {"url":"http://alumni.media.mit.edu/~rahimi/rabiner/rabiner-errata/rabiner-errata.html","timestamp":"2014-04-17T04:13:56Z","content_type":null,"content_length":"38762","record_id":"<urn:uuid:5bbefe48-d460-4da9-b990-b13950b1c52b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Goes Pop!
Last week, some of you may have seen this article about a study on Australian aboriginies. The study suggests that, even without having the language to describe numbers, the human mind has an innate
ability to count and differentiate between numbers.
Australian Aboriginies: Math All Stars? The study focused on two Aborigine tribes in Australia, and found that even though both tribes lack words for individual numbers (the languages only have words
to describe ‘one,’ ‘two,’ ‘few,’ and ‘many’), members of the tribe nevertheless seem to have a sense for different numbers and counting. This conclusion was reached, for example, by banging two
sticks together n times, and asking children to represent those n times with concrete objects.
I am no linguist, so I cannot speak to the linguistic ramifications of this study. From a mathematical viewpoint, however, it is certainly a good thing to hear, because it suggests that . . . → Read
More: Math in the News: Counting without Language
Winning them the Oscar for Best Original Screenplay in 1998, Good Will Hunting propelled Matt Damon and Ben Affleck to the Hollywood A-list (no doubt Phantoms would have done this for Ben Affleck,
had it not been for the success of Good Will Hunting only months earlier). I will not summarize the plot, except to say that in this film, Matt Damon is a math superstar. For those wanting more in
the way of plot summary, this trailer may help to refresh your memory: There are a number of films that center around math geniuses, and for the most part they have met with some degree of critical
and commercial success. Our purpose here is not to critique these films, but to answer a simple question: In what ways do these films perpetuate stereotypes about mathematics and mathematicians, and
in what ways do these films rise above those same stereotypes? . . . → Read More: Math in the Movies: Good Will Hunting
Do you wonder whether you will ever find true love? Are you tired of looking for Mr. or Ms. Right? (I mean this in a metaphorical sense – if you are actually looking for an individual by the name of
Right, this article will probably be of no use to you.) Have you grown weary of idle party chit-chat, and awkward mornings after nights spent in venues with deceptive lighting? Well, my friend,
whether you are willing to accept it or not, mathematics can help you find the one to share your life with. Unfortunately, the primary disadvantage to the method described below is that if you don’t
know about it before you jump into the dating scene, it may be too late for you to utilize. But with an open mind, and a willingness to let mathematics do its work, you can maximize the likelihood
that you will find . . . → Read More: Math Gets Around: Dating
I apologize in advance for the fact that this references an article that is four months old. However, given the connection between the Monty Hall problem and popular culture, it cannot rightly be
overlooked here, and this article from the New York Times allows us to discuss this problem from a unique perspective.
The Monty Hall problem is so named because of its origins in the game show “Let’s Make a Deal.” The problem itself is famous for having a completely counterintuitive solution, and my goal after
discussing the problem and its relationship to the New York Times article on cognitive dissonance will be to explain where this disconnect between the problem and our intuition arises.
Here is a rigorous and unambiguous statement of the problem: Suppose you’re on a game show and you’re given the choice of three doors. Behind one door is a car; behind the others, goats. . . . → Read
More: Math in the News: Monty Hall Strikes Again
Those of you itching for some news last weekend may have noticed the following article, which was briefly featured on the front page of Yahoo News. In short, the article discusses the results of an
experiment on the brains of roundworms. The experiment indicates that roundworms can mentally compute changes in salt levels with respect to their position in order to find food. Anyone who’s taken a
bit of calculus may recognize that hidden in this is the notion of a derivative. In essence, concludes University of Oregon biologist Shawn Lockery, the worms use calculus to survive. More computing
power than an Apple IIe?The notion that insects can do calculus is certainly good for a headline, and from a pedagogical standpoint it may be useful, although somewhat insulting to those who have
trouble with math: “If worms can do calculus, anyone can!” All that aside though, isn’t the claim a . . . → Read More: Math in the News: Worms Love Calculus? | {"url":"http://www.mathgoespop.com/2008/08","timestamp":"2014-04-18T05:31:45Z","content_type":null,"content_length":"85058","record_id":"<urn:uuid:caa599f7-7a2a-44b5-b247-aa98d4aa5290>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
Buford, GA Algebra 2 Tutor
Find a Buford, GA Algebra 2 Tutor
...For example, to find an answer for a division problem, look for the multiple of the numbers. This type of technique gives students the confidence they need to tackle any math problems by
breaking it up into smaller components that they recognize. Then math can truly be elementary.
26 Subjects: including algebra 2, English, reading, geometry
...Oh yes, and I am very good with computers and Microsoft Excel.Algebra has always been my favorite math subject. When I was in high school I went to statewide math contests for two years and
placed in the top ten both times. This algebra deals mostly with linear functions.
22 Subjects: including algebra 2, calculus, geometry, ASVAB
...All of which has given me a uniquely personal and spiritual perspective on the broad topic of religion. I hope to share my knowledge and experience with you or your student. As a child I went
to Glen Haven christian private school where I memorized more of the Holy Bible than not.
32 Subjects: including algebra 2, Spanish, English, geometry
...I have taken courses in world religions, Christian church history, Christian theology, Christian education, preaching, ethics and pastoral care. I have also studied the bible and translated
from the ancient Greek and Hebrew into English. I have a Master of Divinity from Columbia Theological Seminary.
26 Subjects: including algebra 2, English, reading, writing
...My goal is always to make sure that the student, not only understands the material, but also feels confident in what they are doing. I feel that every student is different in what builds their
confidence in the material, so try to figure out what that is as we work together. I also ask for some...
9 Subjects: including algebra 2, chemistry, calculus, geometry
Related Buford, GA Tutors
Buford, GA Accounting Tutors
Buford, GA ACT Tutors
Buford, GA Algebra Tutors
Buford, GA Algebra 2 Tutors
Buford, GA Calculus Tutors
Buford, GA Geometry Tutors
Buford, GA Math Tutors
Buford, GA Prealgebra Tutors
Buford, GA Precalculus Tutors
Buford, GA SAT Tutors
Buford, GA SAT Math Tutors
Buford, GA Science Tutors
Buford, GA Statistics Tutors
Buford, GA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Cumming, GA algebra 2 Tutors
Doraville, GA algebra 2 Tutors
Duluth, GA algebra 2 Tutors
Dunwoody, GA algebra 2 Tutors
Flowery Branch algebra 2 Tutors
Gainesville, GA algebra 2 Tutors
Johns Creek, GA algebra 2 Tutors
Lawrenceville, GA algebra 2 Tutors
Mableton algebra 2 Tutors
Milton, GA algebra 2 Tutors
Norcross, GA algebra 2 Tutors
Rest Haven, GA algebra 2 Tutors
Snellville algebra 2 Tutors
Sugar Hill, GA algebra 2 Tutors
Suwanee algebra 2 Tutors | {"url":"http://www.purplemath.com/Buford_GA_algebra_2_tutors.php","timestamp":"2014-04-20T06:39:46Z","content_type":null,"content_length":"23965","record_id":"<urn:uuid:a1e128bf-6a08-46e0-bfb4-434d22960696>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spectral gap global solutions for degenerate Kirchhoff equations
Ghisi, Marina and Gobbino, Massimo (2009) Spectral gap global solutions for degenerate Kirchhoff equations. Nonlinear Analysis: Theory, Methods & Applications, 71/200 (9). pp. 4115-4124. ISSN
Download (178Kb) | Preview
SUMMARY: We consider the second order Cauchy problem $$u''+m(|A^{1/2}u|^2)Au=0, u(0)=u_{0}, u'(0)=u_{1},$$ where $m:[0,+\infty)\to[0,+\infty)$ is a continuous function, and $A$ is a self-adjoint
nonnegative operator with dense domain on a Hilbert space. It is well known that this problem admits local-in-time solutions provided that $u_{0}$ and $u_{1}$ are regular enough, depending on the
continuity modulus of $m$, and on the strict/weak hyperbolicity of the equation. We prove that for such initial data $(u_{0},u_{1})$ there exist two pairs of initial data $(\overline{u}_{0},\overline
{u}_{1})$, $(\widehat{u}_{0},\widehat{u}_{1})$ for which the solution is global, and such that $u_{0}=\overline{u}_{0}+\widehat{u}_{0}$, $u_{1}=\overline{u}_{1}+\widehat{u}_{1}$. This is a byproduct
of a global existence result for initial data with a suitable spectral gap, which extends previous results obtained in the strictly hyperbolic case with a smooth nonlinearity $m$.
Repository staff only actions | {"url":"http://eprints.adm.unipi.it/673/","timestamp":"2014-04-17T09:42:26Z","content_type":null,"content_length":"20522","record_id":"<urn:uuid:966295c3-b80f-4e47-8b54-0f7da11a8680>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00364-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Floating-Point Trick to Solve Boundary-Value Problems Faster
Seminar: Departmental | March 20 | 12:10-1 p.m. | 380 Soda Hall
W. Kahan, UC Berkeley
Electrical Engineering and Computer Sciences (EECS)
This talk resuscitates an old trick to accelerate the numerical solution of certain discretized boundaryvalue problems.
Without the trick, half the digits carried by the arithmetic can be lost to roundoff when
the discretization s grid-gaps get very small. The trick can obtain adequate accuracy from arithmetic with
float variables 4-bytes wide instead of double variables 8-bytes wide. Wider data moves slower through
the computer s memory system and pipelines. The trick is tricky for programs written in MATLAB 7,
JAVA, FORTRAN and post-1985 ANSI C. The trick is easy for the original Kernighan-Ritchie C of the
late 1970s, and for a few implementations of C99 that fully support IEEE Standard 754 for Binary
odedsc@cs.berkeley.edu, 510-516-4321 | {"url":"http://events.berkeley.edu/index.php/calendar/sn/eecs.html?event_ID=65443","timestamp":"2014-04-16T16:12:41Z","content_type":null,"content_length":"41054","record_id":"<urn:uuid:f78b4d08-f3ef-4604-a84c-80e46ccda167>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00279-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimation Exercises
David M. Lane
All material presented in the Estimation Chapter
Selected answers
You may want to use the Analysis Lab and various calculators for some of these exercises.
Inverse t Distribution: Finds t for a confidence interval.
t Distribution: Computes areas of the t distribution.
Fisher's r to z': Computes transformations in both directions.
Inverse Normal Distribution: Use for confidence intervals.
1. When would the mean grade in a class on a final exam be considered a statistic? When would it be considered a parameter? (relevant section)
2. Define bias in terms of expected value. (relevant section)
3. Is it possible for a statistic to be unbiased yet very imprecise? How about being very accurate but biased? (relevant section)
4. Why is a 99% confidence interval wider than a 95% confidence interval? (relevant section & relevant section)
5. When you construct a 95% confidence interval, what are you 95% confident about? (relevant section)
6. What is the difference in the computation of a confidence interval between cases in which you know the population standard deviation and cases in which you have to estimate it? (relevant
section & relevant section)
7. Assume a researcher found that the correlation between a test he or she developed and job performance was 0.55 in a study of 28 employees. If correlations under .35 are considered
unacceptable, would you have any reservations about using this test to screen job applicants? (relevant section)
8. What is the effect of sample size on the width of a confidence interval? (relevant section & relevant section)
9. How does the t distribution compare with the normal distribution? How does this difference affect the size of confidence intervals constructed using z relative to those constructed using t?
Does sample size make a difference? (relevant section)
10. The effectiveness of a blood-pressure drug is being investigated. How might an experimenter demonstrate that, on average, the reduction in systolic blood pressure is 20 or more? (relevant
section & relevant section)
11. A population is known to be normally distributed with a standard deviation of 2.8. (a) Compute the 95% confidence interval on the mean based on the following sample of nine: 8, 9, 10, 13, 14,
16, 17, 20, 21. (b) Now compute the 99% confidence interval using the same data. (relevant section)
12. A person claims to be able to predict the outcome of flipping a coin. This person is correct 16/25 times. Compute the 95% confidence interval on the proportion of times this person can
predict coin flips correctly. What conclusion can you draw about this test of his ability to predict the future? (relevant section)
13. What does it mean that the variance (computed by dividing by N) is a biased statistic? (relevant section)
14. A confidence interval for the population mean computed from an N of 16 ranges from 12 to 28. A new sample of 36 observations is going to be taken. You can't know in advance exactly what the
confidence interval will be because it depends on the random sample. Even so, you should have some idea of what it will be. Give your best estimation. (relevant section)
15. You take a sample of 22 from a population of test scores, and the mean of your sample is 60. (a) You know the standard deviation of the population is 10. What is the 99% confidence interval
on the population mean. (b) Now assume that you do not know the population standard deviation, but the standard deviation in your sample is 10. What is the 99% confidence interval on the mean
now? (relevant section)
16. You read about a survey in a newspaper and find that 70% of the 250 people sampled prefer Candidate A. You are surprised by this survey because you thought that more like 50% of the
population preferred this candidate. Based on this sample, is 50% a possible population proportion? Compute the 95% confidence interval to be sure. (relevant section)
17. Heights for teenage boys and girls were calculated. The mean height for the sample of 12 boys was 174 cm and the variance was 62. For the sample of 12 girls, the mean was 166 cm and the
variance was 65. (a) What is the 95% confidence interval on the difference between population means? (b) What is the 99% confidence interval on the difference between population means? (c) Do
you think the mean difference in the population could be about 5? Why or why not? (relevant section)
18. You were interested in how long the average psychology major at your college studies per night, so you asked 10 psychology majors to tell you the amount they study. They told you the
following times: 2, 1.5, 3, 2, 3.5, 1, 0.5, 3, 2, 4. (a) Find the 95% confidence interval on the population mean. (b) Find the 90% confidence interval on the population mean. (relevant
19. True/false: As the sample size gets larger, the probability that the confidence interval will contain the population mean gets higher. (relevant section & relevant section)
20. True/false: You have a sample of 9 men and a sample of 8 women. The degrees of freedom for the t value in your confidence interval on the difference between means is 16. (relevant section &
relevant section)
21. True/false: Greek letters are used for statistics as opposed to parameters. (relevant section)
22. True/false: In order to construct a confidence interval on the difference between means, you need to assume that the populations have the same variance and are both normally distributed. (
relevant section)
23. True/false: The red distribution represents the t distribution and the blue distribution represents the normal distribution. (relevant section)
Questions from Case Studies:
The following questions are from the Angry Moods (AM) case study.
24. (AM#6c) Is there a difference in how much males and females use aggressive behavior to improve an angry mood? For the "Anger-Out" scores, compute a 99% confidence interval on the difference
between gender means. (relevant section)
25. (AM#10) Calculate the 95% confidence interval for the difference between the mean Anger-In score for the athletes and non-athletes. What can you conclude? (relevant section)
26. Find the 95% confidence interval on the population correlation between the Anger-Out and Control-Out scores. (relevant section)
The following questions are from the Flatulence (F) case study.
27. (F#8) Compare men and women on the variable "perday." Compute the 95% confidence interval on the difference between means. (relevant section)
28. (F#10) What is the 95% confidence interval of the mean time people wait before farting in front of a romantic partner. (relevant section)
The following questions use data from the Animal Research (AR) case study.
29. (AR#3) What percentage of the women studied in this sample strongly agreed (gave a rating of 7) that using animals for research is wrong?
30. Use the proportion you computed in #29. Compute the 95% confidence interval on the population proportion of women who strongly agree that animal research is wrong. (relevant section)
31. Compute a 95% confidence interval on the difference between the gender means with respect to their beliefs that animal research is wrong. (relevant section)
The following question is from the ADHD Treatment (AT) case study.
32. (AT#8) What is the correlation between the participants' correct number of responses after taking the placebo and their correct number of responses after taking 0.60 mg/kg of MPH? Compute the
95% confidence interval on the population correlation. (relevant section)
The following question is from the Weapons and Aggression (WA) case study.
33. (WA#4) Recall that the hypothesis is that a person can name an aggressive word more quickly if it is preceded by a weapon word prime than if it is preceded by a neutral word prime. The first
step in testing this hypothesis is to compute the difference between (a) the naming time of aggressive words when preceded by a neutral word prime and (b) the naming time of aggressive words
when preceded by a weapon word prime separately for each of the 32 participants. That is, compute an - aw for each participant.
1. Would the hypothesis of this study be supported if the difference were positive or if it were negative?
2. What is the mean of this difference score? (relevant section)
3. What is the standard deviation of this difference score? (relevant section)
4. What is the 95% confidence interval of the mean difference score? (relevant section)
5. What does the confidence interval computed in (d) say about the hypothesis.
The following question is from the Diet and Health (WA) case study.
34. Compute a 95% confidence interval on the proportion of people who are healthy on the AHA diet.
│ │ Cancers │ Deaths │ Nonfatal illness │ Healthy │ Total │
│ AHA │ 15 │ 24 │ 25 │ 239 │ 303 │
│ Mediterranean │ 7 │ 14 │ 8 │ 273 │ 302 │
│ Total │ 22 │ 38 │ 33 │ 512 │ 605 │
The following questions are from (reproduced with permission)
Visit the site
35. Suppose that you take a random sample of 10,000 Americans and find that 1,111 are left-handed. You perform a test of significance to assess whether the sample data provide evidence that more
than 10% of all Americans are left-handed, and you calculate a test statistic of 3.70 and a p-value of .0001. Furthermore, you calculate a 99% confidence interval for the proportion of
left-handers in America to be (.103,.119). Consider the following statements: The sample provides strong evidence that more than 10% of all Americans are left-handed. The sample provides
evidence that the proportion of left-handers in America is much larger than 10%. Which of these two statements is the more appropriate conclusion to draw? Explain your answer based on the
results of the significance test and confidence interval.
36. A student wanted to study the ages of couples applying for marriage licenses in his county. He studied a sample of 94 marriage licenses and found that in 67 cases the husband was older than
the wife. Do the sample data provide strong evidence that the husband is usually older than the wife among couples applying for marriage licenses in that county? Explain briefly and justify
your answer.
37. Imagine that there are 100 different researchers each studying the sleeping habits of college freshmen. Each researcher takes a random sample of size 50 from the same population of freshmen.
Each researcher is trying to estimate the mean hours of sleep that freshmen get at night, and each one constructs a 95% confidence interval for the mean. Approximately how many of these 100
confidence intervals will NOT capture the true mean?
1. None
2. 1 or 2
3. 3 to 7
4. about half
5. 95 to 100
6. other
11) (a) (12.39, 16.05)
12) (.43, .85)
15) (b) (53.96, 66.04)
17) (a) (1.25, 14.75)
18) (a) (1.45, 3.05)
26) (-.713, -.414)
27) (-.98, 3.09)
29) 41%
33) (b) 7.16
Please answer the questions: | {"url":"http://onlinestatbook.com/2/estimation/ch8_exercises.html","timestamp":"2014-04-20T13:18:14Z","content_type":null,"content_length":"26718","record_id":"<urn:uuid:58b3c30e-5495-4da0-8548-dfaa157106ac>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
Error term of the Prime Number Theorem and the Riemann Hypothesis
up vote 3 down vote favorite
I have read that the Riemann Hypothesis is equivalent to
$\pi(x)=\text{Li}(x)+O(\sqrt{x}\log x)$
Is there an analogous statement saying the Riemann Hypothesis is equivalent to
$\pi(x)=\frac{x}{\log x}+ O(f(x))\quad$ for some $f$
$\pi(x)=\frac{x}{\log x}+ g(x) + O(h(x))\quad$ for some elementary function $g$ and $h$
I'm guessing that $f$ could not possibly be $\sqrt{x}\log x$ because I plotted
$\frac{\text{Li}(x)-x/\log(x)}{\sqrt x\log x}$ and it looked like it grew without bound as $x$ goes to infinity.
prime-number-theorem nt.number-theory
6 Yes to the second one: g(x)=Li(x)-x/log x; h(x)=sqrt(x)log x – Anthony Quas Jul 19 '11 at 7:03
Changed my post to include that $g$ must be an elementary function. – user16557 Jul 19 '11 at 7:14
add comment
1 Answer
active oldest votes
It is not hard to show that $$\mathrm{Li}(x) = \frac{x}{\log x} \sum_{k=0}^{m - 1}{\frac{k!}{(\log x)^k}} + O\left(\frac{x}{(\log x)^{m + 1}}\right)$$ for any $m \geq 0$ (just use the
definition of $\mathrm{Li}(x)$ and repeated integration by parts). Thus $$\pi(x) = \frac{x}{\log x} \sum_{k=0}^{m - 1}{\frac{k!}{(\log x)^k}} + O\left(\frac{x}{(\log x)^{m + 1}}\
up vote 15 right).$$ It is not possible to improve on this (this is true unconditionally; you don't even need the Riemann hypothesis). So $\mathrm{Li}(x)$ really is the "better" approximation to $
down vote \pi(x)$ compared to $x/\log x$.
add comment
Not the answer you're looking for? Browse other questions tagged prime-number-theorem nt.number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/70713/error-term-of-the-prime-number-theorem-and-the-riemann-hypothesis?sort=oldest","timestamp":"2014-04-20T06:44:34Z","content_type":null,"content_length":"53255","record_id":"<urn:uuid:630c4f9f-94e1-4361-8d50-5bcd7ac3ba88>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matches for:
Return to List
Mathematical Quantum Theory II: Schrödinger Operators
A co-publication of the AMS and Centre de Recherches Mathématiques.
             
CRM Proceedings & The articles in this collection constitute the proceedings of the Canadian Mathematical Society Annual Seminar on Mathematical Quantum Theory, held in Vancouver in August 1993. The
Lecture Notes meeting was run as a research-level summer school concentrating on two related areas of contemporary mathematical physics. The first area, quantum field theory and many-body theory,
is covered in volume 1 of these proceedings. The second area, treated in the present volume, is Schrödinger operators. The meeting featured a series of four-hour mini-courses,
1995; 304 pp; designed to introduce students to the state of the art in particular areas, and thirty hour-long expository lectures. With contributions from some of the top experts in the field,
softcover this book is an important resource for those interested in activity at the frontiers of mathematical quantum theory.
Volume: 8 Titles in this series are co-published with the Centre de Recherches Mathématiques.
ISBN-10: Research mathematicians.
• S. Agmon -- Topics in spectral theory of Schrodinger operators on non-compact Riemannian manifolds with cusps
ISBN-13: • W. Hunziker and I. M. Sigal -- The general theory of \(N\)-body quantum systems
978-0-8218-0366-0 • I. M. Sigal -- Lectures on large Coulomb systems
• B. Simon -- Spectral analysis of rank one perturbations and applications
List Price: US$96 • V. Enss and R. A. Weder -- Inverse potential scattering: A geometrical approach
• C. Gerard -- New results on the long-range scattering
Member Price: • B. Helffer -- Spectral properties of the Kac operator in large dimension
US$76.80 • J. M. Combes and P. D. Hislop -- Localization properties of continuous disordered systems in d-dimensions
• T. A. Osborn and F. H. Molzahn -- Cluster WKB
Order Code: CRMP/ • P. A. Perry -- The Selberg zeta function and scattering poles for Kleinian groups
8 • R. Seiler -- Charge transport in quantum hall systems and the index of projectors
• L. A. Seco -- Number theory, classical mechanics, and the theory of large atoms
• H. Siedentop -- Bound for the atomic ground state density at the nucleus
• I. W. Herbst and E. Skibsted -- Spectral analysis of N-body Stark Hamiltonians
• R. G. Froese and R. Waxler -- Ground state resonances for hydrogen in an intense magnetic field
• D. R. Yafaev -- On the scattering matrix for N-particle Hamiltonians | {"url":"http://ams.org/bookstore?fn=20&arg1=crmpseries&ikey=CRMP-8","timestamp":"2014-04-16T18:00:38Z","content_type":null,"content_length":"16506","record_id":"<urn:uuid:b7410fef-8ab0-46ed-84b1-fb4f30a42545>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00636-ip-10-147-4-33.ec2.internal.warc.gz"} |
SAS Similarity ( Read ) | Geometry
What if you were given a pair of triangles, the lengths of two of their sides, and the measure of the angle between those two sides? How could you use this information to determine if the two
triangles are similar? After completing this Concept, you'll be able to use the SAS Similarity Theorem to decide if two triangles are congruent.
Watch This
CK-12 Foundation: SAS Similarity
Watch this video beginning at the 2:09 mark.
James Sousa: Similar Triangles
Now watch the second part of this video.
James Sousa: Similar Triangles Using SSS and SAS
By definition, two triangles are similar if all their corresponding angles are congruent and their corresponding sides are proportional. It is not necessary to check all angles and sides in order to
tell if two triangles are similar. In fact, if you know only that two pairs of sides are proportional and their included angles are congruent, that is enough information to know that the triangles
are similar. This is called the SAS Similarity Theorem.
SAS Similarity Theorem: If two sides in one triangle are proportional to two sides in another triangle and the included angle in both are congruent, then the two triangles are similar.
If $\frac{AB}{XY} = \frac{AC}{XZ}$$\angle A \cong \angle X$$\triangle ABC \sim \triangle XYZ$
Example A
Are the two triangles similar? How do you know?
We know that $\angle B \cong \angle Z$$\frac{10}{15} = \frac{24}{36}$$\frac{AB}{XZ} = \frac{BC}{ZY}$$\triangle ABC \sim \triangle XZY$
Example B
Are there any similar triangles in the figure? How do you know?
$\angle A$$\triangle EAB$$\triangle DAC$$\frac{AE}{AD} = \frac{AB}{AC}$
$\frac{9}{9+3} &= \frac{12}{12+5}\\\frac{9}{12} &= \frac{3}{4} eq \frac{12}{17} && \text {The two triangles are} \ not \ \text{similar.}$
Example C
From Example B, what should $BC$$\triangle EAB \sim \triangle DAC$
The proportion we ended up with was $\frac{9}{12} = \frac{3}{4} eq \frac{12}{17}$$AC$$\frac{12}{16} = \frac{3}{4}$$AC = AB + BC$$16 = 12 + BC$$BC$
CK-12 Foundation: SAS Similarity
Guided Practice
Determine if the following triangles are similar. If so, write the similarity theorem and statement.
1. We can see that $\angle{B} \cong \angle{F}$
Since the ratios are the same $\triangle ABC \sim \triangle DFE$
2. The triangles are not similar because the angle is not the included angle for both triangles.
3. $\angle{A}$
The ratios are not the same so the triangles are not similar.
Fill in the blanks.
1. If two sides in one triangle are _________________ to two sides in another and the ________________ angles are _________________, then the triangles are ______________.
Determine if the following triangles are similar. If so, write the similarity theorem and statement.
Find the value of the missing variable(s) that makes the two triangles similar.
Determine if the triangles are similar. If so, write the similarity theorem and statement.
6. $\Delta ABC$$\Delta DEF$
7. $\Delta GHI$$\Delta JKL$
12. $\overline{AC} = 3$
$\overline{DF} = 6$ | {"url":"http://www.ck12.org/geometry/SAS-Similarity/lesson/SAS-Similarity/","timestamp":"2014-04-20T16:41:14Z","content_type":null,"content_length":"118979","record_id":"<urn:uuid:34e2a0e5-c01b-45eb-8f1a-0b117951604f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |
In and Out
Operational Solutions to Differential Equations
Q: If 6])
How can I implement the operator
A: The definition of functions.wolfram.com/03.01.02.0001.01.
Then, formally, one has
For an arbitrary potential NestList. For example, here are the first four terms.
For an axially symmetric potential, the Laplacian in cylindrical coordinates reads,
and the potential satisfies Laplace's equation,
Now for a concrete example. For a disk of radius
Using the operator formalism one obtains
There is another approach to this problem: In the case of azimuthal symmetry, the general solution to Laplace's equation
Now, if the potential is known on the axis, that is
Now, equate this to
Hence we obtain the (truncated series expansion of the) potential of the disk off the axis in spherical coordinates.
To compare this solution to that obtained earlier, we expand functions.wolfram.com/01.01.06.0002.01 and functions.wolfram.com/01.01.06.0003.01.
Check that we have the correct expansion for
Hence the potential of the disk off the axis is given by
Changing coordinates from polar to cylindrical coordinates,
Elmar Zeitler (zeitler@fhi-berlin.mpg.de) submitted another example of an operator expansion. Using the integral definition (functions.wolfram.com/03.01.07.0005.01),
and the identity (functions.wolfram.com/01.07.16.0096.01),
then the change of variables
Implementation of this operator expansion is direct. | {"url":"http://www.mathematica-journal.com/issue/v10i2/contents/Inout10-2/Inout10-2_6.html","timestamp":"2014-04-18T23:21:10Z","content_type":null,"content_length":"14655","record_id":"<urn:uuid:11bb6c2a-7803-4ab8-ae6f-c3d5ecea5df4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fraction problems - Urgent
$\frac{4x^2-1}{x^2-9y^2} * \frac{2x^2+6xy-x-3y}{4x^2-4x+1} * \frac{1}{2x+1}$ and also $\frac{64a^3+b^3}{16a^2-b^2} / \frac{16a^2b^2-4ab^3+b^4}{4a^2-ab+12a-3b}$
Chuck_3000 first expand all the terms which are the difference of two squares, and factorise of the denominator of the middle term: $<br /> \frac{4x^2-1}{x^2-9y^2} \times \frac{2x^2+6xy-x-3y}{4x^
2-4x+1} \times \frac{1}{2x+1}$$<br /> <br /> =\frac{(2x-1)(2x+1)}{(x-3y)(x+3y)} \times \frac{(x+3y)(2x-1)}{4x^2-4x+1} \times \frac{1}{2x+1}<br />$ Now cancell the common terms where possible: $<br />
\frac{4x^2-1}{x^2-9y^2} \times \frac{2x^2+6xy-x-3y}{4x^2-4x+1} \times \frac{1}{2x+1}$$<br /> <br /> =\frac{(2x-1)}{(x-3y)} \times \frac{(2x-1)}{4x^2-4x+1}<br />$ Multiply the two denominators, and do
any remaining cancellations: $<br /> \frac{4x^2-1}{x^2-9y^2} \times \frac{2x^2+6xy-x-3y}{4x^2-4x+1} \times \frac{1}{2x+1}$$<br /> <br /> =\frac{(4x^2-4x+1)}{(x-3y)(4x^2-4x+1)}=\frac{1}{x-3y}<br />$
thanks for that i got 1 over 3y-x, so im glad i got close do you have any idea for the next one?
Chuck_3000 First get rid of the division and replace it with a multiplication: $<br /> \frac{64a^3+b^3}{16a^2-b^2} / \frac{16a^2b^2-4ab^3+b^4}{4a^2-ab+12a-3b}=$$<br /> <br /> \frac{64a^3+b^3}{16a^2-b
^2} \times \frac{4a^2-ab+12a-3b}{16a^2b^2-4ab^3+b^4}<br />$ Now on the right hand side of the equals sign expand the difference of two squares in the numerator of the first term and factorise the
denominator of the second term: $<br /> \frac{64a^3+b^3}{16a^2-b^2} / \frac{16a^2b^2-4ab^3+b^4}{4a^2-ab+12a-3b}=$$<br /> <br /> \frac{64a^3+b^3}{(4a-b)(4a+b)} \times \frac{(4a-b)(a+3)}{16a^2b^2-4ab^
3+b^4}<br />$ Now cancel where we can: $<br /> \frac{64a^3+b^3}{16a^2-b^2} / \frac{16a^2b^2-4ab^3+b^4}{4a^2-ab+12a-3b}=$$<br /> <br /> \frac{64a^3+b^3}{(4a+b)} \times \frac{(a+3)}{16a^2b^2-4ab^3+b^4}
<br />$ Now a little trial and error shows that $4a+b$ is a factor of $64a^3+b^3$, so doing the long division, and factoring out the factor of $b^2$ in the numerator of the last term gives: $<br /> \
frac{64a^3+b^3}{16a^2-b^2} / \frac{16a^2b^2-4ab^3+b^4}{4a^2-ab+12a-3b}=$$<br /> <br /> \frac{(4a+b)(16a^2-4ab+b^2)}{(4a+b)} \times \frac{(a+3)}{b^2(16a^2-4ab+b^2)}<br />$ Finally cancelling: $<br />
\frac{64a^3+b^3}{16a^2-b^2} / \frac{16a^2b^2-4ab^3+b^4}{4a^2-ab+12a-3b}=$$<br /> <br /> \frac{a+3}{b^2}<br />$ RonL | {"url":"http://mathhelpforum.com/algebra/4624-fraction-problems-urgent.html","timestamp":"2014-04-19T14:08:19Z","content_type":null,"content_length":"45007","record_id":"<urn:uuid:b3902c48-5278-4942-a786-a47d8a8d6603>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
2011.51: Model Order Reduction or How to make everything as simple as possible but not simpler
2011.51: Younes Chahlaoui (2011) Model Order Reduction or How to make everything as simple as possible but not simpler. In: MAMERN11: 4 th International Conference on Approximation Methods and
Numerical Modelling in Environment and Natural Resources, 23-26 May 2011, Saidia, Morocco.
Full text available as:
PDF - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader
70 Kb
Official URL: http://196.200.156.187/logos/abstract/Chahlaoui_Mamern11.pdf
Large complex mathematical models are regularly used for simulation and prediction. However, in control design it is common practice to work with as simple models as possible, because they are easier
to analyse and evaluate. There is a strong need for methods and tools that can take a complex model and deduce simple models for various purposes such as control design. A simple but good model
captures much knowledge. It points out the basic properties and can give good insight about the process. For simple linear time-invariant models there is a well-established theory and commercially
available tools for design of controllers with given specifications. Real experiments or simulations using more complex models are then used to verify that the designed controller really works well.
For nonlinear models the methods are much less developed. It is simple to derive a linearization on symbolic form from a nonlinear model. It is much more difficult to give explicit expressions for
stationary operating points since these calculations involve nonlinear equation systems. The main idea in model reduction is that a high-dimensional state vector is actually belonging to a
low-dimensional subspace [1, 2, 4]. Provided that the low-rank subspace is known, the original model can be projected on it to obtain a required low-dimensional approximation [3]. The goal of every
model reduction method is to find such a low-dimensional subspace. In this talk I will introduce model reduction and I will overview some of the most used methods.
Download Statistics: last 4 weeks
Repository Staff Only: edit this item | {"url":"http://eprints.ma.man.ac.uk/1638/","timestamp":"2014-04-17T04:05:41Z","content_type":null,"content_length":"10872","record_id":"<urn:uuid:a178a228-2fdb-446b-9479-57eaff0b1fe7>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Relational lattice
Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
Home -> Community -> Usenet -> comp.databases.theory -> Re: Relational lattice
Re: Relational lattice
From: Vadim Tropashko <vadimtro_invalid_at_yahoo.com> Date: 14 Mar 2005 11:16:41 -0800 Message-ID: <1110827801.062965.41920@l41g2000cwc.googlegroups.com>
Jan Hidders wrote:
> Vadim Tropashko wrote:
> > Jan Hidders wrote:
> >>
> >> The answer to your question is actually in that reference. See
> >> Remark 7.3 on page 16. If you want to know more, the paper by
> >> Kolaitis they refer to is actually also on-line: See Phokion's
> >> page:
> >
> > OK, where is the equation?
> It is constructed in the proof of Prop. 7.2. Each of the conditions
> the variable X in the list 1, .., 4 can be expressed with equations.
> > I have difficulty understanding which constructs they admit into
> > algebra and which don't.
> They use in the proof the standard 5 operation of the flat relational
> algbra and furthermore you can use an arbitrary set of extra relation
> variables (which are of course existentially quantified) next to the
> ones in the database and the one that defines the result.
> > The concept of sparce equation also eludes me:-(
> You can ignore that for now. We only look at sets of equations over
> relations that always define a unique result, and those are always
> sparse anyway.
Ignoring sparsity brings us back to the "obvious equation algebra expression for transitive closure" in the example 4.3. There I see that the problem of finding the minimum solution has been solved
with introduction of an operator
which works over powerset. It seems like the concept of sparse equations is merely a way of bringing powerset algebra back to earth to operate with normal ("flat") relations.
Returning back to proposition 7.2, what are the equations for conditions 3 and 4? Those conditions have universal quantifiers, which is not that much different from universal quantifier that says:
"find a minimal relation that satisfies the TC recurrence relation". Received on Mon Mar 14 2005 - 13:16:41 CST
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US | {"url":"http://www.orafaq.com/usenet/comp.databases.theory/2005/03/14/0141.htm","timestamp":"2014-04-20T12:10:56Z","content_type":null,"content_length":"9129","record_id":"<urn:uuid:d4e7ba51-f9c6-4b60-9465-68eb6940d85d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
Carry out a timing and accuracy comparison of
• Matlab's built-in qr(A,0)
• Your Matlab house/getQ
• Your mex'ed C (or Fortran) house/getQ
The Legendre polynomials make a good test case. Make sure you use a large enough n and m to make the comparsion meaningful (at least m=500, and go higher until it starts to take too long to wait),
but but start with smaller values until you are sure everything is working. Plot your results on nicely labelled plots over a range of test sizes, using different plotting symbols for different codes
(type "help plot"). Measure the times using cputime . (Elapsed time measures obtained by "etime" or "tic; ... toc" are affected by how much work other users are doing on the same machine). Measure
accuracy with norm(Q'*Q - I) and norm(A - Q*R). Make some written comments which draw conclusions from the results.
If you want to use Java instead of C or Fortran, which has the attraction of avoiding the MEX interface altogether, you may do so, but you are on your own as far as the details go. In order to use
the BLAS, you would need either to to call the C or Fortran compiled versions using the Java Native Interface or download Java versions of the BLAS from the web.
Some of you may find this homework very difficult. The key is to carefully look at the files available on the web and, once you understand them, make the necessary changes. Don't try to code from
scratch! If you are lost, don't hesitate to contact me and discuss things - the sooner, the better. Don't panic, eventually you will get it!
I will be out of town again Wed-Fri this week, but available this Tuesday and all of next week. | {"url":"http://cs.nyu.edu/cs/faculty/overton/g22_nm/hw/hw4.html","timestamp":"2014-04-16T07:18:53Z","content_type":null,"content_length":"3321","record_id":"<urn:uuid:f1f4034b-3c3e-422d-897f-b2ac358c4f03>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
from The American Heritage® Dictionary of the English Language, 4th Edition
• adj. Of, relating to, or resembling a line; straight.
• adj. In, of, describing, described by, or related to a straight line.
• adj. Having only one dimension.
• adj. Characterized by, composed of, or emphasizing drawn lines rather than painterly effects.
• adj. Botany Narrow and elongated with nearly parallel margins: a linear leaf.
from Wiktionary, Creative Commons Attribution/Share-Alike License
• adj. Having the form of a line; straight.
• adj. Of or relating to lines.
• adj. Made in a step-by-step, logical manner.
• adj. Long and narrow, with nearly parallel sides.
• adj. Of or relating to a class of polynomial of the form .
• adj. A type of length measurement involving only one spatial dimension (as opposed to area or volume).
from the GNU version of the Collaborative International Dictionary of English
• adj. Of or pertaining to a line; consisting of lines; in a straight direction; lineal.
• adj. Like a line; narrow; of the same breadth throughout, except at the extremities.
• adj. Thinking in a step-by-step analytical and logical fashion; contrasted with holistic, i.e. thinking in terms of complex interrelated patterns.
from The Century Dictionary and Cyclopedia
• Of or pertaining to a line or lines; composed or consisting of lines: as, linear drawing; linear perspective.
• Relating to length only; specifically, in mathematics and physics, involving measurement in one dimension only, or a sum of such measurements; involving only straight lines; unidimensional; of
the first degree: as, linear numbers; linear measure.
• In bot., zoöl, and anatomy, like a line or thread; slender; very narrow and elongate: as, a linear leaf.
• In prosody, consisting in or pertaining to a succession of single verses all of the same rhythm and length; stichic: as, linear composition; “Paradise Lost” is linear in composition.
from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
• adj. measured lengthwise
• adj. designating or involving an equation whose terms are of the first degree
• adj. of or in or along or relating to a line; involving a single dimension
• adj. (of a leaf shape) long and narrow
• adj. of a circuit or device having an output that is proportional to the input
Latin līneāris, from līnea, line; see line^1.
(American Heritage® Dictionary of the English Language, Fourth Edition)
From Latin linearis, from linea ("line") + -aris (adjectival suffix). (Wiktionary)
• "The term linear just means that each output bit of the mixing function is the XOR of several of the input bits.",
• If you think in linear algebra terms, the linear system is the addition operator, and the non-linear system is the multiplier (cross product in this case):
• Anthropologists get bound up in linear social progression to such an extent that knob-gourds are interpreted as meaning/representing all sorts of crazy things, when in reality they just make the
wearer feel, you know, sexy.
Cheeseburger Gothic » Yes I get ffkn grmpy whn I’m on fkn deadline.
• Add into the mix the fact that the story is not told in linear time the first half of the book is working backwards into history, while the second half works forwards into the future and you
start to see the complexity of writing like this.
• Astronomers really do measure the moisture content of the atmosphere in linear units (though not in hairs), because what they care about is the total amount of water above the telescope, not the
humidity (and certainly not the relative humidity).
• Analysis of discontinuities in linear memory indicate the effects of a large, quick EM pulse.
365 tomorrows » submission : A New Free Flash Fiction SciFi Story Every Day
• The relationship between height and water pressure in linear and follows the relationship of 1 psi (pounds per square inch) = 2.308 feet of water (27. 7 inches) therefore a typical one-storey
home with a roof tank mounted on the roof will have the water level in the tank approximately 4.5 metres (177.2 inches) above the ground.
• This is said to show the true sequence of recorded events in linear time, presumably on the grounds that Spanish loan words and intrusions would become more common in later chronicles.
• When numbers are spread out evenly on a ruler, the scale is called linear.
• It allows fine-grain control over typographic attributes like leading and kerning, and has two modes: linear is displayed in the screenshot at right; check the web site to see what rotary looks
like (the video is worth more words than I'm willing to type).
Log in or sign up to get involved in the conversation. It's quick and easy. | {"url":"https://wordnik.com/words/linear","timestamp":"2014-04-18T18:52:34Z","content_type":null,"content_length":"39984","record_id":"<urn:uuid:2e7f161a-a413-4310-b67c-0db5455e0f6a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
Section 4: Fractions to Decimals to Percents
Shodor > Interactivate > Textbooks > Math Thematics 1st Ed. Book 2 > Section 4: Fractions to Decimals to Percents
Math Thematics 1st Ed. Book 2
Module 5 - Recreation
Section 4: Fractions to Decimals to Percents
Lesson • Activity • Discussion • Worksheet • Show All
Lesson (...)
Fraction Conversion
Lesson: Students learn how to convert from fractions to decimals.
Fraction Conversion II
Lesson: Students learn how to convert from fractions to percentages.
Activity (...)
Cantor's Comb
Activity: Learn about fractions between 0 and 1 by repeatedly deleting portions of a line segment, and also learn about properties of fractal objects. Parameter: fraction of the segment to be deleted
each time.
Koch's Snowflake
Activity: Step through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals.
Sierpinski's Carpet
Activity: Step through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out. Explore number patterns in sequences
and geometric properties of fractals.
Discussion (...)
Worksheet (...)
No Results Found
©1994-2014 Shodor Website Feedback
Math Thematics 1st Ed. Book 2
Module 5 - Recreation
Section 4: Fractions to Decimals to Percents
Lesson • Activity • Discussion • Worksheet • Show All
Lesson (...)
Fraction Conversion
Lesson: Students learn how to convert from fractions to decimals.
Fraction Conversion II
Lesson: Students learn how to convert from fractions to percentages.
Activity (...)
Cantor's Comb
Activity: Learn about fractions between 0 and 1 by repeatedly deleting portions of a line segment, and also learn about properties of fractal objects. Parameter: fraction of the segment to be deleted
each time.
Koch's Snowflake
Activity: Step through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals.
Sierpinski's Carpet
Activity: Step through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out. Explore number patterns in sequences
and geometric properties of fractals.
Discussion (...)
Worksheet (...)
No Results Found | {"url":"http://www.shodor.org/interactivate/textbooks/section/126/","timestamp":"2014-04-18T18:12:16Z","content_type":null,"content_length":"14865","record_id":"<urn:uuid:b0981399-f2e2-4be3-9e04-4ec92ede8e8c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
Foundation Statistics Mod
October 27th 2012, 04:39 AM
Foundation Statistics Mod
Hello everyone!
I am having trouble getting in contact with my lecturer so I am hoping for assistance here with a few questions and would like support in answering them.
The set data given are arrival times e.g. 11.59pm, 11.43pm,11.23pm ect. all on the same day.
1. Draw the histogram of the Arrival Distribution and use the 5% significance test to test whether it is coming from an exponential distribution.
Do I group the data into interval times?
How to solve exponential. Do I use f(x) = 1/theta . exp ^ -x/theta
2. Estimate the parameter of the distribution
3. Show that the estimate used to estimate its parameter is the best estimate
4. Suppose the population increase by 100% within the next 10 years give the equation of the predicted distribution and draw its histogram
5. If every 5^th customer is getting a voucher give the arrival distribution of the customers who will receive a voucher
6. Give the distribution of the number of customers arriving in every half an hour
October 27th 2012, 06:02 PM
Re: Foundation Statistics Mod
Hey Jon123.
Can you show us what you have tried? To start you off, for number 2, consider the likelihood with the sample (given n observations) and then for MLE solve for the log-likelihood where its
derivative is zero (I assume you want to use MLE but if not just show your attempt for the other estimator).
October 29th 2012, 05:08 AM
Re: Foundation Statistics Mod
Hi Chiro!
f(x,θ) = (1/θ) * exp(-x/θ) 0 ≤ x < ∞
this will equal to (1/θ)^n * exp^ -Σ(-x/θ)
Then solve for θ, which it will equal to θ = Σx/n = x bar
To solve moment est E(x) = integral ∞,0 x * (exp(-x/θ) / θ) dx = θ ect..
m.l.e is unbiased for θ .. so, E(x bar) = θ
E(x1+...+xn / n) = nE(x)/n = E(x) = θ therefore x bar is unbiased for θ
Using Rao cramer lower bound(rclb)
V(x bar) =θ^2 / n ... will equal -1/θ^2 = -1/n[1/-1/θ^2] = θ^2/n .. therefore x bar is the best estimator.
I dont quite understand questions 4, 5 and 6
Are you able to help me answer 2 and 3 ? from what I did above should be correct in theorem. But do i need to replace θ with a number...
The data that given where times from am to pm.. n=107 entries..
October 29th 2012, 07:37 PM
Re: Foundation Statistics Mod
For number 4, consider what happens to the rate when you double the population: if the you double the amount of people that come through the gate then what would this do to the average rate for
the exponential?
For number 5, you need to consider a conditional distribution: you have every 5th person get a voucher so consider a random sample of five people are taken then consider P
(Next-Person-Gets-Voucher|Last-Multiple-Of-Five-Got-A-Voucher) in which conditional probabilities are given by P(A|B) = P(A and B)/P(B).
For number 6, consider what happens to the rate when you are going a "per hour" rate to a "per half hour" rate: How is the rate adjusted in the new set of units? | {"url":"http://mathhelpforum.com/advanced-statistics/206172-foundation-statistics-mod-print.html","timestamp":"2014-04-18T00:26:12Z","content_type":null,"content_length":"6971","record_id":"<urn:uuid:cc89d4fd-1ed2-4375-ba26-c43c803c707a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Calculate the Coal Quantity Used in a Power Plant
• Very often, the Power engineer is required to perform some basic calculations regarding the key parameters of a power plant. Most important is the quantity and cost of fuel that is required.This
article gives the simple calculation method. (A detailed calculation required in the context of a contract, tender, performance repor,t or a legal document may require more accurate input data.)
We take the example of a 100 MW Coal Fired Power Plant.
• Energy Content in Coal
The basic function of the power plant is to convert energy in coal to electricity. Therefore, the first thing we should know is how much energy there is in coal. Energy content of coal is given
in terms of KiloJoules (kJ) per Kilogram (kg) of coal as the Gross calorific value (GCV) or the Higher Heating value (HHV) of coal. This value can vary from 10500 kJ/kg to 25000 kJ/kg depending
on the quality and type of the coal.
You should have an idea of the type of coal, or the source or mine from where the the plant gets the coal. Published data about the sources, mines, regions or the procurement data gives an idea
about the HHV of coal. For this example we use a HHV of 20,000 kJ/kg.
• Efficiency
Energy conversion takes place in two stages.
□ The first part of the conversion is efficiency of the boiler and combustion. For this example we take 88 % on an HHV basis that is the normal range for a well-optimized power plant.
□ Second part is the steam cycle efficiency. Modern Rankine cycle, adopted in coal fired power plants, have efficiencies that vary from 32 % to 42 %. This depends mainly on the steam
parameters. Higher steam perssure and temperatures in the range of 600 ° C and 230 bar have efficiencies around 42 %. We assume a value of 38 % for our case.
• Heat Rate
Heat rate is the heat input required to produce one unit of electricity. (1 kw hr)
□ One Kw is 3600 kJ/hr. If the energy conversion is 100 % efficient then to produce one unit of electricity we require 3600 kJ.
□ After considering the conversion efficiency in a power plant we require an heat input of (3600 / 33.44% ) 10765 kJ/ kw hr.
• Coal Quantity
□ Since coal has a heat value of 20,000 kJ/kg, for producing one kw.hr we require (10765 / 20000) 0.538 kg of coal. This translates to (0.538 x 100 x 1,000) 53800 kg/hr (53.8 T/hr) of coal for
an output of 100 MW.
• Coal Cost
Basic cost of coal depends on the market conditions. Transportation costs, regional influences and government taxes are also part of the cost. Coal trader’s web sites give base prices in the
international market.
□ We take a coal price of around 65 $ / Ton.
□ The cost of coal consumed by 100 MW power plant is (53.8 x 65) 3497 $ /hr
□ A 100 MW unit produces 100,000 units of electricity. So the cost of coal per unit of electricity is (3497/100,000) 3.5 cents per unit. | {"url":"http://www.brighthubengineering.com/power-plants/52544-basic-calculations-for-a-power-plant-calculating-the-coal-quantity/","timestamp":"2014-04-18T13:39:33Z","content_type":null,"content_length":"43778","record_id":"<urn:uuid:585e9680-1901-423c-80bf-c60490291472>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - totally antisymmetric tensor
1. The problem statement, all variables and given/known data
The totally antisymmetric rank 4 tensor is defined as 1 for an even combination of its indices and -1 for an odd combination of its indices and 0 otherwise.
Is a rank 3 totally antisymmetric tensor defined the same way?
2. Relevant equations
3. The attempt at a solution | {"url":"http://www.physicsforums.com/showpost.php?p=1463434&postcount=1","timestamp":"2014-04-20T21:29:53Z","content_type":null,"content_length":"8744","record_id":"<urn:uuid:2dbb9b63-ae3e-467e-92f7-3c4eabc2b286>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATHEMATICA BOHEMICA, Vol. 127, No. 4, pp. 591-596 (2002)
Induced-paired domatic numbers of graphs
Bohdan Zelinka
Bohdan Zelinka, Department of Applied Mathematics, Technical University of Liberec, Voronezska 13, 460 01 Liberec, Czech Republic, e-mail: bohdan.zelinka@vslib.cz
Abstract: A subset $D$ of the vertex set $V(G)$ of a graph $G$ is called dominating in $G$, if each vertex of $G$ either is in $D$, or is adjacent to a vertex of $D$. If moreover the subgraph $<D
\>$ of $G$ induced by $D$ is regular of degree 1, then $D$ is called an induced-paired dominating set in $G$. A partition of $V(G)$, each of whose classes is an induced-paired dominating set in $G$,
is called an induced-paired domatic partition of $G$. The maximum number of classes of an induced-paired domatic partition of $G$ is the induced-paired domatic number $d_{\ip }(G)$ of $G$. This paper
studies its properties.
Keywords: dominating set, induced-paired dominating set, induced-paired domatic number
Classification (MSC2000): 05C69, 05C35
Full text of the article:
[Previous Article] [Next Article] [Contents of this Number] © 2005 ELibM and FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition | {"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/MB/127.4/9.html","timestamp":"2014-04-19T09:44:26Z","content_type":null,"content_length":"2780","record_id":"<urn:uuid:f5c7dc32-9c72-414c-8594-5da8b9367062>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00153-ip-10-147-4-33.ec2.internal.warc.gz"} |
Park Ridge, IL Math Tutor
Find a Park Ridge, IL Math Tutor
...In college I received a Bachelors in Computer Science and was introduced to many languages but PHP in particular was a language I fell in love with. Working in finance, a heavy emphasis was on
Excel and I became adept at programming Excel macros with Visual Basic. I'm very comfortable with comp...
22 Subjects: including prealgebra, computer science, PHP, Visual Basic
...My former students were of different ages, nationalities, and levels of mathematical skills. All of them significantly improved their mathematical knowledge and grades. I hold a PhD in math
ematics and physics and tutor a lot of high school and college students during the last 10 years.
8 Subjects: including algebra 1, algebra 2, calculus, geometry
...I taught four years of High School Math (3 years teaching Algebra 2/Trig). I've been tutoring math since I was in high school 15 years ago. I taught four years of High School Math (All 4 years
teaching Honors Geometry). I've been tutoring math since I was in high school 15 years ago. I have Bachelor of Science in Mathematics Education and a Master of Science in Applied Mathematics.
10 Subjects: including statistics, algebra 1, algebra 2, calculus
...I have been teaching all K-8th Mandarin Chinese for the last three years in public schools. I am certified to teach Mandarin Chinese by ISBE with middle school endorsement. I have taught
Mandarin Chinese in public school for three years.
28 Subjects: including SAT math, Chinese, GRE, algebra 1
...My teaching and mentoring experience includes: - Graduate school teaching assistant with laboratory, lecturing and mentoring responsibilities for 30 students. - Undergraduate engineering tutor
for calculus, engineering analysis, mechanical engineering (Purdue and Northwester Universities) - High ...
22 Subjects: including algebra 1, geometry, precalculus, trigonometry
Related Park Ridge, IL Tutors
Park Ridge, IL Accounting Tutors
Park Ridge, IL ACT Tutors
Park Ridge, IL Algebra Tutors
Park Ridge, IL Algebra 2 Tutors
Park Ridge, IL Calculus Tutors
Park Ridge, IL Geometry Tutors
Park Ridge, IL Math Tutors
Park Ridge, IL Prealgebra Tutors
Park Ridge, IL Precalculus Tutors
Park Ridge, IL SAT Tutors
Park Ridge, IL SAT Math Tutors
Park Ridge, IL Science Tutors
Park Ridge, IL Statistics Tutors
Park Ridge, IL Trigonometry Tutors | {"url":"http://www.purplemath.com/park_ridge_il_math_tutors.php","timestamp":"2014-04-17T19:23:49Z","content_type":null,"content_length":"24000","record_id":"<urn:uuid:e7e102e2-58fb-4bc6-b95d-f826801d1a92>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
Guidelines for SEP Soil Structure Interaction Reviews (Generic Letter 80-109)
Guidelines for SEP Soil Structure Interaction Reviews (Generic Letter 80-109)
DEC 15 1980
Enclosed for your information are guidelines for performing soil-structure
interaction reviews for SEP facilities. Also included is a simplified
analytical approach for evaluating the effects of soil-structure interaction
using a lumped parameter model. The simplified approach presented does not
preclude the use of other procedures which would be reviewed and approved on
a case-by-case basis.
Dennis N. Crutchfield, Chief
Operating Reactors Branch #5
Division of Licensing
Enclosure: SSRT Guidelines for SEP Soil-Structure Interaction Review
cc: See next page
Docket Nos: 50-155 TERA RTedesco
50-10 & 237 OI&E (3) TNovak
50-213 ACRS (16) SEP File
50-409 Heltemes, AEOD JRoe
50-245 NRR Reading RDiggs
50-219 SEPB Reading
50-255 WRussell
50-244 DCrutchfield
50-206 HSmith
50-29 Project Managers
NRC PDR GCwalina
Local PDR RHermann
OELD DEisenhut
TERA RPurple
NSIC, JBuchanan GLainas
1211 CIVIL ENGINEERING BUILDING
URBANA, ILLINOIS 61801
8 December 1980
Mr. William T. Russell, Chief
Systematic Evaluation Program Branch
Division of Licensing
Office of Nuclear Reactor Regulation
U. S. Nuclear Regulatory Commission
Washington, D.C. 20555 (Mail Stop 516)
Re: SSRT Guidelines for SEP Soil-Structure
Interaction Review
Contract NRC-03-78-150
Dear Mr. Russell:
The Guidelines for SEP Soil-Structure Interaction Review, as prepared
by the Senior Seismic Review Team, are transmitted herewith with signature
We are appreciative of the help of the many individuals who contributed
to the preparation of these guidelines.
Sincerely yours,
N. M. Newmark
Chairman, SSRT
W. T. Russell - 2
T. Cheng - 1
N. M. Newmark - 2
W. J. Hall - 1
R. P. Kennedy - 1
R. Murray - 1
J. D. Stevenson - 1
December 8, 1980
SSRT GUIDELINES FOR SEP SOIL-STRUCTURE INTERACTION REVIEW
When a structure is founded within or on a base of soil, it interacts
with its foundation. The forces and displacements transmitted to the
structure and the feedback to the foundation regions are complex in nature;
the interactions that take place modify the free-field motions. Many methods
for dealing with soil-structure interaction have been proposed by a number
of writers. These methods can be classified in various ways and involve
generally: (1) procedures similar to those applicable to a rigid block on an
elastic half-space; (2) finite element or finite difference procedures
corresponding to various forcing functions acting on the combined structure
soil complex; and (3) substructure modeling techniques that may or may not
include use of the direct finite element method. Another, and perhaps more
convenient, classification of soil-structure interaction analysis procedures
is that of (a) direct solution techniques and (b) substructure solution
techniques as described in the report entitled "Recommended Revisions to
Nuclear Regulatory Commission Seismic Design Criteria", Report
NUREG/CR-1161, May 1980.
The elastic half-space theory considers a foundation plate resting on
an elastic medium with harmonic oscillation applied to the plate; the few
test results available to date in general have been obtained for this type
of model in this excitation condition. This concept is the basis for the
first of the three procedures described above, although for seismic
excitation the problem is the inverse of the original problem formulation.
in that the excitation originates in the earth. The other two methods noted
also involve modeling of the structure-soil system; as such the system has
intrinsic properties reflecting the make-up of the modeled system, physical
properties, and especially the boundaries (for example, as they affect
motion input, and reflection).
These analysis methods represent major advances in computational
ability, but unfortunately all the techniques have limitations, and in many
cases are not well understood. At present their use involves a great deal
of interpretive judgment.
One principal difficulty with all of the techniques is associated with
the handling of the ground input. Except for special long period waves, in
most cases the ground motion is noncoherent and nonuniform. Thus far it
appears that the analysis models may not be able to handle a broad spectrum
of complex wave motions. None of the techniques adequately handle nonlinear
effects, which are known to be of importance. As yet no good confirmatory
comparison basis exists between field observations and computations made
prior to an earthquake.
This entire topic is one that requires the most careful consideration.
Exercise of judgment as to the meaning of the results, in the light of the
comments given above, is required. Reliance on any sole approach is to be
SEP Review Guideline Recommendations
In keeping with the SEP approach to review existing facilities, and as
reflected in the philosophy and criteria developed to date, it appears
desirable to outline briefly one technical procedure for estimating
soil-structure interaction effects. As a result of extensive discussions
between members of the SSRT and the NRC/LLL staff, and with recognition of
the many uncertainties and complexities of the topic under consideration,
the general approach presented below is recommended at this time as a
guideline. It will be appreciated that many decisions will have to be made
as a part of the calculational procedures described below and the exercise
of judgment obviously will be required. Justification and documentation are
necessary parts of the final analysis product.
At the outset it should be noted that the simplified approach described
below is not intended to preclude the use of any other procedures. The
structural input motions (at the foundation level), however developed and
justified, under no conditions shall correspond to less than 75 percent of
the defined control motions (normally taken as the free-field surface
motions); if a reduction in translational input motion is employed, then the
rotational components of motion also should be included. If other procedures
are employed they should be reviewed on a case-by-case basis.
For purposes of SEP review, one simplified approach for evaluating the
effects of soil-structure interaction, involving a lumped parameter model,
is deemed to be acceptable when employed under the following conditions.
1. The control motions are defined as the free-field surface motions
and are input at the structure foundation level.
2. The soil stiffness, as represented by springs anchored at the
foundation level, shall be modeled as follows.
i) To account for uncertainty in soil properties, the soil properties,
the soil stiffnesses (horizontal, vertical, rocking and torsional) employed
in analysis shall include a range of soil shear moduli bounded by (a) 50
percent of the modulus corresponding to the best estimate of the large
strain condition and (b) 90 percent of the modulus corresponding to the best
estimate of the low strain condition. For purposes of structural analysis
three soil modulus conditions generally will suffice corresponding to (a)
and (b) above, and (c), a best estimated shear modulus.
For structural capacity review the analyst generally should employ the
worst case condition. For equipment review the in-structure response spectra
shall be taken as a smoothed envelope of the resulting spectra from these
three analyses.
ii) When embedment is to be considered it is recommended that the soil
resistances (stiffnesses as noted above) shall correspond to 50 percent of
the theoretical embedment effects. This reduction is intended to account for
changes in soil properties arising from backfilling, and any gap effects.
iii) Where it is judged necessary to model the supporting soil media
as layered media, the stiffnesses are to be estimated through use of
acceptable procedures.
3. The radiation and material energy dissipation (i.e., the damping
values) are considered to be additive for computation convenience. Normally
the material damping can be expected to be about 5 to 8 percent.
The geometric damping (radiation energy dissipation) is recognized to
be frequency-dependent. However, in order to reduce the calculational
effort (at least initially), and to be sure that excessive damping is not
employed, it is recommended that values of damping be estimated
theoretically (on a frequency-independent basis) as follows.
i) Horizontal to be taken as 75 percent of the theoretical value.*
ii) Vertical to be taken as 75 percent of the theoretical value.*
iii) Rotation (rocking and torsional) to be taken at 100 percent of
the theoretical value.*
In the case of layered systems the approach employed in establishing
these values needs to be justified.
4. The following analysis approaches are considered to be acceptable.
i) When all composite modal damping ratios** are less than 20
percent, modal superposition approaches can be used without any validation
ii) If in investigating the use of modal superposition approaches
it is ascertained that a composite modal damping ratio** exceeds 20 percent,
one must perform a validation analysis. To perform this validation, it is
generally acceptable to use a time-history analysis in which the energy
dissipation associated with the structure is included with the structural
elements, and that associated with the soil is included with the soil
*As calculated by generally accepted methods, as for example given in the
book Vibrations of Soils and Foundations, by F. E. Richart, Jr., J. R. Hall,
Jr., and R. D. Woods, Prentice-Hall Inc., 1970.
**As defined by generally accepted methods.
in-structure response spectra obtained from a superposition analysis
employing composite modal damping throughout the frequency range of interest
must be similar to or more conservative than those obtained from the
validation analyses.
It is emphasized that the aforementioned procedures are intended to be
guidelines and may be subject to revision as experience is gained under the
SEP Program in attempting to arrive at relatively economical and simplified
techniques for estimating the possible effects of soil-structure
Respectfully submitted by the Senior Seismic Review Team:
N. M. Newmark, Chairman
W. J. Hall
R. P. Kennedy
R. C. Murray
J. D. Stevenson
Page Last Reviewed/Updated Monday, June 17, 2013 | {"url":"http://www.nrc.gov/reading-rm/doc-collections/gen-comm/gen-letters/1980/gl80109.html","timestamp":"2014-04-17T00:53:14Z","content_type":null,"content_length":"37687","record_id":"<urn:uuid:3cc0c407-4db8-47dd-aa06-1fac1b5c69c4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 15
wouldn't "the sum of twenty-seven and a number" be equivalent to 27 + y
Please help with the translation of this algebraic expression: Twenty-seven more than a number and the sum of twenty-seven and a number, what is the difference between the two? Thank you!
how would i put that into the calculator exactly?
What is the value of K for this aqueous reaction at 298 K? A+B<==> C+D delta G= 23.41 kJ?
A critical reaction in the production of energy to do work or drive chemical reactions in biological systems is the hydrolysis of adenosine triphosphate, ATP, to adenosine diphosphate, ADP, as
described by ATP(aq) +H2O (l) --->ADP(aq) +HPO4^2- for which ΔG°rxn = ...
English 3
Subordinate clause
thank you very much guys..i really appreciate it.
what is the electric field at the position 20 cm from Q2 and 60 cm from Q1? Q2=40 micro coulombs Q1=20 micro coulombs Q2 and Q1 are separated by 80 cm
Water boils at sea level at 100 Degrees Celsius. The boiling point of water decreases 5 degrees Celsius for every mile above sea level. Santa Fe, New Mexico is at 7200 feet above sea level. At about
what point does water boil in Santa Fe,New Mexico?
Please help! In triangle RST,segment XY parallels segment RS. If TX=3, XR=TY and YS=6 find XR.
Please help me unscramble this word seetar
5th grade
a friend and I entered an art contest. a blue ribbon is awarded for 1/3 of the pieces you entered in the contest.I won 2 blue ribbons and my friend won 3 ribbons. Explain how this could be.
Can someone please assist me. Thanks One side of a rectangular stage is 4 ft longer than the other and the diagonal is 20ft. Use the Pythagorean Theorem to determine the lengths of the sides.
what are three external users of accounting | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=quinton","timestamp":"2014-04-20T11:29:43Z","content_type":null,"content_length":"8491","record_id":"<urn:uuid:c2a7d8ed-4b68-4bb5-8968-c71ee17f3255>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
Class Schedule
Our family has an exciting new home. Please visit us at:
Visit www.OhanaGym.com for our new and exciting home and schedule of classes
Gymnastics Classes offered:
□ MiniGym– 45 Minutes, Parent assisted – Ages walking to 36 months
□ KinderGym– 45 Minutes, Beginning Level Ages 3-5
□ JuniorGym – 45 Minutes, Beginning Level Ages 5-6
□ Level 1 Beginning Gymnastics or Trampoline – 1 Hour, Ages 7-8
□ Level 2 Beginning Gymnastics or Trampoline – 1 Hour, Ages 9-12
□ Level 3 Intermediate Gymnastics or Trampoline – 1.5 Hours, Ages 7-12
33 Responses to Class Schedule
1. Can you give some info on open gyms? What are the ages, costs, etc.? Do my kids have to take classes regularly to participate or can anyone drop in anytime?
□ Open Gyms are for anyone ages 3-12. The cost is $10/child per hour. No you do not have to take classes to attend. Open Gym is EVERY Friday from 4-6p and Saturdays from 9-10a. Don’t forget
Date Nite- Every 3rd Saturday of the month from 6p-10p. The cost is $40/child and $10/sibling. There are discounts to pre-register and for children enrolled in classes.
2. MiniGym classes are Monday, Wednesday, and Fridays from 9:00-9:45 and Monday – Friday 10am-10:45am
3. I am wondering if you offer an adult class? And if so when it is. Thanks!
□ Adult Classes are Mon-Thur 6p-7:30p. Cost is $75 per month and you may attend as many classes a week as you can!
4. where do I find out what age groups the different classes cover
thank you
□ The class age group description is on the “recreational class” link….just scroll down.
5. Do you do any classes just for tumbling? I’m 15.
□ Hi Maddy-
The Flip-N-Fun class would be perfect for working on your tumbling skills. This class is for teens and adults of all levels and the instructors will help lead you towards your goals. The
class meets Monday – Thursday from 6p-7:30p. The cost is $12/drop in or $40 for 1 time per week or $75 for unlimited (2-5 times per week – includes the Parkour class Friday from 5:30p-7:00p)
Drop in to try a class!
6. What age are the Tumbling Tigers and what is it? I have a kindergartner!
□ Hi Heather-
Tumbling Tigers is a class designed for children on the Autism Spectrum. Our Junior Gym classes are for our “average developing” 5-6yr olds. Just contact the office and we can find the
perfect class for your Kindergartner. info@scsportscentral.com or 713-5954.
GLC at SCSC Staff
7. I have adaugther 16 months old , could you please give me more info about your classes for her?
□ Hi Claudia-
Our MiniGym Program with Teacher Rick would be perfect for your daughter. The MiniGym classes are offered Monday-Friday from 10-10:45am. Please stop in for a free trial class.
8. Do you have a complete list of class descriptions anywhere on your website?
□ Our general class description is listed under our Recreational Classes link. It lists the ages and levels of each class title. For a more in-depth description or if you have specific
questions please email the office directly at : info@scsportscentral.com
9. Can you please tell me where on the website I can locate information on what holidays SCSC is closed?
□ We are closed for Memorial Day (Sat-Monday); Labor Day (Monday); Halloween; Thanksgiving (Monday – Saturday) and 2 weeks for the Winter/Holiday Break. The complete breakdown and list of when
you get make-ups for which closures are posted in the gym by the front door.
10. Do the classes build on skills each week? We live in the Central Valley and would like to come on Saturdays, but are worried our 10 y/o will be left behind if we miss a class. Thanks!
11. Hi, I’m a Mom of a 3 year old little boy, and I bought one of your deals off of Groupon a while back. My Mommy friends, who also have 3 year olds all bought the same deal as well. We were hoping
to get our kids all into the same Kinder Gym, and was wondering if that was even possible? Is there a certain amount of time we should call in advance to make that happen? It would be 4-5 3 year
olds. Thank you!
□ Hi Kim-
I will email you directly with a list of available classes for you and your friends…
12. Do you have a developmental team for young girls? If so, for what ages and what is the practice schedule?
□ Our “Flares” train Tues and Thurs from 3:30-5:00. They range in age from 4-7. Just email or call the office for more details.
13. Hi, I noticed you have a tumbling class for teens/adults, but I have an 8 year old who is interested in improving her tumbling skills for dance. Do you offer just tumbling or acrobatics for kids?
□ Hi Kim! Yes we do. She can either do Nathan’s Trampoline and Tumbling classes on Tuesday at 4p or Rick’s Tumbling class Wednesdays at 4:30p. Just email the office and we can get you signed
14. Just wanted to know if I need to enroll to show up on Tuesday for the Mini Gym and Kinder gym. I purchased teh groupon 10 class package for my children and was not sure if a drop-in for the class
is OK.
15. I have two boys 10 and 8 who are interested in Parkour. I heard you might have a Parkour class at your gym. If so, can you let me know details. They are eager to sign up and get going. Thanks,
16. I am wondering if you have a Parkour class available. I have 12 and 10 year old boys.
17. Hello! I’d like to get my daughter in the Kinder Gym class. Are they drop-in classes? We’d only be able to come in on Saturday. Is Kinder Gym a set class schedule or do you just come in when you
can? My concern with drop-in classes is that they would fill up. Thanks!!
□ Drop-in is only if space permits. So registering for class is the best way to guarantee your spot…and the cheapest!
18. I have two kids under the age of three. If I signed them up for the kinder class is it ok if it’s just me in the class with them or does there need to be another adult. I am a stay at home mom so
I don’t have anyone else who could come.
□ Having an extra hand is always best but Teacher Rick can help and having both kids in class would be fine.
19. Hi, I just bought a groupon for 12 classes…I have a 5 year old & 2 1/2yr old – are there classes for both of them at same time?
□ Sorry, when the 2 1/2 year old is ready for KinderGym then YES! but for now they are in the MiniGym offered M-F at 10 and F at 9. and the 5 yr old has MANY options in our Junior Gym program,
just check our schedule. | {"url":"http://www.scsportscentral.com/?page_id=115","timestamp":"2014-04-20T08:40:18Z","content_type":null,"content_length":"55616","record_id":"<urn:uuid:65181e57-bb9b-44f9-85bb-f0e1580ed8b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Function sortrows - sorting strings, MATLAB in Statistics
function sortrows - sorting strings:
The function sortrows sorts each and every row as a block, or group, and it also will work on numbers. Here in this illustration the rows starting with 3 and 4 are placed first then, for the rows
starting with 5, the values in the second column (6 and7) determine the order.
>> mat = [5 7 2; 4 6 7; 3 4 1; 5 6 2]
mat =
>> sortrows(mat)
ans =
Posted Date: 10/22/2012 7:48:04 AM | Location : United States
Your posts are moderated
Use of function polyval: The better the curve fit, the more exact these interpolated and extrapolated values will be. By using the subplot function, we can loop to display the
Statistics There are numerous statistical analyses which can be executed on data sets. In MATLAB software, the statistical functions are in the data analysis help topic known a
Graphics objects: The objects involve graphics primitives like lines and text, and also the axes used to orient the objects. These objects are organized hierarchically, and the
Function polyval - interpolation: The function polyval can then be used to compute the polynomial at particular values. For illustration, we could compute at every value in th
Nested Structures: The nested structure is a structure in which at least one of the members is itself a structure. For illustration, a structure for the line segment may co
Rectangle - graphics objects: The other core graphics object is the rectangle that can have curvature added to it (!!). Merely calling the function rectangle without any argum
Vectorizing: In most of the cases in MATLAB, loops are not essential. As MATLAB is written specifically to work with the vectors and matrices, most operations can be completed
Determine sequence weights for the sequences ACTA, ACTT, CGTT, and AGAT in problem 1 by using Thompson, Higgins, and Gibson method a) compute pairwise distances between sequences
fscanf function - file function: The fscanf reads matrix variable mat columnwise from the file specified by fid. The 'format' involves conversion characters much similar to th
Example of Median For the vector [1 4 5 9 12 33], the median is the average of the 5 & 9 in the middle: >> median([1 4 5 9 12 33]) ans = 7 | {"url":"http://www.expertsmind.com/questions/function-sortrows-sorting-strings-30120070.aspx","timestamp":"2014-04-19T09:31:20Z","content_type":null,"content_length":"29964","record_id":"<urn:uuid:e2993884-31f9-44b0-b464-29d02ce870f9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
solving square root equations: square root of x-1+3=x? help please
Best Response
You've already chosen the best response.
Are you sure this is the equation? It's not possible for a number plus two to equal itself.
Best Response
You've already chosen the best response.
\[\sqrt{X-1 +3=x}\]
Best Response
You've already chosen the best response.
It's like that but with the 3 and the x out the square root
Best Response
You've already chosen the best response.
In that case, first you want to isolate the square root, so subtract 3 from both sides\[\sqrt{x-1}+3-3=x-3\] leaving you with \[\sqrt{x-1}=x-3\] next, square both sides to get rid of the square
root sign (remember, a square root is the same thing as taking something to the one-half power. Refer to your exponent rules if you're still confused).\[\sqrt{x-1}^{2}=(x-3)^{2}\] now you have \
[x-1=(x-3)^{2}\] expand the right hand side of the equation using the foil method, and you get \[x-1=x ^{2}-6x+9\] subtract x-1 from both sides, leaving the left side zero. Now you have \[0=x ^
{2}-7x+10\] Use the formula \[x=(-b+or -\sqrt{b ^{2}-4ac})\div2a\] to get your answer.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f1b750ae4b04992dd23211b","timestamp":"2014-04-20T03:41:19Z","content_type":null,"content_length":"35375","record_id":"<urn:uuid:995b66ae-2526-4fa7-9148-be96fc60f897>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
This paper illustrates the use of the nonparametric Wald-Wolfowitz test to detect stationarity and ergodicity in agent-based models. A nonparametric test is needed due to the practical
impossibility to understand how the random component influences the emergent properties of the model in many agent-based models. Nonparametric tests on real data often lack power and this problem
is addressed by applying the Wald-Wolfowitz test to the simulated data. The performance of the tests is evaluated using Monte Carlo simulations of a stochastic process with known properties. It
is shown that with appropriate settings the tests can detect non-stationarity and non-ergodicity. Knowing whether a model is ergodic and stationary is essential in order to understand its
behavior and the real system it is intended to represent; quantitative analysis of the artificial data helps to acquire such knowledge.
Statistical Test, Stationarity, Ergodicity, Agent-Based, Simulations
The aim of this paper is to present a set of nonparametric tools to perform a quantitative analysis of the emergent properties of an agent-based model, in particular to assess stationarity and
ergodicity. Modeling a system by using an agent-based model implies assuming that no explicit mathematical form can explain the behavior of the system. The impossibility of having an analytical
form of the data generator process and the difficulty in understanding how the random component influences the process, require—besides the traditional parametric tools—nonparametric statistics
and tests. The choice of a nonparametric test instead of the traditional parametric tests derives from the particular problem faced. As stated by Siegel (1957, p. 14): "By a criterion of
generality, the nonparametric tests are preferable to the parametric. By the single criterion of power, however, the parametric tests are superior, precisely for the strength of their
assumptions". Any conclusion derived from using parametric tests is valid only if the underlying assumptions are valid. Since the objective is to analyze the behavior of the model, the test can
be performed on the artificial data. By construction the stochastic behavior of the model is unknown, while the power problem of nonparametric tests can easily be solved by increasing the number
of observations; in that way nonparametric tests are perfectly suited for testing agent-based models. By considering the agent-based model as a good approximation of the real data generator
process, the tests on artificial data can also be used to make inferences on the real data generator process.
Knowledge of the basic properties of the artificial time series is essential to reach a correct interpretation of the information which can be extracted from the model. To acquire such knowledge
it is important to perform statistical testing of the properties of the artificial data (Leombruni and Richiardi 2005; Richiardi et al. 2006). Supposing that an agent-based model has a
statistical equilibrium, defined as a state where some relevant statistics of the system are stationary (Richiardi et al. 2006) the stationarity test can help in detecting it. The ergodicity test
helps in understanding whether the statistical equilibrium is unique (conditional to the parameters of the model) regardless of the initial conditions. Whether the aim is to compare the moments
of the agent-based model with different parameter settings with observed data or to estimate the structural parameters, it is necessary to know if the model produces stationary and ergodic
series. If the artificial data are stationary and ergodic, a different emphasis can be given to the theoretical and empirical results of the model and artificial and real data can be compared to
estimate the structural parameters (Gilli and Winker 2003; Grazzini 2011a). One aim of the paper is to underline the importance of statistical tests for stationarity and ergodicity as
complementary tools in addition to sensitivity analysis and other quantitative studies over the behavior of the agent-based model to have a deeper understanding of the model itself. Another aim
of the paper is to describe nonparametric tests for stationarity and ergodicity easy to use and to apply to agent-based models. The stationarity test is described in section 2 and the ergodicity
test is described in section 3. Performance of the tests has been evaluated using the Monte Carlo method^[1] and applied to a simple agent-based model in section 4.
Stationarity is the property of a process necessary to estimate consistently the moments using the observations of the time series. The properties of a strictly stationary data generator process
are constant in time. This implies that each observation can be considered as an extraction from the same probability distribution and that each observation carries information about the constant
properties of the observed realization of the data generator process^[2]. In agent-based models the stationarity test is important to know whether the model reaches a statistical equilibrium
state. Given a stochastic process {X[t]} t=1,2,…., {X[t]} is strictly stationary if X[t] has the same distribution for every t, and the joint distribution of (X[t],X[t1],X[t2],…,X[tn]) depends
only on t1-t,t2-t,… and not on t. A less demanding definition of stationarity is the covariance stationarity. A stochastic process is covariance stationary if E(X[t])=μ is independent of t, and
if the covariance Cov(X[t],X[t-j]) depends only on j. An example of a strictly stationary process is the white noise, with x[t]=u[t] where u[t] is i.i.d. Examples of non-stationary series are the
returns in a stock market, where there is clustered volatility (the variance changes during the series) and GDP time series exhibiting time trends like most aggregate time series (Hayashi 2000).
Stationarity is a very important property of time series since it has both theoretical and practical implications. For this reason there is a wide variety of tests to verify the stationarity of
time series (see Phillips and Xiao 1998 for a survey). Dickey-Fuller tests (Dickey and Fuller 1979; 1981) are unit root tests (the null-hypothesis is nonstationarity) based on an autoregressive
process of known order with independent and identical distributed (iid) errors, u[t]. The test needs quite strong hypotheses: the knowledge of the model producing the time series and the
assumptions on the error term. An important extension of the test was proposed by Said and Dickey (1984) where it is shown that the original Dickey-Fuller procedure is valid also for a more
general ARIMA(p,1, q) in which p and q are unknown. The extension allows using the test in the presence of a serial correlated error term. Adding lags to the autocorrelation allows the
elimination of the effect of serial correlation on the test statistics (Phillips and Xiao 1998). An alternative procedure was proposed by Phillips (1987) using a semi-parametric test (Phillips
and Xiao 1998) allowing the error term to be weakly dependent and heterogeneously distributed (Phillips 1987; Phillips and Perron 1988). Another test proposed as complementary to the unit root
tests is the KPSS test (Kwiatkowski, Phillips, Schmidt and Shin 1991) in which the null-hypothesis instead of being nonstationarity is stationarity. Note that all the tests described above are
parametric in the sense that they need assumptions about the stochastic process generating the tested time series. In a framework where the complexity of the model is such that an analytical form
has been regarded as not able to represent the system, such assumptions over the data generator process may be too restrictive. Therefore, in addition to the parametric tests, it can be
interesting to perform a nonparametric test that does not need any assumption over the data generator process. As noted in the introduction, the problem with parametric tests is the power of the
test. Fewer assumptions imply the need for more information thus more observations. Given that the test will be made on the artificial data, the power of the test is not a problem since the
number of available observations can be increased at will with virtually no costs. Our interest is to understand how a given set of moments of the simulated time series behaves. If we want to
test the equilibrium properties of the model, if we want to compare observed and simulated moments or if we want to understand the effect of a given policy or change in the model, we are
interested in the stationarity and the ergodicity of a given set of moments. The test which is described in the next section will test whether a given moment is constant during the time series,
using as the only information the agent-based model. The nonparametric test used is an application of the Wald-Wolfowitz test (Wald and Wolfowitz 1940). As it will be shown the Wald-Wolfowitz
test is suited for the type of nonparametric test needed and is easy to implement on any statistical or numerical software since the asymptotic distribution of its test statistic is a Normal
Stationarity Test
The test which will be used to check stationarity is the Runs Test (or Wald-Wolfowitz test). The Runs Test was developed by Wald and Wolfowitz (1940) to test the hypothesis that two samples come
from the same population (see paragraph about ergodicity below). Particularly the extension that uses the Runs Test to test the fitness of a given function will be employed (Gibbons 1985). Given
a time series and a function that is meant to explain the time series, the observations should be randomly distributed above and below the function if the function fits the time series,
regardless of the distribution of errors. The Runs Test tests whether the null hypothesis of randomness can be rejected or not. Given the estimated function, a 1 is assigned to the observations
above the fitted line, and a 0 to the observations below the fitted line (where 1 and 0 are considered as symbols). Supposing that the unknown probability distribution is continuous, probability
will be 0 that a point lies exactly on the fitted line (if this occurs that point should be disregarded). The outcome of the described process is the sequence of ones and zeros that represents
the sequence of observations above and below the fitted line. The statistics used to test the null hypothesis is the number of runs, where a run is defined as "a succession of one or more
identical symbols which are followed and preceded by a different symbol or no symbol at all" (Gibbons 1985). For example in the sequence 1,0,0,1,1,1,0 there are 4 runs ({1},{0,0},{1,1,1} and
{0}). The number of runs, too many or too few runs, may reflect the existence of non-randomness in the sequence. The Runs Test can be used with either one or two sided alternatives^[3] (Gibbons
1985). In the latter case the alternative is simply non-randomness, while the former (with left tail alternative) is more appropriate in the presence of trend alternatives and situations of
clustered symbols, which are reflected by an unusually small number of runs. Following Wald and Wolfowitz's notation (1940), the U-statistic is defined as the number of runs, m as the number of
points above the fitted function and n as the points below the fitted function. The mean and variance of the U-statistic under the null-hypothesis are
The asymptotic null-distribution of U, as m and n tend to infinity (i.e. as the number of observations tends to infinity) is a normal distribution with an asymptotic mean and asymptotic variance.
In the implementation of the test, exact mean and variance (1) and (2) are used to achieve better results with few observations and equivalent results with many observations. The derivation of
the finite sample properties and of the asymptotic distribution of U is reported in the literature (Wald and Wolfowitz 1940; Gibbons 1985). To conclude, the Runs Test tests the null-hypothesis
that a given set of observations is randomly distributed around a given fitted function; it tests whether the fitted function gives a good explanation of the observations.
The idea is to use the test described above to check the stationarity of a time series. Defining the moment of order k as the non-centered moment of order k:
and supposing that we have an agent-based model, we want to test the stationarity of a given set of moments of the artificial time series. We may be interested in the behavior of the moments to
compare them to some real data, or we may use the moments to analyze the behavior of the model under different conditions. In any case it is necessary to know whether the estimation of the
artificial moment of order k is consistent (i.e. if it reflects the behavior of the model). In order to check whether a moment is stationary, we have to check whether the moment is constant in
time. The first step is to divide a time series produced with the model into w windows (sub-time series). Then the moment of order k for each window is computed. If the moment of order k is
constant, the "window moments" are well explained by the moment of the same order computed over the whole time series ("overall moment"). To test the hypothesis of stationarity the Runs Test is
used: if the sample moments are fitted by the "overall moment" (i.e. if the sample moments are randomly distributed around the overall moment), it is concluded that the hypothesis of stationarity
for the tested moment cannot be rejected. A strictly stationary process will have all stationary moments, while a stationary process of order k in this framework means that the first k
non-centered moments are constant.
To run the test, the length of the time series and the length of the windows must be decided. Under the null hypothesis, longer windows imply a better estimation of the subsample moments, but at
the same time they imply fewer windows (given the length of the time series) and a worse approximation of the distribution of runs toward the normal distribution. The trade off can be solved by
using long series and long windows. In the following, Monte Carlo experiments will be made to check the performance of the test; in particular a time series of 100000 observations will be used,
and the performance of the test on 100 processes will be checked. The following window lengths will be used^[4]: 1, 10, 50, 100, 500, 1000, 5000, 10000. By changing the length of the windows the
number of samples is changed (since the length of the time series is fixed). The experiments will check the stationarity of the moment of order 1 (mean) of an autoregressive function of the first
with θ=0 (strictly stationary), θ=0.99 (stationary), and θ=1(non-stationary), and ε[t] a random error with uniform distribution U(-1,1). The experiments have been carried out using the two-tail
test. In figure 1 is shown an example of a strictly stationary process (θ=0) with the overall mean and the window means (the window length is 10). In figure 2 an example of a nonstationary
process (θ=1) is shown together with its overall and window means. The different behavior of the first moment in the two different processes is clear. The test assigns a 0 to the window moments
below the overall moment and a 1 to the window moments above the overall moment. The different behavior of the two processes is detected by the test from the different number of Runs. In figure 1
the overall mean is a good estimator of the window means (and the null-hypothesis cannot be rejected), in figure 2 the overall mean is not a good estimator of the window means (the
null-hypothesis is rejected).
Figure 1. An example of the process (4) with θ=0 and y[0]=0. The black line is the time series, the red line is the overall mean, the blue dots are the window means (shown in the middle of the
windows, the window length is 10). In a stationary series the window moments are randomly distributed around the overall moment.
Figure 2. An example of the process (4) with θ =1 and y[0]=0. The black line is the time series, the red line is the overall mean, the blue dots are the window means (shown in the middle of the
windows, the window length is 10). The overall mean is not a good estimator of the window means, the test detects non-stationarity due to the small number of runs (the number of runs defined on
the window moments is only 2 in this example).
Figure 3. Stationarity null-hypothesis rejected (%) with different window lengths. The process is stationary.
Figure 4. Stationarity null-hypothesis rejected (%) with different window lengths. The process is stationary.
Figure 5. Stationarity null-hypothesis rejected (%) with different window lengths. The process is non stationary.
Figures 3, 4 and 5 show the result of Monte Carlo simulations of the test on the first moment using the process defined in (4) with different values of θ. The null hypothesis is that the first
moment is constant, and in turn that the sub-time series of moments are fitted by the overall first moment. Since a type-I error^[5] equal to 0.05 is being used, we will find that the null
hypothesis is rejected when the null is true in 5% of the cases; this occurs with both θ= 0 and θ = 0.99. It is interesting to note that the length of the windows has no influence when the
process is strictly stationary. Particularly, if every observation has the same distribution, the stationarity can be detected even when the window length is equal to 1. However, if θ=0.99,
longer windows are needed to detect the stationarity property in order to allow the sub-time series to converge toward the overall mean; in other words more observations are needed to obtain a
good estimation of the subsample moments. Non-stationarity is also simple to detect; the test has full power (it can always reject the null when the null is false) for all the window lengths
except the ones that reduce the number of windows under the threshold of good approximation of the normal distribution (the test has power 1 as long as the number of samples is more than 50).
According to the experiments, the best option seems to be a window of length 1000 (given the length of the whole time series) that permits both the estimation of the subsample moments and at the
same time the convergence of the distribution of the runs toward the normal distribution. The test has to be repeated for every needed moment. Figure 6 shows how the first and second window
moments (the dots) behave in a time series produced by a process as described in (4) with θ=0 and with an error term that has a distribution of U(-1,1) in the first part of the time series and a
distribution of U(-8,8) in the second part. The test outcome is (correctly) stationarity of the first moment and non-stationarity of the second moment. The "overall second moment" does in fact
not fit the subsample second moments, and the test can detect this lack of fit caused by the limited number of runs (only two in this case).
Figure 6. The dots are the window moments, the line is the overall moment. The first moments are randomly distributed around the overall mean (above). The second moments are not randomly
distributed around the overall moments (below).
The experiment shows the flexibility and the limits of the test: we can—and must—test the moments we need. If the length of the time series and the number of windows are properly set, the result
stating stationarity for the tested moment is reliable, i.e. the power of the test approaches 1, while the actual type-I error is around 5%. If non-stationarity is found, the traditional methods
may be used to transform the series into stationary (for example detrending or differentiating the series) and the nonparametric test can then be used on the transformed series.
Ergodicity, together with stationarity, is a fundamental property of a data generator process. Hayashi (2000) gives the following formal definition of ergodicity: a stationary process {y[t]} is
said to be ergodic if for any two bounded functions f:R^k → R and g:R^l → R,
Heuristically, this means that a process is ergodic if it is asymptotically independent: two distant observations are almost independently distributed. If the process is stationary and ergodic
the observation of a single sufficiently long sample provides information that can be used to infer about the true data generator process and the sample moments converge almost surely to the
population moments as the number of observations tend to infinity (see the Ergodic Theorem in Hayashi 2000, p. 101). Ergodic processes with different initial conditions (and in agent-based
models, with different random seeds) will thus have asymptotically convergent properties, since the process will eventually "forget" the past.
Ergodicity is crucial to understanding the agent-based model which is being analyzed. If the stationarity test reveals the convergence of the model toward a statistical equilibrium state, the
ergodicity test can tell whether such equilibrium is unique. Supposing that we want to know the properties of a model with a given set of parameters, if the model is ergodic (and stationary) the
properties can be analyzed by using just one long time series. If the model is non-ergodic it is necessary to analyze the properties over a set of time series produced by the same model with the
same set of parameters but with different random seeds (that is with a different sequence of random numbers). It is even more important to take ergodicity into consideration if the model is
compared with real data. If the data generator process is non-ergodic, the moments computed over real data cannot be used as a consistent estimation of the real moments, simply because they are
just one realization of the data generator process, and the data generator process produces different results in different situations. A typical example of a stationary non-ergodic process is the
constant series. Supposing for example that a process consists of the drawing of a number y[1] from a given probability distribution, and that the time series is y[t ]= y[1] for every t. The
process is stationary and non-ergodic. Any observation of a given realization of the process provides information only on that particular process and not on the data generator process. Despite
the importance of the ergodicity hypothesis in the analysis of time series, the literature about ergodicity tests is scarce. Domowitz and El-Gamal (1993; 2001) describe a set of algorithms for
testing ergodicity of a Markovian process. The main intuition behind the test is the same as the one used in this paper: if a data generator process is ergodic it means that the properties of the
produced time series (as the number of observations goes to infinity) is invariant with respect to the initial conditions. The test described below is different from the one described by Domowitz
and El-Gamal (1993) since it involves different algorithms, uses a different nonparametric test and is intended to be used directly on any computational model (an example of application to a
simple agent-based model will be shown in section 4). The algorithm basically creates two samples representing the behavior of the agent-based model with different random seeds and compares the
two samples using the Wald-Wolfowitz test (Wald and Wolfowitz 1940). There are a number of alternative nonparametric tests that could have been used such as the Kolmogorov-Smirnov test (used by
Domowitz and El-Gamal 1993) and the Cramer-Von Mises test (for references about nonparametric tests see for example Darling 1957, Gibbons 1985 and Gibbons and Chakraborti 2003). The
Wald-Wolfowitz test was chosen due to its simplicity and to the fact that under the null-hypothesis the test statistic is asymptotically distributed as a Normal, which implies an easy
implementation with most statistical and numerical software and libraries (in this paper the Rpy library for Python was used).
The test described below is a test of ergodicity of the moment of order k; it tests the invariance of the moment of order k between different processes produced by the same data generator process
with different random seeds. The ergodicity test says if the first moments (for example) of a series can be used as the estimation of the true moment of the data generator process. It is
necessary to replicate the test for every moment required.
Ergodicity Test
To test the ergodic property of a process, the Run Test is used again, but this time in the original version presented by Wald and Wolfowitz (1940) to test whether two samples come from the same
population. Wald and Wolfowitz's notation is used supposing that there are two samples {x[t]} and {y[t]}, and supposing that they come from the continuous distribution f(x) and g(x). Z is the set
formed by the union of {x[t]} and {y[t]} and the Z set is arranged in ascending order of magnitude. Eventually, the V set is created, i.e. a sequence defined as follows: v[i]=0 if z[i ]∈{x[t]}
and v[i]=1 if z[i]∈{y[t]}. A run is defined as in the previous section and the number of runs in V, the U-statistic, is used to test our null hypothesis f(x)=g(x). In the event that null is true,
the distribution of U is independent of f(x) (and g(x)). A difference between g(x) and f(x) will tend to decrease U. If m is defined as the number of elements coming from the sample {x[t]}
(number of zeros in V) and n as the number of elements in Z coming from the sample {y[t]} (number of ones in V), m+n is by definition the total number of observations. The mean and the variance
of the U-statistics are (1) and (2). If m and n are large, the asymptotic distribution of U is a Normal distribution with the asymptotic mean and the asymptotic variance (as in the stationarity
test, the exact mean and variance to implement the test are used). Given the actual number of runs, U, the null hypothesis is rejected if U is too low (U is tested against its null distribution
with the left one-tailed test). In this case the aim is to use this test as an ergodicity test supposing that the stationarity test has not rejected the null hypothesis of stationarity. Under the
null hypothesis of ergodicity, the sample of moments computed over the sub-samples of one time series has to come from the same distribution of the moments computed on many processes of the same
length as the sub-samples. To test ergodicity of the first moment (or moments of higher order) one random long time series is created and divided into sub-samples. As shown in the previous
paragraph, in order to have a good estimate of the sample moments 100000 observations can be used and divided into 100 sub-samples of 1000 observations each. The first sample (e.g. {x[t]} ) is
formed by the moments of the first order of the 100 sub-samples. To create the second sample (e.g. {y[t]} ) 100 random time series are created (with different random seeds) of 1000 observations
each, and the moment of the first order of each process is computed. Given the two samples the Runs Test can be used as described above. Under the null hypothesis, sample {x[t]} and sample {y[t]}
have the same distribution. The moments of the two samples have to be computed over time series of the same length (in this case 1000). Under the null hypothesis, the variance of the moments
depends on the number of observations used to compute the moments. For example, the use of very long time series to build the second sample would produce a sample of moments with a lower
variance, and the Runs Test would consider the two samples as coming from different distributions.
As regards the implementation of the test, it is worth noticing the case in which the time series converges during the simulation toward a long run equilibrium. If the number of observations is
sufficiently long the stationarity test will correctly deem the process as stationary, whereas the ergodicity test will yield a non-ergodic outcome even if the process is ergodic. This result is
due to the way the samples are produced. In figure 7 it is possible to see what happens; it shows the long process, the short process and the set of moments computed from the 100 sub-samples of
the long time series (dots) and the moments computed over 100 different processes (squares). The need to keep the number of observations equal in the sub-samples and in the short processes
creates the "convergence problem". Since the small processes used to build the second sample of moments do not have the time to reach the long run mean, the ergodicity test will detect
non-ergodicity: it will find significant differences between the two samples.
Figure 7. The long process (above), a short process (middle) and the moments computed from the subsamples of the long process (points) and the moments computed from the short processes (squares).
The process used in figure 7 is y[t]=0.99y[t-1]+u[t] where u[t ]∼ N(1,1) and y[0]=0. The process is stationary and ergodic; it starts from zero and converges toward the asymptotic mean E(y[t])=
100. The ergodicity test will fail, since the two samples are clearly different (see bottom of figure 7). A strategy to solve this problem is to select a set of observations of the length of the
sub-samples in a region where the time series has already converged to the long run mean. For example, to build the second sample a set of time series with 2000 observations can be created and
the moments using the last 1000 observations can be computed.
To check the performances of the test a Monte Carlo simulation testing the ergodicity of the first moment is performed using the following process^[6]:
where u[t ]∼ N(l,1) and l is a random variable distributed as U(-5,5). At the beginning of the process a draw of l will determine the expected value of u[t ]for the whole process. If the process
(6) is stationary, i.e. |θ|< 1, the value of l determines the equilibrium state of the process. The stationarity test on the first moment would, in this case, detect the presence of an
equilibrium state, while the ergodicity test on the first moment will detect whether the equilibrium is unique (given θ). Figure 8 shows different examples of the process (6) with different
values of l, with y[0]=0 and θ = 0.9. It is clear that the process converges toward a different equilibrium state depending on the value of l.
Figure 8. Different initializations of the process (6). The value of l extracted at the beginning of the process determines the equilibrium state of the process. The process is stationary (has an
equilibrium state) and non-ergodic (given θ the process has multiple equilibrium depending on the value of l).
The process is ergodic if l is always the same for any initialization of the process. The aim of the process defined in (6) is to replicate a situation in which the starting condition has an
everlasting effect on the process. By isolating the convergence problem using the method described above^[7], the test performs exactly in the same way whether an ergodic process has a
convergence phase or not. These performances are shown in the first part of figure 9 where the process defined in (6) was tested with initial condition y[0]=0 and with different values of θ .
When θ = 0 and θ= 0.99 and the processes are ergodic the test on the first moment gives non-ergodicity results in about 5% of cases (due to the type-I error) but if the process is
non-stationary,θ=1, the test gives 100% non-ergodicity results (as stated before, the ergodicity test needs a stationary process to work).
Figure 9. Ergodicity test performances (% of rejection of the ergodicity null hypothesis under different conditions). The test on the first moment for an ergodic process (above) and the test on
the first moment for a non-ergodic process (below). One experiment is made by testing the same process100 times using different random seeds. The experiment was done 5 times for each setting.
The second part of figure 9 shows the test on the first moment made on a non-ergodic process (note that the process defined in (6) is non-ergodic in the first moment), where an initial random
draw determines the asymptotic mean of the process. Since y[0] is fixed, the technique described above has been used to solve the convergence problem. In this way it is certain that there is no
bias in the test (if the convergence problem had not been overcome, the result would have been non-ergodicity also in case of ergodicity and it would not have been possible to exclude the
possibility that the high power of the test was due to the convergence problem). The test can detect non-ergodic processes with power 1. In order to clarify how the test works, the two samples
built to make the test for an ergodic (above) and a non-ergodic process (below) are shown in figure 10. From a simple graphic analysis it is possible to see intuitively how the two samples come
from the same distribution in case of an ergodic process; while there is a difference in case of non-ergodicity (the dots are the first sample, while the squares are the second sample).
Figure 10. The test checks whether there is a significant difference between the two samples of moments. The two samples in the case of ergodicity (above) and the two samples in the case of
non-ergodicity (below). The dots are the moments coming from the first sample (built with the moments of the subsamples of the long series) and the squares are the moments coming from the second
sample (built with the moments of the short processes). The process is as in (6) with θ= 0.
The test can detect whether there is a significant difference between the two sets of moments. This is a test of invariance of the mean between different processes; if the test is passed the mean
of an observed series can be used as a consistent estimator of the true mean. The process may be ergodic in the first moment but non-ergodic in the second moment. To analyze the performance of
the test in case of a non-ergodic second moment the same setting as before is used (6) with u[t]∼[ ]N(0,l). If the process is non-ergodic, the variance of the error changes in different
processes, l∼ U(1,5). In this case it is necessary to test the ergodicity of both the first and the second moment (as defined above). In order to test the second moment, the first sample is built
using the second moment of the 100 subsamples and the second sample using the second moment of the 100 processes. The test is exactly as above but comparing second moments. Given a model that
produces the data following the process (6) with the new error term, and provided that the process is stationary and non-ergodic, the outcome of the test of ergodicity on the first moment yields
between 20% and 30% of non-ergodicity results. This is because the different variance of the error implies a different variance in the first moments, so despite the fact that the different
processes have the same mean, the test detects that "something is wrong". Given such a result it is necessary to test for the ergodicity of the second moment. The outcome of the test (on the
second moment) on a (second moment) non-ergodic process yields 100% of non-ergodic results (the power is 1). The result of the test on an ergodic stationary process gives about 5% of
non-ergodicity results, which is the chosen type-I error.
Figure 11. The two samples of second order non-centered moments in the case of ergodicity (above) and non-ergodicity (below).
In figure 11 the (second) moments in the first and the second samples for an ergodic process (with error term u[t ]∼ N(0,1)) and the (second) moments in the first and in the second sample for a
non-ergodic process (with error term u[t ]∼ N(0,l) and l ∼ U(1,5), l is fixed at the beginning of the process as it depends on the "initial conditions"), in both figures the process is as in (6)
with θ=0.
In this section a partially modified version of the computational stock market proposed by Gode and Sunder (1993) will be presented and analyzed^[8]. The aim is to use the tests on an actual
(even if very simple) agent-based model. Gode and Sunder (1993) propose a computational stock market model where the agents have very limited cognitive abilities. The objective is to compare an
experimental stock market with human subjects to a computational stock market with two different types of agents. The intention is to reproduce the result of the experiment in a completely
controlled computer environment. The problem of experimental economics is that it is possible to control the environment, but it is not possible to control the effect of the behavior of the
agents. Using agent-based modeling, the behavior of the agents is completely formalized as an algorithm and it is possible to check the behavior of the system with different types of agents.
Following the well-known experiment described in Smith (1962), Gode and Sunder (1993) make an experiment in which they randomly divide the subjects in buyers and sellers, provide each trader with
a private value (known only to the trader) for the fictitious commodity traded in the stock market. The subjects were free to submit orders at any time, conditioned only by their assigned role
and by their private value. The interesting results described by Smith (1962) and replicated in Gode and Sunder (1993) is the fast convergence of the transaction price toward the theoretical
equilibrium, defined by the demand and supply schedules induced by the experimenter. The market is highly efficient, despite the low number of traders and the private nature of the evaluation of
the fictitious commodity. Gode and Sunder (1993) built a computational market organized as a continuous double auction. The behavior of the market with human subjects is compared to the
computational market with Zero-Intelligence (ZI) traders and with Zero-Intelligence Constrained (ZI-C) traders. The ZI traders made orders only conditional to their role choosing a price randomly
between 1 and 200. The ZI-C traders made orders conditional both to their role and to their private value. The chosen price would be a random value between the private cost and 200 for the
sellers and between 1 and the private value for the buyer. Gode and Sunder (1993) interpreted the constraint as imposed by the market mechanism: "Market rules impose a budget constraint on the
participant by requiring traders to settle their account" (Gode and Sunder 1993, p. 122). The astonishing result is that the efficiency of the market increases dramatically with the imposition of
constraints on the traders' behavior. Gode and Sunder (1993) conclude that what really matters in the observed convergence of the price toward the theoretical equilibrium in the experiment are
the market rules, while the rationality and profit-seeking behavior of human traders can account only for a small fraction of the efficiency of the market (this conclusion has been criticized,
see in particular Gjerstad and Shachat 2007 and Cliff and Bruten 1997). The model presented below is very similar to the Gode and Sunder (1993) model^[9]. There are 22 artificial traders divided
in two groups: sellers and buyers. Each seller is given a number representing the cost and each buyer is given a number representing the value of the traded asset. Values and costs are private
information. The trading is divided into periods, in each period each trader is free to issue orders. When a deal is made the involved traders drop out from the market for that period. At the end
of each period the order book is cleared. This division of time allows the definition of a demand schedule and a supply schedule for each time period and therefore the definition of the
theoretical equilibrium of the market (see Smith 1962). In figure 12 the demand and supply schedules, the market price over 10 periods with ZI agents and the market price over 10 periods with
ZI-C traders are shown. The volatility of the price is heavily reduced and the market price fluctuates around the theoretical equilibrium even if the agents behave (almost) randomly.
Figure 12. From left to right: the per period demand and supply schedules defined by the private values and costs given to the traders; the transaction price in a market with Zero-Intelligence
traders; the transaction price in a market with Zero-Intelligence Constrained traders. The vertical lines in the central and right figures refer to the periods.
The aim is to test whether the model with ZI-C traders is stationary and ergodic. The method that will be used is the one described in previous sections and the test will be made on the first
moment. The Monte Carlo experiments performed in section 2 on a process with known properties showed that to have a test with full power in the stationarity test it is necessary to have a minimum
number of windows (at least 50, to have a good approximation of the distribution under the null-hypothesis). On the other hand, the length of the windows is important to detect stationarity when
the process is weakly stationary. The stationarity test is performed over time series with 2500 observations using windows with a length of 1, 10, 30 and 50. The results are in figure 13. The
rejection rate for 100 simulations for each of the window lengths is coherent with the chosen type-I error (5%), it is therefore not possible to reject the null hypothesis of stationarity. The
test is not rejecting stationarity even with windows of length 1, a result which allows inferring that the transaction price is strictly stationary^[10].
Figure 13. The % of rejections of stationarity null-hypothesis with different window lengths made on the stock market model.
The ergodicity test is carried out as described in section 3. A long time series is created and divided into windows, and the first moment of each window is computed. Given the stationarity test
we know that the expected value of each window is the same, and therefore we know that the sample moment computed over each window comes from the same distribution, with a mean value equal to the
expected value and variance depending on the variance of the process and on the length of the windows. The aim of the ergodicity test is to test whether the model shows the same behavior when the
initial conditions are changed. The ergodicity test will therefore compare the set of moments computed over the windows of one long time series (sample 1) with a set of moments computed over a
set of different runs of the same model with different random seeds (sample 2). Under the null-hypothesis of ergodicity sample 2 comes from the same distribution as sample 1, the test is made
using the Wald Wolfowitz test. The ergodicity test is performed using one long time series of 1000 observations divided into 100 windows with length 10 to form sample 1. Sample 2 is created by
running the model 100 times with different random seeds with a length of 10 observations. The Wald-Wolfowitz test compares the two samples and cannot reject the null hypothesis, i.e. the model is
ergodic. The 100 tests present a rejection rate of 5%, a result that is coherent with the chosen type-I error (5%). The results of the tests are that the model is stationary and ergodic.
It is also possible to modify the model in order to make it non-ergodic^[11]. From the analysis of the behavior of the model we know that the mean transaction price—the equilibrium since it is
stationary—depends on the private values of the agents. In the ergodic version of the model the agents arrive on the market with an evaluation of the traded commodity. A non-ergodic version of
the model can be built by supposing that the traders arrive on the market with a confused evaluation of the commodity and that they wait in order to observe the behavior of the other traders
before they decide how to trade. In behavioral terms it can be explained as an anchoring behavior (Tversky and Kahneman 1974), i.e. the traders look for information in the environment (for
example observing the actions of the other traders) before they decide how to behave. If the anchoring happens only once at the beginning of the trading day the market is stationary (there is an
equilibrium) but non-ergodic since the particular value of the equilibrium depends on the anchoring value (that will be simply represented as a random draw at the beginning of the market day). To
give a quantitative evaluation of the statistical properties of the model the ergodicity test is used. Suppose that the first trader is a buyer and that she arrives on the market and draws her
private value v ∼U(200,300). The next buyer arrives on the market and chooses a value equal to that of the buyer who arrived before her minus 10, in this way the demand function is created. To
keep the market simple the demand and supply schedules are symmetric, therefore the costs of the sellers are given according to the values of the buyers (figure 12 shows an example of symmetric
market). The market is non-ergodic since its equilibrium depends on the initial condition and in particular on the value of the first buyer that arrives to the market. The maximum value of the
equilibrium price is 250 and the minimum value is 150. The equilibrium price is distributed as a U(150,250) due to the distribution of the random variable that determines the first value. The
average equilibrium is thus 200 and the variance of the equilibrium is 833.3.
The test was performed with the same parameters of the previous ergodicity test (the length of the long time series is 1000 and the length of the windows and of the short time series is 10) and
the result is the rejection of the ergodicity null-hypothesis. In 100 tests the null is rejected 100% of the times (full power). Figure 14 shows two markets with different initial conditions.
Figure 14. Two realizations of the non-ergodic stock market. On the right the realized demand and supply schedule, on the left the first 100 transactions are shown.
The main drawback of agent-based models is that it is not possible to prove formally the properties of the model. Even in a model as simple as the described artificial stock market, the
equilibrium properties cannot be formally proven. The aim of the tests is to reduce the problem using statistics over the artificial data. The test allows a quantitative and statistical
assessment of the properties of the model, and thereby of the economic properties of the model. Knowing that the model has (or does not have) a unique stable equilibrium is essential both for
understanding the behavior of the model and for understanding the system under analysis. The stationarity test is thus useful to confirm statistically that the model has an equilibrium state and
the ergodicity test allows to know whether the equilibrium is unique regardless of the initial condition. Given the properties of the model, supposing that it has been built to represent a real
system, it is possible to use the results of the tests performed on artificial data to infer about the real data. In the present case the model is far too simple to represent any real stock
market, but if it had been a real stock market we would have discovered that the transaction price is stationary and ergodic. While the stationarity can be tested also on the real data, the
ergodicity test can be performed only on the artificial data. If the model is well-specified, a sufficient number of observations from the real system are available and if the model (i.e. the
real data) is stationary and ergodic, then it is possible to consistently estimate/calibrate the model (see Grazzini 2011a). Given the result of the test carried out on the model it is possible
to perform the sensitivity analysis with a better knowledge of its basic properties.
This paper has presented an algorithm using the Wald-Wolfowitz test to detect stationarity and ergodicity in agent-based models. The tests were evaluated using the Monte Carlo method. Knowing
whether a model is ergodic, and when its output becomes stationary is essential in order to understand its behavior. The stationarity test can help to detect the point where a system reaches its
statistical equilibrium state by using not only a visual and qualitative inspection, but also a quantitative tool. The ergodicity test provides a rigorous base to the analysis of the behavior of
the model (see Grazzini 2011b). Any comparison between different settings of the model, or between different policies, should be based on the knowledge about the ergodicity property. If a model
is not ergodic, it makes no sense to perform a sensitivity analysis over the parameter space with a given random seed. The tests are crucial also in view of a comparison between the properties of
the model and the properties of the real system. It is always possible to use real data to "calibrate" an agent-based model, but if the system is non-ergodic, the parameters of the model cannot
be properly estimated (Grazzini 2011a).
In view of this, a nonparametric test is required due to the practical impossibility to understand how the random component influences the emergent properties of the model in many agent-based
models. While nonparametric tests on real data often lack power, this problem disappears when the tests are applied to simulated data to investigate the properties of a theoretical model: by
increasing the number of observations their power can be increased at will.
I would like to thank three anonymous referees, as well as Matteo Richiardi and Pietro Terna, for their stimulating comments on earlier versions of the paper.
^1 The Python random library has been used for the generation of the pseudo random numbers. The algorithm is the Mersenne Twister which produces 53-bit precision floats and has a period of 2^
^2 If the model is stationary and ergodic (see section 3) then the observed sample carries information about the true data generator process (which is always the same for any initial conditions).
^3 If the null-hypothesis is true the test statistic has a known distribution. The null-hypothesis is rejected if the actual test statistic (computed over the observations) is too far away from
the null-hypothesis, where "too far" is defined relatively to the null-distribution and the chosen type I error (the type I error is defined as the probability of rejecting the null-hypothesis
when the null is true). The test can be made either with a two-sided alternative (the null is rejected both if the actual test statistic is too big or too small) or a one-sided alternative (the
null is rejected if the actual test statistic is either too big, right alternative, or too small, left alternative). Left side alternative means that the null-hypothesis is rejected only if the
actual test statistic is too small.
^4 The test is coded in Python. The Python module required for the use of the the test can be downloaded from the internet site http://jakob.altervista.org/nonparametrictest.htm
^5 Type I error is the probability of rejecting the null-hypothesis when the null-hypothesis is true.
^6 The test is coded in Python. The Python module required for the use of the test can be downloaded from the internet site: http://jakob.altervista.org/nonparametrictest.htm
^7 Use a set of observations chosen from the stationary part of the process
^8The Python modules of the model and the tests can be downloaded from http://jakob.altervista.org/GodeSunderErgodic.rar
^9 The first difference is that the ZI traders generate offers and bids using a uniform random variable between 0 and 300 (instead of 1 and 200). The ZI-C traders use a random value between the
private cost and 300 for the sellers and between 0 and the private value for the buyers. The results are the same. Other differences are in some trading rules. For example Gode and Sunder (1993)
use resampling: they force every agent to issue a new order after each transaction (see LiCalzi and Pellizzari 2008).
^10The overall mean computed on the 2500 observations is 150.04, very similar to the theoretical equilibrium.
^11 The Python modules of the non-ergodic version of the model and of the tests can be downloaded from http://jakob.altervista.org/GodeSunderNonErgodic.rar
CLIFF, D. and Bruten, J. (1997). Minimal-intelligence agents for bargaining behaviors in market based environments, HP Laboratories Bristol (HPL-97-91).
DARLING, D.A. (1957). The Kolmogorov-Smirnov, Cramer-Von Mises tests, The Annals of Mathematical Statistics 28(4), 823-838.
DICKEY, D.A. and Fuller W.A. (1979), Distribution of the estimators for autoregressive time series with unit root, Journal of the American Statistical Association 74 (366), 427-231.
DICKEY, D.A. and Fuller W.A. (1981). Likelihood ratio statistics for autoregressive time series with a unit root, Econometrica 49(4), 1057-1072.
DOMOWITZ, I. and El-Gamal, M.A. (1993). A consistent test of Stationarity-Ergodicity. Econometric Theory 9(4), 589-601.
DOMOWITZ, I. and El-Gamal, M.A. (2001). A consistent nonparametric test of ergodicity for time series with applications. Journal of Econometrics 102, 365-398.
GIBBONS, J. D. (1985). Nonparametric Statistical Inference. New York: Marcel Dekker Inc., second ed.
GIBBONS, J. D. and Chakraborti S. (2003). Nonparametric statistical inference. New York: Marcel Dekker Inc.
GILLI, M. and Winker, P. (2003). A global optimization heuristic for estimating agent-based models. Computational Statistics and Data Analysis 42.
GJERSTAD, S. and Sachat, J. M. (2007). Individual rationality and market efficiency, Institute for Research in the Behavioral, Economic and Management Science (1204).
GODE, D.K. and Sunder S. (1993). Allocative Efficiency of Markets with Zero-Intelligence Traders: Markets as a Partial Substitute for Individual Rationality. The Journal of Political Economy 101
GRAZZINI, J. (2011a), Estimating Micromotives from Macrobehavior, Department of Economics Working Papers 2011/11,University of Turin
GRAZZINI, J. (2011b), Experimental Based, Agent Based Models, CeNDEF Working paper 11-07 University of Amsterdam.
HAYASHI, F. (2000). Econometrics. Princeton: Princeton University Press.
KWIATKOWSKI, D., Phillips, P.C.B., Schmidt, P. and Schin Y. (1992), Testing the null hypothesis of stationarity against the alternative of a unit root. Journal of Econometrics 54, 159-178.
LEOMBRUNI, R. and Richiardi, M. (2005). Why are economists sceptical about agent-based simulations? Physica A 355(1), 103-109.
LICALZI, M. and Paolo Pellizzari (2008). Zero-Intelligence trading without resampling. Working Paper 164, Department of Applied Mathematics, University of Venice.
PHILLIPS, P.C.B. (1987). Time series regression with unit root, Econometrica 55(2), 277-301.
PHILLIPS, P.C.B. and Perron, P. (1988). Testing for unit root in time regression, Biometrika 75(2), 335-346.
PHILLIPS, P.C.B. and Xiao, Z. (1998). A primer on unit root testing, Journal of Economic Surveys 12(5) 423-470.
RICHIARDI, M., Leombruni, R., Saam, N. J. and Sonnessa, M. (2006). A common protocol for agent-based social simulation. Journal of Artificial Societies and Social Simulation 9(1), 15. http://
SAID, S.E. and Dickey, D.A. (1984). Testing for unit roots in autoregressive-moving average models of unknown order, Biometrika 71(3), 599-607.
SIEGEL, S. (1957). Nonparametric Statistics, The American Statistician, 11(3), 13-19.
SMITH, V. L. (1962). An experimental study of competitive market behavior, The Journal of Political Economy 70(2), 111-137.
TVERSKY , A. and Kahneman D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science 185(4157), 1124-1131.
WALD, A. and Wolfowitz, J. (1940). On a test whether two samples are from the same population. The Annals of Mathematical Statistics 11(2)147-162. | {"url":"http://jasss.soc.surrey.ac.uk/15/2/7.html","timestamp":"2014-04-21T13:28:12Z","content_type":null,"content_length":"81407","record_id":"<urn:uuid:4f5e0118-cca9-4dc2-a32f-985c0c4ea714>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 11 - 20 of 22
, 1988
"... A brief survey of the theory and practice of sequence comparison is made focusing on diff, the UNIX 1 file difference utility. 1 Sequence comparison Sequence comparison is a deep and fascinating
subject in Computer Science, both theoretical and practical. However, in our opinion, neither the theo ..."
Cited by 7 (0 self)
Add to MetaCart
A brief survey of the theory and practice of sequence comparison is made focusing on diff, the UNIX 1 file difference utility. 1 Sequence comparison Sequence comparison is a deep and fascinating
subject in Computer Science, both theoretical and practical. However, in our opinion, neither the theoretical nor the practical aspects of the problem are well understood and we feel that their
mastery is a true challenge for Computer Science. The central problem can be stated very easily: find an algorithm, as efficient and practical as possible, to compute a longest common subsequence
(lcs for short) of two given sequences 2 . As usual, a subsequence of a sequence is another sequence obtained from it by deleting some (not necessarily contiguous) terms. Thus, both en/pri and en/pai
are longest common subsequences of sequence/comparison and theory/and/practice. Part of this work was done while the author was visiting the Universit'e de Rouen, in 1987. That visit was partially
, 1994
"... Given two sequences A = a 1 a 2 : : : am and B = b 1 b 2 : : : b n , m n, over some alphabet \Sigma, a common subsequence C = c 1 c 2 : : : c l of A and B is a sequence that can be obtained from
both A and B by deleting zero or more (not necessarily adjacent) symbols. Finding a common subsequenc ..."
Cited by 6 (0 self)
Add to MetaCart
Given two sequences A = a 1 a 2 : : : am and B = b 1 b 2 : : : b n , m n, over some alphabet \Sigma, a common subsequence C = c 1 c 2 : : : c l of A and B is a sequence that can be obtained from both
A and B by deleting zero or more (not necessarily adjacent) symbols. Finding a common subsequence of maximal length is called the Longest CommonSubsequence (LCS) Problem. Two new algorithms based on
the well-known paradigm of computing minimal matches are presented. One runs in time O(ns+minfds; pmg) and the other runs in time O(ns +minfp(n \Gamma p); pmg) where s = j\Sigmaj is the alphabet
size, p is the length of a longest common subsequence and d is the number of minimal matches. The ns term is charged by a standard preprocessing phase. When m n both algorithms are fast in situations
when a LCS is expected to be short as well as in situations when a LCS is expected to be long. Further they show a much smaller degeneration in intermediate situations, especially the second al...
, 2003
"... Two algorithms are presented that solve the problem of recovering the longest common subsequence of two strings. The first algorithm is an improvement of Hirschberg’s divide-and-conquer
algorithm. The second algorithm is an improvement of Hunt-Szymanski algorithm based on an efficient computation of ..."
Cited by 5 (0 self)
Add to MetaCart
Two algorithms are presented that solve the problem of recovering the longest common subsequence of two strings. The first algorithm is an improvement of Hirschberg’s divide-and-conquer algorithm.
The second algorithm is an improvement of Hunt-Szymanski algorithm based on an efficient computation of all dominant match points. These two algorithms use bit-vector operations and are shown to work
very efficiently in practice.
"... This paper performs the analysis necessary to bound the running time of known, efficient algorithms for generating all longest common subsequences. That is, we bound the running time as a
function of input size for algorithms with time essentially proportional to the output size. This paper consider ..."
Cited by 4 (2 self)
Add to MetaCart
This paper performs the analysis necessary to bound the running time of known, efficient algorithms for generating all longest common subsequences. That is, we bound the running time as a function of
input size for algorithms with time essentially proportional to the output size. This paper considers both the case of computing all distinct LCSs and the case of computing all LCS embeddings. Also
included is an analysis of how much better the efficient algorithms are than the standard method of generating LCS embeddings. A full analysis is carried out with running times measured as a function
of the total number of input characters, and much of the analysis is also provided for cases in which the two input sequences are of the same specified length or of two independently specified
- Eprint arXiv:cs.DS/0211001, Comp. Sci. Res. Repository , 2002
"... This paper shows that a simple algorithm produces the all-prefixes-LCSs-graph in O(mn) time for two input sequences of size m and n. Given any prefix p of the first input sequence and any prefix
q of the second input sequence, all longest common subsequences (LCSs) of p and q can be generated in tim ..."
Cited by 3 (2 self)
Add to MetaCart
This paper shows that a simple algorithm produces the all-prefixes-LCSs-graph in O(mn) time for two input sequences of size m and n. Given any prefix p of the first input sequence and any prefix q of
the second input sequence, all longest common subsequences (LCSs) of p and q can be generated in time proportional to the output size, once the all-prefixes-LCSs-graph has been constructed. The
problem can be solved in the context of generating all the distinct character strings that represent an LCS or in the context of generating all ways of embedding an LCS in the two input strings.
"... This paper deals with a new practical method for solving the longest common subsequence (LCS) problem. Given two strings of lengths m and n, m, on an alphabet of size s, we first present an
algorithm which determines the length p of an LCS in O(ns + min{mp, p(n p)}) time and O(ns) space. ..."
Cited by 2 (0 self)
Add to MetaCart
This paper deals with a new practical method for solving the longest common subsequence (LCS) problem. Given two strings of lengths m and n, m, on an alphabet of size s, we first present an algorithm
which determines the length p of an LCS in O(ns + min{mp, p(n p)}) time and O(ns) space.
- Journal of Information Science and Engineering , 2002
"... this paper, a scalable and efficient systolic algorithm is presented. For two given strings of length m and n,wherem # n,the algorithm can solve the LCS problem in m +2r -- 1 (respectively n +2r
-- 1) time steps with r < n/2 (respectively r < m/2) processors. Experimental results show that the al ..."
Cited by 1 (0 self)
Add to MetaCart
this paper, a scalable and efficient systolic algorithm is presented. For two given strings of length m and n,wherem # n,the algorithm can solve the LCS problem in m +2r -- 1 (respectively n +2r --
1) time steps with r < n/2 (respectively r < m/2) processors. Experimental results show that the algorithm can be faster on multicomputers than all the previous systolic algorithms for the same
"... Abstract. We study the complexity of the longest common subsequence (LCS) problem from a new perspective. By an indeterminate string (istring, in short) we mean a sequence e X = e X[1] e X[2]...
e X[n], where eX[i] ⊆ Σ for each i, and Σ is a given alphabet of potentially large size. A subsequence o ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract. We study the complexity of the longest common subsequence (LCS) problem from a new perspective. By an indeterminate string (istring, in short) we mean a sequence e X = e X[1] e X[2]... e X
[n], where eX[i] ⊆ Σ for each i, and Σ is a given alphabet of potentially large size. A subsequence of e X is any usual string over Σ which is an element of the finite (but usually of exponential
size) language e X[i1] e X[i2]... e X[ip], where 1 ≤ i1 < i2 < i3... < ip ≤ n, p ≥ 0. Similarly, we define a supersequence of x. Our first version of the LCS problem is Problem ILCS: for given
i-strings e X and e Y, find their longest common subsequence. From the complexity point of view, new parameters of the input correspond to |Σ | and maximum size ℓ of the subsets in e X and e Y. There
is also a third parameter R, which gives a measure of similarity between e X and eY. The smaller the R, the lesser is the time for solving Problem ILCS. Our second version of the LCS problem is
Problem CILCS (constrained ILCS): for given i-strings e X and e Y and a plain string Z, find the longest
, 903
"... Abstract. Computing string or sequence alignments is a classical method of comparing strings and has applications in many areas of computing, such as signal processing and bioinformatics.
Semi-local string alignment is a recent generalisation of this method, in which the alignment of a given string ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract. Computing string or sequence alignments is a classical method of comparing strings and has applications in many areas of computing, such as signal processing and bioinformatics. Semi-local
string alignment is a recent generalisation of this method, in which the alignment of a given string and all substrings of another string are computed simultaneously at no additional asymptotic cost.
In this paper, we show that there is a close connection between semi-local string alignment and a certain class of traditional comparison networks known as transposition networks. The transposition
network approach can be used to represent different string comparison algorithms in a unified form, and in some cases provides generalisations or improvements on existing algorithms. This approach
allows us to obtain new algorithms for sparse semi-local string comparison and for comparison of highly similar and highly dissimilar strings, as well as of run-length compressed strings. We conclude
that the transposition network method is a very general and flexible way of understanding and improving different string comparison algorithms, as well as their efficient implementation. 1
- IFSA-EUSFLAT , 2009
"... We propose a new, human consistent method for the evaluation of similarity of time series that uses a fuzzy quantifier base aggregation of trends (segments), within the authors’ (cf. Kacprzyk,
Wilbik, Zadro˙zny [1, 2, 3, 4, 5, 6] or Kacprzyk, Wilbik [7, 8, 9]) approach to the linguistic summarizatio ..."
Add to MetaCart
We propose a new, human consistent method for the evaluation of similarity of time series that uses a fuzzy quantifier base aggregation of trends (segments), within the authors’ (cf. Kacprzyk,
Wilbik, Zadro˙zny [1, 2, 3, 4, 5, 6] or Kacprzyk, Wilbik [7, 8, 9]) approach to the linguistic summarization of trends based on Zadeh’s protoforms and fuzzy logic with linguistic quantifiers. The
results obtain are very intuitively appealing and justified by valuable outcomes of similarity analyses between quotations of an investment fund and the two main indexes of the Warsaw Stock Exchange. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=199885&sort=cite&start=10","timestamp":"2014-04-24T07:15:13Z","content_type":null,"content_length":"36912","record_id":"<urn:uuid:743e2fe9-e122-40ec-a91e-6a29ec6e9ba0>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00317-ip-10-147-4-33.ec2.internal.warc.gz"} |
Paul Erdős was born in Budapest, Hungary on March 26, 1913. He was a mathematical prodigy and he has stayed among the very first among mathematicians for all his life. He obtained his Ph.D. in
mathematics in 1934 from the University of Budapest. He spent four years in Manchester as a postdoc - this is the longest time he ever spent at the same place. It would be impossible to list the
universities, academies and research institutes where he has lectured, or even those where he obtained honorary degrees. It would be impossible to outline the topics of his more than 1200 papers or
even count those papers that cite Erdős's work as their main motivation. Let it suffice to mention the Wolf prize, one of the highest recognitions in mathematics, which he received in 1984.
Quotation from the paper Paul Erdős is 80 by L. Lovász, in the book Combinatorics, Paul Erdős is Eighty, Volume 1, Bolyai Society Mathematical Studies (edited by D. Miklós, V.T. Sós and T. Szönyi),
This picture was taken at the Paul Erdős 60'th birthday conference in Keszthely, Hungary, in 1973 at the boat excursion on Lake Balaton. Erdős is posing a problem related to Ramsey numbers.
His approach to mathematics is as unique as his life. He has invented a new kind of art: the art of raising problems. Paul Erdős says that mathematics is eternal because it has an infinity of
problems; and in his view, the more elementary a problem is, the better. He also invented his system of offering prizes for problems. His lectures with the title "My favorite problems in
combinatorics" (or equivalent) always draw large audiences. Many try to imitate him in problem raising, but few can master this art: his problems may appear ad hoc or random at the first sight,
especially for those not closely acquainted with the field, but after a few months or years of tirelessly pursuing one problem after the other, they suddenly connect up and form whole new theories -
as if Erdős had those theories and metatheorems in his mind right away, and gave us only their corollaries.
Quotation from the paper Paul Erdős is 80 by L. Lovász, in the book Combinatorics, Paul Erdős is Eighty, Volume 1, Bolyai Society Mathematical Studies (edited by D. Miklós, V.T. Sós and T. Szönyi),
Paul Erdős is the consummate problem solver; his hallmark is the succinct and clever argument, often leading to a solution from 'the book'. He loves areas of mathematics which do not require an
excessive amount of technical knowledge but give scope for ingenuity and surprise. The mathematics of Paul Erdős is the mathematics of beauty and insight.
Quotation from the book A Tribute to Paul Erdős , Cambridge University Press (edited by A. Baker, B. Bollobás and A. Hajnal), 1990.
Tommy R. Jensen and Bjarne Toft
Last modified: August 2011. | {"url":"http://www.imada.sdu.dk/~btoft/erdoes.html","timestamp":"2014-04-19T06:52:17Z","content_type":null,"content_length":"4578","record_id":"<urn:uuid:9be4f5a5-3716-42c0-af9a-7eddfed32216>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Add Measures and Attribution to PerformanceAnalytics and PortfolioAnalytics based on Bacon (2008)
Description: PerformanceAnalytics is an R package that provides a collection of econometric functions for performance and risk analysis. It applies current research on return-based analysis of
strategy or fund returns for risk, autocorrelation and illiquidity, persistence, style analysis and drift, clustering, quantile regression, and other topics.
PerformanceAnalytics has long enjoyed contributions from users who would like to see specific functionality included. In particular, Diethelm Wuertz of ETHZ (and the well-known R/Metrics packages)
has very generously contributed a very large set of functions based on Bacon (2008). It is a great starting point, but many of these functions need to be finished and converted such that the
functions and interfaces are consistent with other PerformanceAnalytics functions, and are appropriately documented.
In addition, certain functions useful for optimization will need to be converted to be used in PortfolioAnalytics, an extensible business-focused framework for portfolio optimization and analysis.
That package focuses on numeric approaches for solving non-quadratic problems useful, for example, in risk budgeting.
Assuming the prior work goes well, there will be an opportunity to extend PortfolioAnalytics to cover portfolio attribution as described in Bacon (2008). This is frequently-requested functionality
from the finance community that uses R. We would use FinancialInstrument, an R package for defining and storing meta-data for tradeable contracts (referred to as instruments, such as for stocks,
futures, options, etc.), to define relationships among securities to be used by attribution functions.
• Carl Bacon “Practical Portfolio Performance Measurement and Attribution”, (London, John Wiley & Sons. September 2004) ISBN 978-0-470-85679-6. 2nd Edition May 2008 ISBN 978-0470059289
Test: The successful applicant will demonstrate proficiency with R, and specifically with xts and PerformanceAnalytics or PortfolioAnalytics. This could be via identifying a patch or extension,
providing a new demo script for the one of the packages that would show understanding of the functionality provided, or by doing a detailed proposal with pseudocode for one or more potential
enhancements to the toolchain. The ideal candidate would also demonstrate prior interest or experience in finance.
Mentor: Peter Carl is one of the primary authors of these related packages, and would mentor the GSOC participant. 2012-02-18
Brian Peterson is one of the primary authors of these packages, has previously mentored GSoC projects, and would be backup mentor for this project. | {"url":"http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2012:performanceanalytics","timestamp":"2014-04-16T10:09:41Z","content_type":null,"content_length":"16279","record_id":"<urn:uuid:74ba1584-b920-4484-9faa-f6bdd9270460>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
• Use "?" for one missing letter: pu?zle. Use "*" for any number of letters: p*zle. Or combine: cros?w*d
• Select number of letters in the word, enter letters you have, and find words!
cissoid's examples
• The Cissoid of Diocles is the roulette of the vertex of a parabola rolling on an equal parabola. It is in commentaries on Archimedes' On the sphere and the cylinder that the cissoid appears and
is attributed to Diocles. — “Cissoid”, www-history.mcs.st-
• In geometry, a cissoid is a curve generated from two given curves C1, C2 and a point O (the pole) Then the locus of such points P is defined to be the cissoid of the curves C1, C2 relative to O.
— “Cissoid - Wikipedia, the free encyclopedia”,
• Cissoid - web site, address and contact details.Plus all the latest news stories from the company here on Electronicstalk. — “Cissoid latest product and services news”,
• The cissoid 1) can be constructed as follows: Given a circle C with diameter OA and a tangent l through A. Now, draw lines m through O, cutting circle C in point Q, line l in point R. Then the
cissoid is the set of points P for which OP = QR. — “cissoid”, 2
• cissoid. A plane curve consisting of two infinite branches symmetrically placed with reference to the diameter of a circle, so that at one of its extremities they form a cusp, while the tangent
to the circle at the other extremity is their common asymptote. — “cissoid”,
• The Cissoid is an unbounded plane curve with a single cusp, which is symmetric about the line of tangency of the cusp, and whose pair of symmetrical branches both approach the same asymptote as a
point moving along the Cissoid moves farther away from the cusp. — “Cissoid. Function Graph”,
• Definition of cissoid in the Online Dictionary. Meaning of cissoid. Pronunciation of cissoid. Translations of cissoid. cissoid synonyms, cissoid antonyms. Information about cissoid in the free
online English dictionary and encyclopedia. — “cissoid - definition of cissoid by the Free Online Dictionary”,
• CISSOID unveils THEMIS and ATLAS, their Driver for Silicon Carbide Switches in Power Converters and Motor Drives. — “CISSOID unveils THEMIS and ATLAS - Cissoid - ”,
• Category:Cissoid. From Wikimedia Commons, the free media repository Media in category "Cissoid" The following 18 files are in this category, out of 18 total. Cisoido.png. — “Category:Cissoid -
Wikimedia Commons”,
• cissoid ( ′sis′öid ) ( mathematics ) A plane curve consisting of all points which lie on a variable line passing through a fixed point, and whose. — “Cissoid: Definition from ”,
• Cissoid definition, a curve having a cusp at the origin and a point of inflection at infinity. Equation: r = 2a sin(θ)tan(θ). See more. — “Cissoid | Define Cissoid at ”,
• CISSOID is a Fabless Semiconductor company, leader in High Temperature Electronics CISSOID provides high reliability products guaranteed from -55°C to +225°C and commonly used outside that range,
from cryogenic lows to upper extremes. — “CISSOID”,
• CISSOID (from the Gr. Kurala, ivy, and ethos, form), a curve invented by the Greek mathematician Diodes about 180 B.C., for the purpose of constructing two mean proportionals between two given
lines, and in order to solve the problem of duplicating the cube. — “Cissoid - LoveToKnow 1911”, 1911
• Cissoid is a method of deriving a new curve based on two (or one) given curves C1, C2, and a fixed point O. A curve derived this way may be called the cissoid of C1 and C2 with the pole O.
Step-by-step description: Given two curves C1 and C2, and given a fixed point O. Let P1 be a point on C1. — “Cissoid”,
• Definition of cissoid from Webster's New World College Dictionary. Meaning of cissoid. Pronunciation of cissoid. Definition of the word cissoid. Origin of the word cissoid. — “cissoid -
Definition of cissoid at ”,
• CISSOID (from the Gr. κισσός, ivy, and εἶδος, form), a curve invented by the Greek mathematician Diocles about 180 b.c., for the purpose of constructing two mean proportionals between two given
lines; and in order to solve the problem of duplicating the cube. — “1911 Encyclopædia Britannica/Cissoid - Wikisource”,
• Encyclopedia article about cissoid. Information about cissoid in the Columbia Encyclopedia, Computer Desktop Encyclopedia, computing dictionary. — “cissoid definition of cissoid in the Free
Online Encyclopedia”, encyclopedia2
• It is a cissoid of a circle and a line tangent to the circle with. respect to a point on the circle opposite to the tangent point. 6. The locus of Q (as P1 moves on C) is the cissoid of Diocles.
— “Cissoid”,
• The cissoid is often called the cissoid of Diocles in honour of the Ancient Greek mathematician Diocles (3rd century B.C.), who discussed it in connection with the The cissoid is the set of
points for which , where and are the points. — “Springer Online Reference Works”,
• CISSOID unveils power transistor driver chipset silicon carbide switches CISSOID has unveiled THEMIS and ATLAS, the company's power transistor driver chipset meant for high efficiency motor
drives and power converters. — “CISSOID unveils power transistor driver chipset silicon”, powermanagement-
related images for cissoid
marya4 4 99 gif 28 Apr 1999 17 02 6k cissoid jpg 18 May 2005 14 13 6k prob852053 gif 10 May 2009 05 14 6k day 2 04 13 99 gif 29 Apr 1999 02 46 6k
Kappa Klein Bottle with Maplesoft® Code Koch Snowflake with Maplesoft® Code La galande Folium of Descartes
Lemniscate of Bernoulli Lemniscate with Maplesoft®Code Limaçon with Maplesoft®Code Limaçon of Pascal
tangency of the cusp and whose pair of symmetrical branches both approach the same asymptote but in opposite directions as a point moving along the cissoid moves farther away from the cusp The
cissoid of Diocles is named after the Greek geometer Diocles who used it in 180 B C to solve the Delian problem how much must the length of a cube be increased in order to double the
evenson2 8 10 99 gif 10 Aug 1999 03 47 6k sana10 21 03 gif 22 Oct 2003 02 54 6k marya4 4 99 gif 28 Apr 1999 17 02 6k cissoid jpg 18 May 2005 14 13 6k
Cissoid of Diocles Agian since I was using Geogebra I had to change r t = a sin t tan t so that I could see the curve as a function of x and y Then when I as able to make the negitive pedal curve
near Barstow California focus the sun s rays on a central tower where heat is converted to electricity
related videos for cissoid
On Twitter
twitter about cissoid
Blogs & Forum
blogs and forums about cissoid
• “Be Angels: business angels, capital risque, financement des entreprises, spinoffs, projets innovants (Bruxelles - Région Wallonne)”
— Invitation Forum,
• “Ask questions, share knowledge, explore ideas, and help solve problems with fellow engineers. CISSOID (). No idea if this would work with the TPS40200. Posted on23 Sep 2010. in Forum - Non”
— TI E2E Community, e2
• “CIE - Components in Electronics: Wireless Technology . mimoOn GmbH, the software defined radio (SDR) software company, has launched the 1st generation of it's high-efficiency mi!Spectrum
Femtocell LTE Air Interface Scheduler, now fully”
— High-efficiency Femto Forum API compliant scheduler from,
• “%post_excerpt Face is also the education of the Omaha Rail and Commerce Historic District, which has next cissoid hands and missions traditionally not. There was a heat to wish Pershing
President of the United States in 1920, but he handled to continually accept”
— Roulettes,
• “June 25, 1937 in Atlanta, Georgia. Blackjack tournaments online, once, any twenty-seven University of California, Santa Cruz, where he took part cissoid and the company of cult”
— Slots On Pc | Online Casino,
• “BLOG. Casino Poker Games. This decides the science and problem of all professional stations, from articles to courage browser to where Crocbot had been resurrected either, and said a use cissoid
city where the access drug counting as a wellbeing for”
— Online Casino " Casino Poker Games,
• “I mentioned this in a post before, that the curve of a cissoid, derived from the doubling of a sphere may well be a suitable FullRangeDriver Forum " Is a front wave guide or short horn needed
— Fullrangedriver Forum / Is a front wave guide or short horn,
• “Grissom and Sara practise a Vegas head to take a high crime: two denominations social, one causing an time, and a cissoid between them. blog comments powered by Disqus. Spacer. Conservation
Blog. Categories”
— Bingo Site | Online Casino,
related keywords for cissoid
similar for cissoid | {"url":"http://wordsdomination.com/cissoid.html","timestamp":"2014-04-19T07:02:45Z","content_type":null,"content_length":"43269","record_id":"<urn:uuid:94e6f8e7-383d-4ee8-9c35-d13ad8c94b5e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Errors with some basic while and for loops, help?
12-01-2011, 10:51 PM
Errors with some basic while and for loops, help?
Hello. For a class assignment, I have to write a program that finds the first four perfect numbers on the domain (0, ∞). A perfect number is a number that has the sum of its divisors (aside from
the number itself) equivalent to the number. 6, for example, is a perfect number.
Anyways, here's my code.
public class PerfectNumbers
public static void main(String[] args)
int i = 1;
while (int i <= 4)
int j;
for (int j = 1; i <= 4; j++;)
int sum = 0;
int k = 1;
while (k <= j - 1)
if (j % k == 0)
sum += k;
if (sum == j)
I'm getting errors on lines 7, 10 and 33. I'm sure I'm making some novice mistakes...can someone please point me in the right direction?
Thank you.
12-01-2011, 11:06 PM
Re: Errors with some basic while and for loops, help?
When you get errors you should post the exact error message.
Computers are very good at remembering things. Once you have told the computer what type a variable is you do not have to keep reminding it.
12-01-2011, 11:12 PM
Re: Errors with some basic while and for loops, help?
Oh, hey, taking out the two extra "int"s fixed most of my errors! Thank you.
Now I'm left with two. Seems to be a problem with my for loop.
On line 10, I'm getting a "')' expected" and "illegal start of expression"
12-01-2011, 11:24 PM
Re: Errors with some basic while and for loops, help?
Look very carefully on that line to see if anything that should be there isn't or if anything that is there shouldn't be there.
12-01-2011, 11:25 PM
Re: Errors with some basic while and for loops, help?
12-01-2011, 11:32 PM
Re: Errors with some basic while and for loops, help?
Thanks a lot guys. We just learned how to write for and while loops today in class, so I'm glad it was only a small syntax error.
Now I just have to figure out why my program isn't printing anything. But I guess that's just a logic error...at least the compiler errors are gone. Thanks again.
12-01-2011, 11:41 PM
Re: Errors with some basic while and for loops, help?
Sorry for the double post, but I got my program working. Whoo! When I went back to edit it I had changed one of the less than signs to greater than. All is well now.
12-01-2011, 11:43 PM
Re: Errors with some basic while and for loops, help?
This can be achieved with only 2 loops. The outer loop goes until it finds the four perfect numbers. The inner loop calculates all the factors of a given number and sums them. After the inner
loop have an if statement to see if the sum equals the number. | {"url":"http://www.java-forums.org/new-java/52071-errors-some-basic-while-loops-help-print.html","timestamp":"2014-04-24T17:27:44Z","content_type":null,"content_length":"11061","record_id":"<urn:uuid:7188f48e-8d19-4e48-ba86-6200ff2d76ef>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
This page lists articles related to various programs for doing statistical analysis, as well as general information on working with statistical data.
Stata Journal at the UW-Madison Library System (2012 vol. 4-present) Stata Journal Archive Search Stata's Online Help (For full Stata documentation, start Stata and click Help, PDF Documentation.)
An Introduction to Stata Graphics An Introduction to Mata Finding and Installing User-Written Stata Programs Propensity Score Matching in Stata using teffects Working with Dates in Stata Exploring
Regression Results using Margins Using Stata Graphs in Documents Including Calculated Results In Stata Graphs Using Reshape to Manage Hierarchical Data Bootstrapping in Stata Speeding up Multiple
Imputation in Stata using Parallel Processing Making Predictions with Counter-Factual Data in Stata Using STATA on Linux
Running SPSS Jobs on Linux SPSS Online Documentation Using SPSS to Reformat Data Records from One to Several
An Introduction to SAS Data Steps Managing Output in SAS 9.3 Running SAS on Linux Running Large SAS Jobs on Linstat Launching SAS Files from Windows Explorer SAS Version 9 Online Documentation SAS
Workbook for Writing SAS Programs to Process Data on UNIX Redirecting and Customizing Tabular Output in SAS
Converting SAS Formats from 32-Bit to 64-Bit SAS and Excel Files on Winstat A Simple Procedure for Producing Publication-Quality Graphs using SAS Storing SAS Formats Using SAS to Perform a Table
Lookup Constructing Indicator Variables with SAS Converting a Code Book to a SAS FORMAT Library Using SAS to Reformat Data Records from One to Several Saving SAS Graphs For Printing or Other Uses
Using Compressed Data in SAS
Using R Packages on SSCC Computers Installing R on Your Personal Computer
Computing Resources at the SSCC Using Stat/Transfer Running Linux Programs Using Windows (Mostly) Managing Jobs on Linstat Using Excel for Data Entry Programming in Color | {"url":"http://www.ssc.wisc.edu/sscc/pubs/stat.htm","timestamp":"2014-04-21T10:35:27Z","content_type":null,"content_length":"15413","record_id":"<urn:uuid:8877fbe8-0fc0-4ee2-b43e-ae8d9c018213>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Many Handshakes?
Date: 08/18/2005 at 20:39:33
From: Victor
Subject: How Many Handshakes
Thirty people at a party shook hands with each other. How many
handshakes were there altogether?
Before answering this question, draw a diagram and see if you can
establish a pattern by collecting data in a table for one, two, three,
four and five people shaking hands.
If there were 300 people at the party, how many handshakes will there
be altogether?
I'm not sure how to draw a diagram establishing a pattern of the
number of people and the number of handshakes.
Date: 08/19/2005 at 16:01:51
From: Doctor Wilko
Subject: Re: How Many Handshakes
Hi Victor,
Thanks for writing to Dr. Math!
First, when we say handshakes, we'll agree that we mean Adam shaking
Bob's hand is the same as Bob shaking Adam's hand, i.e., once two
people shake hands it is considered a hand shake.
Now let's reason through the handshakes, starting with two people and
working our way up, while looking for a pattern.
If there are two people at a party, they can shake hands once. There
is no one else left to shake hands with, so there is only one
handshake total.
2 people, 1 handshake
If there are three people at a party, the first person can shake hands
with the two other people (two handshakes). Person two has already
shaken hands with person one, but he can still shake hands with person
three (one handshake). Person three has shaken hands with both of
them, so the handshakes are finished. 2 + 1 = 3.
3 people, 3 handshakes
If there are four people at a party, person one can shake hands with
three people, person two can shake hands with two new people, and
person three can shake hands with one person. 3 + 2 + 1 = 6.
4 people, 6 handshakes
Are you seeing a pattern?
If you have five people, person five shakes four other hands, person
four shakes three other hands, person three shakes two other hands,
and person two shakes one hand. Another way to see it is,
Person 5 Person 4 Person 3 Person 2
4 + 3 + 2 + 1 = 10 handshakes total
People at Party Number of Handshakes
3 1 + 2 = 3
4 1 + 2 + 3 = 6
5 1 + 2 + 3 + 4 = 10
6 1 + 2 + 3 + 4 + 5 = 15
. n(n-1)
n 1 + 2 + ...+ (n-1) = --------
It turns out this is just the formula for adding up an arithmetic
sequence where you know how many terms you have total, and you know
the first and last terms of the sequence. See this link for more on
Adding Arithmetic Sequences
So, to find how many handshakes there are at a party of 30 people, you
could add up all the numbers from 1 to 29 or use the formula,
---------- = 435 handshakes
Can you figure out how many handshakes there are at a party of 300
Here's another link from our archives:
Handshakes at a Party
Does this help? Please write back if you have any further questions.
- Doctor Wilko, The Math Forum
Date: 08/19/2005 at 23:49:46
From: Victor
Subject: Thank you (How Many Handshakes)
Thank you very much Dr. Wilko! | {"url":"http://mathforum.org/library/drmath/view/68330.html","timestamp":"2014-04-19T08:28:31Z","content_type":null,"content_length":"8815","record_id":"<urn:uuid:7cd8eb63-02c1-4484-938d-796c55f04926>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
Precalculus Functions and Graphs 6th edition by Munem | 9781572591578 | Chegg.com
Precalculus 6th edition
Functions and Graphs
Details about this item
Precalculus: This is a study of the mathematical prerequisites needed to study calculus. Topics include functions, trigonometry, systems of equations and matrices, and conics. The graphing calculator
is integrated where appropriate, and problem sets are divided into categories - mastering the concept, applying the concept, and developing and extending the concept. There are two chapters of
algebra review.
Back to top
Rent Precalculus 6th edition today, or search our site for M. A. textbooks. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Institute of Electrical & Electronics Engineers | {"url":"http://www.chegg.com/textbooks/precalculus-6th-edition-9781572591578-1572591579","timestamp":"2014-04-20T12:34:37Z","content_type":null,"content_length":"19369","record_id":"<urn:uuid:b1249319-9709-417f-ba7b-e6abbcfa324b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00388-ip-10-147-4-33.ec2.internal.warc.gz"} |
19-XX $K$$K$-theory [See also 16E20, 18F25]
19Lxx Topological $K$$K$-theory [See also 55N15, 55R50, 55S25]
19L10 Riemann-Roch theorems, Chern characters
19L20 $J$$J$-homomorphism, Adams operations [See also 55Q50]
19L41 Connective $K$$K$-theory, cobordism [See also 55N22]
19L47 Equivariant $K$$K$-theory [See also 55N91, 55P91, 55Q91, 55R91, 55S91]
19L50 Twisted $K$$K$-theory; differential $K$$K$-theory
19L64 Computations, geometric applications
19L99 None of the above, but in this section | {"url":"http://www.ams.org/mathscinet/msc/msc2010.html?t=19L47&btn=Current","timestamp":"2014-04-18T01:27:32Z","content_type":null,"content_length":"13430","record_id":"<urn:uuid:ace41e87-50a8-4d48-8d1b-d8ad95d74992>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Fixed-point arithemetic...any solution yet?
Charles R Harris charlesr.harris@gmail....
Thu Dec 10 09:07:53 CST 2009
On Thu, Dec 10, 2009 at 3:22 AM, Ruben Salvador <rsalvador.wk@gmail.com>wrote:
> On Wed, Dec 9, 2009 at 8:26 PM, Neal Becker <ndbecker2@gmail.com> wrote:
>> Ruben Salvador wrote:
>> > Hello everybody.
>> >
>> > I've seen this question arise sometimes on the list, but don't know if
>> > something has "happened" yet or not. I mean, any solution feasible to
>> use
>> > more or less right out of the box?
>> >
>> > I'm just a hardware engineer, so it would be difficult for me to create
>> my
>> > own class for this, since my knowledge of python/numpy is very limited,
>> > and, just don't have the time/knowledge to be more than a simple user of
>> > the language, not a developer.
>> >
>> > I have just come across this:
>> > http://www.dilloneng.com/documents/downloads/demodel/ but haven't used
>> it
>> > yet. I'll give it a try and see how it works and come back to the list
>> to
>> > report somehow. But, is there any "official" plans for this within the
>> > numpy developers? Is there any code around that may be used? I just need
>> > to test my code with fixed point arithmetic (I'm modelling hardware....)
>> >
>> > Thanks for the good work to all the Python/Numpy developers (and all the
>> > projects related, matplotlib and so on....) and for the possiblity of
>> > freeing from matlab!!! I'm determined to do research with as many free
>> > software design tools as possible....though this fixed-point arithmetic
>> > issue is still a chain!
>> >
>> > Regards!
>> I've done some experiments with adding a fixed-pt type to numpy, but in
>> the
>> end abandoned the effort. For now, I use integer arrays to store the
>> data,
>> and then just keep variables for the #bits and position of the binary
>> point.
>> For actual signal processing, I use c++ code. I have a class that is
>> based
>> on boost::constrained_value (unreleased) that gives me the behavior I want
>> from fixed point scalars.
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
> Well...I think I may also try this way. This FIXED_POINT_FACTOR scaling is
> what is actually done implicitly in hardware to align bit vectors. And if I
> over-dimension the bit length, I "won't need to take care" of the number of
> bits after arithmetic operations...
> I'll try and see...but, if anybody has a quicker solution....I'm actually
> in a hurry :S
> I had a look at the code I mentioned in my first email. It does the trick
> someway, but from my point of view, needs some more tweaking to be usable in
> a wider context. It only supports some operations and I just guess it will
> fail in many numpy.array routines, if data is not cast previously (maybe not
> since the actual numerical value is floating point, and the fixed point is
> an internal representation of the class)...will try and report back....
> Anyway, don't you people think we should boost this fixed-point issue in
> numpy? We should make some kind of roadmap for the implementation, I think
> it's a *MUST*.
There is certainly a whole class of engineering problems for which it would
be very useful. But things in numpy/scipy tend to get done when someone
scratches their itch and none of the current developers seem to have this
particular itch. Now, if someone comes along with a nice implementation,
voila, they become a developer and the job gets done.
Which is to say, no one is keeping the gate, contributions are welcome.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.scipy.org/pipermail/numpy-discussion/attachments/20091210/13a5a30f/attachment.html
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-December/047378.html","timestamp":"2014-04-20T13:49:34Z","content_type":null,"content_length":"7915","record_id":"<urn:uuid:eed902d7-eae5-4a90-8055-af06a057dcca>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |