content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Sustainable reputations with rating systems
Mehmet Ekmekci
Journal of Economic Theory, 2011, vol. 146, issue 2, pages 479-503
Abstract: In a product choice game played between a long lived seller and an infinite sequence of buyers, we assume that buyers cannot observe past signals. To facilitate the analysis of applications
such as online auctions (e.g. eBay), online shopping search engines (e.g. BizRate.com) and consumer reports, we assume that a central mechanism observes all past signals, and makes public
announcements every period. The set of announcements and the mapping from observed signals to the set of announcements is called a rating system. We show that, absent reputation effects, information
censoring cannot improve attainable payoffs. However, if there is an initial probability that the seller is a commitment type that plays a particular strategy every period, then there exists a finite
rating system and an equilibrium of the resulting game such that, the expected present discounted payoff of the seller is almost his Stackelberg payoff after every history. This is in contrast to
Cripps, Mailath and Samuelson (2004) [5], where it is shown that reputation effects do not last forever in such games if buyers can observe all past signals. We also construct finite rating systems
that increase payoffs of almost all buyers, while decreasing the seller[modifier letter apostrophe]s payoff.
Keywords: Reputations; Rating; systems; Online; reputation; mechanisms; Disappearing; reputations; Permanent; reputations (search for similar items in EconPapers)
Date: 2011
References: Add references at CitEc
Citations View citations in EconPapers (6) Track citations by RSS feed
Downloads: (external link)
http://www.sciencedirect.com/science/article/B6WJ3 ... 0d54b54af2bccd77e778
Full text for ScienceDirect subscribers only
Related works:
Working Paper: Sustainable Reputations with Rating Systems (2010)
This item may be available elsewhere in EconPapers: Search for items with the same title.
Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text
Persistent link: http://EconPapers.repec.org/RePEc:eee:jetheo:v:146:y:2011:i:2:p:479-503
Access Statistics for this article
Journal of Economic Theory is edited by A. Lizzeri and K. Shell
More articles in Journal of Economic Theory from Elsevier
Series data maintained by Zhang, Lei (). | {"url":"http://econpapers.repec.org/article/eeejetheo/v_3a146_3ay_3a2011_3ai_3a2_3ap_3a479-503.htm","timestamp":"2014-04-16T10:30:28Z","content_type":null,"content_length":"14731","record_id":"<urn:uuid:7012d09f-b967-4a2c-83dc-e884f8889717>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding half-power frequency.
At DC your power dissipation is 0 (cap is blocking current). At very high freq, power dissipation is 0 (inductor is blocking current). At resonance your power dissipation is
(V^2)/R since the LC is series resonant and behaves like a short, all of the input voltage is across the resistor.
There will be two half power frequencies, one above and one below resonance, where the circuit power dissipation is (V^2)/2R, or the voltage across the resistor is 0.707Vin.
Your transfer function needs to be in the form of output/input.
In this case, since we are finding power consumption, which only occurs in the resistor, our output is the voltage across the resistor. Vr/Vin = R/(R + sL + 1/sc).
Next, set Vr/Vin = 0.707 and solve for freq. | {"url":"http://www.physicsforums.com/showthread.php?t=610454","timestamp":"2014-04-20T03:24:36Z","content_type":null,"content_length":"35907","record_id":"<urn:uuid:6d11a426-88d0-4a57-83a9-3f503751bf66>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nonclassical correlations from random measurements
In this talk, I will demonstrate that correlations inconsistent with any locally causal description can be a generic feature of measurements on entangled quantum states. Specifically,
spatially-separated parties who perform local measurements on a maximally-entangled state using randomly chosen measurement bases can, with significant probability, generate nonclassical correlations
that violate a Bell inequality. For n parties using a Greenberger-Horne-Zeilinger state, this probability of violation rapidly tends to unity as the number of parties increases. Moreover, even with
both a randomly chosen two-qubit pure state and randomly chosen measurement bases, a violation can be found about 10% of the time. Amongst other applications, our work provides a feasible alternative
for the demonstration of Bell inequality violation without a shared reference frame. | {"url":"http://perimeterinstitute.ca/videos/nonclassical-correlations-random-measurements","timestamp":"2014-04-18T03:31:39Z","content_type":null,"content_length":"26891","record_id":"<urn:uuid:f0215e8a-1da9-4f02-95e5-fbd902077555>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Infinity is for Children---and Mathematicians!
How Big is Infinity?
Most everyone is familiar with the infinity symbol--the one that looks like the number eight tipped over on its side. The infinite sometimes crops up in everyday speech as a superlative form of the
word many. But how many is infinitely many? How far away is "from here to infinity"? How big is infinity?
You can't count to infinity. Yet we are comfortable with the idea that there are infinitely many numbers to count with: no matter how big a number you might come up with, someone else can come up
with a bigger one: that number plus one--or plus two, or times two. Or times itself. There simply is no biggest number. Is there?
Is infinity a number? Is there anything bigger than infinity? How about infinity plus one? What's infinity plus infinity? What about infinity times infinity? Children to whom the concept of infinity
is brand new, pose questions like this and don't usually get very satisfactory answers. For adults, these questions don't seem to have very much bearing on daily life, so their unsatisfactory answers
don't seem to be a matter of concern.
At the turn of the century, in Germany, the Russian-born mathematician Georg Cantor applied the tools of mathematical rigor and logical deduction to questions about infinity in search of satisfactory
answers. His conclusions are paradoxical to our everyday experience, yet they are mathematically sound. The world of our everyday experience is finite. We can't exactly say where the boundary line
is, but beyond the finite, in the realm of the transfinite, things are different.
Cantor is the founder of the branch of mathematics called Set Theory, which is at the foundation of much of 20th century mathematics. At the heart of Set Theory is a hall of mirrors--the paradoxical
infinity. Georg Cantor was known to have said, "I see it, but I do not believe it," about one of his proofs.
The set is the mathematical object which Cantor scrutinized. He defined a set as any collection of well-distinguished and well-defined objects considered as a single whole. A collection of matching
dishes is a set, as well as a collection of numbers. Even a collection of seemingly unrelated things like, {toothbrush, elephant, clothespin, 6} is a set. They are well-defined and can be
distinguished from one another.
This is a set:
Sets can be large and small. They can also be finite and infinite. A finite set has a finite number of members. No matter how many there are, given enough time, you can count them all. Cantor's
surprising results came when he considered sets that had an infinite number of members. Sets such as all of the counting numbers, or all of the even numbers are infinite sets.
In order to study infinite sets, Cantor first formalized many of the things that are intuitive and obvious about finite sets. At first, it seems like these formalizations are just a whole lot of
trouble, a way of making simple things complicated. Because the formalisms are clearly correct, however, they provide a powerful tool for examining things that are not so simple, intuitive or
Which Set is Bigger?
Cantor needed a way to compare the sizes of sets, some method for determining whether sets had the same number of members. If two sets didn't have the same number of members, he needed a method for
telling which one was larger. Of course this is simple for finite sets. You count the members in both sets. If the number is the same, they are the same size. If the number of members in one set is
greater than the number of members in the other, then that set is larger.
You can't count the members in an infinite set, though, so this method won't work for comparing their sizes. If you have two infinite sets, you need some other way to tell if one is larger.
The formal notion that Cantor used for comparing sizes of sets is the idea of a one-to-one correspondence. A one-to-one correspondence pairs up the members of one set with the members of another. We
could pair up the elements of the ridiculous set {toothbrush, elephant, clothespin, 6} with the numbers {1,2,3,4}. It is possible to do this so that one member of each set is paired up with one
member of the other, no member is left out, and no member has more than one partner. Then we can be sure that the set {1,2,3,4} has the same number of members as the set {toothbrush, elephant,
clothespin, 6}
Infinity plus 1
This may seem like a lot of work just to say that small finite sets have the same number of members, but it provides a way to think about the sizes of infinite sets, so that children and
mathematicians have a tool to look for answers to questions like: What's bigger, the set of all counting numbers, or the set of all counting numbers with a shoe thrown in?
Our intuition tells us that the set with the shoe in it must be bigger. Can we be sure? The set of counting numbers has an infinite number of members. How can the set with the shoe in it have more
than an infinite number of members?
The set of all counting numbers is {1,2,3,... }. Is there a way to put its members in a one-to-one correspondence with the set {shoe, 1, 2, 3}? If they come out even, then we can be sure the two sets
are the same size. If one set has members left over when the other set has used up all of its members, then we will know which set is bigger. The answer (believe it or not) is that yes! the two sets
can be put in one-to-one correspondence with each other, and yes! they are the same size.
Our intuition and experience with finite things in the world makes us want to protest that when we pair up the members of these sets, the set without the shoe, will run out of members before the
other one does. But these are infinite sets. We will not get to the end. Neither set will run out of members. The pairing goes on and on. Ad infinitum.
In The Hotel Infinity the pairing of hotel guests with hotel rooms puts the members of a set of hotel guests in one-to-one correspondence with a set of hotel rooms.
Infinity plus one is infinity. Once you slip into the realm of the infinite, the rules of arithmetic are not the same anymore. The rules of transfinite arithmetic take over.
Infinity times 2
There are many other infinite sets whose sizes are interesting and surprising to compare. Consider the set of all even numbers {2,4,6,...} Is it half as big as the set of all counting numbers? We can
try putting their members in one-to-one correspondence with each other to try and find out:
We won't run out of even numbers before we run out of counting numbers or vice versa, so the members of the two sets can be placed in one-to-one correspondence with each other. This means that they
are the same size. There are as many even numbers as there are counting numbers. Infinity times two is infinity. Infinity divided by two is infinity. These are the kinds of conclusions which caused
Georg Cantor to say, "I see it, but I do not believe it."
Infinity times Infinity
What about the set of all rational numbers--the numbers that we can express as fractions: {1/2, 3/4, 5/6,...}
Part of the set of rational numbers consists of {1/1, 1/2, 1/3, 1/4,...}. These are all the fractions whose numerator is one. There are an infinite number of those. Then there are {2/1, 2/2, 2/3, 2/
4...} These are all the fractions whose numerators are 2--another infinite set. We could make infinitely many such lists, each one with infinitely many rational numbers on it. So to figure out how
many rational numbers there are, all we would have to do would be multiply infinity times infinity. Certainly that must be more than infinity.
We can't just say that it seems like there are more rational numbers than counting numbers, we have to prove that there are. If we want to say that infinity times infinity is bigger than infinity,
then we have to show how the set with infinity-times-infinity members (the rational numbers) cannot be put into a one-to-one correspondence with the set that has an infinite number of members (the
counting numbers).
When the all of the busses pulled up to the Hotel Infinity, fully loaded, with everyone on them needing a room, it was as though all of the rational numbers showed up to a dance where all of the
counting numbers were, and everyone started looking for a partner. If you imagine each person on the bus as having a bus number and a seat number, you can list the people on the first bus as {1/1, 1/
2, 1/3,...} the people on the second bus would be listed as {2/1, 2/2, 2/3...} in the story, George had an idea for finding a room for every person on the busses. If his idea works, you can pair up
every rational number with a counting number. Could it be true that infinity times infinity is infinity?
Perhaps, thought Cantor, once you start dealing with infinities, everything is the same size. Oddly enough, this did not turn out to be the case. Cantor developed an entire theory of transfinite
arithmetic, the arithmetic of numbers beyond infinity. The results are surprising. Although the sizes of the infinite sets of counting numbers, even numbers, odd numbers, square numbers, etc., are
the same, there are other sets, the set of numbers that can be expressed as decimals, for instance, that are larger. Cantor's work revealed that there are hierarchies of ever-larger infinities. The
largest one is called the Continuum.
Some mathematicians who lived at the end of the 19th century (the same time as Cantor lived) did not want to accept his work at all. The fact that his results were so paradoxical was not the problem
so much as the fact that he considered infinite sets at all. At that time, some mathematicians held that mathematics could only consider objects that could be constructed directly from the counting
numbers. You can't list all the elements in an infinite set, they said, so anything that you say about infinite sets is not mathematics. The most powerful of these mathematicians was Leopold
Kronecker who even developed a theory of numbers that did not include any negative numbers.
Although Kronecker did not persuade very many of his contemporaries to abandon all conclusions that relied on the existence of negative numbers, Cantor's work was so revolutionary that Kronecker's
argument that it "went too far" seemed plausible. Kronecker was a member of the editorial boards of the important mathematical journals of his day, and he used his influence to prevent much of
Cantor's work from being published in his lifetime. Cantor did not know at the time of his death, that not only would his ideas prevail, but that they would shape the course of 20th century | {"url":"http://www.ccs3.lanl.gov/mega-math/workbk/infinity/inbkgd.html","timestamp":"2014-04-19T02:09:05Z","content_type":null,"content_length":"12380","record_id":"<urn:uuid:1e13f8d5-7cb5-47d3-a83a-e7ab64dc0f20>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
WMAP 5-Year Results Released
It doesn’t seem like all that long ago that we were enthusing about the results from the first three years of data from the Wilkinson Microwave Anisotropy Probe satellite. Now the team has put out an
impressive series of papers discussing the results of the first five years of data. Here is what the CMB looks like, with galaxy and foregrounds and monopole and dipole subtracted, from Ned Wright’s
Cosmology Tutorial:
And here is one version of the angular power spectrum, taken from the Dunkley et al. paper. I like this one because it shows the individual points that get binned to create the spectrum you usually
see. (Click for larger version.)
The headline two years ago was “Cosmology Makes Sense.” (That was my headline, anyway — others were not quite as accurate.) This continues to be true — the biggest piece of news isn’t that the
results have overturned any foundations, but that the concordance model with dark matter, dark energy, and ordinary matter continues to work. The WMAP folks have produced an elaborate cosmological
parameters table that runs the numbers for different sets of assumptions (with and without spatial curvature, running spectral index, etc), and for different sets of data (not just WMAP but also
supernovae, lensing, etc). Everything is basically consistent with a flat universe comprised of 72% vacuum energy, 23% dark matter, and 5% ordinary matter. The perturbations are close to scale-free,
but still seem to be a little larger on long wavelengths than shorter ones (0.014 < 1-n[s] < 0.067 at 95% confidence). Probably the most fun result is that there is, for the first time, evidence from
the CMB that neutrinos exist! Good to know.
My personal favorite was the constraint in the Komatsu et al. paper on parity-violating birefringence that would rotate CMB polarization. I was in on the ground floor where birefringence is
concerned, so I’m sentimentally attached to it. But it’s also a signature of some very natural quintessence models, so this helps constrain the physics of dark energy as well.
Congratulations to the WMAP team, who have done a great job in establishing some of the pillars of contemporary cosmology — it’s historic stuff.
The Komatsu et al. paper on parity-violating birefringence that would rotate CMB polarization suggests that the P-violating birefringence occurs in domains similar to magnetization domains for a
temperature below the Curie point. This might mean the inflaton settles into different Mexican hat potential configurations or with different vevs.
Lawrence B. Crowell
@ Mr. Crowell:
Have you considered that the stochastic nature of the parametrically-driven electron birefrigence would circumvent the dynamic Curie threshold and instead oscillate unstably until (pi/d^2) – 1
converges on infinity, thus causing the universe to implode?
So “very natural quintessence models” isn’t a contradiction in terms?
• http://lablemminglounge.blogspot.com/
Can somebody please link an explanation for how the universe can be both flat and accelerating in its expansion? I don’t quite understand that.
• http://www.davidnataf.com
Lab Lemming,
You may be confusing terminology, why can’t a flat universe accelerate?
Imagine being a 3dimensional God looking at a 2d sheet of paper universe… there’s no reason that FLAT sheet of paper can’t just expand.
If you look at Friedmann’s equations here: http://upload.wikimedia.org/math/0/e/1/0e14ece5bb6fddb797a9d3c62fc6473d.png
You see the second derivative of the scaling factor; for an accelerating universe that term needs to be positive. For that to happen, lambda needs to be big enough to overcome the combined
negative contribution of the gravity and pressure of matter and radiation.
Lab Lemming: “flat” means that spatial slices are flat, not that spacetime is flat.
• Pingback: Michael Nielsen » WMAP 5-year results: live lecture
A cosmology can be spatially flat, but the spacetime curved. The spacetime would be flat if the spatial surface and its evolute which foliate the spacetime are constant with no expansion. Some
care must be given, for a spatial surface can be arbitrarily chosen. One can easily embed any spatial surface of choice, which is similar to a gauge condition in electromagnetism.
One of the strange consequences of an expanding univerise is that energy is not something which can be globally defined. The energy balance of the universe is a sort of survey based on local
measurements. The gravitational mass-energy density of the universe is
rho_t~=~rho~+~P_z~+~P_y~+~P_z~-~Lambda/(4pi G),
for [itex]rho~+~P_z~+~P_y~+~P_z[/itex] due to luminous and dark matter. The dark matter component is some 85% of this portion. The comsological constant term [itex]Lambda[/itex] is modelled as
the result of quantum vacuum energy in the universe with
Lambda~=~(8pi G/c^2)(rho_{vac}~+~3P_{vac})
This term appears to have the condition that [itex]P_{vac}~=~-rho_{vac}[/itex] which is the source of this “negative pressure” that causes the universe to accelerate outwards. The cosmological
constant defines an event horizon at [itex]r~=~sqrt{3/ Lambda}[/itex], where the finite value of the distance ~ [itex]10^{10}[/itex] ly.
As a caveat, there is potentially a level of abuse here. Assigning the cosmological constant to a source, as I did above, for the cosmological constant is a factor of an Einstein space(time),
where the Ricci curvature is proportional to the metric. This is usually thought of as a purely geometric property of the spacetime and not something due to a source of gravity.
It is tempting to think of the universe as having a net zero mass energy. How to get [itex]E~=~0[/itex] is problematic. The metric terms in the deSitter cosmology are [itex]g_{ii}~=~exp(sqrt
{Lambda/3}t)[/itex], and it is not possible to find a [itex]K_t[/itex] so that there is a stationary condition [itex]{cal L}_{K_t}g~=~0[/itex], for [itex]{cal L}_{K_t}[/itex] a Lie derivative.
This Lie derivative is defined according to brackets so that
{cal L}_{K_t}g~=~{cal L}_{K_t}g(X,~Y)~=~g([K_t,~X], Y)~+~g(X, [K_t,~Y]),
where the brackets [itex][K_t,~X][/itex] are, for [itex]K_t~=~Apartial/partial t[/itex], not zero because the vectors X are functions of time. So there is no involutary system which defines a
conservation of energy on the entire spacetime.
Now, the cosmology will in time expand “infinitely,” and the cosmological horizon will also decay in a manner similar to the decay of black holes. Thus the horizon will receed to infinity and the
evolution of the universe has as its attractor point a Minkowski spacetime that is an empty flat void with no mass-energy. So in that sense, as defined by the attractor point, the universe has
zero net content, and the evolution of the universe from the vacuum, or some set of inequivalent vacua, to this final attractor point is a way that that “nothing” is reshuffled into another form
of “nothing.” From a quantum gravitational perspective the Wheeler DeWitt equation [itex]HPsi([g])~=~0[/itex], means that time is not something which is also defined explicitely, but is only a
bookkeeping method the analyst imposes to “organize” spatial surfaces, or the wave functional where spatial surfaces are configuration variables.
Lawrence B. Crowell
Any chance someone can use an analogy to help explain or put in terms that is light on the math?
EDIT: (I submitted the previous comment prematurely by mistake) … Any chance someone can use an analogy to help explain or put in terms that is light on the math … with regard to the universe
being flat? I try to imagine flatness in the way we see faraway galaxies appear as two-dimensional. Is that accurate?
Well the comic xkcd has been on top of things for a while now. They even have a shirt(7th one down) for the predecessor to the WMAP , so maybe he will put an shirt together for the updated
• http://blogs.discovermagazine.com/cosmicvariance/sean/
CVF, general relativity says that spacetime is curved, and that curvature is what we perceive as gravity. In general the curvature of spacetime is an enormously complicated thing, as we have to
keep track of (for example) how different parallel lines diverge or converge in all sorts of directions.
But in cosmology, where we start by assuming that matter is distributed uniformly through space (but expanding with time), things simplify a great deal. The curvature of spacetime comes uniquely
from two contributions: the curvature of space all by itself, and the fact that space is expanding as a function of time. A certain density of matter spread uniformly through the universe implies
a certain total amount of spacetime curvature, but that can be distributed in various ways between spatial curvature and the expansion rate. So you can have a “flat universe,” which means zero
spatial curvature, even though spacetime overall is curved because of the expansion.
More at Ned Wright’s cosmology tutorial.
If “The curvature of spacetime comes uniquely from two contributions: the curvature of space all by itself, and the fact that space is expanding as a function of time.” Than wouldn’t the speed of
light have to increase proportionally, since C is the most stable measure of spacetime? Lightspeed is the ruler we use to measure these distances, so if space is expanding, wouldn’t this ruler be
stretched as well? If not, than wouldn’t it be increasing distance in stable space, not the space itself expanding?
• http://www.davidnataf.com
What do you think are the prospects for very informative, qualitatively new results to come from the Planck telescope? Will there simply be more precision of existing data, or do you think
they’ll be able to elucidate previous foggy regimes of the parameter space?
• http://blogs.discovermagazine.com/cosmicvariance/sean/
David, I’m not an expert, but I’m optimistic that Planck will teach us exciting things. So far the CMB has helped us pin down the concordance model, but there is always the possibility that a
real surprise is lurking around the corner. It will certainly constrain theories of modified gravity, for example. We’ll have to wait to see.
• Pingback: Cosmology News « Twisted One 151’s Weblog
CVF: Trying using a balloon. When no air is blown into the balloon, it’s flat(2-d). Yet when the first breathe is pushed into the balloon, it expands(3-d). So, using this as an illustration, now
picture the universe being the balloon. There is your common english, with light of math, explaination to the expansion of the universe.
Thanks for the post Sean. Is there any way that the estimate (from the WMAP website) that neutrinos constsituted 10 % of the mass and energy density at the time of recombination could lead to
either an upper or a lower bound on the mass of individual neutrinos?
• http://blogs.discovermagazine.com/cosmicvariance/sean/
What you actually get is a limit on the sum of the neutrino masses; you can’t tell which individual ones are massive etc. From here, the limit on the sum of the masses is 1.4 eV at 95%
Could someone explain how they get the error bars on that power spectrum plot? It looks like the points are distributed much more widely than the error bars show.
• http://blogs.discovermagazine.com/cosmicvariance/sean/
Brian, it’s just the standard error of the mean:
The more data points you have, the broader the distribution looks to your eye, but the better-determined the mean of the distribution actually is.
• http://deleted
David (et al.) –
I would say from conversations with many people that Planck has been so long in development that by the time it does get up and running many of its results will be anticipated by other
experiments (for example, the ground based ones.) Other than primordial gravitational waves (PGWs), I don’t find much to get excited about relative to other things coming online — which will
mostly be telling us about the later universe.
I don’t find PGWs particularly exciting — they are supposedly “pinning down” a free parameter for inflation, but let’s be honest here: there are many varieties of inflation, too many, and did the
WMAP result “ruling out” single-field phi^4 really change anything in the field? Not really. It was cool and fun, and worthy of celebration, but it had not enough impact on the proliferation of
inflation models, and did little to provoke theorists and observers in new directions.
Other than that, perhaps the search for non-Gaussianity will be the biggest result from Planck. But it’s hard to square with the massive amount of resources that have been expended. Could we have
built a “non-Gaussian probe” better faster cheaper?
Definitely, Planck’s results will not have the massive, stunning impact that WMAP’s no-BS settling of the concordance model (taking “concordance” here to just mean a negative pressure fluid and
not a pure cosmological constant) did. (By “settle” here, I don’t mean “prove” — I mean “provide the baseline against which new models must improve upon.”)
Anyway, as I’ve been typing this I’ve mellowed a little on Planck. Perhaps there will be something.
• Pingback: MAP 5-Year Results Released « The Bee Buzz
I guess I missed this one:
Adam on Mar 5th, 2008 at 10:45 pm
@ Mr. Crowell:
Have you considered that the stochastic nature of the parametrically-driven electron birefrigence would circumvent the dynamic Curie threshold and instead oscillate unstably until (pi/d^2) – 1
converges on infinity, thus causing the universe to implode?
I am trying to figure out what you are meaning here. The result seems to point to some analogy with the T
Retransmission, I forgot this does not like the carrot symbols
I am trying to figure out what you are meaning here. The result seems to point to some analogy with the T less than T_c magnetization of a metal in domains. Though some care must be given in that
our observations are not across the whole of space, but are along projective rays on a past light cone. This birefringence result is an anisotropy and not an inhomogeneity result. But it does
suggest that a CP violating phase in the PMNS (CKM) matrix might assume local values, which is suggestive of physics similar to more down to Earth physics of symmetry breaking and Landau-Ginsburg
type of potentials. The WZW action in supersymmetry is similar to this, and this might be a manifestation or “fossil” signature of such physics in the early universe.
Lawrence B. Crowell
Did you read about the latest measurements of spatial curvature? It is now firmly established that the radius of our Universe is greater than the size of our observable universe a fact which will
make it even harder for advocates of non-trivial topology to hang out onto their “small universe” hypothesis….
There are a number of reasons to suspect that the spatial manifold of the universe is simply connected. There are some energy conditions by Penrose and Hawking which point to why space, at least
classically, should be simply connected. There is also the work by Gregory Perleman on the Poincare conjecture. This involves Hamilton’s Ricci flow equations, which describe how a space will
evolve so it has some minimal energy configuration. Think of a balloon you have twisted up (though not tied up) and you then let it go. The balloon snaps back to its round shape. Perleman’s work
with the Ricci flow equations indicate that the minimal configuration of a closed three dimensional space is a sphere. So I suspect that the universe does not have worm holes or dodecahedral
portholes and the like.
Lawrence B. Crowell
Secret American Planck Basher,
Well, the free parameter in question is a pretty important one, in my view: the energy scale of inflation. Pinning down the energy scale of inflation is a pretty big step in making any
significant statements about what inflation actually is. There are also measurements of the CMB out to smaller scales, which will provide further, much more significant constraints on various
other inflationary parameters. It’s also a big help to make use of the same instrument for both the large angular scales and the small angular scales, as there are always uncertainties that crop
up when combining different experiments. Then there’s the fact that additional sky coverage improves measurements at all angular scales, and so Planck will be able to do better out to somewhere
around 10-20 arcminutes or so than any ground-based instrument can possibly do.
I’ve been trying to access this site for a few days. It seems to work for everybody in other countries, except for me, no matter what ISP I used. Now, I used a foreign proxy, and finally accessed
it. Could it be that NASA is blocked Brazil to access this site?
• http://deleted
Energy scale of inflation. Is it really “important”? Yes, in the context of inflation models, but I don’t see it as having much of an impact beyond the “conclusions” section of Yet Another
Inflation Paper. It will kill 60% (? — high side) of models on a good day, but it won’t open up vistas. Once upon a time there was the consistency relation, a lot harder to check, but today even
that’s old hat.
Measurements to smaller scales. Once past 10 arcminutes or so, you’re in the damping tail and will learn nothing more of the early universe. Getting l of 700 won’t really help much with l of 3000
except you might know the input cosmological parameters a bit better.
As for the other benefits you mention. Yes, of course there will be a lot of data. I just don’t see it as really changing things, pushing the field, in new directions. And I say that as a guy
with Kolb and Tuner on his desk and a couple of early universe ideas out in the literature and in my head. I wish there was a measurement from Planck that I’d be truly excited about, but right
now it’s a grab bag of things that other places will cover as well.
Jason Dick on Mar 10th, 2008 at 1:19 pm
Secret American Planck Basher,
Well, the free parameter in question is a pretty important one, in my view: the energy scale of inflation. Pinning down the energy scale of inflation is a pretty big step in making any
significant statements about what inflation actually is.
In cosmology the time dependency of the metric coefficients means there is no Killing vector which as an isometry defines an energy conservation. As strange as it might sound, conservation of
energy in cosmology is not definable. The only conservation law we really have is the continuity equation
plus conservation based what ever spatial killing vectors might exist.
Lawrence B. Crowell
According to Inflation theory, it is many, many times the size of the visible universe. To the point that, from our perspective, it is just about spatially infinite. Curvature seems mostly a
function of the time dimension, in that the analysis of redshift and CMBR says it is only13.73 billion years old.
“Cosmic inflation has the important effect of smoothing out inhomogeneities, anisotropies and the curvature of space.”
Basically it explains how the factors which suggest an infinite universe can exist in a finite model.
If inflation expanded a small spherical cosmology into a flat spacetime, this spacetime may well be of infinite extent. A finite cosmology, or a three dimensional ball with a two dimensional
boundary, requires that boundary conditions exist there. This is somewhat problematic. However, even for an infinite dimensional R^3 space the topology change likely means that some topological
information is either lost (destroyed) or is transformed into some other form. What that is is difficult to say, maybe a ‘t Hooft-Polyakov monopole.
Lawrence B. Crowell
Doesn’t the Uncertainty Principle essentially show that some information is destroyed in the very process of measuring other information?
Since I view time as a consequence of motion, rather than the dimensional basis for it, I think monumental amounts of information are destroyed and replaced every moment. I would posit the
current credit meltdown, as well as the rest of history, as proof.
Quantum mechanics is the unitary evolution of a wave function, which completely preserves quantum information (Q-bits). Quantum fluctuations are really a manifestation of measurement, or some
decherent process, when a quantum system couples to some unspecified set of states. These states can be in a measurement apparatus. Yet pure quantum mechanics is perfectly time reversal invariant
and preserves all the information in the initial conditions of a quantum system.
Lawrence B. Crowell
If time is a measurement, than what physically exists is perfectly conserved, as it passes from one macro-state to the next. It is only these macro states that are created and consumed.
• Pingback: The Lopsided Universe | Cosmic Variance
• Pingback: WMAP 5-YEARS
• Pingback: A Special Place in the Universe | Cosmic Variance
• Pingback: Dipolo CMbr: acelerando a través del Universo | Imagen astronomía diaria - Observatorio | {"url":"http://blogs.discovermagazine.com/cosmicvariance/2008/03/05/wmap-5-year-results-released/","timestamp":"2014-04-18T01:48:06Z","content_type":null,"content_length":"139409","record_id":"<urn:uuid:bb4debcd-2076-43a3-8697-0e6921280f84>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Tetrahedron
The tetrahedron has 4 faces, 4 vertices, and 6 edges. Each face is an equilateral triangle. Three faces meet at each vertex.
Begin with a tetrahedron of edge length s. Its faces are equilateral triangles. The length of their sides is s, and the measure of their interior angles is π/3.
First, find the area of each triangular face. Multiply that by the number of faces to get the total surface area, A.
The dihedral angle formula can be applied here because three faces meet at each vertex. All of the faces are equilateral triangles, so let α = β = γ = π/3.
Find the apothem of a face, and use it in the calculations for the inradius and circumradius.
Now, the volume formula.
Here is another way to work out the volume. The tetrahedron is also a pyramid, and its height is the sum of the inradius and the circumradius. Use that fact and apply the pyramid volume formula.
Redundant calculations like this are a good way of checking the results.
Other Properties
The tetrahedron is its own dual, meaning that if the centers of the adjacent faces are connected with line segments, the resulting figure is another tetrahedron. The smaller tetrahedron shown here
has one-ninth the volume of the larger one, but it is not possible to assemble nine tetrahedrons into one.
The tetrahedron has 24 symmetries.
Tetrahedra do not pack space, but it is possible to pack space by combining tetrahedra and octahedra. This property is explained in more depth on the octahedron page.
The tetrahedron has no parallel faces, no parallel edges, and no diametrically opposite vertices. All of these properties are unique among the Platonic solids.
When the midpoints of the adjacent edges of a tetrahedron are connected, an octahedron is formed.
A cross-section of a tetrahedron can be an equilateral triangle or a square.
A planar projection of a tetrahedron can be an equilateral triangle or a square.
Last update: November 2, 2011 ... Paul Kunkel whistling@whistleralley.com
For email to reach me, the word geometry must appear in the body of the message. | {"url":"http://whistleralley.com/polyhedra/tetrahedron.htm","timestamp":"2014-04-17T01:11:00Z","content_type":null,"content_length":"4776","record_id":"<urn:uuid:bec66a2e-aff7-4ec2-9b3d-8267714493a4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
Resampling (statistics)
34,117pages on
this wiki
Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social |
Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology |
Statistics: Scientific method · Research methods · Experimental design · Undergraduate statistics courses · Statistical tests · Game theory · Decision theory
Resampling is a term used in statistics to describe a variety of methods for computing summary statistics using subsets of available data (jackknife), drawing randomly with replacement from a set of
data points (bootstrapping), or switching labels on data points when performing significance tests (permutation test, also called exact test, randomization test, or re-randomization test).
Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by resampling with replacement from the original sample, most often with the purpose of deriving robust
estimates of standard errors and confidence intervals of a population parameter, for example a mean, median, proportion, odds ratio, correlation coefficient, regression coefficient etc. It may also
be used for the construction of hypothesis tests. It is often used as a robust alternative to inference based on parametric assumption when those assumptions are in doubt, or where parametric
inference is impossible or require very complicated formulas for the calculation of standard errors. See also jackknife.
See also particle filter for the general theory of Sequential Monte Carlo methods, as well as details on some common implementations.
The jackknife is a statistical method first thought of and applied by Richard von Mises. It is related to bootstrapping in the sense that both methods are used both to estimate and compensate for
bias and to derive robust estimates of standard errors and confidence intervals. Both methods have in common that the variability of a statistic is estimated from the variability within a sample,
rather than from parametric assumptions. Jackknife is a less general technique than the bootstrap, and it explores the sample variation in a different way from the bootstrap. Jackknifed statistics
are developed by systematically dropping out subsets of data one at a time and assessing the resulting variation in the studied parameter. (Mooney & Duval).
Jackknife and bootstrap may in many situations be used to obtain similar results. A difference between them is that when used to obtain an estimate of the standard error of a statistic, bootstrapping
will give slightly different results when the process is repeated on the same data, whereas jackknife will give exactly the same result each time. A situation where jackknife is regarded as the
preferred alternative is the analysis of data from complex sampling schemes, for example multi-stage sampling with varying sampling weights.
Permutation test
Preamble: All statistical tests use observations from a data set to compute a test statistic that characterises a hypothesis of interest. This test statistic is then compared to an expected reference
distribution, to assess the probability of it occurring randomly under a null hypothesis. If the observed probability, the p-value, is small (a value of 1/20 or less is often used in medical,
econometric or social science applications) then the null hypothesis is rejected and a complimentary, alternative hypothesis is accepted.
A permutation test - a particular type of statistical significance test and sometimes called a randomization test, re-randomization test, or an exact test - is a statistical test in which a reference
distribution is obtained by permuting the observed data points across all possible outcomes, given a set of conditions consistent with the null hypothesis. The theory has evolved from the works of
R.A. Fisher and E.J.G. Pitman in the 1930s.
Permutation tests form a branch of non-parametric statistics. In contrast to permutation tests, the reference distributions for many popular ‘classical’ statistical tests, such as the t-test, the
z-test, and the chi-squared test, are obtained from theoretical probability distributions. Many researchers believe this invalidates or, at least, critically weakens their use because the assumptions
relating the theoretical distributions to the empirically obtained test statistics may not be valid. The extent to which this is true, in various real-world settings, is an area of active statistical
investigation. Researchers may be forced to make these assumptions in some situations because there is no other alternative, and a non-optimal statistical test is usually considered better than none
at all.
Fisher's exact test is a commonly used permutation test for evaluating the association between two dichotomous variables and contrasts with Pearson's chi-square test which can be used for the same
purpose. When sample sizes are small the chi-squared test statistic can no longer be accurately compared against the chi-square reference distribution and the use of Fisher’s exact test becomes most
All parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific
permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. It is for example possible in this manner to construct a permutation
t-test, a permutation chi-squared test of association, a permutation two-sample Kolmogorov-Smirnov test and so on.
Many parametric tests define the test statistic as a ratio t/s, where t measures the deviation of an observable parameter from it's expected value when the null hypothesis is true, and s is an
estimate of the standard error of t. A permutation test need not in general take into account the value of s, as this is a fixed constant for all permutations of a sample. This is an advantage when
constructing new permutation tests, as it will not be necessary to find an expression for the standard error of the test statistic. Finding the standard error (or variance) of a new test statistic is
often the trickiest part when developing new significance tests, requiring deep mathematical knowledge. So the construction of a permutation test rather than a parametric test to solve a certain
problem may be regard as a way of replacing mathematical skill with raw computing power. The most commonly used non-parametric tests are in their original form defined as permutation tests on ranks;
these include for example the Mann-Whitney U test and Spearman’s rank correlation test. Pitman’s original formulation (in 1937) of the general permutation test of association between two variables
describe a general test procedure that when applied to two numeric variables in linear scales gives a permutation test of Pearson's correlation coefficient, when applied to ranked data points gives
Spearman's rank correlation test, when applied to one numeric variable and one dichotomous gives a permutation t-test, when applied to one ranked variable and one dichotomous gives Mann-Whitney’s
U-test (also known as the Wilcoxon rank sum test) and when applied to two dichotomous variables gives Fisher's exact test. In general the most important advantage of permutation tests is that the
results are reliable also for small samples and when data strongly violates the distributional assumptions of the corresponding parametric test. For larger sample sizes the central limit theorem will
in most situations assure that the results obtained from parametric tests are very similar to the results from the related permutation test, so it may be concluded that even when the parametric
assumptions aren't meet, parametric tests are often good approximations to the corresponding ‘exact’ permutation test, provided the sample is large enough.
Prior to the 1980s the burden of creating the reference distribution was overwhelming except for data sets with small sample sizes. However, since the 1980s, the confluence of cheap fast computers
and the development of new sophisticated path algorithms that are applicable in special situations, made the application of permutation test methods practical for a wide range of problems, and
initiated the addition of exact-test options in the main statistical software packages and the appearance of specialized software for performing a wide range of uni- and multi-variable exact tests
and computing test-based ‘exact’ confidence intervals. During the 1990s a totally general short-cut method for finding the reference distribution was introduced, the Monte Carlo method. Even with the
most advanced computer today, the task of performing a general permutation test on continuous data is still overwhelming unless the sample size is very small. The number of permutations = N! for data
with no ties. For N=10 the number of permutations = 3628800.
For N=20 it is 2.4E18 and for N=50 it is 3.0E64.
Therefore it was an important breakthrough in the area of applied statistics when it was realised that by using Monte Carlo sampling, i.e. taking a small (relative to the total number of
permutations) number of random samples with replacement from the permutation distribution, it was possible to accurately estimate the reference distribution of any permutation test on any data. Small
sample in this case meaning at least 10,000.
Limitations of tests based on the permutation principle:
There are two important assumptions behind a permutation test - that the observations are independent and that they are exchangeable under the null hypothesis. An important consequence of the
exchangeability assumption is that tests of difference in location (like a permutation t-test) require equal variance, otherwise the observations are not exchangeable. In this respect the permutation
t-test shares the same weakness as the classical Student’s t-test. Another weakness of permutation tests is that as they are returning a p-value as the only outcome of a statistical analysis, which
means that they do not satisfy the common requirement today that results should be presented as confidence intervals of the parameter of interest, and not (only) as p-values. However, there are
methods for calculating ‘exact’ confidence intervals from the inverse of a permutation test.
See also
Statistical bootstrapping
• Efron, B. (1979). Bootstrap methods: Another look at the jackknife. The Annals of Statistics, 7, 1-26.
• Efron, B. (1981). Nonparametric estimates of standard error: The jackknife, the bootstrap and other methods. Biometrika, 68, 589-599.
• Efron, B. (1982). The jackknife, the bootstrap, and other resampling plans. Society of Industrial and Applied Mathematics CBMS-NSF Monographs, 38.
• Diaconis, P. & Efron, B. (1983). Computer-intensive methods in statistics. Scientific American, May, 116-130.
• Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York: Chapman & Hall, software.
• Mooney, C Z & Duval, R D (1993). Bootstrapping. A Nonparametric Approach to Statistical Inference. Sage University Paper series on Quantitative Applications in the Social Sciences, 07-095.
Newbury Park, CA: Sage
• Edgington, E. S.(1995). Randomization tests. New York: M. Dekker.
• Davison, A. C. and Hinkley, D. V. (1997): Bootstrap Methods and their Applications, software.
• Simon, J. L. (1997): Resampling: The New Statistics.
• Moore, D. S., G. McCabe, W. Duckworth, and S. Sclove (2003): Bootstrap Methods and Permutation Tests
• Hesterberg, T. C., D. S. Moore, S. Monaghan, A. Clipson, and R. Epstein (2005): Bootstrap Methods and Permutation Tests, software.
Permutation test
Original references:
• Fisher, R.A., The Design of Experiments. New York: Hafner; 1935
• Pitman, E. J. G., Significance tests which may be applied to samples from any population. Royal Statistical Society Supplement, 1937; 4: 119-130 and 225-32 (parts I and II).
• Pitman, E. J. G., Significance tests which may be applied to samples from any population. Part III. The analysis of variance test. Biometrika, 1938; 29: 322-35.
Modern references:
• Edgington, E. S. Randomization tests, 3rd ed. New York: Marcel-Dekker. 1995.
• Good, Phillip I. Permutation Tests 2nd ed. Springer. 2000. ISBN: 038798898X
• Lunneborg, Cliff. Data Analysis by Resampling. Duxbury Press. 1999. ISBN: 0534221106
• Welch, W. J., Construction of permutation tests, Journal of American Statistical Association, 85, 693-698, 1990
Computational Methods:
• Mehta, C. R. and Patel, N. R. (1983). A network algorithm for perfoming Fisher’s exact test in r x c contingency tables, J. Amer. Statist. Assoc. 78(382), 427–434.
• Metha, C. R., Patel, N. R. and Senchaudhuri, P. (1988). Importance sampling for estimating exact probabilities in permutational inference, J. Am. Statist. Assoc. 83(404), 999–1005.
External links | {"url":"http://psychology.wikia.com/wiki/Resampling_(statistics)?direction=prev&oldid=84396","timestamp":"2014-04-17T19:00:19Z","content_type":null,"content_length":"75000","record_id":"<urn:uuid:03c2e7a9-7062-4897-bc24-3554410e8611>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
Eric The Sheep
September 19th, 2011 by Dan Meyer
Last Wednesday at UC Berkeley in Alan Schoenfeld's class on mathematical thinking and problem solving, Kim Seashore wrote the following paragraph on the board:
Eric is standing at the end of a line of fifty sheep, waiting to be sheared. He is hot and impatient. Each time a sheep is sheared and Becky, the sheep shearer, turns to put the wool away, Eric
sneaks around the next two sheep in line.
"What question am I going to ask next?" Seashore asked us. We thought for a moment and then shared out responses. Here are a few:
• How many sheep will be behind Eric when it's finally his turn to be sheared?
• How many sheep were sheared before Eric?
• How many kilograms of wool will the sheep yield?
• How many sweaters can you make out of them?
• What's the significance of skipping two? Why two instead of three?
• Will the other sheep get mad?
• What if the 49th sheep had the same idea after seeing Eric skip ten more sheep and started skipping three every time? Who gets sheared first?
"Great," Seashore said. Then she had us categorize those questions:
• Which can we answer?
• Which can't we answer?
• Which need more information to be answered?
Then she asked us to work for awhile on a question that interested us and was answerable. One person took up "how much wool?" and she asked him to be explicit about his assumptions. After ten minutes
we grouped ourselves and explained our work to other people.
A Few Notes On This Scene
• "What question am I going to ask next?" isn't the same question as "What question interests you here?"
• Why fifty sheep? How was that number chosen? Fifty sheep was short enough that some students determined how long Eric would wait to be sheared by simulating the entire problem. What is gained or
lost by describing a line of 1,000 sheep?
• Asking students to generate their own questions is risky. Seashore encouraged us to pursue our every whim even though the "kilograms of wool" question was going to involve very different
mathematical thinking than any of the others. I don't know how she planned to reconcile that difference. ¶ My approach is to sample the room for questions and take +1′s for each. (ie. "Is anybody
else interested in Sam's question?") This reveals a hierarchy of student interest which we handle in order. ¶ Meanwhile, I am in contact with teachers who ask their students to generate questions
only to coerce them down to the one they (the teachers) originally wanted to pursue. This interaction will only pay off negative dividends, as far as I can tell. These classes would be much
improved if the teacher would simply ask a concise question that she knew in advance would be of some general interest to her students. Most questions asked in math class are neither concise nor
of much interest to the students so we're already way ahead of the game.
• Abstraction was nine tenths of the work. In answering, "how long will it take Eric to get sheared?" I had to represent the problem with variables and build a model out of them. This was, by far,
the hardest work of the problem. Moreover, no one I spoke with chose the same independent variable that I did.
• Your textbook would abstract the problem for your students.
Be less helpful.
2011 Sep 20: Bowen Kerins locates the original text of the problem, which mercifully leaves the hard work of abstraction to the student.
Featured Comment
A line of 50 sheep makes me wonder why I would ever have to use variables to represent the problem.
A line of 1,000 sheep makes me wish I had an easier problem – one I could actually act out.
What number of sheep will motivate me to model a simpler case and look for patterns? What number of sheep will force me to generalize and move from concrete models to abstract thinking, without
stepping over the boundaries of the story?
Shearing a line of 1,000 sheep? Eric will be waiting a very long time.
13 Responses to “Eric The Sheep”
1. on 19 Sep 2011 at 2:22 pm1
A line of 50 sheep makes me wonder why I would ever have to use variables to represent the problem.
A line of 1,000 sheep makes me wish I had an easier problem – one I could actually act out.
What number of sheep will motivate me to model a simpler case and look for patterns? What number of sheep will force me to generalize and move from concrete models to abstract thinking, without
stepping over the boundaries of the story?
Shearing a line of 1,000 sheep? Eric will be waiting a very long time.
2. Paul’s right — the number of sheep is pretty tricky to pick here, and definitely part of good problem design here. I would pick 110 or 200 sheep if teaching to a high school audience, while 50 or
80 might be better for an elementary or middle school audience. I think 1,000 sheep could intimidate some students while moving others to the abstraction too quickly (inviting mistakes in
executing the abstraction).
I also feel it’s important to pick a number of sheep that is 1 less than a multiple of 3, which kicks in the other thing I see as important if an algebraic approach is taken. How many students
might answer 16 1/3 to this problem because they’re just solving for x and not contextualizing the solution?
“Eric the Sheep” comes from an Australian curriculum, and this page shows the problem being used in Grades 1, 3, 5, and 7:
Fortunately, this curriculum does NOT abstract the problem for students — the task is presented simply and directly. There are also some nice extensions about rule changes, asking students to
come up with new situations (but in a relatively structured scenario).
The task along with some of its extension problems can also be found in the Algebra component of Annenberg’s “Learning Math”:
3. Thoughts on starting with a small number – I like Bowen’s suggestions for each grade level – then making the follow-up question a larger number? It seems this would be especially helpful in lower
grades where developing algebraic thinking would require a bit more scaffolding, though this approach isn’t necessarily more helpful. Or is it?
4. Dan writes:
I don’t know how she planned to reconcile that difference.
Per our conversation over in my place, I find this to be of particular interest. What was the point of the example lesson? If it was to model learning mathematics through challenging tasks that
admit a variety of approaches, then the plans for reconciling the differences among the various problems being worked on would be of primary importance. It’s going to be hard to have a group of
students learn much from each other’s work if they are struggling to wrap their minds around each other’s problems.
But there may be dozens of different reasons for doing Eric the Sheep with graduate students.
5. I remember when Kim used this problem at least 10 summers ago as part of a Bay Area Math Project summer institute. Hearing that she’s still using it, with a new angle, as part of a totally
different class, makes me realize that I don’t reuse good problems enough. My teaching could evolve while using the same original problem, but posed in new ways, reflecting how I am growing as an
educator. Thanks for posting this…it made for good food for reflective thought for me.
6. on 20 Sep 2011 at 1:34 pm6
luke hodge
From one perspective I can see this problem involving an abstraction – introducing variables and/or an equation to solve/model. From another perspective I think this problem is really about
removing abstraction. We have abstract procedures for dividing numbers with no context, but here we are presented with a concrete reason and meaning of division – it is quicker than subtracting 3
a bunch of times.
I don’t see the benefit in explicitly using a variable(s) or an equation to solve this problem. Viewing this problem as a division problem seems much more natural. Why introduce all that
abstraction if you don’t need it?
Also, judging from the class questions, it appears that there are a few students in the mathematical thinking and problem solving class that are trying desperately to avoid mathematical thinking
and problem solving.
7. on 20 Sep 2011 at 2:27 pm7
I noticed that many of the commentators focused on the number 3, as in “a multiple of 3″ and “subtracting by 3 a bunch of times.” I watched a class of sixth graders tackle this problem as part of
a lesson study recently. The “3-ness” of the problem was not readily to apparent to any of the groups, although it was to one or two students.
Part of the process of mathematizing a problem or context is recognizing the salient mathematical features. In this case, a change of 3 sheep. Because, as Bowens points out, it’s 3 sheep, but the
step nature of the context means that an answer like 16 1/3 can be generated, which makes no sense as an actual answer. What the average student needs, is to recognize a change of 3 first, and
why. And we can’t tell them it’s there.
The generalizations to larger numbers of sheep and the divisibility issue are only relevant if the student can see “3″.
8. on 20 Sep 2011 at 3:33 pm8
luke hodge
morrowmath: I think many students would need at least a subtle hint (suggestion to draw or act it out) to “find” the 3 and most would benefit from a very concrete demonstration of why 3 is
important. How did the teacher react to the student’s difficulties in the class you observed?
9. on 20 Sep 2011 at 4:11 pm9
I don’t think that this is a “difficulty,” but rather part of the reasoning process. One group’s table saw a repeated +2 pattern and represented it this way (the number of sheep jumped by Eric):
The only 3 in this problem is the repeated similar groups of 3 expressions.
This is 6th grade, and they were able to verbally generalize this problem. In discussion, the teacher did ask for conclusions that led to thinking about the changes that happened around a
multiple of three, but the students’ reasoning did not lead there. Sometimes you never know where a lesson or mathematical exploration will lead.
10. on 21 Sep 2011 at 4:26 pm10
luke hodge
morrowmath: Sure, you could call it part of the reasoning process, grappling, etc. But are you saying that these students were able to generalize the problem in the sense of being able to fairly
quickly determine how many sheep are sheared before Eric for large numbers of sheep? I would not have guessed many students could do that without seeing that Eric moves forward three spots each
time a sheep is sheared. How did they do it?
11. on 22 Sep 2011 at 6:19 pm11
Chera G.
This is an interesting math question. I loved how the students were able to formulate their own questions based on what they thought was going to be asked next. I also loved that they were able
to chose the question that best interests them and describe their assumptions. The teachers I know from my own experiences and colleagues do not open up the door for creativity and higher level
thinking. Allowing students to chose their own questions and then solve them requires independent thinking and reasoning that fosters growth. I would love to do something similar in my classroom.
It’s very interesting that no one chose the same variable.
12. What is also also interesting with this problem from the Maths 300 resource bank (http://maths300.esa.edu.au) is that the problem subtly changes if:
• Eric sneaks forward x sheep, then the shearer grabs a sheep
• the shearer grabs a sheep and then Eric sneaks forward x sheep.
Further on Maths 300, I’m interested in seeing how we can use Twitter tags (I use #maths300) to share our pedagogical ‘a-ha’ moments when using lessons from the Maths 300 library.
13. [...] So you made me think of your Eric the sheep post. You had a graduate course, or someone who came and spoke in your graduate program who came and had [...] | {"url":"http://blog.mrmeyer.com/2011/eric-the-sheep/","timestamp":"2014-04-17T07:25:36Z","content_type":null,"content_length":"49147","record_id":"<urn:uuid:e5175f72-1172-404c-aebd-e75f1ca9ed93>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Force Exerted On A 2.00 Kg Ball Is Given By ... | Chegg.com
The force exerted on a 2.00 kg ball is given by F = 64.0t ? 9.60 , where F and t are in SI units. The time the force must act (starting at t = 0) for the speed to change by 14.4 m/s is closest to
Advanced Physics | {"url":"http://www.chegg.com/homework-help/questions-and-answers/force-exerted-200-kg-ball-given-f-640t-960-f-t-si-units-time-force-must-act-starting-t-0-s-q3444964","timestamp":"2014-04-17T04:18:42Z","content_type":null,"content_length":"20333","record_id":"<urn:uuid:c42f0ffe-d576-45e9-be90-0300a86549c6>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Coq Proof Assistant Reference Manual. INRIA
- Journal of Automated Reasoning , 1999
"... Abstract. We survey a substantial body of knowledge about lambda calculus and Pure Type Systems, formally developed in a constructive type theory using the LEGO proof system. On lambda calculus,
we work up to an abstract, simplified, proof of standardization for beta reduction, that does not mention ..."
Cited by 53 (7 self)
Add to MetaCart
Abstract. We survey a substantial body of knowledge about lambda calculus and Pure Type Systems, formally developed in a constructive type theory using the LEGO proof system. On lambda calculus, we
work up to an abstract, simplified, proof of standardization for beta reduction, that does not mention redex positions or residuals. Then we outline the meta theory of Pure Type Systems, leading to
the strengthening lemma. One novelty is our use of named variables for the formalization. Along the way we point out what we feel has been learned about general issues of formalizing mathematics,
emphasizing the search for formal definitions that are convenient for formal proof and convincingly represent the intended informal concepts.
"... When solving machine learning problems, there is currently little automated support for easily experimenting with alternative statistical models or solution strategies. This is because this
activity often requires expertise from several different fields (e.g., statistics, optimization, linear algebr ..."
Add to MetaCart
When solving machine learning problems, there is currently little automated support for easily experimenting with alternative statistical models or solution strategies. This is because this activity
often requires expertise from several different fields (e.g., statistics, optimization, linear algebra), and the level of formalism required for automation is much higher than for a human solving
problems on paper. We present a system toward addressing these issues, which we achieve by (1) formalizing a type theory for probability and optimization, and (2) providing an interactive rewrite
system for applying problem reformulation theorems. Automating solution strategies this way enables not only manual experimentation but also higher-level, automated activities, such as autotuning.
Keywords: machine learning, algorithm derivation, interactive modeling, type theory
"... In this article we describe the design and implementation of a Linux multi-level secure (MLS) file system containing access control lists (ACL). The resulting prototype is called Lisex. We
implemented Lisex from model formally written and verified in Coq. We used abstract data types (ADT) to impleme ..."
Add to MetaCart
In this article we describe the design and implementation of a Linux multi-level secure (MLS) file system containing access control lists (ACL). The resulting prototype is called Lisex. We
implemented Lisex from model formally written and verified in Coq. We used abstract data types (ADT) to implement some data structures. Hence, we show the methodology that we have applied to program
from formal specifications using ADTs. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1196580","timestamp":"2014-04-18T00:11:42Z","content_type":null,"content_length":"17291","record_id":"<urn:uuid:30ec7651-4cb1-44d1-ae42-20077c822914>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Notes from Overground
At two periods of my life, I commuted from Oxford to London: in 1974–1976, when I worked at Bedford College, and 1986–1997, at my present place. In retrospect, it is not clear how I survived it. The
train journey is at least an hour each way, then add 25 minutes to cycle to the station in Oxford, 25 minutes to walk to Regents Park (longer to get the tube to Stepney Green), and you have four
hours a day spent commuting. The only thing that made it possible was the fact that two of those four hours, on the train, were uninterrupted time when I could get on with work. (This was before the
days of mobile phones.) Indeed, I got a lot of writing done on the train!
In the first of the two spells, I was just beginning to make my first tentative steps from the finite to the infinite. Inspired by some lectures by Graham Higman, I considered the class of
permutation groups G, acting on a set Ω, which have the property that the induced action on Ω^n has only finitely many orbits for every natural number n. Of course, then there are things that can be
counted, which as you know is what I like to do.
I needed a convenient name for this class of permutation groups. I knew what sort of name I wanted. If such a group G is a group of automorphisms of a structure M on Ω, then up to isomorphism there
are only a finite number (i.e. a few) of isomorphism types (i.e. shapes) of n-element substructures of Ω for all n. I wanted a word whose etymology was “few shapes”. I thought the Greek word
“oligomorphic” would fit the bill.
At the time, the commuters on the Oxford-to-London trains were a fairly convivial crowd; we knew one another, exchanged a few words on the journey, and threw a party for anyone who gave up commuting.
One of the crowd, Roger Green, was a near neighour of mine in Wolvercote, and a scholar of modern Greek. So I put the problem to him, and he independently came up with “oligomorphic”. Thus was the
name invented.
I learned much later that “oligomorpic” is also a technical term in computer science, where it describes a computer virus which can only exist in a few different forms, as opposed to a “polymorphic”
virus. (I am not at all sure where the border between “few” and “many” is drawn in that application.)
Roger Green was a skilful logodaedalist. One of his coinages was siderodromology, the study of railways. (I puzzled over this, but the OED gives two meanings of “sidero-”: a combining form relating
to the stars, or a combining form relating to iron.)
Roger wrote a remarkable book about commuting, entitled Notes from Overground. Compiled over several years, it records what he saw from the train, who he saw on the train, fantasies such as a
beautiful passage beginning “I, the writer, am the Regional Controller. Words my rolling-stock”, slogans such as “Commuters of the world, unite! You have nothing to lose but your trains”, small
sequences including Commuter’s Calendar, Trackside Industry, Overheard, Headlines, Anti-Kontakion, and a questionable Loose Couplings including all the names of stations from Paddington to Oxford
(via Maidenhead Junction).
He imagined the book as his lifeline while he served his time in Stalag Zug, smuggled past the barrier every day lest They find out what he was up to and increase his sentence. After a while, he
found a role model. The book is a Premeditated Notebook, somewhere between a verbatim diary and a crafted literary production. As Jorge Luis Borges said of Franz Kafka’s work, any literary form
creates its precursors. After trying out various antecedents such as W. H. Auden’s The Dyer’s Hand and Samuel Butler’s Notebooks, Roger reallised that in the case of the P. N., the best precursor was
the remarkable little book The Unquiet Grave by Palinurus, a nom-de-plume of Cyril Connolly who used it to write himself out of a black depression. So Roger used the nom-de-plume Tiresias for his
I have just been re-reading it. Very amusing and brings back old memories. One in particular was of a piece of graffiti just outside Paddington, which read “FAR AWAY IS CLOSE AT HAND IN IMAGES OF
ELSEWHERE”. Roger and I speculated about its origin and meaning several times. When it began to fade, it was re-painted. But eventually the wall on which it was written was demolished.
Roger was a championship-standard solver of The Times crossword, at that time the most literary of the cryptic crosswords in British national dailies. Here is his comment:
I always tackle the crossword puzzle first. Believe I have my priorities right (insofar as anyone who spends at least two hours a day in trains can be supposed to have his priorities right). Could
happily dispense with the rest of the rag, given a crossword to take me at least an hour to solve. [This is disingenuous; I never knew him take anywhere near that long to solve a crossword.] The
crossword alone offers truth – perfect, Euclidean truth, the search for which can gratify and fill the mind to the exclusion of all else. My grounding in the Classics and English literature at last
comes into its own. All that education, which seemed so pointless, was aimed towards this solving of the daily conundrum. Clearly our mentors did not explain this at the time, even when we questioned
them, because they knew we lacked the maturity required of initiates of the mystery.
Interestingly, he later compares the crossword to poetry.
The index of a book is often interesting reading, and Notes from Overground certainly delivers. The index runs from Abdul & Arthur (two characters I borrowed for homework problems when I taught
Probability) to Zen, by way of Adelwhat?, Great Uncle Bulgaria, Noh, and Spook Erection.
Roger comes over as a fairly gloomy person, etiolated by commuting. Did he escape to a more fulfilled life? He did escape, yes, and went to live in Greece; I believe he shared a house on Hydra with
Leonard Cohen at one point. But I don’t think he became a happy, fulfilled individual. There is a moral there!
But re-reading it suggested to me that the Premeditated Notebook does have a little in common with this blog. I always write things in advance, beginning with one or a few ideas, and get them into
reasonable form before posting them; I almost never type directly into the box on the WordPress page. But, on the other hand, I don’t polish the pieces to perfection; at a certain point they tell me
that they are finished, and I leave them there. I jump randomly from one topic to another, but also have short sequences, notably the one on the symmetric group. I really don’t know whether I learned
a trick from Roger or whether this is just my natural style.
But no matter, it seems to do for me something similar to what it did for Roger and his role model Cyril Connolly. If I am really annoyed by something, this is a very good way to get it out of my
system. It really does seem to help me keep my balance.
So I shall probably go on for a while yet.
2 Responses to Notes from Overground
1. And more – Simon Hoggart (no less) wrote about it in the travel section of the Guardian last summer:
This entry was posted in books, geography, the Web and tagged commuting, Cyril Connolly, logodaedaly, oligomorphic, premeditated notebook, Roger Green, siderodromology. Bookmark the permalink. | {"url":"https://cameroncounts.wordpress.com/2011/04/02/notes-from-overground/","timestamp":"2014-04-17T13:15:38Z","content_type":null,"content_length":"73795","record_id":"<urn:uuid:c2b8632d-a64b-41d4-95a2-c9d31dc7cb69>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Commutative Algebra
A set $\Sigma$ is called a partially ordered set or a poset with respect to a relation $\le$ if for all $\sigma, \tau, \rho\in\Sigma$:
1. $\sigma\le\sigma$ (reflexivity)
2. $\sigma\le\tau\le\sigma\implies\sigma=\tau$ (antisymmetry)
3. $\rho\le\sigma\le\tau\implies\rho\le\tau$ (transitivity)
We write $x \lt y$ if $x\le y$ and $x\ne y$.
Let $X$ be some subset of a poset $\Sigma$. Then we say $X$ is a chain or that $X$ is totally ordered if for all $x,y \in X$ we have $x\le y$ or $y\le x$. We say $\sigma\in\Sigma$ is an upper bound
for $X$ if for all $x\in X$ we have $x\le\sigma$. We say $\sigma\in \Sigma$ is maximal if for all $\tau\in\Sigma$ we have $\sigma\le\tau \implies\sigma =\tau$.
Suppose $x\lt y$ and there is no $z$ for which $x\lt z\lt y$ then we say $y$ covers $x$.
A poset can be visualized using a Hasse Diagram.
@@@@ <object data="hasse.svg" width="100px" height="100px" type="image/svg+xml"> </object> @@@@
This is a graph where each node represents an element and two distinct nodes are joined by an edge if and only if one covers the other. Usually the node being covered lies is drawn below the other
1. $\mathbb{Z},\mathbb{Q},\mathbb{R}$ are totally ordered with respect to the usual $\le$ and none of them posess a maximal element.
2. Let $S$ be a set and set $\Sigma = \mathcal{P}(S) = \{T|T\subset S\}$, the power set of $S$. Then $\Sigma$ is a poset with respect to $\subset$ and $S$ is the maximal element.
Set $\Sigma' = \mathcal{P}(S)\setminus\{S\}$. Then $\Sigma'$ is a poset with respect to $\subset$ with many maximal elements in general, namely $S\setminus\{x\}$ for all $x\in S$. 3. Let $R$ be a
ring and set $\Sigma = \{I|I\triangleleft R, I\ne R\}$. Then $\Sigma$ is a poset with respect to $\subset$ and its maximal elements are precisely the maximal ideals of $R$.
For example, for $R=\mathbb{Z}_2$ the Hasse diagram $\Sigma\cup\{R\}$ is a two node graph joined by one edge, where one node represents $R$ and the other represents $\{0\}$.
If $R,S$ are rings then define their direct sum to be $R\oplus S = \{(x,y)|x\in R, y\in S\}$ with coordinatewise operations. Note that its ideals must have the form $I\oplus J$ for some $I \
triangleleft R, J \triangleleft S$.
Then if $R = \mathbb{Z}_2 \oplus \mathbb{Z}_2$ then the Hasse diagram representing $\Sigma\cup\{R\}$ resembles a diamond, where the four nodes represent the ideals $R, \{(0,0)\}, \langle(1,0)\
rangle, \langle(0,1)\rangle$. The maximal ideals are $\langle(1,0)\rangle, \langle(0,1)\rangle$.
Now let $R = \mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2$. The Hasse diagram for $\Sigma \cup \{R\}$ now resembles a wireframe cube, and the maximal ideals are all isomorphic to $\mathbb
{Z}_2\oplus \mathbb{Z}_2$.
A poset $\Sigma$ is called well-ordered if $\Sigma$ is totally ordered and each nonempty subset of $\Sigma$ has a least element. Note all finite totally ordered sets are well-ordered.
Example: $\mathbb{Z}^+, \mathbb{N}$ are well-ordered and $\mathbb{Z}, \mathbb{Q}^+, \mathbb{R}^+, \mathbb{Q}, \mathbb{R}^+$ are not with respect to the usual $\le$.
Let $\Sigma$ be a finite nonempty alphabet with some total order $\le$. Define the lexicographic or dictionary order on $\Sigma^*$ by $x_1 ... x_m \lt y_1 ... y_n$ if for some $0\le k\lt n$ we have
for all $0\lt i \le k$ that $x_i = y_i$, and if $k + 1 \le m$ then $x_{k+1} \lt y_{k+1}$. This makes $\Sigma^*$ totally ordered. Then $\Sigma^*$ is well-ordered if and only if $|\Sigma| = 1$.
Zorn’s Lemma: Let $\Sigma$ be a nonempty poset such that every chain $\mathcal{C}\subset \Sigma$ has an upper bound in $\Sigma$. Then $\Sigma$ has a maximal element.
This is equivalent to the Axiom of Choice, the Well-Ordering Principle and Transfinite Induction.
The Well-Ordering Principle states that every set can be well-ordered. This is nontrivial. For example, there is no known explicit well-ordering of $\mathbb{R}$.
1. Start with any $x_0 \in \Sigma$.
2. Build a chain $x_0 \lt x_1 \lt x_2 ...$ unless a maximal element is reached.
3. This chain has an upper bound $y_0\in\Sigma$.
4. Build a chain $y_0 \lt y_1\lt y_2 ...$ unless a maximal element is reached.
5. Repeat until a maximal element is reached.
6. Suppose "every element of $\Sigma$ is examined" but the chain $\mathcal{C}$ continues to be built without bound. Then for all $z\in\Sigma$, we either have $z\lt x$ for some $x\in\mathcal{C}$ or
for all $x\in\mathcal{C}$ $x,z$ are not comparable.
7. But this is a contradiction since $\mathcal{C}$ has an upper bound $z_0$, and we must have $x\le z_0$ for all $x\in\mathcal{C}$.
We can make the "examining every element" statement precise by assuming $\Sigma$ is well-ordered.
A lattice is a poset in which the greatest lower bound (g.l.b.) and least upper bound (l.u.b.) exist for any pair of elements. A lattice is complete if the g.l.b. and l.u.b. exist for any subset. | {"url":"http://crypto.stanford.edu/pbc/notes/commalg/poset.html","timestamp":"2014-04-18T23:42:18Z","content_type":null,"content_length":"9627","record_id":"<urn:uuid:ade103ff-b49a-4eec-b071-04a29d58a5ae>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Using integrals, how do you find the volume of a equilateral triangle based pyramid with height h? let a equal one side of the triangular base
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50c12958e4b016b55a9e1430","timestamp":"2014-04-20T18:43:01Z","content_type":null,"content_length":"94648","record_id":"<urn:uuid:9176440f-4427-4bae-a31a-9f0724981a29>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
Schneider, Scott - Department of Mathematics and Computer Science, Wesleyan University
• Scott Michael Schneider Department of Mathematics and Computer Science 1315 Forest Glen Circle
• SOLUTIONS FOR WORKSHOP 1 (1) Letting s be the arc length parameter, we first use the arc length function to find the
• Course Dates Time Location MATH121 02 9/9/09 12/14/09 MWF 10:00 10:50am 139 Exley Science Center
• Course Dates Time Location MATH121 01 9/7/10 12/9/10 TR 10:30 11:50am 139 Exley Science Center
• TRIG IDENTITIES Periodicity: if f is any of the six trig functions, then for all x R, f(x + 2) = f(x).
• TECHNIQUES FOR GRAPHING FUNCTIONS Given a function y = f(x), our goal is to sketch a rough graph of f that reflects important features
• How to Show That a Limit Exists Let f(x) be a real-valued function whose domain includes an open interval containing a R. In
• HOW TO SHOW THAT A LIMIT DOES NOT EXIST Let f(x) be a real-valued function whose domain includes an open interval containing a R. In order
• WORKSHOP 1: DUE FRIDAY, NOVEMBER 13TH Work through the following problems together with the other members of your group, in the class time provided
• THE THEORY OF INTEGRATION Integration theory is motivated by an algebraic problem and by a geometric problem. You are already quite
• Course Dates Time Location MATH221 02 9/9/09 12/14/09 MW 1:10 2:30pm 121 Exley Science Center
• Work through the following problems together with the other members of your group, in the class time provided for you. Use books, notes, or any resource you can think of. Then write up formal
solutions to the problems and turn them in next
• 1. Deductions Definition 1.1. Fix a symbol set S, and let LS, LS. Then an S-deduction of from
• Course Dates Time Place Math 251:F1 6/22/09 8/12/09 10 : 00 11 : 50am Hill 525
• THE PRECISE DEFINITION OF A LIMIT Suppose that y = f(x) is a real-valued function whose domain includes an open interval containing
• Course Dates Time Location MATH122 03 1/22/10 5/5/10 MWF 9:00 9:50am 141 Exley Science Center
• TECHNIQUES FOR GRAPHING FUNCTIONS Given a function y = f(x), our goal is to sketch a rough graph of f that reflects important features
• SOME INTEGRALS (1) cos xesin x
• WORKSHOP 1: DUE TUESDAY, JULY 7TH Work through the following four problems together with the other members of your group, in the class
• THE PRECISE DEFINITION OF A LIMIT Suppose that y = f(x) is a real-valued function whose domain includes an open interval containing
• Here is a list of the 8 different kinds of integrals you should now be familiar with: (Calc 1) Single integral
• HANDOUT: GAUSSIAN ELIMINATION Recipe for solving a system of linear equations: first find the augmented matrix of the system,
• BOREL SUPERRIGIDITY OF SL2(O)-ACTIONS SCOTT SCHNEIDER
• INCOMPLETENESS Let ar = {0, S, +, } be the symbol set for the first-order language Lar
• LEMMAS TO BE USED IN THE PROOF OF THE COMPLETENESS THEOREM Lemma 1. S iff S tautologically implies .
• Borel cardinal invariant properties of countable Borel equivalence relations
• INTEGRATING MONOMIALS OF TRIGONOMETRIC FUNCTIONS In this handout we will see how to integrate any product of powers of the six basic trigonometric functions.
• Math 497 Fall 2011 Worksheet No. 5
• Math 497 Fall 2011 The Beginning of The Elements
• Math 497 Fall 2011 Worksheet No. 1
• Math 497 Fall 2010 Study guide for 1st exam
• Math 497, Fall 2011 Worksheet No. 3
• Math 497 Fall 2011 Some logic expectations implicit in the GLCEs
• Topics in Elementary Mathematics Math 497, Fall 2011 | {"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/50/498.html","timestamp":"2014-04-19T20:07:54Z","content_type":null,"content_length":"13775","record_id":"<urn:uuid:bf317843-7a28-484b-9940-4369b67575da>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
what is the solution set for |-x + 2| = 5
• one year ago
• one year ago
Best Response
You've already chosen the best response.
\[|x|=a \rightarrow x=a\;and\; x=-a \] then check to see if the solutions exist by plugging in
Best Response
You've already chosen the best response.
you will have 2 equations for that.. -x + 2 = 5 and -x + 2 = -5 then you can solve for the values for x..
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5084ddd7e4b02a1e48d7742a","timestamp":"2014-04-19T19:48:51Z","content_type":null,"content_length":"30069","record_id":"<urn:uuid:a22be22d-d51c-46b1-bd18-845ac7894775>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding a 3D rotation that composed with another rotation R results in < angle than R
January 14th 2012, 08:02 PM #1
Jan 2012
Finding a 3D rotation that composed with another rotation R results in < angle than R
I have a 3D rotation R1 (which i describe in an axis-angle way) whose axis I don't know at all and has no restrictions, but has an angle close to 180°. And I need to have another rotation R2
that, when "applied" to R1 results in a composed rotation R3 that can have any axis but its angle should be lower than 180° (say, at least 10° lower).
Any guidance on how to come up with a rotation like R2 would be greatly appreciated.
Why do I need this? As a piece of an algorithm I need to find the rotation R1 that best matches some cloud of points to another, I have an algorithm for that that works for all axes and angles
but for angles near 180° (in practice, between 178° and 182°). When It doesn't work, either the angle and the axis come out erroneous from it.
That's why I'm trying to solve for both R1 and R3, and keep the solution that gives me the lowest least squares error when comparing the clouds of points. (And I'll be keeping the rotation
obtained by solving for R3 when R1 had an angle close to 180°)
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/geometry/195314-finding-3d-rotation-composed-another-rotation-r-results-angle-than-r.html","timestamp":"2014-04-16T13:21:15Z","content_type":null,"content_length":"30638","record_id":"<urn:uuid:2e31cbfd-69d2-443e-91b8-75ac713bf898>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tiles
For each quadratic you will be given the exact number of each size tile. Directions
• Arrange these tiles into the form of a rectangle.
• To flip a rectangular tile, select it with the mouse and press the space bar
• Once you have correctly arranged the tiles into a rectangle, you can determine the factors of the quadratic by adding the terms along the top and left sides of your rectangle.
• Only equal length sides may touch
• You may not lay two equally sized tiles of different color next to each other
Use only positive coefficients AlgebraTiles.java | {"url":"http://people.cehd.tamu.edu/~strader/Mathematics/Algebra/AlgebraTiles/AlgebraTiles2.html","timestamp":"2014-04-17T21:55:38Z","content_type":null,"content_length":"1413","record_id":"<urn:uuid:dbbf236e-8f1f-4e85-9042-e1caf1223ae1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thermal expansion coefficient and bulk modulus of polyethylene closed‐cell foams
Thermal expansion coefficient and bulk modulus of polyethylene closed‐cell foams
ABSTRACT A regular Kelvin foam model was used to predict the linear thermal expansion coefficient and bulk modulus of crosslinked, closed-cell, low-density polyethylene (LDPE) foams from the polymer
and gas properties. The materials used for the experimental measurements were crosslinked, had a uniform cell size, and were nearly isotropic. Young's modulus of biaxially oriented polyethylene was
used for modeling the cell faces. The model underestimated the foam linear thermal expansion coefficient because it assumed that the cell faces were flat. However, scanning electron microscopy showed
that some cell faces were crumpled as a result of foam processing. The measured bulk modulus, which was considerably smaller than the theoretical value, was used to estimate the linear thermal
expansion coefficient of the LDPE foams. © 2004 Wiley Periodicals, Inc. J Polym Sci Part B: Polym Phys 42: 3741–3749, 2004
[show abstract] [hide abstract]
ABSTRACT: The cellular structure, physical properties, and structure–property relationships of novel open-cell polyolefin foams produced by compression molding and based on blends of an ethylene/
vinyl acetate copolymer and a low-density polyethylene have been studied and compared with those of closed-cell polyolefin foams of similar chemical compositions and densities and with those of
open-cell polyurethane foams. Properties such as the elastic modulus, collapse stress, energy absorbed in mechanical tests, thermal expansion, dynamic mechanical response, and acoustic absorption
have been measured. The experimental results show that the cellular structure of the analyzed materials has interconnected cells due to the presence of large and small holes in the cell walls,
and this structure is clearly different from the typical structure of open-cell polyurethane foams. The open-cell polyolefin foams under study, in comparison with closed-cell foams of similar
densities and chemical compositions, are good acoustic absorbers; they have a significant loss factor and lower compressive strength and thermal stability. The physical reasons for this
macroscopic behavior are analyzed. © 2009 Wiley Periodicals, Inc. J Appl Polym Sci, 2009
Journal of Applied Polymer Science 06/2009; 114(2):1176 - 1186. · 1.40 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: Finite element analysis, of regular Kelvin foam models with all the material in uniform-thickness faces, was used to predict the compressive impact response of low-density closed-cell
polyethylene and polystyrene foams. Cell air compression was analysed, treating cells as surface-based fluid cavities. For a typical 1 mm cell size and 50 s-1 impact strain rate, the elastic
buckling of cell faces, and pop-in shape inversion of some buckled square faces, caused a non-linear stress strain response before yield. Pairs of plastic hinges formed across hexagonal faces,
then yield occurred when trios of faces concertinaed. The predicted compressive yield stresses were close to experimental data, for a range of foam densities. Air compression was the hardening
mechanism for engineering strains < 0.6, with face- to-face contact also contributing for strains > 0.7. Predictions of lateral expansion and residual strains after impact were reasonable. There
were no significant changes in the predicted behavior at a compressive strain rate of 500 s-1.
Journal of Engineering Materials and Technology-transactions of The Asme - J ENG MATER TECHNOL. 01/2008; 130(4).
Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor represents a rough estimation of the journal's impact factor and does
not reflect the actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.
147 Downloads
Available from
Jun 5, 2013 | {"url":"http://www.researchgate.net/publication/227909474_Thermal_expansion_coefficient_and_bulk_modulus_of_polyethylene_closedcell_foams","timestamp":"2014-04-16T08:28:33Z","content_type":null,"content_length":"212945","record_id":"<urn:uuid:15498f88-7367-4f23-a94c-03f2d18ecfad>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Santa Clara University
The Faculty-Staff Newsletter, e-mail edition
Santa Clara University, October 16, 2006, Vol. 7, No. 3
Paul R. Halmos (1916-2006)
Born in Budapest, Hungary, on March 3, 1916, Halmos at the age of 13 moved with his family to Chicago. He attended high school there and later enrolled at the University of Illinois at
Champaign-Urbana, from which he received his Ph.D. in 1938, under the supervision of Joseph Doob. From there he went to the Institute for Advanced Study in Princeton for two years, where he served as
assistant to John von Neumann. After his years at the institute he moved to academic positions at Syracuse University; the University of Chicago; the University of Michigan, Ann Arbor; the University
of Hawaii; the University of California, Santa Barbara; Indiana University; and Santa Clara University, from which he retired in 1995. He had also held visiting appointments at Harvard University,
Tulane University, the University of Montevideo, the University of Miami (Florida), the University of California, Berkeley, the University of Washington (Seattle), the University of Edinburgh, Chiao
Tung University in Taiwan, and the University of Western Australia.
A brilliant writer and lecturer, Halmos not only wrote roughly 100 research papers, but also 16 books and many book reviews. Several of his books are easily accessible to a non-specialist audience,
notably his memoirs, titled I Want to Be a Mathematician: An Automathography, and a pioneering volume, I Have a Photographic Memory, a record of a lifetime of taking pictures of mathematicians.
As a teacher, he was extraordinarily effective, usually using a modified Moore method to encourage student participation in the discovery of mathematics. For this he was awarded the Haimo Award for
Distinguished College or University Teaching of Mathematics by the Mathematical Association of America (MAA) in 1994.
For the clarity of his writing and his lectures he was eve n more widely admired. His work was often witty and colorful, indeed provocative at times, but always well-planned and polished. When asked
about the importance of computers, he replied that they are important, but not to mathematics. He then proceeded to explain why. And he actually wrote an article titled “Applied mathematics is bad
mathematics.” Asked about this, he first replied, “First it is. Second it isn’t . .. [Applied mathematics] is a good contribution. It serves humanity. It solves problems ... but much too often it is
bad, ugly, badly arranged, sloppy, untrue, undigested, unorganized, and unarchitected mathematics.” Those who knew him learned to check for that mischievous look in his eye when he made statements
like these. He knew what he was doing; he was provoking discussion. And provoke discussion he did.
Influential in the mathematical world, he served as vice president (1981-82) and subsequently on the Council and Board of Trustees of the American Mathematical Society (AMS) and on the Board of
Governors of the MAA. For his outstanding work in mathematics he was awarded the prestigious Leroy P. Steele Prize of the AMS and the Distinguished Service Award of the MAA. For his writing he won
the Chauvenet Prize, two Lester R. Ford Awards, and the George Pólya Award, all from the MAA. He served a five-year term as editor of the American Mathematical Monthly and held editorial positions
for Mathe matical Reviews, the Proceedings of the American Mathematical Society, the Journal fÿr die reine und angewandte Mathematik, and the Indiana Journal of Mathematics. He also edited four book
series for Springer Verlag: the Ergebnisse der Mathematik, Problem Books, Undergraduate Texts and Graduate Texts in Mathematics.
Though he had roots in Hungary, he always thought of himself as an American mathematician. Hungary nevertheless honored him with membership in the Hungarian Academy of Sciences. Further, he was
elected to membership in the Royal Society of Edinburgh. He held honorary doctorates from St. Andrews University, DePauw University, Kalamazoo College, and the University of Waterloo. Among his
honors were a Guggenheim Fellowship.
In recent years, Halmos and his wife were recognized for their philanthropy. In 2003 they gave a large gift to the MAA to develop an existing building in Washington, D.C., to become a meetings
center. Already in use for various mathematical activities, the center will be dedicated in April 2007. In addition they endowed book prize funds for both the AMS and the MAA.
Halmos is survived by his wife of 60 years, Virginia, of Los Gatos. In line with Halmos’s request, no services are planned. | {"url":"http://scu.edu/fyi/101606_halmos.cfm","timestamp":"2014-04-16T04:34:50Z","content_type":null,"content_length":"33038","record_id":"<urn:uuid:520c28a7-fd06-48d5-83ad-cdcb276e4c24>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00603-ip-10-147-4-33.ec2.internal.warc.gz"} |
What Is R-Squared?
We’re trying to learn more about mutual funds, which we find quite frightening, so let’s start by breaking down some terms, like R-squared, a measure of volatility. Here’s what Vanguard says:
R-squared measures how much a fund’s past returns can be explained by the returns from its benchmark index.
If a fund’s total returns were precisely synchronized with the index’s return, its R-squared would be 1.00 (100%). If a fund’s returns bore no relationship to the index’s returns, its R-squared
would be 0.
The higher the R-squared, the more the fund’s return can be explained by the performance of the index, and so the performance of the market or market segment. The lower the R-squared, the more
the return can be explained by the fund manager’s decisions.
So, no-load index funds, which we’re interested in, handled completely by computers, which attempt to sync with the performance of a benchmark index, like the S&P 500, should have an R-squared of 1.
For example, the Vanguard 500 Index, whereas the “Vanguard Capital Value” fund seeking “companies that are out of favor with investors and that are trading at prices below what the stocks are worth
compared to potential earnings” has an R-Squared of 50.88%.
(Don’t think we have a crush on Vanguard or anything, we just have an account there so that’s the easiest place to go for us for this information)
This is also referred to as the “coefficient of determination” and can be determined using scary Greek symbols.
Luckily for mere mortals, places like Google Finance figure out the R-squared for ya.
1. skittlbrau says:
In addition, the closer the R-squared is to zero, the lower you should demand your expense ratios to be – your returns are not due to a manager, but the market as a whole.
As R-squared diverges from one, you can expect to see expense ratios go up, as the funds are more actively managed.
And i have a huge crush on Vanguard… it’s ok.
2. Skiffer says:
Sigh…liberal arts majors…
3. miborovsky says:
Technically, r-squared is not a measure of “volatility” per se but rather a measure of goodness-of-fit, or more plainly, how much of the variability can be attributed to the statistical model. In
this case, it measures how much of the fund’s returns can be accounted for by the stock index’s returns.
4. mxyzptlk says:
Let’s not forget the saying: “Correlation does not equal causation.”
5. JKinNYC says:
R-squared values go from 1 to -1. This is a measure of correlation. It means that if r^2 is -1, then the items are inversely correlated.
This only tells you about history and is in no way predictive.
6. JustAGuy2 says:
Just to be clear, the closer the R-squared is to ONE (not to zero), the lower the fees should be, as the fund is more closely tracking the index.
7. will0955 says:
Some research suggests that the instances in which fund managers improve on (and fall short of) the market return are due almost entirely to chance. That is, there aren’t many good fund managers
out there who can consistently beat the market (those that do are lucky or cheating). Maybe a good use of R^2 here is: higher coefficient = higher longterm return.
8. jmackowi says:
Also check out http://www.morningstar.com for more info on mutual funds. Lots of info available with a free membership.
9. rhombopteryx says:
@ baa & Justaguy2
While R-squared values closer to 1 DO likely indicate less active work is being done by a manager, R-squared values should be (mostly) irrelevant to fees. You should seek lower fees regardless of
the R-squared value. It’s like using asking two pharmacies which has lower hourly wages to decide where to buy your prescription from – while there MAY be some correlation, why don’t you just ask
which has the cheaper price instead?
10. guymandude says:
Correlation is an abused tool. If you correlate ice cream sales vs shark attack it will tell you that when ice cream sales are high, shark attacks are more frequent. To the masses this means ice
cream sales causes shark attack.
11. RChris173 says:
I have to disagree with your definition of R^2…Having just taken a college Statistics class, I feel I am “certified” to explain…
First, in statistics, R^2 is derived from R. R is a variable given to the number that describes how strong a linear relationship is between points. It is limited from being -1 to 1 inclusive. For
example, if we take the points (2,10) (4,20) (6,40) we have a perfect linear relationship because if we were to draw a regression line, it would pass through all of those points. Therefore the R
value is 1. The sign (+ or -) of the R value determines of it is a positive linear relationship or a negative linear relationship.
In real life situations, nothing will ever have a PERFECT linear relationship so therefore, we use R to describe HOW STRONG/WEAK the POSITIVE/NEGATIVE linear relationship is.
The R^2 value is derived by simply multiplying R by it self. R * R = R^2.
A contextual explanation using mathematics of how R^2 is derived:
It can also be derived by taking a set of points from data and drawing a regression line using statistical software (TI-83 and on calculators will do).
When the regression line is drawn, there will most likely be points above and below the regression line. Those points are called the “actual values”. The regression line contains all of the
“expected values”.
Next one needs to find the residuals. Take the “expected/estimated values” (y value) and subtract the “actual value” (y value) of the various points then square that result. The sum of these
values is the explained sum of squares.
Then perform the same operation except take the “actual value” (y-value) of the various points and subtract the mean of the y values. Sum those values up and you then have the total sum of
After that, take the “expected/estimated value” (y value) and subtract the mean of the y values (actual/observed values).
Using the formula provided at the start of the pages to find R^2 from the other individual formulas would yield the R^2 value.
R^2 basically gives those that extrapolate from the regression line the amount of “correctness” and how reliable the regression line is.
“If a fund’s returns bore no relationship to the index’s returns, its R-squared would be 0.”
It is important to understand that just because the R value (or R^2) is VERY close to 0 IT DOES NOT MEAN THAT THERE IS NO RELATIONSHIP. It just means there is NO LINEAR RELATIONSHIP.
Oh BTW, I scored a 4 out of 5 on my Advanced Placement Statistics Exam…
12. Ben Popken says:
People, we’re not trying to correlate anything. We’re just trying to define what R-squared means.
13. RChris173 says:
It’s really easy BTW to use a TI calc to derive these values.
If you are using the 83/84 series, press enter the Catalog and turn DiagnosticsOn.
Then press “STAT” then go to “EDIT”. Enter your values for the horizontal axis in L1 and the values for the corresponding x-values in the L2 field. Then go to “STAT” then “CALC” and perform a
Linear Regression Line. You will get the values for the slope and the y-intercept. Look down a bit more and you get a r and r^2 value!
Play around with it. Make points that are linear, and you will see it become 1 or -1 depending on the slope. The more non-linear your graph is, the closer to 0 it becomes.
In my previous comment, I mentioned that a R value of 0 means there is no LINEAR relationship. There are ways to make the data more linear such as taking the natural or common logarithm of the
dependent variable (y), then performing another regression line. You may be surprised that it becomes closer to -1 or 1.
I really wish people at Vanguard, especially mathematical and statistical related, would refrain from getting too detailed unless they have someone that knows what they are talking about…
But in relation to the article…the closer R^2 is to 1 (100%). The stronger the linear relationship there is between the fund and indexes returns.
14. JKinNYC says:
R squared is a measure of correlation. It’s Stats 102.
15. qitaana says:
Jkinnyc – R^2 is strictly between 0 and 1. You’re thinking of just plain old r, the correlation coefficient, when you say it’s between -1 and 1.
heh, I’m TAing an intro stats class this summer, and was just prepping a bit on correlation for Monday’s class.
Another fun “Correlation (Assocation) is Not Causation” one: There’s very strong correlation between the number of firefighters who respond to a blaze, and the dollar value of damage done by that
fire. So, obviously, we should just send one truck out to all fires, cos less damage is done that way.
16. JKinNYC says:
Yeah You’re right, my brain is scrambled today.
17. Dustbunny says:
Another liberal arts major here. Math make head hur…OMG! Ponies! Look!
18. JustAGuy2 says:
Very true that Correlation Does Not Equal Causation. As my stats prof was fond of saying, however, “Correlation Should Make You Pretty Damn Suspicious.”
19. JKinNYC says:
Your stats professor was wrong. Without significant other evidence, correlation means nothing. There was once a study correlating the relationship of MBA graduates and lactating mothers in
20. JKinNYC says:
Sorry, your stats prof was wrong. Correlation alone means nothing except a linear relationship. Which can be completely coincidental.
21. JKinNYC says:
Weird. My one post didnt show up into the second one did.
22. RChris173 says:
I have to disagree with your definition of R^2…Having just taken a college Statistics class, I feel I am “certified” to explain…
First, in statistics, R^2 is derived from R. R is a variable given to the number that describes how strong a linear relationship is between points. It is limited from being -1 to 1 inclusive. For
example, if we take the points (2,10) (4,20) (6,40) we have a perfect linear relationship because if we were to draw a regression line, it would pass through all of those points. Therefore the R
value is 1. The sign (+ or -) of the R value determines of it is a positive linear relationship or a negative linear relationship.
In real life situations, nothing will ever have a PERFECT linear relationship so therefore, we use R to describe HOW STRONG/WEAK the POSITIVE/NEGATIVE linear relationship is.
The R^2 value is derived by simply multiplying R by it self. R * R = R^2.
A contextual explanation using mathematics of how R^2 is derived:
It can also be derived by taking a set of points from data and drawing a regression line using statistical software (TI-83 and on calculators will do).
When the regression line is drawn, there will most likely be points above and below the regression line. Those points are called the “actual values”. The regression line contains all of the
“expected values”.
Next one needs to find the residuals. Take the “expected/estimated values” (y value) and subtract the “actual value” (y value) of the various points then square that result. The sum of these
values is the explained sum of squares.
Then perform the same operation except take the “actual value” (y-value) of the various points and subtract the mean of the y values. Sum those values up and you then have the total sum of
After that, take the “expected/estimated value” (y value) and subtract the mean of the y values (actual/observed values).
Using the formula provided at the start of the pages to find R^2 from the other individual formulas would yield the R^2 value.
R^2 basically gives those that extrapolate from the regression line the amount of “correctness” and how reliable the regression line is.
“If a fund’s returns bore no relationship to the index’s returns, its R-squared would be 0.”
It is important to understand that just because the R value (or R^2) is VERY close to 0 IT DOES NOT MEAN THAT THERE IS NO RELATIONSHIP. It just means there is NO LINEAR RELATIONSHIP.
Oh BTW, I scored a 4 out of 5 on my Advanced Placement Statistics Exam…
23. mermaidshoes says:
now i kind of want to get an MBA, just so i can be correlated to lactating african mothers.
24. Skiffer says:
@JKinNYC: Sigh…that wasn’t a “sigh…(i’m a) liberal arts majors…” it was a “sigh…(i can’t believe anyone’s even asking this question, I guess they’re all a bunch of) liberal arts majors…”
Luckily with my engineering degree I make enough that I don’t have to worry about saving or investing…
Ok, I’ll stop being facetious and utterly arrogant and get back to my CAD drawings…and all you liberal arts majors can go back to your parties…and leave me here…alone…just like in college :P
25. benchman says:
A great resource for trying to understand investing/finance lingo and concepts is http://www.investopedia.com. That site got me through many of my finance classes that I encountered while working
on my MBA. Unfortunately though I never learned how to correlate numbers in relation to “lactating african mothers”.
26. JustAGuy2 says:
Yes, of course, without an explanation of _why_ two variables are correlated, the correlation is worthless. He was saying (which is very valid) that when you find a correlation between two
numbers, it’s worth investigating. | {"url":"http://consumerist.com/2007/07/06/what-is-r-squared/","timestamp":"2014-04-19T23:09:35Z","content_type":null,"content_length":"77173","record_id":"<urn:uuid:01f5c657-9883-4515-af9e-22350630e903>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: The Foundational Exposition Project
Harvey Friedman friedman at math.ohio-state.edu
Fri Mar 26 08:42:31 EST 1999
Here is a first draft of a manifesto. Feedback greatly welcomed and needed.
Harvey M. Friedman
March 22, 1999
The Foundational Exposition Project (FEP) seeks to exposit crucial
mathematical and logical topics across the intellectual landscape. Initial
efforts will be confined to computer science, logic, mathematics, physics,
and probability/statistics. Later efforts will include economics, finance
and management science, electrical and mechanical engineering, linguistics,
mathematical psychology and political science, and law. Ultimately, it is
hoped that crucial mathematical and logical topics in the biological and
neuro sciences will be addressed.
The goal of the FEP is to produce a series of penetrating papers written
from a common approach to intellectual life. Each paper treats a crucial
topic from first principles, relying only on at most a few earlier papers
from the ongoing FEP. The papers are to be fully accessible to any
professional academic whose work involves mathematical and/or logical
considerations. Initial portions of each paper are to be fully accessible
to students with mathematical and/or logical sophistication. No substantial
knowledge of mathematics or logic is required for full accessibility.
In order to achieve this level of fully accessible uniform exposition,
substantial research is required at virtually every stage of the project.
The initial FEP papers are expected to treat the most classical of topics -
often considered to be completely understood by experts. The present state
of exposition of crucial topics in computer science, logic, mathematics,
physics, and probability/statistics all have serious drawbacks and
limitations that prevent them from having this level of fully accessible
1. In physics. When standard expositions are subjected to close
examination, serious ambiguities and/or hidden assumptions typically
appear. When subjected to intense philosophical examination, the standard
development loses meaning. E.g., consider Newton's laws based on force.
mass, space, time. In standard expositions, none of force, mass, space, or
time, is defined either theoretically or observationally. Their meaning is
recovered informally from the way that the laws are applied. However, after
the development is applied, experts fail to go back and redo the
development in a more meaningful, or philosophically honest, way. And
mathematical physicists tend to quickly assign sophisticated mathematical
structures to physical reality (manifolds, geometries, infinite dimensional
operators, etcetera). We believe that a suitably observational approach
should have greater philosophical clarity.
Tentative Initial Agenda:
a. Free particles. An observational treatment.
b. Free relativistic particles. An observational treatment.
c. Particles under gravitation. An observational treatment.
d. Relativistic particles under gravitation. An observational treatment.
2. In probability/statistics. Standard expositions ignore a number of
fundamental issues. One issue is randomness. This concept may best be
treated as a manifestation of symmetric ignorance. Another issue is that we
cannot measure to more than a finite amount of accuracy or conduct more
than finitely many trials, etcetera. Consequently models should be
finitary. For example, we are looking for a detailed philosophical analysis
of the meaning of such statements as: if I choose a sample of size k from a
population of size n, and the distribution in that sample is such and such,
then I know with confidence x that the distribution in the population has
such and such property. NOTE: The Bayesians emphasize the use of "priors,"
but there are important situations where priors are not needed, and one can
use "absolute ignorance."
Tentative Initial Agenda:
a. Probabilistic statements. A subjective approach.
b. Sampling theory. A subjective approach.
3. In computer science. This comparatively new field is already getting
fragmented. We need a clear unified treatment of the computer from the
circuit level through architecture through operating systems through
machine, assembly, and programming languages, and implementation.
Simplified languages should be constructed at these levels, and
verification given for some simple code. Reasoning involving protocols
should be analyzed formally. Complexity issues should be addressed in terms
of finite models since actual computer systems are finite.
Tentative Initial Agenda:
a. Circuit specification. Verification.
b. Architecture specification. Verification.
c. Programming languages and their implementation. Verification.
d. Protocols and reasoning.
e. Asymptotic and finite complexity.
4. In mathematics. Standard expositions typically cover a large number of
topics that are considered important by experts, and are loosely connected.
There is a concentration on efficient and elegant presentations, rather
than the realization of overarching intellectual goals of general
intellectual interest. Crucial definitions are typically introduced without
compelling justifications. Often compelling justifications can be given
such as "this is the only concept satisfying crucial conditions."
Tentative Initial Agenda:
a. Counting.
b. Measurement.
c. Shapes.
d. Linearity.
e. Symmetry.
5. In logic. Same general comments as for mathematics. At the most
elementary levels, logic and foundations of mathematics are closely
connected. Yet at more advanced levels, they have grown apart. So standard
expositions beyond the elementary levels do not concentrate on issues in
the foundations of mathematics.
Tentative Initial Agenda:
a. Propositional calculus.
b. Predicate calculus.
c. Set theory and the formalization of mathematics.
d. Axioms of set theory.
e. Fragments and extensions of the axioms.
f. Underivability results.
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1999-March/002924.html","timestamp":"2014-04-19T22:20:02Z","content_type":null,"content_length":"8193","record_id":"<urn:uuid:133245bc-d3c7-4891-a754-e1371dfd466e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westfield, NJ Algebra Tutor
Find a Westfield, NJ Algebra Tutor
...My goal is to have the student understand the theory before the end of the lesson but at the same time make the subject fun. I will provide periodic progress reports and assign light homework
so that the student can build on the theories taught. For my own goals, I will ask for feedback from the student or parents to better assess my teaching capabilities.
13 Subjects: including algebra 1, reading, chemistry, geometry
I am a graduate of Columbia University, class of 2008, with a degree in Applied Mathematics and a concentration in Computer Science. I do research on machine learning in music & audio processing
applications. In my spare time, I enjoy hiking, traveling, learning languages, producing/recording music, and cooking.
10 Subjects: including algebra 1, algebra 2, physics, geometry
...I can help students focus on important concepts and show them how to effectively narrow down options answer choices. I fell in love with genetics when I was taking my basic science classes for
my MD degree. I can help students relate their knowledge of biochemistry to genetics.
33 Subjects: including algebra 1, chemistry, physics, statistics
...I also have tutoring experience in prealgebra and algebra in college for 2.5 years. As a college graduate with a degree in biophysics. I have completed several upper level biology courses
(immunology, genetics, cell biology, physiology and microbiology). I have experience tutoring college level introduction to biology, human biology and physiology.
18 Subjects: including algebra 1, algebra 2, chemistry, calculus
...I'm strongest at tutoring students in math and science (geometry, algebra, trigonometry, chemistry, physics, and biology). I also tutor in French language and grammar, having studied the
language for over 10 years. Lastly, I can help prepare for the math and science sections of the SAT and ACT, ...
37 Subjects: including algebra 1, algebra 2, chemistry, physics | {"url":"http://www.purplemath.com/Westfield_NJ_Algebra_tutors.php","timestamp":"2014-04-19T23:34:36Z","content_type":null,"content_length":"24115","record_id":"<urn:uuid:4dd465d3-6a6a-4955-a234-34b1c11fb214>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braintree SAT Math Tutor
Find a Braintree SAT Math Tutor
...Certain types of questions repeat, designed to test your ability to apply a few basic rules of math. Learn to identify what type of problem you are facing and the easiest way to apply the
appropriate rule, and most questions can be done quite quickly. I look forward to helping your student gain such facility.
55 Subjects: including SAT math, reading, English, writing
...I also tutor introductory statics and dynamics. Got a pulley problem? I can solve it.
8 Subjects: including SAT math, physics, calculus, differential equations
...I enjoy helping others to reason through their ideas about a given text. I received a perfect score on the GRE general test verbal portion, and have extensive experience tutoring and working
one-on-one with students. I have a BA in philosophy, which included a two-year "great books" program, and a sizable writing component.
29 Subjects: including SAT math, English, reading, writing
...I find real life examples and a crystal clear explanation are crucial for success. My schedule is flexible as I am a part time graduate student. I am new to Wyzant but very experienced in
tutoring, so if you would like to meet first before a real lesson to see if we are a good fit, I am willing to arrange that.I was a swim teacher for 8 years at Swim facilities and summer camps.
19 Subjects: including SAT math, Spanish, chemistry, calculus
...I am a Certified Elementary Educator for Grades 1-6. I passed Foundations of Reading MTEL which emphasizes phonics. I have taught phonics in K, 1st, 2nd and 3rd grades.
18 Subjects: including SAT math, English, geometry, grammar | {"url":"http://www.purplemath.com/Braintree_SAT_math_tutors.php","timestamp":"2014-04-19T23:28:11Z","content_type":null,"content_length":"23830","record_id":"<urn:uuid:fa8ea045-fcf2-4da6-8b6a-e8abe388b4dd>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Areas in a Multiplication Table
Area is width times height.
Select the two factors using the slider controls or with the locator on the table.
Select from the pull-down list of products to see the tallest corresponding area.
THINGS TO TRY
• Drag Locators
"Areas in a Multiplication Table" from the Wolfram Demonstrations Project
Contributed by: Michael Schreiber | {"url":"http://demonstrations.wolfram.com/AreasInAMultiplicationTable/","timestamp":"2014-04-16T13:11:53Z","content_type":null,"content_length":"42112","record_id":"<urn:uuid:1ae58135-57f7-4486-81fb-00413969bebe>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
The mechanics and thermodynamics of continuous media.
(English) Zbl 0870.73004
Texts and Monographs in Physics. Berlin: Springer. xiv, 504 p. DM 128.00; öS 934.40; sFr 113.00 (1997).
In every rapidly developing scientific branch – and continuum physics is such one – it is once in a while desirable that someone undertakes the difficult task to review and summarize the
state-of-the-art. That means to gather all the different theoretical suggestions, select from this vast manifold the convincing and promising ones, translate them into one language, express them in a
precise notation, and finally assemble them in a compact and readable work. The last time this was done in the 60s in the context of the famous Handbuch der Physik. After more than three decades,
there is again great need to update the matter.
Šilhavý’s work is a welcome contribution in this direction. At least it fulfills the important requirements for such a review. Firstly, it starts from the very fundamental concepts of continuum
mechanics and thermodynamics. Secondly, it covers the broad range up to very recent advances in this field such as phase changes, a very trendy issue. Thirdly, the author uses precise mathematical
language to clearly present these concepts. And, last, but not least, he gives a rather broad overview of different schools and theories in the field.
A general theory of continuous media has not yet been completed and finalized, and Šilhavý alone, of course, it not able to do it, although he could remove many deficiencies and bridge many gaps with
his own ideas. Some of the concepts that are presented still appear unnatural, if not awkward. For example, the introduction of the state-space, being a fundamental concept for both mechanics and
thermodynamics, does not have the desired clearness. But these deficiencies are inherent to the current state-of-the-art and not due to the author.
There is no doubt that the book is a rich source of insight and will be a great help to all researchers in the field. For some readers it might be too mathematical and too far from applications. For
other it will surely not be mathematical enough. So, perhaps, the author has found the right compromise between two extremes. At least the referent considers it as a work of great interest and
stimulus and warmly recommends it to other researchers.
74-02 Research monographs (mechanics of deformable solids)
74A15 Thermodynamics (mechanics of deformable solids)
76-02 Research monographs (fluid mechanics)
80-02 Research monographs (classical thermodynamics) | {"url":"http://zbmath.org/?q=an:0870.73004","timestamp":"2014-04-21T10:16:24Z","content_type":null,"content_length":"22342","record_id":"<urn:uuid:27f0f804-f34e-4029-80c4-268aea7d7bf0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00029-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Solve |x| + 7 < 4. {x | x < -11 or x > -3} {x | -3 < x < 3} Ø
• one year ago
• one year ago
Best Response
You've already chosen the best response.
no realroots
Best Response
You've already chosen the best response.
subtract 7 from both sides first then since the right is negative, but the absolute value is never negative, there is no solution
Best Response
You've already chosen the best response.
abs(x)+7 is less than 4 abs(x) is less than 4 - 7 abs(x) is less than -3 therefore, x is less than plus or minus 3
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50be1128e4b09e7e3b858ed2","timestamp":"2014-04-19T15:09:22Z","content_type":null,"content_length":"32542","record_id":"<urn:uuid:46ca73d6-5a8d-4022-aab5-772335aff0bb>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hallandale Science Tutor
Find a Hallandale Science Tutor
...I also hold a yoga teacher's certification. Here is the cancellation or no-show policy that you need to take into consideration if you decide to hire my services: If the cancellation is the
same day of tutoring there will be a 30 min. charge. If a student reschedules for a later day or time within the same week, there will be a 20 minute charge.
16 Subjects: including biology, chemistry, Spanish, algebra 2
...Also, I am about finished my first year as a doctoral student (Accounting). I love teaching/tutoring and I have dedicated my life to that vocation. My philosophy in learning is that anyone can
learn if the new information is received by the student at the right level!This is my pet subject. I scored a 100% in study skills evaluation at Norfolk State University tutoring evaluation.
9 Subjects: including biology, reading, English, accounting
...I am a Graduate of Clemson University with a B.S. in Public Health (Pre-Med). I have a background heavily focused on Sciences along with several Math classes, which requires a lot of Computer
use and Proficiency. I am currently in the MBA program at Barry University with a concentration in Finan...
24 Subjects: including biology, SAT math, anatomy, reading
...I am a very approachable and positive person. Please feel free to contact me any time.I have taught and tutored algebra for about 5 years. With my patience and positive attitude, my success
rate is high.
18 Subjects: including chemistry, biology, biochemistry, calculus
...Personally, I believe that such experience is very much in phase with my personality and has given me a very particular perspective about general psychological impediments to enjoying and
learning Physics and Math concepts. I also strongly believe that learning Math and Sciences is of the upmost...
11 Subjects: including physical science, physics, Spanish, calculus
Nearby Cities With Science Tutor
Aventura, FL Science Tutors
Dania Science Tutors
Dania Beach, FL Science Tutors
Golden Beach, FL Science Tutors
Golden Isles, FL Science Tutors
Hallandale Beach, FL Science Tutors
Hollywood, FL Science Tutors
Miami Gardens, FL Science Tutors
Miramar, FL Science Tutors
N Miami Beach, FL Science Tutors
North Miami Beach Science Tutors
Opa Locka Science Tutors
Pembroke Park, FL Science Tutors
Sunny Isles Beach, FL Science Tutors
West Park, FL Science Tutors | {"url":"http://www.purplemath.com/hallandale_fl_science_tutors.php","timestamp":"2014-04-18T05:54:33Z","content_type":null,"content_length":"24097","record_id":"<urn:uuid:7003bbc6-5912-4031-b3d0-884887b59c96>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
June 8th 2009, 01:34 PM #1
Super Member
Jun 2008
Since the field vectors $F(x,y,z) = (x-y)i + (y-z)j + (z-x)k$ and the curve $\alpha$ that is the intersection of surfaces:
$S1: x+y+z =1$ and $S2: x^2+y^2=1$
Find the circulation of F arrond the $\alpha$
My solution:
curl F = (1,1,1)
Surface S1:
Normal: $N = (0,0,1)$
$\int_0^1 \int_0^{1-v} dudv = \frac{1}{2}$
Surface S2:
Normal: $N = (cosu, sinu, 0)$
$\int_0^{2 \pi} \int_0^1 (rcos \theta + rsin \theta)rdrd \theta = 0$
$\int_C F.dr = \int \int_S F.N.da = S1 + S2 = \frac{1}{2}$
It is correct ?
No-- you kind of missed the point
alpha is the curve of intersection of the plane and the cylinder
See the attachment for the details where the line integral is computed directly
If you use Stokes theorem then your surface is z= 1-x-y over a circle of radius 1
N = i + j + k
curl F*N = 3 int(curlF*NdA) = 3*area = 3 pi which is what we obtain computing the line integral directly
Last edited by Calculus26; June 9th 2009 at 08:36 AM.
No-- you kind of missed the point
alpha is the curve of intersection of the plane and the cylinder
See the attachment for the details where the line integral is computed directly
If you use Stokes theorem then your surface is z= 1-x-y over a circle of radius 1
N = i + j + k
curl F*N = 3 int(curlF*NdA) = 3*area = 3 pi which is what we obtain computing the line integral directly
Ok. Thank you
June 9th 2009, 08:12 AM #2
June 9th 2009, 11:19 AM #3
Super Member
Jun 2008 | {"url":"http://mathhelpforum.com/calculus/92219-circulation.html","timestamp":"2014-04-16T06:36:40Z","content_type":null,"content_length":"37330","record_id":"<urn:uuid:7103172b-dcad-40d8-97aa-94c295b94340>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Objective NHL
Back in November of last year, I looked at whether there were any
score effects in relation to minor penalties
. The conclusion? Playing from behind has a significant positive effect on powerplay differential. That is, teams tend to be much better at drawing penalties when trailing, as compared to when
leading or when the game is tied.
While my initial article only looked at data from the 2007-08 and 2008-09 seasons, I've since ran the numbers for 2009-10 as well. Here are the aggregate numbers for all three years:
[PD=penalties drawn; PT; penalties taken; P % = penalties drawn/(penalties drawn + penalties taken)]
I should include a reminder that only penalties that were not accompanied by the calling of another penalty at the same point in time were included in the above totals.
In the original post, I asserted that trailing team's penalty advantage was not owing to its superior play, but was instead caused by favorable officiating. In support of this, I noted that actual
team-to-team distributions in trailing and leading penalty percentage were roughly what one would expect them to be if the putative bias affected all teams equally.
Although I remain confident that my assertion was correct, I suspect that others may have found my explanation to be less than convincing. And in turning my attention to the subject for a second
time, I think that there's a better way in which I can illustrate my point.
In determining whether the trailing team's penalty advantage is the product of bias or earned on merit, it becomes necessary to ask what result we would expect to observe, based on what we know about
what causes some teams to be better at drawing penalties than others.
One of those causes is even strength outshooting. If we look at the relationship between EV tied Corsi and tied penalty differential over the last three seasons, each unit increment in the latter
equates to 0.027 in the former.
It's well established that the average team does much better in terms of Corsi when playing from behind. Over the three years in question, trailing teams had a collective Corsi percentage of 0.552
(107706 For, 87079 Against). Given the positive relationship that exists between outshooting and penalty differential when the score is tied, the trailing team's advantage in Corsi may be able to
account for it's advantage in penalty percentage.
However, upon performing the required calculations, it becomes clear that this factor can only explain part of the difference.
In other words, only about one third of the gap can be attributed to outshooting.
Not only that, but it's clear that the shot statistics flatter the trailing team, given that playing from behind encourages a team to take more risks and play more desperately. For example, during
the period in question, trailing teams only scored 51.9% of all non-empty net even strength goals (4623 For, 4292 Against), despite, as mentioned above, generating 55.2% of all Corsi events. It's
more than arguable that goal differential, and not Corsi differential, provides the best measure of how well the trailing team actually performs.
As with outshooting, there is a positive relationship between even strength goal differential and penalty differential when the score is tied. Based on data from the three seasons in question, each
net goal is worth 0.26 in net penalties drawn. We're able to use this figure to determine what kind of penalty advantage we'd expect the trailing team to have, based on its goal differential.
As the table indicates, we would expect the trailing team to do only slightly better than the leading team in terms of penalty differential on the basis of its advantage in even strength goal
differential. Thus, however which way you approach it, referee bias must account for a substantial part - and probably almost all - of the penalty gap.
My plan is to put out a series of posts - hopefully all within the next while - that relate to subjects that I've posted on previously. The object of these posts is to address certain outstanding
issues that weren't resolved when I tackled these subjects the first time around.
The first post in the series is an extension of a post that I published last month that looked at how various shot metrics - all of them calculated at even strength with the score tied - predicted
future success at the team level.
One related issue that wasn't explored is how well those same shot metrics predict future success when compared to more conventional measures of team strength, such as winning percentage and goal
ratio.* This question is actually more fundamental than the one investigated in the original post. After all, if shot metrics like Fenwick and Corsi failed to predict future success better than the
conventional measures, then that would render them considerably less useful.
The method employed** was similar to the one used in the first post. Because of the relative complexity of the process, including a step-by-step description may be helpful.
Firstly, I randomly selected a certain number of games from each team's schedule, with each team having an equal number of home and road games selected.
Secondly, I calculated how each team performed over those games with respect to certain variables. The variables that were calculated were even strength Corsi with the score tied, overall goal ratio
(with empty net and shootout goals excluded), and winning percentage. Winning percentage was defined as WINS/(WINS+LOSSES). Games that ended in a shootout were considered ties, and were therefore not
included in the calculation.
I then randomly selected a second, independent group of games. That is, if a game was included in the first grouping, it was not eligible for selection in the second grouping. As with the first
grouping, an equal number of home and road games were selected for each team.
I then determined how each team did in terms of winning percentage over this second group of games, and looked at how each of the three variables calculated in relation to the first group correlated
with winning percentage in the second group.
The relationship between the size of the two groups can be expressed as y=(80-x), where x represents the number of games included in the first group, and y the number of games in the second group.
So, for example, if 20 games were selected for the first group, the second group would consist of 60 games. Ultimately, I elected to use x values of 20, 30, 40, 50, 60 and 70.
The raw data used was from the 2007-08, 2008-09 and 2009-10 regular seasons. The table included below shows the results for each individual season, as well as the average results. The values
represent the average correlation over 1000 calculations.
A couple points:
- Corsi Tied is the best predictor of how a team will perform over the remainder of its schedule, regardless of the point in the schedule at which the calculation occurs.
- Corsi Tied is only marginally more predictive of future success than goal ratio or winning percentage when looking at samples of 60 games or more. In other words, as the sample size becomes
increasingly large, there are diminishing returns with respect to the predictive advantage of Corsi. By the end of the season, all three variables seem to predict future success equally well
- The above fact has implications in terms of determining playoff probabilities at the team level, with the results suggesting that a composite metric would work best
- The aggregate values for Goal Ratio and Winning Percentage are remarkably similar. The implication is that once shootout results are controlled for, winning percentage is as good of a measure of a
team as goal ratio is
Next up: Score Effects and Minor Penalties.
*Some readers may have observed that the split-half reliability of goal ratio (0.417) was lower than the predictive validity co-efficients for both Corsi Tied (0.444) and Fenwick Tied (0.429). The
implication is this is that the two latter variables are better able to predict goal ratio from one half of the schedule to the other than goal ratio is itself.
** I should note that this method was actually developed and first used by Vic Ferrari. See here.
Scott Reynolds had a question in the comments section on how the results would differ if we looked at future EV performance rather than overall performance. Using the same method as the one described
above, I looked at which of EV Corsi Tied and EV goal ratio (empty netters removed) was better able to predict future performance at even strength (which I operationalized as future EV goal ratio).
Here are the results:
The results aren't too different - Corsi Tied is a much better predictor early in the schedule, but the two measures have about the same predictive power by the end of the year. | {"url":"http://objectivenhl.blogspot.com/2011_03_01_archive.html","timestamp":"2014-04-17T15:26:23Z","content_type":null,"content_length":"55674","record_id":"<urn:uuid:24f4b1db-b9ca-4baf-9647-3656c50b87e2>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Icehouse-2 velocity analysis
> Icehouse-2 velocity analysis
Icehouse-2 velocity analysis
Looking at our recently-concluded icehouse-2 development timeframe, we landed far less features and bugfixes than we wanted and expected. That created concerns about us losing our velocity, so I run
a little analysis to confirm or deny that feeling.
Velocity loss ?
If we compare icehouse to the havana cycle and focus on implemented blueprints (not the best metric), it is pretty obvious that icehouse-2 was disappointing:
havana-1: 63
havana-2: 100
icehouse-1: 69
icehouse-2: 50
Using the first milestone as a baseline (growth of 10% expected), we should have been at 110 blueprints, so we are at 45% of the expected results. That said, looking at bugs gives a slightly
different picture:
havana-1: 671
havana-2: 650
icehouse-1: 738
icehouse-2: 650
The first milestone baseline again gives a 10% expected growth, which means the target was 715 bugs… but we “only” fixed 650 bugs (like in havana-2). So on the bugfixes front, we are at 91% of the
expected result.
Comparing with grizzly
But havana is not really the cycle we should compare icehouse with. We should compare with another cycle where the end-of-year holidays hit during the -2 milestone development… so grizzly. Let’s look
at the number of commits (ignoring merges), for a number of projects that have been around since then. Here are the results for nova:
nova grizzly-1: 549 commits
nova grizzly-2: 465 commits
nova icehouse-1: 548 commits
nova icehouse-2: 282 commits
Again using the -1 milestone as a baseline for expected growth (here +0%), nova in icehouse-2 ended up at 61% of the expected number of commits. The results are similar for neutron:
neutron grizzly-1: 155 commits
neutron grizzly-2: 128 commits
neutron icehouse-1: 203 commits
neutron icehouse-2: 110 commits
Considering the -1 milestones gives an expected growth in commits between grizzly and icehouse of +31%. Icehouse-2 is at 66% of expected result. So not good but not catastrophic either. What about
cinder ?
cinder grizzly-1: 86 commits
cinder grizzly-2: 54 commits
cinder icehouse-1: 175 commits
cinder icehouse-2: 119 commits
Now that’s interesting… Expected cinder growth between grizzly and icehouse is +103%. Icehouse-2 scores at 108% of the expected, grizzly-based result.
keystone grizzly-1: 95 commits
keystone grizzly-2: 42 commits
keystone icehouse-1: 116 commits
keystone icehouse-2: 106 commits
That’s even more apparent with keystone, which had a quite disastrous grizzly-2: expected growth is +22%, Icehouse-2 is at 207% of the expected result. Same for Glance:
glance grizzly-1: 100 commits
glance grizzly-2: 38 commits
glance icehouse-1: 98 commits
glance icehouse-2: 89 commits
Here we expect 2% less commits, so based on grizzly-2 we should have had 37 commits… icehouse-2 here is at 240% !
In summary, while it is quite obvious that we delivered far less than we wanted to, due to the holidays and the recent gate issues, from a velocity perspective icehouse-2 is far from being disastrous
if you compare it to the last development cycle where the holidays happened at the same time in the cycle. Smaller projects in particular have handled that period significantly better than last year.
We just need to integrate the fact that the October – April cycle includes a holiday period that will reduce our velocity… and lower our expectations as a result.
1. February 3, 2014 at 22:38 |
Can you also dig up the number of reviews submitted (so also the ones that did not yet get approved and merged)? My feeling is that the incoming rate of reviews was the same or better, just fewer
patches got approved. However I have no numbers to back this up.
1. No trackbacks yet. | {"url":"http://fnords.wordpress.com/2014/02/03/icehouse-2-velocity-analysis/","timestamp":"2014-04-20T10:46:58Z","content_type":null,"content_length":"52960","record_id":"<urn:uuid:f0dfd282-ab8c-45ec-93d0-7df5bc8aa7aa>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
Friday, November 21st, 2008 by Wouter
I talked about my recent attempts to understand call-by-push-value, a typed lambda calculus that is very precise about when evaluation occurs.
The idea underlying CBPV is to distinguish the type of values and computations. You can think of a value as a fully evaluated Int, String, pair of values, etc. Functions, on the other hand, are
computations abstracting over values. I’ve put together an overview of all the types, terms, typing judgements, and evaluation rules (forgive me, but I’d much rather edit large formulas in latex
directly than WordPress)
My motivation for this is very different from Paul Levy’s: I’m dreaming of a language with enough laziness to be interesting, but with more predictable memory usage than Haskell. The next step is to
figure out how to add (value) data, and (computational) codata, effects (no IO monad), and enough syntactic sugar that you don’t need to be explicit about every thunk/force. There’s a relationship
with Conor’s ideas about Frank and the recent work about focussing and duality at CMU that I haven’t managed to nail it down. | {"url":"http://sneezy.cs.nott.ac.uk/fplunch/weblog/?m=200811","timestamp":"2014-04-21T07:04:28Z","content_type":null,"content_length":"25352","record_id":"<urn:uuid:5cef4028-033f-431b-aa80-19b0e2c6351e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dividing Rational Expressions
Hi, can someone please help me understand how this problem is solved?
This is the problem:
This is as far as I get (multiply first fraction by reciprocal of second fraction):
(ignore the red question mark)
I understand the steps and how to divide rational expressions (including this one). I just don't know how I should simplify the numerator in this case.
I know the answer. It is:
Particularly I don't understand how x^2-6x+15 became (x-5)(x-1). Because, (x-5)(x-1)= x^2-6x-5 and not x^2-6x+15. Can someone please explain how the numerator was simplified? | {"url":"http://www.purplemath.com/learning/viewtopic.php?f=8&t=1992","timestamp":"2014-04-21T11:03:09Z","content_type":null,"content_length":"19469","record_id":"<urn:uuid:73cf4e50-a9c9-4d17-8501-6b24751177d8>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] blind mathematicians and logicians: findings
catarina dutilh cdutilhnovaes at yahoo.com
Mon May 25 11:08:11 EDT 2009
Many thanks to those who have replied to my query concerning
blind logicians and mathematicians. Most of you have replied to me directly, so
I thought it would be useful to post the data I have assembled to the whole
The most often mentioned blind mathematicians were
Pontryagin and Morin. Also worth mentioning are the cases of Euler (blind for
the last twenty years of his life) and Nemeth, who developed a Braille notation
for mathematics. There is a very interesting notice of the AMS on blind
The only person who could qualify as a ‘logician’ among
those mentioned to me seems to be Larry Wos, given his remarkable
accomplishments in automated theorem proving. This in itself is interesting for
my purposes: while there are quite a few remarkable blind mathematicians, often
with impressive spatial (as opposed to merely visual) intuitions, blind
logicians seem to be almost non-existent. Whether the virtual non-existence of
blind logicians is a purely social fact or whether it reflects something deeper
about the cognitive abilities relevant for work in logic is of course a
pressing question. Interestingly, blind mathematicians are often topologists
and geometers, and, as Morin pointed out in the notice mentioned above, a blind
person can in a sense have a privileged spatial perception of an object in that
it includes a perception of its inner part, besides a perception of its
Some other interesting facts that were mentioned:
- Laurence Goldstein (http://www.kent.ac.uk/secl/philosophy/staff/goldstein.html)
developed some tools to teach logic to blind people.
- The distinction between visualizers and verbalizers, which
has been the object of significant work in experimental psychology, may also be
relevant for the issue of the visual aspects and implications of mathematical
practice. Here is a reference:
- Giuliano Artico is a mathematician who has explored the
visual aspects of mathematical notation and practice:
Finally, let me just add something that I did not mention in
my original post: it seems to me that it is important to distinguish between mathematicians
who became blind after receiving mathematical training and those who were either
born blind or who became blind at an early age. For my purposes, more important
than being blind from birth or not (and thus having hador not having had some exposure to visual
sensory perception) is the effect of being visual or non-visual upon being
trained as a mathematician or logician.
Best regards, and thanks again for the cooperation,
Catarina Dutilh Novaes
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2009-May/013731.html","timestamp":"2014-04-18T04:38:37Z","content_type":null,"content_length":"5521","record_id":"<urn:uuid:3bc77c21-f531-44d2-8890-54070efb720f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00616-ip-10-147-4-33.ec2.internal.warc.gz"} |
Do-it-yourself type theory (part 1
- Logical Frameworks , 1991
"... Martin-Lof's type theory is presented in several steps. The kernel is a dependently typed -calculus. Then there are schemata for inductive sets and families of sets and for primitive recursive
functions and families of functions. Finally, there are set formers (generic polymorphism) and universes. ..."
Cited by 76 (13 self)
Add to MetaCart
Martin-Lof's type theory is presented in several steps. The kernel is a dependently typed -calculus. Then there are schemata for inductive sets and families of sets and for primitive recursive
functions and families of functions. Finally, there are set formers (generic polymorphism) and universes. At each step syntax, inference rules, and set-theoretic semantics are given. 1 Introduction
Usually Martin-Lof's type theory is presented as a closed system with rules for a finite collection of set formers. But it is also often pointed out that the system is in principle open to extension:
we may introduce new sets when there is a need for them. The principle is that a set is by definition inductively generated - it is defined by its introduction rules, which are rules for generating
its elements. The elimination rule is determined by the introduction rules and expresses definition by primitive recursion on the way the elements of the set are generated. (In this paper I shall use
the term ...
- Formal Aspects of Computing , 1997
"... A general formulation of inductive and recursive definitions in Martin-Lof's type theory is presented. It extends Backhouse's `Do-It-Yourself Type Theory' to include inductive definitions of
families of sets and definitions of functions by recursion on the way elements of such sets are generated. Th ..."
Cited by 65 (13 self)
Add to MetaCart
A general formulation of inductive and recursive definitions in Martin-Lof's type theory is presented. It extends Backhouse's `Do-It-Yourself Type Theory' to include inductive definitions of families
of sets and definitions of functions by recursion on the way elements of such sets are generated. The formulation is in natural deduction and is intended to be a natural generalization to type theory
of Martin-Lof's theory of iterated inductive definitions in predicate logic. Formal criteria are given for correct formation and introduction rules of a new set former capturing definition by
strictly positive, iterated, generalized induction. Moreover, there is an inversion principle for deriving elimination and equality rules from the formation and introduction rules. Finally, there is
an alternative schematic presentation of definition by recursion. The resulting theory is a flexible and powerful language for programming and constructive mathematics. We hint at the wealth of
possible applic...
- In STOP Summer School on Constructive Algorithmics, Abeland , 1989
"... Two formalisms that have been used extensively in the last few years for the calculation of programs are the Eindhoven quantifier notation and the formalism developed by Bird and Meertens.
Although the former has always been applied with ultimate goal the derivation of imperative programs and th ..."
Cited by 32 (3 self)
Add to MetaCart
Two formalisms that have been used extensively in the last few years for the calculation of programs are the Eindhoven quantifier notation and the formalism developed by Bird and Meertens. Although
the former has always been applied with ultimate goal the derivation of imperative programs and the latter with ultimate goal the derivation of functional programs there is a remarkable similarity in
the formal games that are played. This paper explores the Bird-Meertens formalism by expressing and deriving within it the basic rules applicable in the Eindhoven quantifier notation. 1 Calculation
was an endless delight to Moorish scholars. They loved problems, they enjoyed finding ingenious methods to solve them, and sometimes they turned their methods into mechanical devices. (J. Bronowski,
The Ascent of Man. Book Club Associates: London (1977).) 1 Introduction Our ability to calculate --- whether it be sums, products, differentials, integrals, or whatever --- would be woefull...
- Informal Proc. of Workshop on Generic Programming, WGP’98, Marstrand , 1998
"... We first present a finite axiomatization of strictly positive inductive types in the simply typed lambda calculus. Then we show how this axiomatization can be modified to encompass simultaneous
inductive-recursive definitions in intuitionistic type theory. A version of this has been implemented in t ..."
Cited by 7 (4 self)
Add to MetaCart
We first present a finite axiomatization of strictly positive inductive types in the simply typed lambda calculus. Then we show how this axiomatization can be modified to encompass simultaneous
inductive-recursive definitions in intuitionistic type theory. A version of this has been implemented in the Half system which is based on Martin-Lf's logical framework. 1 Introduction The present
note summarizes a presentation to be given at the Workshop on Generic Programming, Marstrand, Sweden, June 18th, 1998. We use Martin-Lof's logical framework as a metalanguage for axiomatizing
inductive definitions in the simply typed lambda calculus. We also show how to generalize this axiomatization to the case of inductive-recursive definitions in the lambda calculus with dependent
types. The reader is referred to the full paper [7] for a more complete account focussing on induction-recursion. Related papers discussing inductive definitions in intuitionistic type theory include
Backhouse [1, 2], Co...
- Fundam. Inf , 1993
"... We give an interpretation of Martin-Lof's type theory (with universes) extended with generalized inductive types. The model is an extension of the recursive model given by Beeson. By restricting
our attention to PER model, we show that the strictness of positivity condition in the definition of gene ..."
Cited by 2 (1 self)
Add to MetaCart
We give an interpretation of Martin-Lof's type theory (with universes) extended with generalized inductive types. The model is an extension of the recursive model given by Beeson. By restricting our
attention to PER model, we show that the strictness of positivity condition in the definition of generalized inductive types can be dropped. It therefore gives an interpretation of general inductive
types in Martin-Lof's type theory. Copyright c fl1993. All rights reserved. Reproduction of all or part of this work is permitted for educational or research purposes on condition that (1) this
copyright notice is included, (2) proper attribution to the author or authors is made and (3) no commercial gain is involved. Technical Reports issued by the Department of Computer Science,
Manchester University, are available by anonymous ftp from m1.cs.man.ac.uk (130.88.13.4) in the directory /pub/TR. The files are stored as PostScript, in compressed form, with the report number as
filename. Alternative... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2583121","timestamp":"2014-04-19T15:02:34Z","content_type":null,"content_length":"25069","record_id":"<urn:uuid:aa4b0168-9093-4383-80f4-3451e0667252>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
Classification of triangles
The set of 'triangles' can be subdivided into different types of triangles: equilateral, isosceles, right-angled and scalene. However, note that an equilateral triangle can be considered to be a
special case of an isosceles triangle while a right-angled triangle can be either isosceles or scalene. It is important that pupils have their attention drawn to different ways of classifying the
same shape. The diagram below summarises the classification of triangles:
Such classifications are quite hard for pupils to understand because any given shape may be classified in different ways. | {"url":"http://ictedusrv.cumbria.ac.uk/maths/SecMaths/U5/U5/page_39.htm","timestamp":"2014-04-16T21:51:58Z","content_type":null,"content_length":"8078","record_id":"<urn:uuid:b721ef90-2452-4745-8436-ce042a6e4097>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic Terms In Statistics
Just before I start with detailing some basic terms in statistics, I would emphasise that I am not a statistician, however, anticipate that I have presented enough details below to clarify particular
terms. This article details a couple of simple statistics and attempts to expand these with some comments on just how they may be used in practice. The use of statistics in the management of risk is
enormous and it is not actually the intention below to produce authentic examples in this field but to make certain that you get some notion of their essential meaning.
Mean (expected value or average):
We could take any kind of activity and measure values that illustrate that event. Let us say that we measure 12 values, they could be:
Values: 3, 5, 5, 6, 7, 9, 10, 11, 12, 12, 15, 16
The total of these values is: 111
The number of values is: 12
Hence, plainly the mean will be 111 divided by 12 = 9.25
(3 + 5 + 5 + 6 + 7 + 9 + 10 + 11 + 12 + 12 + 15 + 16) / 12 = 111/12 = 9.25
The value of 9.25 is the 'expected value' which you may anticipate through determining a one off value for the activity, but of course, in practice there is a range of values. The 'expected values'
are accumulative.
Supposing that we had 4 individual events and we took similar measurements for each one we might finish up having 'expected values' for each activity of:
5.3, 8.4, 6.3 and 11.3
If we next took into account the result of determining the outcome of the 4 activities we might just predict an overall value of:
(5.3 + 8.4 + 6.3 + 11.3) = 31.3
Once again, the true value will be in a range.
We referred above to an activity where we attained the following values:
Values: 3, 5, 5, 6, 7, 9, 10, 11, 12, 12, 15, 16
If we were to anticipate an activity with these values we would be assuming that they are each equally likely for the calculation of the 'expected value' or the mean. Simply put if there is little
possibility of the initial 6 values being measured, then just the last 6 matter, so the 'expected value' will be:
(10 + 11 + 12 + 12 + 15 + 16) / 6 = 76/6 = 12.7
Thus, in practice not only is there a distribution of values, their likelihood of occurring may equally be different.
We may have a basic look at a task in a project. We may want to know how long it could be postponed before it begins. A project manager may ask the expert for his opinion and he might advocate 16
weeks. The project manager can take advantage of this information concerning his planning. However, this estimation is going to be based upon certain assumptions which the project manager ought to
If the expert is 100 % certain that there will be 16 working weeks delay that's good, but it is not usually the situation. What we do realize is that there will be a setback. The probability of this
coming about is 1, that is,, it will occur. We can, at present, take into account other potential scenarios.
If we suppose that we possess a range of hold ups (in weeks) each with a particular likelihood. We could get:
6.........................0.3......................6 x 0.3 = 1.8
16.......................0.5.....................16 x 0.5 = 8.0
20.......................0.2.....................20 x 0.2 = 4.0
The 'contribution' represents a 'weighted' value. Note that the sum of each one of the chances adds up to 1, which must be the probability of the hold up actually occurring.
The expected value this time becomes:
(1.8 + 8.0 + 4.0) = 13.8 weeks
This is the more probable value that the project manager could make use of in his plans rather than 16 working weeks.
If we considered a selection of 4 comparable activities we may finish up with an overall hold up of 13.8 x 4 = 55.2 working weeks (if executed in series and not in parallel). If we had taken a single
estimate of 16 working weeks the potential complete hold up to the project would have come to 64 weeks (approximately 16 % longer).
When evaluating a single activity the effect is not much of a problem, but when evaluating several events the differences can certainly accumulate.
The value of 16 working weeks hold up becomes the most likely (highest chance) and is 2.2 weeks more than the 13.8 working weeks above.
It is due to the fact that the spread of the values is 'skewed' a little. Had there been a balanced distribution the 'expected value' would have worked out as being the same as the initial
estimation, that is, 16 weeks.
12......................0.25......................12 x 0.25 = 3.0
16......................0.5........................16 x 0.5 = 8.0
20......................0.25.......................20 x 0.2 = 5.0
Here the contributions are:
(3.0 + 8.0 + 5.0) = 16 weeks
Ideally, the above has given a small awareness into one of the basic terms in statistics, the 'mean' of a group of values'.
There will be a few more in the next article. | {"url":"http://www.sooperarticles.com/business-articles/project-management-articles/basic-terms-statistics-916674.html","timestamp":"2014-04-16T16:31:16Z","content_type":null,"content_length":"41518","record_id":"<urn:uuid:b6df0e97-01d6-41eb-9c0c-363cbe4d855e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
Michael Atiyah
AKA Michael Francis Atiyah
Born: 22-Apr-1929
Birthplace: London, England
Gender: Male
Race or Ethnicity: Multiracial
Sexual orientation: Straight
Occupation: Mathematician
Nationality: England
Executive summary: Atiyah-Singer index theorem
Military service: Royal Army (1947-49)
British geometer Michael Atiyah is known for his work in advanced algebra and topology, including the theory of complex manifolds, superstring theory in mathematical physics, and Yang-Mills equations
and gauge theory. In collaboration with American mathematician Isadore Singer, he developed the Atiyah-Singer index theorem, which concerns the existence and singularity of solutions to linear
partial differential equations of elliptic type.
Father: Edward Atiyah
Mother: Jean Atiyah
Wife: Lily Brown Atiyah (m. 1955)
High School: Victoria College, Cairo, Egypt (attended)
High School: Victoria College, Alexandria, Egypt (attended)
High School: Manchester Grammar School, Manchester, England
University: BA, Cambridge University
University: MA, Cambridge University
University: PhD, Cambridge University (1955)
Teacher: Geometry, Cambridge University (1956-61)
Professor: Geometry, Oxford University (1961-69, 1972-)
Administrator: Master of Trinity College, Cambridge University (1990-2005)
Administrator: Director, Isaac Newton Institute for Mathematical Sciences,, Cambridge University (1990-96)
Administrator: Chancellor, University of Leicester (1995-2005)
Abel Prize 2004 (with Isadore Singer)
Accademia dei Lincei
Order of Merit 1992
Copley Medal 1988
King Faisal International Prize for Science 1987
Knight of the British Empire 1983
Feltrinelli Prize 1981
De Morgan Medal 1980
Royal Medal 1968
Fields Medal 1966
Berwick Prize 1961
London Mathematical Society President (1974-76)
Royal Society President (1990-95)
Royal Society of Edinburgh President (2005-)
Institute for Advanced Study (1955-56 and 1969-72)
Lebanese Ancestry Paternal
Scottish Ancestry Maternal
Author of books:
K-Theory (1967, with I.G. Macdonald)
Introduction to Commutative Algebra (1969)
Elliptic Operators and Compact Groups (1974)
Geometry of Yang-Mills Fields (1979, with Nigel Hitchin)
The Geometry and Dynamics of Magnetic Monopoles (1988)
Collected Works of Michael Francis Atiyah (1988, five volumes)
The Geometry and Physics of Knots (1990)
New! Create a map starting with Michael Atiyah
Requires Flash 7+ and Javascript.
Do you know something we don't?
Submit a correction or make a comment about this profile
Copyright ©2014 Soylent Communications | {"url":"http://www.nndb.com/people/095/000179555/","timestamp":"2014-04-17T01:03:16Z","content_type":null,"content_length":"11082","record_id":"<urn:uuid:28e8d2e6-acbc-4ad5-bbda-377f1fd861ca>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Material Results
Search Materials
Return to What's new in MERLOT
Get more information on the MERLOT Editors' Choice Award in a new window.
Get more information on the MERLOT Classics Award in a new window.
Get more information on the JOLT Award in a new window.
Go to Search Page
View material results for all categories
Click here to go to your profile
Click to expand login or register menu
Select to go to your workspace
Click here to go to your Dashboard Report
Click here to go to your Content Builder
Click here to log out
Search Terms
Enter username
Enter password
Please give at least one keyword of at least three characters for the search to work with. The more keywords you give, the better the search will work for you.
select OK to launch help window
cancel help
You are now going to MERLOT Help. It will open a new window.
This pod was created for Saylor.org course K12MATH: Advanced Statistics
Material Type:
The Saylor Foundation
Date Added:
Jan 31, 2014
Date Modified:
Mar 14, 2014
This pod was created for Saylor.org course K12MATH: Advanced Statistics
Material Type:
The Saylor Foundation
Date Added:
Jan 31, 2014
Date Modified:
Mar 14, 2014
This pod was created for Saylor.org course K12MATH: Advanced Statistics
Material Type:
The Saylor Foundation
Date Added:
Jan 31, 2014
Date Modified:
Mar 14, 2014
This pod was created for Saylor.org course K12MATH: Advanced Statistics
Material Type:
The Saylor Foundation
Date Added:
Jan 31, 2014
Date Modified:
Mar 14, 2014
This pod was created for Saylor.org course K12MATH: Advanced Statistics
Material Type:
The Saylor Foundation
Date Added:
Jan 31, 2014
Date Modified:
Mar 14, 2014
This pod was created for Saylor.org course K12MATH: Advanced Statistics
Material Type:
The Saylor Foundation
Date Added:
Jan 31, 2014
Date Modified:
Mar 14, 2014
This pod was created for Saylor.org course K12MATH: Advanced Statistics
Material Type:
The Saylor Foundation
Date Added:
Jan 31, 2014
Date Modified:
Mar 15, 2014
This pod was created for Saylor.org course K12MATH: Advanced Statistics
Material Type:
The Saylor Foundation
Date Added:
Jan 31, 2014
Date Modified:
Mar 15, 2014
This pod was created for Saylor.org course K12MATH: Advanced Statistics
Material Type:
The Saylor Foundation
Date Added:
Jan 31, 2014
Date Modified:
Mar 14, 2014
Learn to convert and operate on time units such as seconds, minutes, hours, days, weeks, months and years. Automatically...
see more
Material Type:
Qedoc Educational Team
Date Added:
Feb 07, 2008
Date Modified:
Feb 07, 2008 | {"url":"http://www.merlot.org/merlot/materials.htm?page=6&materialType=Quiz/Test&category=2513&sort.property=overallRating&nosearchlanguage=&pageSize=","timestamp":"2014-04-20T02:47:00Z","content_type":null,"content_length":"169176","record_id":"<urn:uuid:65491f40-56e4-4c7f-9e5e-31f4db284d7f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kendall Park Precalculus Tutor
...I had 2 semesters of linear algebra as an undergrad when I earned my BA in Mathematics. Then, I used it extensively when I earned my MS in Statistics. I studied probability extensively during
my work towards my BA in Mathematics and my MS in Statistics.
15 Subjects: including precalculus, calculus, algebra 1, geometry
...The writing section includes sentence correction, sentence error identification and essay corrections. Many students have developed bad habits in regard to grammar and the lessons are
exceptionally important to developing some new habits. Taking the PSAT is sometimes a frightening time for a student.
43 Subjects: including precalculus, English, writing, reading
...In math I never solve the problem, Instead, I give a tip and let her think about the pb. If unsuccessful, I give her an other tip, and an other until the problem is solved. When I teach
languages, I speak always in the target language.
14 Subjects: including precalculus, Spanish, physics, French
...My background also includes the titles of Math Assessment Specialist for Educational Testing Service and Math Editor for a major test publishing firm. I strive to be attentive to students'
needs and goals.Currently teaching and have taught Precalculus. Worked for a major publishing firm and wrote Precalculus questions.
21 Subjects: including precalculus, calculus, statistics, geometry
...In addition, I have many years of experience of tutoring for the SAT. I received a score of 2360 (800 on Verbal, 800 on Writing and 760 on Math) on the SAT and have recent experience tutoring
for both the SAT and ACT. I led an SAT tutoring program (SAT for Temple) at the Sri Guruvayurappan Temp...
28 Subjects: including precalculus, Spanish, reading, English | {"url":"http://www.purplemath.com/Kendall_Park_Precalculus_tutors.php","timestamp":"2014-04-18T18:48:06Z","content_type":null,"content_length":"24298","record_id":"<urn:uuid:f53f8dac-2b3b-4e89-9002-03f193b47f5f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
NEW Propel Zero Giveaway (10 Winners), also Learn How to Win A Year Supply!
Do you love Propel, but not the calories? Propel has just introduced Propel Zero! There are seven delicious flavors to try - Berry, Grape, Black Cherry, Lemon, Peach, Kiwi-Strawberry and Blueberry
Pomegranate. I'm lovin' Blueberry Pomegranate! You can try it too courtesy of Propel and Walmart.
From March 19 – April 24, three million samples of new Propel Zero will be given away at Walmart stores across the country. Locate a participating Walmart in your area by going HERE and entering your
zip code.
Propel Zero is an enhanced water beverage designed to help hydrate and nourish your body’s momentum and keep you moving forward. It like a treat for your body, replenishing C and E vitamins, the
energy of B vitamins, and the protection of antioxidants. Me, like-y!
Want to win a Year Supply of Propel Zero? Yes.... me too!
Here's how to enter:
• Find Walmart sampling locations and dates HERE
• Snap a photo of yourself with your favorite Propel Zero flavor at a sampling event (or wherever you happen to be)
• "Like" Propel Zero on Facebook
• Upload your photo to the Propel Zero Facebook wall and answer the question: "I stay active by... " (running, taking the stairs instead of the elevator, taking my dog for a walk, etc.)
• Leave a comment on the Propel Zero Facebook wall and the them ace and friends sent you.
• Check back - one lucky winner will be announced on April 29 (winner will be selected randomly from their Facebook page)
Learn More
Win It
Ten {10} LUCKY *ace and friends* readers will each win
a Propel Zero Gift Kit. Each kit includes two 20oz bottles of Propel Zero, a Propel Zero t-shirt and 3 coupons for a free single-serve Propel Zero in a drawstring backpack!
Mandatory Entry: Tell me which Propel Zero FLAVOR you would like to try or the one that is your favorite. (make sure to include your email).
Bonus Entries: After fulfilling the Mandatory Entry, feel free to earn yourself some Bonus Entries. Please leave a separate comment for each one along with your email.
Earns 10 Entries: Enter the sweepstakes (and leave me the link) Earns 2 Entries: Follow ace and friends via Google Friends Connect. Earns 2 Entries: Follow ace and friends on Facebook.
Earns 2 Entries: Follow ace and friends on Twitter. Earns 2 Entries: Blog about this giveaway .
Earns 1 Entries: Follow ace and friends on Networked Blog.
Earns 1 Entry: Place ace and friends Button on your blog. (leave link)
Earns 1 Entry: Subscribe to our blog by email (top left)
Earns 1 Entry Daily: You can copy and paste the following Tweet at Twitter once per day. (leave status link)
Copy and Paste The Following Tweet:
#Win a {a (@nowpropelled) Propel Zero Gift Kit} @aceandfriendsco Blog! http://bit.ly/e8Pp1F ends 4.12 #giveaway #moms #health
Entry Rules: The giveaway is open to US residents only. Entries are valid through Tuesday, April 12th at 11:59 PM EST.
The winners will have 48 hours to respond via email before a new winner is selected.
[Product review & giveaway disclosure: I received one or more of the products mentioned above for free from the manufacturer of a PR Agency. Regardless, I only recommend products or services I use
personally and believe will be good for my readers. I am disclosing this in accordance with the Federal Trade Commissions 16 CFR, Part 255 "Guides Concerning the Use of Endorsements and Testimonials
in Advertising." See ace and friends Disclosure Policy HERE.]
Pin It
170 comments:
I'd like to try the lemon flavor.
mami2jcn at gmail dot com
I follow you in GFC.
mami2jcn at gmail dot com
I follow you in GFC.
mami2jcn at gmail dot com
I follow you on Facebook with my username Mary Happymommy.
mami2jcn at gmail dot com
I follow you on Facebook with my username Mary Happymommy.
mami2jcn at gmail dot com
I follow you on Twitter (@mami2jcn).
mami2jcn at gmail dot com
I follow you on Twitter (@mami2jcn).
mami2jcn at gmail dot com
I follow you on networked blogs (Mary Happymommy).
mami2jcn at gmail dot com
I subscribe to your blog by email.
mami2jcn at gmail dot com
I'd try strawberry kiwi.
hewella1 at gmail dot com
I'm a gfc follower.
hewella1 at gmail dot com
strawberry kiwi mrs.mommyyatgmail
I would love to try the Grape.
I follow on gfc #1.
I follow on gfc #2.
I follow on Networked Blog.
Strawberry kiwi
I'd like to try grape.
kport207 at gmail dot com
GFC Follower (kport207) #1
kport207 at gmail dot com
GFC Follower (kport207) #2
kport207 at gmail dot com
blueberry pomegranate crystletellerday@yahoo.com
I would love to try Black Cherry.
judywhatilivefor at gmail dot com
I follow you via GFC.
judywhatilivefor at gmail dot com
I follow you on twitter @judywhatilive4
judywhatilivefor at gmail dot com
thanks for entering me!
Janna Johnson
i would like to try the Peach flavor :)
marci h
tristatecruisers at yahoo dot com
follow via GFC :)
marci h
tristatecruisers at yahoo dot com
a fan on FB :)
marci h
tristatecruisers at yahoo dot com
subscribe via e-mail :)
marci h
tristatecruisers at yahoo dot com
follow via GFC entry 2
marci h
tristatecruisers at yahoo dot com
a fan on FB entry 2
marci h
tristatecruisers at yahoo dot com
follow via Networked Blogs :)
marci h
tristatecruisers at yahoo dot com
my favortie flavor is grape
I would love to try strawberry kiwi!
Jhbalvin at gmail dot com
I'd like to try black cherry.
lazybones344 at gmail dot com
GFC follower
lazybones344 at gmail dot com
GFC follower 2
lazybones344 at gmail dot com
I follow you on twitter #1
I follow you on twitter #2
I follow on networked blogs
lazybones344 at gmail dot com
I'd like to try the lemon flavor
danielleaknapp at gmail dot com
I'd like to try the Kiwi-Strawberry flavor. LKVOYER at aol dot com
Following on GFC #1. LKVOYER at aol dot com
Following on GFC #2. LKVOYER at aol dot com
Like Ace & Friends on FB #1 (leann brandner voyer) LKVOYER at aol dot com
Like Ace & Friends on FB #2 (leann brandner voyer) LKVOYER at aol dot com
Following Ace & Friends on Twitter @dancersmom69 #1. LKVOYER at aol dot com
Following Ace & Friends on Twitter @dancersmom69 #2. LKVOYER at aol dot com
Tweeted: http://twitter.com/#!/dancersmom69/status/54357327754690560 LKVOYER at aol dot com
I'd like to try the grape.
Thanks for the giveaway!
eswright18 at gmail dot com
I follow via GFC (ellie)
eswright18 at gmail dot com
I follow via GFC (ellie)#2
eswright18 at gmail dot com
I follow you on FB (Ellie W)
eswright18 at gmail dot com
I follow you on FB (Ellie W)#2
eswright18 at gmail dot com
I follow you on Twitter @eswright18
eswright18 at gmail dot com
I follow you on Twitter @eswright18
eswright18 at gmail dot com
I follow on Network Blogs
eswright18 at gmail dot com
Daily Tweet @eswright18
eswright18 at gmail dot com
I would love to try the peach.
hebert024 at aoldot com
Daily Tweet @eswright18
eswright18 at gmail dot com
i would like to try the berry
madamerkf at aol dot com
sps1113 at yahoo dot com
tweet sps1113 at yahoo dot com
@sueparks2003 Gladys Parker
#Win a {a (@nowpropelled) Propel Zero Gift Kit} @aceandfriendsco Blog! http://bit.ly/e8Pp1F ends 4.12 #giveaway #moms #health
6 seconds ago Favorite Reply Delete
sps1113 at yahoo dot com
fb/u gladys p
sps1113 at yahoo dot com
tw/u @sueparks2003
sps1113 at yahoo dot com
sps1113 at yahoo dot com
button on blog
sps1113 at yahoo dot com
sps1113 at yahoo dot com
fb/u gladys p
sps1113 at yahoo dot com
tw/u @sueparks2003
sps1113 at yahoo dot com
Daily Tweet @eswright18
eswright18 at gmail dot com
black cherry
Daily Tweet @eswright18
eswright18 at gmail dot com
I love Blueberry Pomegranate!
I follow on GFC
I follow on GFC
I follow you on FB
facebook.com/zcscooby Zabrina C.
I follow you on FB
facebook.com/zcscooby Zabrina C.
I follow you on Twitter
I follow you on Twitter
I follow on Networked Blogs
I tweeted!
I would love to try the Kiwi Strawberry
nightowlmamablogs at gmail dot com
following via GFC
like you on facebook
tricia fand rey
following you on twitter @nightowlmama
nightowlmamablogs at gmail dot com
follown on networked blogs
tricia fand rey
I would like to try kiwi strawberry!
strawberry kiwi
I'd like to try the Lemon.
I follow on GFC (1)
I follow on GFC (2)
The Berry flavor is my favorite Propel.
dchrisg3 @ gmail . com
I follow you on Google Friend Connect/Blogger. (Debbie C)
dchrisg3 @ gmail . com
My favorite flavor is Strawberry Kiwi.
I follow you on GFC. #1
I follow you on GFC. #2
Daily Tweet @eswright18
eswright18 at gmail dot com
i'd like the lemon
email sub djackson1958 at hotmail dot com
fblikeu debbie jackson
twu as jacksondeb
my fav flavor is grape (chucosbabygirl(at)yahoo(dot)com)
GFC follower (B.J.) (chucosbabygirl(at)yahoo(dot)com)
GFC follower (B.J.) entry 2 (chucosbabygirl(at)yahoo(dot)com)
follow u on FB (bjnchuco) (chucosbabygirl(at)yahoo(dot)com)
follow u on FB (bjnchuco) (chucosbabygirl(at)yahoo(dot)com)
follow u on twitter @chucosgirl (chucosbabygirl(at)yahoo(dot)com)
follow u on twitter @chucosgirl (chucosbabygirl(at)yahoo(dot)com)
networked blog follower (chucosbabygirl(at)yahoo(dot)com)
email subscriber (chucosbabygirl(at)yahoo(dot)com)
I want to try the Black Cherry.
sam_oh52 at hotmail dot com
GFC follower as Sarah
Daily Tweet @eswright18
eswright18 at gmail dot com
Peach & Kiwi-strawberry sound tasty.
I would like to try Black Cherry
s8r8l33 at yahoo dot com
GFC @s8r8l33
GFC @s8r8l33 #2
I Follow ace and friends on Facebook @SaraLee E
I Follow ace and friends on Facebook @SaraLee E #2
I Follow ace and friends on Twitter @s8r8l33
I Follow ace and friends on Twitter @s8r8l33 #2
I love the Lemon Flavor!
I'd like to try peach.
Email subscriber
GFC follower 1
GFC follower 2
Like you on FB (Annette E) #1
Like you on FB (Annette E) #2
I want to try the Kiwi-Strawberry, thanks! ard1977@gmail dot com
GFC follower amied027 #1. ard1977@gmail dot com
GFC follower amied027 #2. ard1977@gmail dot com
like you on FB under Rene Denning #1. ard1977@gmail dot com
like you on FB under Rene Denning #2. ard1977@gmail dot com
follow you on twitter @amied027 #1. ard1977@gmail dot com
follow you on twitter @amied027 #2. ard1977@gmail dot com
tweeted- http://twitter.com/#!/amied027/status/57668106713169920
ard1977@gmail dot com
I like Black Cherry
i really want to try the grape
susansmoaks at gmail dot com
black cherry please.
BLACK CHERRY
fb fan of yours 1
fb fan of yours 2
I would like to try the Black Cherry flavor
willdebbie97 at yahoo dot com
Follow ace and friends via Google Friends Connect
willdebbie97 at yahoo dot com
Follow ace and friends via Google Friends Connect
willdebbie97 at yahoo dot com
Follow ace and friends on Facebook
(christal fuller couturier)
willdebbie97 at yahoo dot com
Follow ace and friends on Facebook
(christal fuller couturier)
willdebbie97 at yahoo dot com
Follow ace and friends on Networked Blog
willdebbie97 at yahoo dot com
email subscriber
willdebbie97 at yahoo dot com
I would like to try the grape flavor...please, please,please-
Thank you
diane Baum
Kiwi-Strawberry would be my flavor of choice!
the lemon sounds good!
i want to try the grape
i follow via GFC Entry 1
i follow via GFC Entry 2
i like you on facebook entry 1
i like you on facebook entry 2
i follow via twitter
i follow via twitter entry 2
i'm an email subscriber
i tweeted
Daily Tweet @eswright18
eswright18 at gmail dot com
I want to try the Blueberry Pomegranate!
atreau at gmail dotcom. | {"url":"http://www.aceandfriendsco.com/2011/03/new-propel-zero-giveaway-10-winners.html","timestamp":"2014-04-20T03:43:50Z","content_type":null,"content_length":"341379","record_id":"<urn:uuid:a7ec1d83-6b21-4926-bf54-7ea250b73bb4>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent application title: IMAGE SENSING APPARATUS AND METHOD OF CONTROLLING THE APPARATUS
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
This invention makes it possible to provide a technique for suppressing a decrease in resolution of a sensed image even when an image sensor on which solid-state image sensing elements with different
sensitivities are arranged is used. A demosaic unit obtains a color component of a given pixel, sensed at the first sensitivity, by performing interpolation calculation using color components of
pixels each of which is adjacent to the given pixel and is sensed at the first sensitivity. The demosaic unit also obtains a color component of a given pixel, sensed at the second sensitivity, by
performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the second sensitivity.
An image sensing apparatus including an image sensor on which solid-state image sensing elements which sense color components at a first sensitivity and solid-state image sensing elements which sense
color components at a second sensitivity higher than the first sensitivity are alternately, two-dimensionally arranged, comprising: a first calculation unit that obtains a color component of a given
pixel, sensed at the first sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the first sensitivity,
in a color image based on an image signal output from the image sensor; a second calculation unit that obtains a color component of a given pixel, sensed at the second sensitivity, by performing
interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the second sensitivity, in the color image output from the image sensor; and a
unit that outputs the color image in which color components of all pixels are determined by said first calculation unit and said second calculation unit.
The image sensing apparatus according to claim 1, wherein the solid-state image sensing elements which sense the color components at the first sensitivity include solid-state image sensing elements
DR which sense R components at a first brightness, solid-state image sensing elements DG which sense G components at the first brightness, and solid-state image sensing elements DB which sense B
components at the first brightness, the solid-state image sensing elements which sense the color components at the second sensitivity include solid-state image sensing elements LR which sense R
components at a second brightness higher than the first brightness, solid-state image sensing elements LG which sense G components at the second brightness, and solid-state image sensing elements LB
which sense B components at the second brightness, and on the image sensor, a column of solid-state image sensing elements formed from the solid-state image sensing elements DG and the solid-state
image sensing elements LG is arranged for every other column, and a ratio between the numbers of solid-state image sensing elements DR, solid-state image sensing elements DG, and solid-state image
sensing elements DB is 1:2:1, and a ratio between the numbers of solid-state image sensing elements LR, solid-state image sensing elements LG, and solid-state image sensing elements LB is 1:2:
1. 3.
The image sensing apparatus according to claim 1, wherein said first calculation unit determines a pixel value, in the color image, of a pixel sensed by the solid-state image sensing element DG as a
G component of this pixel, that has the first brightness, and said second calculation unit determines a pixel value, in the color image, of a pixel sensed by the solid-state image sensing element LG
as a G component of this pixel, that has the second brightness.
The image sensing apparatus according to claim 1, wherein letting (i,j) be a pixel position of a pixel sensed by the solid-state image sensing element DG, said first calculation unit obtains a G
component, that has the first brightness, of a pixel Q at a pixel position (i+1,j+1) by performing interpolation calculation using pixel values of pixels adjacent to the pixel Q in the color image.
The image sensing apparatus according to claim 1, wherein letting (i,j) be a pixel position of a pixel sensed by the solid-state image sensing element LG, said second calculation unit obtains a G
component, that has the second brightness, of a pixel Q at a pixel position (i+1,j+1) by performing interpolation calculation using pixel values of pixels adjacent to the pixel Q in the color image.
The image sensing apparatus according to claim 1, wherein when a pixel position of a pixel sensed by the solid-state image sensing element DG is one of (i-1,j) and (i,j-1), said first calculation
unit obtains a G component, that has the first brightness, of a pixel Q at a pixel position (i,j) by performing interpolation calculation using pixel values of two pixels adjacent to the pixel Q in
the color image.
The image sensing apparatus according to claim 1, wherein when a pixel position of a pixel sensed by the solid-state image sensing element LG is one of (i-1,j) and (i,j-1), said second calculation
unit obtains a G component, that has the second brightness, of a pixel Q at a pixel position (i,j) by performing interpolation calculation using pixel values of two pixels adjacent to the pixel Q in
the color image.
The image sensing apparatus according to claim 1, wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element
DG, by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element
DG, and obtains a B component, having the first brightness, of the pixel, sensed by the solid-state image sensing element DG, by performing interpolation calculation using a B component and a G
component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DG, and said second calculation unit obtains an R component, having the
second brightness, of the pixel, sensed by the solid-state image sensing element DG, by performing interpolation calculation using an R component and a G component, both having the second brightness,
of a pixel adjacent to the pixel sensed by the solid-state image sensing element DG, and obtains a B component, having the second brightness, of the pixel, sensed by the solid-state image sensing
element DG, by performing interpolation calculation using a B component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing
element DG.
The image sensing apparatus according to claim 1, wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element
LR, by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element
LR, and obtains a B component, having the first brightness, of the pixel, sensed by the solid-state image sensing element LR, by performing interpolation calculation using a B component and a G
component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LR, and said second calculation unit determines an R component, having
the second brightness, of the pixel sensed by the solid-state image sensing element LR as a pixel value, in the color image, of the pixel sensed by the solid-state image sensing element DG, and
obtains a B component, having the second brightness, of the pixel, sensed by the solid-state image sensing element LR, by performing interpolation calculation using a B component and a G component,
both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LR.
The image sensing apparatus according to claim 1, wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element
LB, by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element
LB, and obtains a B component, having the first brightness, of the pixel, sensed by the solid-state image sensing element LB, by performing interpolation calculation using a B component and a G
component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LB, and said second calculation unit obtains an R component, having the
second brightness, of the pixel, sensed by the solid-state image sensing element LB, by performing interpolation calculation using an R component and a G component, both having the second brightness,
of a pixel adjacent to the pixel sensed by the solid-state image sensing element LB, and determines a B component, having the second brightness, of the pixel sensed by the solid-state image sensing
element LB as a pixel value, in the color image, of the pixel sensed by the solid-state image sensing element LB.
The image sensing device according to claim 1, wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element LG,
by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LG,
and obtains a B component, having the first brightness, of the pixel, sensed by the solid-state image sensing element LG, by performing interpolation calculation using a B component and a G
component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element LG, and said second calculation unit obtains an R component, having the
second brightness, of the pixel, sensed by the solid-state image sensing element LG, by performing interpolation calculation using an R component and a G component, both having the second brightness,
of a pixel adjacent to the pixel sensed by the solid-state image sensing element LG, and obtains a B component, having the second brightness, of the pixel, sensed by the solid-state image sensing
element LG, by performing interpolation calculation using a B component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing
element LG.
The image sensing device according to claim 1, wherein said first calculation unit determines an R component, having the first brightness, of a pixel sensed by the solid-state image sensing element
DR as a pixel value, in the color image, of the pixel sensed by the solid-state image sensing element DR, and obtains a B component, having the first brightness, of the pixel, sensed by the
solid-state image sensing element DR, by performing interpolation calculation using a B component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the
solid-state image sensing element DR, and said second calculation unit obtains an R component, having the second brightness, of the pixel, sensed by the solid-state image sensing element DR, by
performing interpolation calculation using an R component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DR,
and determines a B component, having the second brightness, of the pixel sensed by the solid-state image sensing element DR as a pixel value, in the color image, of the pixel sensed by the
solid-state image sensing element DR.
The image sensing device according to claim 1, wherein said first calculation unit obtains an R component, having the first brightness, of a pixel, sensed by the solid-state image sensing element DB,
by performing interpolation calculation using an R component and a G component, both having the first brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DB,
and determines a B component, having the first brightness, of the pixel sensed by the solid-state image sensing element DB as a pixel value, in the color image, of the pixel sensed by the solid-state
image sensing element DB, and said second calculation unit obtains an R component, having the second brightness, of the pixel, sensed by the solid-state image sensing element DB, by performing
interpolation calculation using an R component and a G component, both having the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DB, and obtains a
B component, having the second brightness, of the pixel, sensed by the solid-state image sensing element DB, by performing interpolation calculation using a B component and a G component, both having
the second brightness, of a pixel adjacent to the pixel sensed by the solid-state image sensing element DB.
The image sensing apparatus according to claim 1, further comprising: a unit that determines, for each pixel in the color image based on the image signal output from the image sensor, whether a pixel
value is saturated, wherein said first calculation unit and said second calculation unit obtain color components of a pixel for which it is determined that the pixel value is unsaturated.
A method of controlling an image sensing apparatus including an image sensor on which solid-state image sensing elements which sense color components at a first sensitivity and solid-state image
sensing elements which sense color components at a second sensitivity higher than the first sensitivity are alternately, two-dimensionally arranged, comprising: a first calculation step of obtaining
a color component of a given pixel, sensed at the first sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is
sensed at the first sensitivity, in a color image based on an image signal output from the image sensor; a second calculation step of obtaining a color component of a given pixel, sensed at the
second sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the second sensitivity, in the color image
output from the image sensor; and a step of outputting the color image in which color components of all pixels are determined in the first calculation step and the second calculation step.
TECHNICAL FIELD [0001]
The present invention relates to a single-plate HDR image sensing technique.
BACKGROUND ART [0002]
The dynamic range can be widened by imparting different sensitivities to adjacent pixels and synthesizing a signal of a high-sensitivity pixel and a signal of a low-sensitivity pixel. For example,
PTL1 discloses a color filter array in which all colors: light R, G, B, and W and dark r, g, b, and w are arranged on all rows and columns. Also, PTL2 discloses a sensor on which RGB rows and W rows
are alternately arranged.
However, in the technique disclosed in PTL1, all pixels are provided at the same ratio, so the sampling interval of G (Green), for example, is every two pixels. Thus, only a resolution becomes half
that of a normal Bayer array. Also, when G of a high-sensitivity pixel is saturated, the sampling interval of G becomes every four pixels. Thus, a resolution becomes one quarter of that of the Bayer
In the technique disclosed in PTL2, only luminance information is used for a low-sensitivity pixel, so color information cannot be held for a high-luminance portion. Also, the resolution in the
vertical direction halves.
CITATION LIST Patent Literature [0005]
PTL1: Japanese Patent Laid-Open No. 2006-253876
PTL2: Japanese Patent Laid-Open No. 2007-258686
SUMMARY OF INVENTION Technical Problem [0007]
As described above, when pixels with different sensitivities are arranged on the same sensor, the resolution inevitably decreases. Also, when a high-sensitivity pixel is saturated, the resolution
further decreases.
The present invention has been made in consideration of the above-mentioned problem, and has as its object to provide a technique for suppressing a decrease in resolution of a sensed image even when
an image sensor on which solid-state image sensing elements with different sensitivities are arranged is used.
Solution to Problem [0009]
In order to achieve the object of the present invention, an image sensing apparatus according to the present invention has, for example, the following arrangement. That is, there is provided an image
sensing apparatus including an image sensor on which solid-state image sensing elements which sense color components at a first sensitivity and solid-state image sensing elements which sense color
components at a second sensitivity higher than the first sensitivity are alternately, two-dimensionally arranged, comprising a first calculation unit that obtains a color component of a given pixel,
sensed at the first sensitivity, by performing interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the first sensitivity, in a
color image based on an image signal output from the image sensor, a second calculation unit that obtains a color component of a given pixel, sensed at the second sensitivity, by performing
interpolation calculation using color components of pixels each of which is adjacent to the given pixel and is sensed at the second sensitivity, in the color image output from the image sensor, and a
unit that outputs the color image in which color components of all pixels are determined by the first calculation unit and the second calculation unit.
Advantageous Effects of Invention [0010]
With the arrangement according to the present invention, it is possible to suppress a decrease in resolution of a sensed image even when an image sensor on which solid-state image sensing elements
with different sensitivities are arranged is used.
Other features and advantages of the present invention will be apparent from the following descriptions taken in conjunction with the accompanying drawings. Noted that the same reference characters
denote the same or similar parts throughout the figures thereof.
BRIEF DESCRIPTION OF DRAWINGS [0012]
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the
principles of the invention.
FIG. 1 is a block diagram illustrating an example of the functional configuration of an image sensing apparatus according to the first embodiment;
FIG. 2 is a view illustrating an example of the arrangement of solid-state image sensing elements on an image sensor 103;
FIG. 3A is a flowchart of processing for obtaining the DG value of each pixel;
FIG. 3B is a flowchart of the processing for obtaining the DG value of each pixel;
FIG. 4A is a flowchart of processing for obtaining the DR and RB values of each pixel;
FIG. 4B is a flowchart of the processing for obtaining the DR and RB values of each pixel;
[0019] FIG. 5
is a block diagram illustrating an example of the configuration of a demosaic unit 109 according to the second embodiment;
FIG. 6A is a flowchart of processing for obtaining the DG value of each pixel;
FIG. 6B is a flowchart of the processing for obtaining the DG value of each pixel;
FIG. 7 is a flowchart of processing performed by a first interpolation unit 502;
FIG. 8A is a flowchart of processing for obtaining the DG value of each pixel;
FIG. 8B is a flowchart of the processing for obtaining the DG value of each pixel;
FIG. 9 is a view illustrating an example of the arrangement of solid-state image sensing elements on an image sensor 103;
FIG. 10A is a flowchart of processing for obtaining the DR and DB values of each pixel which constitutes a color image; and
[0027] FIG. 10B
is a flowchart of the processing for obtaining the DR and DB values of each pixel which constitutes a color image.
DESCRIPTION OF EMBODIMENTS [0028]
Preferred embodiments of the present invention will be described below with reference to the accompanying drawings. Note that the embodiments to be described hereinafter merely exemplify a case in
which the present invention is actually practiced, and are practical embodiments of the arrangement defined in claims.
First Embodiment [0029]
An example of the functional configuration of an image sensing apparatus according to this embodiment will be described first with reference to a block diagram shown in FIG. 1. Because light in the
external world in which an object 90 is present enters an image sensor 103 via an optical system 101, the image sensor 103 accumulates charges corresponding to the light incident on it. The image
sensor 103 outputs an analog image signal corresponding to the accumulated charges to an A/D conversion unit 104 in a subsequent stage.
The A/D conversion unit 104 converts the analog image signal input from the image sensor 103 into a digital image signal, and outputs the converted digital image signal to a signal processing unit
105 and a media I/F 107 in subsequent stages.
The signal processing unit 105 performs various types of image processing (to be described later) for a color image represented by the digital image signal input from the A/D conversion unit 104. The
signal processing unit 105 outputs the color image having undergone the various types of image processing to a display unit 106 and the media I/F 107 in subsequent stages.
The image sensor 103 will be described next. As shown in FIG. 2, the image sensor 103 includes solid-state image sensing elements DR, DG, and DB which are two-dimensionally arranged on it. The
solid-state image sensing elements DR are used to sense R components at a first sensitivity. The solid-state image sensing elements DG are used to sense G components at the first sensitivity. The
solid-state image sensing elements DB are used to sense B components at the first sensitivity. Note that "DR" indicates dark red (red with a first brightness), "DG" indicates dark green (green with
the first brightness), and "DB" indicates dark blue (blue with the first brightness). The image sensor 103 also includes solid-state image sensing elements LR, LG, and LB which are two-dimensionally
arranged on it. The solid-state image sensing elements LR are used to sense R components at a second sensitivity higher than the first sensitivity. The solid-state image sensing elements LG are used
to sense G components at the second sensitivity. The solid-state image sensing elements LB are used to sense B components at the second sensitivity. Note that "LR" indicates light red (red with a
second brightness higher than the first brightness), "LG" indicates light green (green with the second brightness), and "LB" indicates light blue (blue with the second brightness).
The layout pattern of these solid-state image sensing elements arranged on the image sensor 103 will be described in more detail herein. As shown in FIG. 2, two-dimensional arrays each including 4×4
solid-state image sensing elements formed from the solid-state image sensing elements DR, DG, DB, LR, LG, and LB are repeatedly arranged on the image sensor 103 without overlapping. In one
two-dimensional array of solid-state image sensing elements, the ratio between the numbers of solid-state image sensing elements DR, DG, and DB is 1:2:1. Again in this array, the ratio between the
numbers of solid-state image sensing elements LR, LG, and LB is 1:2:1. Moreover, a column of solid-state image sensing elements (or a row of solid-state image sensing elements) formed from the
solid-state image sensing elements DG and LG is arranged on the image sensor 103 for every other column (row). The solid-state image sensing elements LR and LB can be interchanged with each other,
and the solid-state image sensing elements DR and DB can similarly be interchanged with each other.
In this manner, the image sensing apparatus according to this embodiment includes an image sensor on which solid-state image sensing elements for sensing color components at a first sensitivity and
solid-state image sensing elements for sensing color components at a second sensitivity higher than the first sensitivity are alternately, two-dimensionally arranged.
The signal processing unit 105 will be described next. A camera control unit 108 performs AE/AF/AWB control. A demosaic unit 109 generates an HDR image by performing interpolation processing for
pixels sensed at the first sensitivity and interpolation processing for pixels sensed at the second sensitivity, in a color image represented by the digital image signal input from the A/D conversion
unit 104.
A color processing unit 111 performs various types of color processing such as color balance processing, γ processing, sharpness processing, and noise reduction processing for the HDR image. The
color processing unit 111 outputs the HDR image having undergone the various types of color processing to the display unit 106 and media I/F 107 in subsequent stages.
Processing performed by the demosaic unit 109 to obtain the DG value of each pixel in the color image will be described next with reference to FIGS. 3A and 3B showing flowcharts of this processing.
Note that processing performed by the demosaic unit 109 to obtain the LG value of each pixel which constitutes the color image is processing in which "DG" is substituted by "LG" in the flowcharts
shown in FIGS. 3A and 3B.
In step S301, the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the
variables i and j indicating the pixel position in the above-mentioned color image to zero. Note that the variable i indicates the x-coordinate value in the color image, and the variable j indicates
the y-coordinate value in the color image. Note also that the position of the upper left corner in the color image is defined as an origin (i,j)=(0,0). The setting of a coordinate system defined in
the color image is not limited to this, as a matter of course.
In step S302, the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is
placed at each position on the image sensor 103, as shown in FIG. 2.
In step S303, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element DG. This amounts to determining whether the pixel at the pixel position (i,j) in the color image is sensed by the solid-state image
sensing element DG. This determination can be done in accordance with whether the solid-state image sensing element at the position (i,j) on the image sensor 103 is the solid-state image sensing
element DG, upon defining the position of the upper left corner on the image sensor 103 as an origin.
If it is determined in step S303 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DG, the process advances to step S309;
otherwise, the process advances to step S304.
In step S304, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i-1,j-1) in the color image, is the solid-state image sensing element DG. This determination is done in the same way as in step S303.
If it is determined in step S304 that the solid-state image sensing element corresponding to the pixel position (i-1,j-1) is the solid-state image sensing element DG, the process advances to step
S305; otherwise, the process advances to step S310.
In step S305, the demosaic unit 109 calculates equations presented in mathematical 1. This yields a variation evaluation value deff1 for five pixels juxtaposed from the upper left to the lower right
with the pixel at the pixel position (i,j) as the center, and a variation evaluation value deff2 for five pixels juxtaposed from the upper right to the lower left with the pixel at the pixel position
(i,j) as the center. Note that P(i,j) indicates the pixel value at the pixel position (i,j) in the color image.
2=|2×P(i,j)-P(i-2,j+2)-P(i+2,j-2)|+|P(i-1,j+1)-P(i+1,j-1)| [Mathematical 1]
In step S306, the demosaic unit 109 compares the variation evaluation values deff1 and deff2. If the comparison result shows deff1<deff2, the process advances to step S307; or if this comparison
result shows deff1≧deff2, the process advances to step S308.
In step S307, the demosaic unit 109 performs interpolation calculation using the pixel values of pixels adjacent to the pixel at the pixel position (i,j) to obtain DG(i,j) indicating the DG value of
the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 2.
(i,j)=|(2×P(i,j)-P(i-2,j-2)-P(i+2,j+2))/4|+(P(i-1,j-1)+P(i+1,j+1- ))/2 [Mathematical 2]
This interpolation processing means one-dimensional lowpass filter processing, and the coefficient value for each pixel value in the equation presented in mathematical 2 corresponds to a filter
coefficient. On the other hand, in step S308, the demosaic unit 109 performs interpolation calculation using the pixel values of pixels adjacent to the pixel at the pixel position (i,j) to obtain DG
(i,j) indicating the DG value of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 3.
(i,j)=|(2×P(i,j)-P(i-2,j+2)-P(i+2,j-2))/4|+(P(i-1,j+1)+P(i+1,j-1- ))/2 [Mathematical 3]
In step S309, the demosaic unit 109 substitutes the pixel value P(i,j) for DG(i,j). In step S310, the demosaic unit 109 determines whether the value of the variable i is equal to "pel" (the total
number of pixels in the x direction in the color image)-1. If it is determined that i=pel-1, the process advances to step S311; or if it is determined that i≠pel-1, the value of the variable i is
incremented by one and the processes in step S303 and subsequent steps are repeated.
In step S311, the demosaic unit 109 determines whether the value of the variable j is larger than "line" (the total number of pixels in the y direction in the color image)-1. If it is determined in
step S311 that j>line-1, the process advances to step S312; or if it is determined in step S311 that j≦line-1, the value of the variable i is initialized to zero, the value of the variable j is
incremented by one, and the processes in step S303 and subsequent steps are repeated.
In step S312, the demosaic unit 109 initializes both the variables i and j to zero. In step S313, the demosaic unit 109 determines using the map information whether the solid-state image sensing
element at the position on the image sensor 103, which corresponds to the pixel position (i-1,j) or (i,j-1) in the color image, is the solid-state image sensing element DG. This determination is done
in the same way as in step S303.
If it is determined in step S313 that the solid-state image sensing element corresponding to the pixel position (i-1,j) or (i,j-1) is the solid-state image sensing element DG, the process advances to
step S314; otherwise, the process advances to step S318.
In step S314, the demosaic unit 109 calculates equations presented in mathematical 4 to obtain a variation evaluation value deff3 for pixels which are adjacent to a pixel Q at the pixel position
(i,j) vertically (in the y direction), and a variation evaluation value deff4 for pixels which are adjacent to the pixel Q horizontally (in the x direction).
4=|P(i-1,j)-P(i+1,j)| [Mathematical 4]
In step S315, the demosaic unit 109 compares the variation evaluation values deff3 and deff4. If the comparison result shows deff3<deff4, the process advances to step S316; or if this comparison
result shows deff3≧deff4, the process advances to step S317.
In step S316, the demosaic unit 109 performs interpolation calculation using the pixel values of pixels adjacent to the pixel at the pixel position (i,j) to obtain DG(i,j) indicating the DG value of
the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 5.
(i,j)=(P(i,j-1)+P(i,j+1))/2 [Mathematical 5]
On the other hand, in step S317, the demosaic unit 109 performs interpolation calculation using the pixel values of pixels adjacent to the pixel at the pixel position (i,j) to obtain DG(i,j)
indicating the DG value of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 6.
(i,j)=(P(i-1,j)+P(i+1,j))/2 [Mathematical 6]
In step S318, the demosaic unit 109 determines whether the value of the variable i is equal to pel-1. If it is determined in step S318 that i=pel-1, the process advances to step S319; or if it is
determined in step S318 that i≠pel-1, the value of the variable i is incremented by one and the processes in step S313 and subsequent steps are repeated.
In step S319, the demosaic unit 109 determines whether the value of the variable j is larger than line-1. If it is determined in step S319 that j>line-1, the process ends and a shift to processing
according to flowcharts shown in FIGS. 4A and 4B is made. On the other hand, if it is determined in step S319 that j≦line-1, the value of the variable i is initialized to zero, the value of the
variable j is incremented by one, and the processes in step S313 and subsequent steps are repeated.
Processing performed by the demosaic unit 109 to obtain the DR and RB values of each pixel which constitutes the color image will be described next with reference to FIGS. 4A and 4B showing
flowcharts of this processing. Note that processing performed by the demosaic unit 109 to obtain the LR and LB values of each pixel which constitutes the color image is processing in which "DR" is
substituted by "LR" and "DB" is substituted by "LB" in the flowcharts shown in FIGS. 4A and 4B. Note also that the processing according to the flowcharts shown in FIGS. 4A and 4B follows the
processing (the processing for DG and LG) according to the flowcharts shown in FIGS. 3A and 3B.
First, in step S401, the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the
variables i and j indicating the pixel position in the above-mentioned color image to zero.
In step S402, the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is
placed at each position on the image sensor 103, as shown in FIG. 2.
In step S403, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element DG. This determination is done in the same way as in step S303.
If it is determined in step S403 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DG, the process advances to step S404;
otherwise, the process advances to step S407.
In step S404, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i-1,j) in the color image, is the solid-state image sensing element LB. This determination is done in the same way as in step S303.
If it is determined in step S404 that the solid-state image sensing element corresponding to the pixel position (i-1,j) is the solid-state image sensing element LB, the process advances to step S405;
otherwise, the process advances to step S406.
In step S405, the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 7. This yields DR(i,j) indicating the DR value of the pixel at the pixel position
(i,j), DB(i,j) indicating the DB value of this pixel, LR(i,j) indicating the LR value of this pixel, and LB(i,j) indicating the LB value of this pixel.
(i,j)=(LR(i+1,j)-LG(i+1,j))/2+(LR(i-1,j+2)-LG(i-1,j+2)+LR(i-1,j-2)-LG(- i-1,j-2))/4
(i,j)=(LB(i-1,j)-LG(i-1,j))/2+(LB(i+1,j+2)-LG(i+1,j+2)+LB(i+1,j-2)-LG(- i+1,j-2))/4 [Mathematical 7]
In step S406, the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 8. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel
position (i,j).
(i,j)=(LR(i-1,j)-LG(i-1,j))/2+(LR(i+1,j+2)-LG(i+1,j+2)+LR(i+1,j-2)-LG(- i+1,j-2))/4
(i,j)=(LB(i+1,j)-LG(i+1,j))/2+(LB(i-1,j+2)-LG(i-1,j+2)+LB(i-1,j-2)-LG(- i-1,j-2))/4 [Mathematical 8]
In step S407, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element LR. This determination is done in the same way as in step S303.
If it is determined in step S407 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LR, the process advances to step S408;
otherwise, the process advances to step S409.
In step S408, the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 9. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel
position (i,j).
(i,j)=(DR(i,j+1)-DG(i,j+1))/2+(DR(i-2,j-1)-DG(i-2,j-1)+DR(i+2,j-1)-DG(- i+2,j-1))/4
(i,j)=(DB(i,j-1)-DG(i,j-1))/2+(DB(i-2,j+1)-DG(i-2,j+1)+DB(i+2,j+1)-DG(- i+2,j+1))/4
(i,j)=(LB(i-2,j)-LG(i-2,j))/4+(LB(i+2,j)-LG(i+2,j))/4+(LB(i,j-2)-LR(i,- j-2))/4+(LB(i,j+2)-LG(i,j+2))/4 [Mathematical 9]
In step S409, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element LB. This determination is done in the same way as in step S303.
If it is determined in step S409 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LB, the process advances to step S410;
otherwise, the process advances to step S411.
In step S410, the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 10. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel
position (i,j).
(i,j)=(DR(i,j-1)-DG(i,j-1))/2+(DR(i-2,j+1)-DG(i-2,j+1)+DR(i+2,j+1)-DG(- i+2,j+1))/4
(i,j)=(DB(i,j+1)-DG(i,j+1))/2+(DB(i-2,j-1)-DG(i-2,j-1)+DB(i+2,j-1)-DG(- i+2,j-1))/4
(i,j)=(LR(i-2,j)-LG(i-2,j))/4+(LR(i+2,j)-LG(i+2,j))/4+(LR(i,j-2)-LR(i,- j-2))/4+(LR(i,j+2)-LG(i,j+2))/4
(i,j)=LB(i,j) [Mathematical 10]
In step S411, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element LG. This determination is done in the same way as in step S303.
If it is determined in step S411 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LG, the process advances to step S412;
otherwise, the process advances to step S415.
In step S412, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i-1,j) in the color image, is the solid-state image sensing element DR. This determination is done in the same way as in step S303.
If it is determined in step S412 that the solid-state image sensing element corresponding to the pixel position (i-1,j) is the solid-state image sensing element DR, the process advances to step S413;
otherwise, the process advances to step S414.
In step S413, the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 11. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel
position (i,j).
(i,j)=(DR(i-1,j)-DG(i-1,j))/2+(DR(i+1,j+2)-DG(i+1,j+2)+DR(i+1,j-2)-DG(- i+1,j-2))+4
(i,j)=(DB(i+1,j)-DG(i+1,j))/2+(DB(i-1,j+2)-DG(i-1,j+2)+DB(i-1,j-2)-DG(- i-1,j-2))/4
(i,j)=(LB(i-1,j-1)-LG(i-1,j-1)+LB(i+1,j+1)-LG(i+1,j+1))=2 [Mathematical 11]
On the other hand, in step S414, the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 12. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel
at the pixel position (i,j).
(i,j)=(DR(i+1,j)-DG(i+1,j))/2+(DR(i-1,j+2)-DG(i-1,j+2)+DR(i-1,j-2)-DG(- i-1,j-2))/4
(i,j)=(DB(i-1,j)-DG(i-1,j))/2+(DB(i+1,j+2)-DG(i+1,j+2)+DB(i+1,j-2)-DG(- i+1,j-2))/4
(i,j)=(LB(i-1,j+1)-LG(i-1,j+1)+LB(i-1,j+1)-LG(i-1,j+1))/2 [Mathematical 12]
In step S415, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element DR. This determination is done in the same way as in step S303.
If it is determined in step S415 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DR, the process advances to step S416;
otherwise, that is, if it is determined in step S415 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DB, the process
advances to step S417.
In step S416, the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 13. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel at the pixel
position (i,j).
(i,j)=(DB(i-2,j)-DG(i-2,j))/4+(DB(i+2,j)-DG(i+2,j))/4+(DB(i,j-2)-DR(i,- j-2))/4+(DB(i,j+2)-DG(i,j+2))/4
(i,j)=(LR(i,j-1)-LG(i,j-1))/2+(LR(i-2,j+1)-LG(i-2,j+1)+LR(i+2,j+1)-LG(- i+2,j+1))/4
(i,j)=(LB(i,j+1)-LG(i,j+1))/2+(LB(i-2,j-1)-LG(i-2,j-1)+LB(i+2,j-1)-LG(- i+2,j-1))/4 [Mathematical 13]
On the other hand, in step S417, the demosaic unit 109 performs interpolation calculation using equations presented in mathematical 14. This yields DR(i,j), DB(i,j), LR(i,j), and LB(i,j) of the pixel
at the pixel position (i,j).
(i,j)=(DR(i-2,j)-DG(i-2,j))/4+(DR(i+2,j)-DG(i+2,j))/4+(DR(i,j-2)-DG(i,- j-2))/4+(DR(i,j+2)-DG(i,j+2))/4
(i,j)=(LR(i,j+1)-LG(i,j+1))/2+(LR(i-2,j-1)-LG(i-2,j-1)+LR(i+2,j-1)-LG(- i+2,j-1))/4
(i,j)=(LB(i,j-1)-LG(i,j-1))/2+(LB(i-2,j+1)-LG(i-2,j+1)+LB(i+2,j+1)-LG(- i+2,j+1))/4 [Mathematical 14]
In step S418, the demosaic unit 109 determines whether the value of the variable i is equal to pel-1. If it is determined in step S418 that i=pel-1, the process advances to step S419; or if it is
determined in step S418 that i≠pel-1, the value of the variable i is incremented by one and the processes in step S403 and subsequent steps are repeated.
In step S419, the demosaic unit 109 determines whether the value of the variable j is larger than line-1. If it is determined in step S419 that j>line-1, the process ends. On the other hand, if it is
determined in step S419 that j≦line-1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S403 and subsequent steps are
In this manner, the color components (DR, DG, DB, LR, LG, and LB) for each pixel which constitutes the color image are determined by performing processing according to the flowcharts shown in FIGS.
3A, 3B, 4A, and 4B described above. The feature of this embodiment lies in that this determination processing is realized by the following calculation processing. That is, a color component of a
given pixel sensed at the first sensitivity is obtained by interpolation calculation (first calculation) using a color component of a pixel which is adjacent to the given pixel and is sensed at the
first sensitivity. A color component of a given pixel sensed at the second sensitivity is obtained by interpolation calculation (second calculation) using a color component of a pixel which is
adjacent to the given pixel and is sensed at the second sensitivity.
In this manner, according to this embodiment, even when pixels with different sensitivities are arranged on the same sensor, the resolution of a portion incapable of being sampled can be partially
restored by performing interpolation using the correlation between a color filter with a highest resolution and multiple colors. Also, because demosaicing is performed based only on a pixel with one
sensitivity, a stable resolution can be obtained regardless of saturation or unsaturation.
Second Embodiment [0087]
This embodiment is different from the first embodiment only in the configuration and operation of the demosaic unit 109. A demosaic unit 109 according to this embodiment has a configuration as shown
FIG. 5
. A saturation determination unit 501 determines, for each pixel which constitutes a color image, whether the pixel value is saturated. A first interpolation unit 502 processes a pixel with a
saturated pixel value, and a second interpolation unit 503 processes a pixel with an unsaturated pixel value.
Since the operation of the second interpolation unit 503 is the same as that of the demosaic unit 109, having been described in the first embodiment, only the operation of the first interpolation
unit 502 will be mentioned below, and that of the second interpolation unit 503 will not be described. Also, only differences from the first embodiment will be mentioned below, and the second
embodiment is the same as the first embodiment except for points to be described hereinafter.
Processing with which the demosaic unit 109 according to this embodiment obtains the DG value of each pixel which constitutes a color image will be described with reference to FIGS. 6A and 6B showing
flowcharts of this processing. Note that the following description assumes that a pixel value P(i,j) is stored in advance for DG(i,j).
In step S601, the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the
variables i and j indicating the pixel position in the above-mentioned color image to zero.
In step S602, the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is
placed at each position on an image sensor 103, as shown in FIG. 2.
In step S603, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element LG. This determination is done in the same way as in step S303.
If it is determined in step S603 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LG, the process advances to step S604;
otherwise, the process advances to step S607.
In step S604, the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i,j) in the color image is saturated. If it is determined in step S604 that this pixel value
is saturated, the process advances to step S605; otherwise, the process advances to step S606. Determination as to whether the pixel value is saturated is done in the following way. That is, if the
pixel value is equal to or larger than a predetermined value, it is determined that this pixel value is saturated; or if the pixel value is smaller than the predetermined value, it is determined that
this pixel value is unsaturated. Although this "predetermined value" is not limited to a specific value, the following description assumes that the maximum value of a sensor analog value is used for
the sake of convenience.
In step S605, the demosaic unit 109 operates the first interpolation unit 502, so the first interpolation unit 502 performs interpolation processing (to be described later) to obtain the DG value=DG
(i,j) of the pixel at the pixel position (i,j). Processing performed by the first interpolation unit 502 in this step will be described in detail later.
In step S606, the demosaic unit 109 operates the second interpolation unit 503, so the second interpolation unit 503 performs the same operation as that of the demosaic unit 109, having been
described in the first embodiment, to obtain the DG value=DG(i,j) of the pixel at the pixel position (i,j).
In step S607, the demosaic unit 109 determines whether the value of the variable i is equal to pel-1. If it is determined in step S607 that i=pel-1, the process advances to step S608; or if it is
determined in step S607 that i≠pel-1, the value of the variable i is incremented by one and the processes in step S603 and subsequent steps are repeated.
In step S608, the demosaic unit 109 determines whether the value of the variable j is larger than line-1. If it is determined in step S608 that j>line-1, the process advances to step S609. On the
other hand, if it is determined in step S608 that j≦line-1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S603 and
subsequent steps are repeated.
In step S609, the demosaic unit 109 initializes both the variables i and j to zero. In step S610, the demosaic unit 109 determines using the map information whether the solid-state image sensing
element at the position on the image sensor 103, which corresponds to the pixel position (i,j) in the color image, is the solid-state image sensing element DR or DB. This determination is done in the
same way as in step S303 mentioned above.
If it is determined in step S610 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DR or DB, the process advances to step
S611; otherwise, the process advances to step S614.
In step S611, the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i-1,j) or (i+1,j) in the color image is saturated. If it is determined in step S611 that
this pixel value is saturated, the process advances to step S612; otherwise, the process advances to step S613.
In step S612, the demosaic unit 109 operates the first interpolation unit 502, so the first interpolation unit 502 performs interpolation processing (to be described later) to obtain the DG value=DG
(i,j) of the pixel at the pixel position (i,j). Processing performed by the first interpolation unit 502 in this step will be described in detail later.
In step S613, the demosaic unit 109 operates the second interpolation unit 503, so the second interpolation unit 503 performs the same operation as that of the demosaic unit 109, having been
described in the first embodiment, to obtain the DG value=DG(i,j) of the pixel at the pixel position (i,j).
In step S614, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element LR or LB. This determination is done in the same way as in step S303 mentioned above.
If it is determined in step S614 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LR or LB, the process advances to step
S615; otherwise, the process advances to step S618.
In step S615, the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i-1,j-1), (i-1,j+1), (i+1,j-1), or (i+1,j+1) in the color image is saturated. If it is
determined in step S615 that this pixel value is saturated, the process advances to step S616; otherwise, the process advances to step S617.
In step S616, the demosaic unit 109 operates the first interpolation unit 502, so the first interpolation unit 502 performs interpolation processing (to be described later) to obtain the DG value=DG
(i,j) of the pixel at the pixel position (i,j). Processing performed by the first interpolation unit 502 in this step will be described in detail later.
In step S617, the demosaic unit 109 operates the second interpolation unit 503, so the second interpolation unit 503 performs the same operation as that of the demosaic unit 109, having been
described in the first embodiment, to obtain the DG value=DG(i,j) of the pixel at the pixel position (i,j).
In step S618, the demosaic unit 109 determines whether the value of the variable i is equal to pel-1. If it is determined in step S618 that i=pel-1, the process advances to step S619; or if it is
determined in step S618 that i≠pel-1, the value of the variable i is incremented by one and the processes in step S610 and subsequent steps are repeated.
In step S619, the demosaic unit 109 determines whether the value of the variable j is larger than line-1. If it is determined in step S619 that j>line-1, the process ends. On the other hand, if it is
determined in step S619 that j≦line-1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S610 and subsequent steps are
Processing performed by the first interpolation unit 502 will be described with reference to FIG. 7 showing a flowchart of this processing. In step S703, the first interpolation unit 502 determines
using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position (i,j) in the color image, is the solid-state
image sensing element LG. This determination is done in the same way as in step S303 mentioned above.
If it is determined in step S703 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element LG, the process advances to step S704;
otherwise, the process advances to step S705. In step S704, the first interpolation unit 502 calculates an equation presented in mathematical 15 to determine DG(i,j).
(i,j)=α×LG(i,j) [Mathematical 15]
where α is a constant (0<α≦1) which represents the gain and is set in advance.
In step S
705, the first interpolation unit 502 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel
position (i,j) in the color image, is the solid-state image sensing element DR or DB. This determination is done in the same way as in step S303 mentioned above.
If it is determined in step S705 that the solid-state image sensing element corresponding to the pixel position (i,j) is the solid-state image sensing element DR or DB, the process advances to step
S706; otherwise, the process ends. In step S706, the first interpolation unit 502 calculates equations presented in mathematical 16. This calculation yields an evaluation value deff5-1, variation
evaluation value deff6-1, and evaluation value deff7-1. The evaluation value deff5-1 is an evaluation value for pixels adjacent to the pixel position (i,j) on the upper left and lower right sides.
The variation evaluation value deff6-1 is a variation evaluation value for pixels adjacent to the pixel position (i,j) on the right and left sides. The evaluation value deff7-1 is an evaluation value
for pixels adjacent to the pixel position (i,j) on the lower left and upper right sides. Note that if the calculation result of each evaluation value shows that the pixel value of one of the pair of
upper left and lower right pixels is saturated, an evaluation value deff5-2 is obtained in place of the evaluation value deff5-1. Also, if the pixel value of one of the pair of right and left pixels
is saturated, an evaluation value deff6-2 is obtained in place of the evaluation value deff6-1. Moreover, if the pixel value of one of the pair of lower left and upper right pixels is saturated, an
evaluation value deff7-2 is obtained in place of the evaluation value deff7-1. Note that MAX is the difference between the minimum and maximum pixel values, and is 255 for a pixel value with 8 bits
and 65535 for a pixel value with 16 bits.
7-2=MAX [Mathematical 16]
In the following description, the obtained one of deff5-1 and deff5-2 will be represented as deff5. Similarly, the obtained one of deff6-1 and deff6-2 will be represented as deff6. Again similarly,
the obtained one of deff7-1 and deff7-2 will be represented as deff7. In step S706, the first interpolation unit 502 compares the evaluation values deff5, deff6, and deff7. If the comparison result
shows that both conditions: deff5<deff6 and deff5<deff7 are satisfied, the process advances to step S707; otherwise, the process advances to step S708.
In step S707, the first interpolation unit 502 performs interpolation calculation using the pixel values of pixels adjacent to the pixel position (i,j) on the upper left and lower right sides to
obtain DG(i,j) of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 17.
(i,j)=(P(i-1,j-1)+P(i+1,j+1))+2 [Mathematical 17]
In step S708, the first interpolation unit 502 compares the evaluation values deff6 and deff7. If the comparison result shows deff6<deff7, the process advances to step S709; or if this comparison
result shows deff6≧deff7, the process advances to step S710.
In step S709, the first interpolation unit 502 performs interpolation calculation using the pixel values of pixels adjacent to the pixel position (i,j) on the right and left sides to obtain DG(i,j)
of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 18.
(i,j)=(P(i-1,j)+P(i+1,j))/2 [Mathematical 18]
In step S710, the first interpolation unit 502 performs interpolation calculation using the pixel values of pixels adjacent to the pixel position (i,j) on the lower left and upper right sides to
obtain DG(i,j) of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 19.
(i,j)=(P(i-1,j+1)+P(i+1,j-1))/2 [Mathematical 19]
In step S711, the first interpolation unit 502 performs interpolation calculation using the pixel values of pixels adjacent to the pixel position (i,j) on the right and left sides to obtain DG(i,j)
of the pixel at the pixel position (i,j) in accordance with an equation presented in mathematical 20.
(i,j)=(P(i-1,j)+P(i+1,j))/2 [Mathematical 20]
In this manner, according to this embodiment, even when pixels with different sensitivities are arranged on the same sensor, the resolution of a portion incapable of being sampled can be partially
restored by performing interpolation using the correlation between a color filter with a highest resolution and multiple colors.
Also, because pixel value determination which uses two pixels with different sensitivities is performed for an unsaturated pixel value, it is possible to obtain a higher resolution. Moreover, even if
either pixel value is saturated, it is possible to perform pixel interpolation, which improves the resolution.
Third Embodiment [0123]
Another embodiment of the demosaic unit 109 according to the second embodiment will be described in the third embodiment. Note that in this embodiment, solid-state image sensing elements arranged on
an image sensor 103 preferably have a layout distribution shown in FIG. 9. Also, only differences from the second embodiment will be mentioned below, and the third embodiment is the same as the first
embodiment except for points to be described hereinafter.
Processing with which a demosaic unit 109 according to this embodiment obtains the DG value of each pixel which constitutes a color image will be described with reference to FIGS. 8A and 8B showing
flowcharts of this processing. Note that the following description assumes that a pixel value P(i,j) is stored in advance for DG(i,j).
In step S801, the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the
variables i and j indicating the pixel position in the above-mentioned color image to zero.
In step S802, the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is
placed at each position on the image sensor 103, as shown in FIG. 9.
In step S803, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element DG. This determination is done in the same way as in step S303 mentioned above.
If it is determined in step S803 that the solid-state image sensing element corresponding to the pixel position (i,j) is not the solid-state image sensing element DG, the process advances to step
S804; otherwise, the process advances to step S811.
In step S804, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position
(i,j) in the color image, is the solid-state image sensing element LG. This determination is done in the same way as in step S303 mentioned above.
If it is determined in step S804 that the solid-state image sensing element corresponding to the pixel position (i,j) is not the solid-state image sensing element LG, the process advances to step
S812; otherwise, the process advances to step S805.
In step S805, the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i,j) in the color image is saturated. If it is determined in step S805 that this pixel value
is saturated, the process advances to step S806; otherwise, the process advances to step S810.
In step S806, a first interpolation unit 502 calculates equations presented in mathematical 21. This calculation yields an evaluation value deff8 for pixels adjacent to the pixel position (i,j) on
the upper left and lower right sides, and a variation evaluation value deff9 for pixels adjacent to the pixel position (i,j) on the lower left and upper right sides.
9=|P(i-1,j+1)-P(i+1,j-1)| [Mathematical 21]
In step S807, the evaluation values deff8 and deff9 are compared with each other. If deff8<deff9, the process advances to step S808; or if deff8≧deff9, the process advances to step S809.
In step S808, the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 22. This yields DG(i,j) of the pixel at the pixel position (i,j).
(i,j)=(P(i-1,j-1)+P(i+1,j+1))/2 [Mathematical 22]
In step S809, the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 23. This yields DG(i,j) of the pixel at the pixel position (i,j).
(i,j)=(P(i-1,j+1)+P(i+1,j-1))/2 [Mathematical 23]
In step S810, the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 24. This yields DG(i,j) of the pixel at the pixel position (i,j).
(i,j)=α×P(i,j) [Mathematical 24]
In step S811, the demosaic unit 109 substitutes the pixel value P(i,j) for DG(i,j). In step S812, the demosaic unit 109 determines whether the value of the variable i is equal to pel-1. If it is
determined in step S812 that i=pel-1, the process advances to step S813; or if it is determined in step S812 that i≠pel-1, the value of the variable i is incremented by one and the processes in step
S803 and subsequent steps are repeated.
In step S813, the demosaic unit 109 determines whether the value of the variable j is larger than line-1. If it is determined in step S813 that j>line-1, the process advances to step S814. On the
other hand, if it is determined in step S813 that j≦line-1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S803 and
subsequent steps are repeated.
In step S814, the demosaic unit 109 initializes both the variables i and j to zero. In step S815, the demosaic unit 109 determines whether the pixel position (i,j) corresponds to DG or LG. If it is
determined that the pixel position (i,j) corresponds to DG or LG, the process advances to step S825; otherwise, the process advances to step S816.
In step S816, the demosaic unit 109 determines whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel position (i-2,j), (i+2,j), (i,j),
(i,j-2), or (i,j+2), is saturated. This determination is done in the same way as in step S805. If it is determined in step S816 that this solid-state image sensing element is saturated, the process
advances to step S817; otherwise, the process advances to step S821.
In step S817, the first interpolation unit 502 calculates equations presented in mathematical 25. This calculation yields an evaluation value deff10 for pixels adjacent to the pixel position (i,j) on
the upper and lower sides, and a variation evaluation value deff11 for pixels adjacent to the pixel position (i,j) on the right and left sides.
11=|P(i-1,j)-P(i+1,j)| [Mathematical 25]
In step S818, the demosaic unit 109 compares the evaluation values deff10 and deff11. If the comparison result shows deff10<deff11, the process advances to step S819; or if this comparison result
shows deff10≧deff11, the process advances to step S820.
In step S819, the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 26. This yields DG(i,j) of the pixel at the pixel position (i,j).
(i,j)=(P(i,j-1)+P(i,j+1))/2 [Mathematical 26]
In step S820, the first interpolation unit 502 performs interpolation calculation using an equation presented in mathematical 27. This yields DG(i,j) of the pixel at the pixel position (i,j).
(i,j)=(P(i-1,j)+P(i+1,j))/2 [Mathematical 27]
In step S821, a second interpolation unit 503 calculates equations presented in mathematical 28. This calculation yields an evaluation value deff12 for pixels which are adjacent to the pixel position
(i,j) vertically, and a variation evaluation value deff13 for pixels which are adjacent to the pixel position (i,j) horizontally.
13=|2×(P(i,j)-P(i-2,j)-P(i+2,j))|+|P(i-1,j)-P(i+1,j)| [Mathematical 28]
In step S822, the demosaic unit 109 compares the evaluation values deff12 and deff13. If the comparison result shows deff12<deff13, the process advances to step S823; or if this comparison result
shows deff12≧deff13, the process advances to step S824.
In step S823, the second interpolation unit 503 performs interpolation calculation using an equation presented in mathematical 29. This yields DG(i,j) of the pixel at the pixel position (i,j).
(i,j)=|2×(P(i,j)-P(i,j-2)-P(i,j+2))|/4+(P(i,j-1)+P(i,j+1))/2 [Mathematical 29]
In step S824, the second interpolation unit 503 performs interpolation calculation using an equation presented in mathematical 30. This yields DG(i,j) of the pixel at the pixel position (i,j).
(i,j)=|2×(P(i,j)-P(i-2,j)-P(i+2,j))|/4+(P(i-1,j)+P(i+1,j))=2 [Mathematical 30]
In step S825, the demosaic unit 109 determines whether the value of the variable i is equal to pel-1. If it is determined in step S825 that i=pel-1, the process advances to step S826; or if it is
determined in step S825 that i≠pel-1, the value of the variable i is incremented by one and the processes in step S815 and subsequent steps are repeated.
In step S826, the demosaic unit 109 determines whether the value of the variable j is larger than line-1. If it is determined in step S826 that j>line-1, the process ends. On the other hand, if it is
determined in step S826 that j≦line-1, the value of the variable i is initialized to zero, the value of the variable j is incremented by one, and the processes in step S815 and subsequent steps are
Processing with which the demosaic unit 109 according to this embodiment obtains the DR and DB values of each pixel which constitutes a color image will be described with reference to FIGS. 10A and
10B showing flowcharts of this processing.
In step S1001, the demosaic unit 109 secures a memory area, used to perform processing to be described later, in a memory which is provided in itself or managed by it, and initializes both the
variables i and j indicating the pixel position in the above-mentioned color image to zero.
In step S1002, the demosaic unit 109 reads out, from the above-mentioned memory, map information (filter array) indicating which of solid-state image sensing elements DR, DG, DB, LR, LG, and LB is
placed at each position on the image sensor 103, as shown in FIG. 9.
In step S1003, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel
position (i,j) in the color image, is the solid-state image sensing element DG. This determination is done in the same way as in step S303 mentioned above. If it is determined that the pixel position
(i,j) corresponds to DG, the process advances to step S1004; otherwise, the process advances to step S1007.
In step S1004, the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i-1,j), (i+1,j), (i,j-1), or (i,j+1) in the color image is saturated. If it is determined
in step S1004 that this pixel value is saturated, the process advances to step S1005; otherwise, the process advances to step S1006.
In step S1005, the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 31. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
when P
(i-1,j) corresponds to LB
(i,j)=(DR(i,j-1)-DR(i,j-1))/2+(DR(i-2,j+1)-DG(i-2,j+1)+DR(i+2,j+1)-DG(- i+2,j+1))/4
(i,j)=(DB(i+1,j)-DG(i+1,j))/2+(DB(i-1,j-2)-DG(i-1,j-2)+DB(i-1,j+2)-DG(- i-1,j+2))/4
when P
(i-1,j) corresponds to DB
(i,j)=(DR(i,j+1)-DG(i,j+1))/2+(DR(i-2,j-1)-DG(i-2,j-1)+DR(i+2,j-1)-DG(- i+2,j-1))/4
(i,j)=(DB(i-1,j)-DG(i-1,j))/2+(DB(i+1,j-2)-DG(i+1,j-2)+DB(i+1,j+2)-DG(- i+1,j+2))/4 [Mathematical 31]
In step S1006, the second interpolation unit 503 performs interpolation calculation using equations presented in mathematical 32. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
when P
(i-1,j) corresponds to LB
when P
(i-1,j) corresponds to DB
(i,j)=(DB(i-1,j)-LB(i+1,j))+2 [Mathematical 32]
In step S1007, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel
position (i,j) in the color image, is the solid-state image sensing element LR. This determination is done in the same way as in step S303 mentioned above. If it is determined that the pixel position
(i,j) corresponds to LR, the process advances to step S1008; otherwise, the process advances to step S1011.
In step S1008, the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i,j) in the color image is saturated. If it is determined in step S1008 that this pixel
value is saturated, the process advances to step S1009; otherwise, the process advances to step S1010.
In step S1009, the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 33. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
(i,j)=(DR(i-2,j)-DG(i-2,j))/4+(DR(i+2,j)-DG(i+2,j))/4+(DR(i,j-2)-DR(i,- j-2))/4+(DR(i,j+2)-DG(i,j+2))/4
(i,j)=(DB(i-1,j-1)+DB(i+1,j+1))/2 [Mathematical 33]
In step S1010, the second interpolation unit 503 performs interpolation calculation using equations presented in mathematical 34. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
(i,j)=(DB(i-1,j-1)+DB(i+1,j+1))/2 [Mathematical 34]
In step S1011, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel
position (i,j) in the color image, is the solid-state image sensing element LB. This determination is done in the same way as in step S303 mentioned above. If it is determined that the pixel position
(i,j) corresponds to LB, the process advances to step S1012; otherwise, the process advances to step S1015.
In step S1012, the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i,j) in the color image is saturated. If it is determined in step S1012 that this pixel
value is saturated, the process advances to step S1013; otherwise, the process advances to step S1014.
In step S1013, the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 35. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
(i,j)=(DB(i-2,j)-DG(i-2,j))/4+(DB(i+2,j)-DG(i+2,j))/4+(DB(i,j-2)-DG(i,- j-2))/4+(DB(i,j+2)-DG(i,j+2))/4 [Mathematical 35]
In step S1014, the second interpolation unit 503 performs interpolation calculation using equations presented in mathematical 36. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
(i,j)=P(i,j)×α [Mathematical 36]
In step S1015, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel
position (i,j) in the color image, is the solid-state image sensing element LG. This determination is done in the same way as in step S303 mentioned above. If it is determined that the pixel position
(i,j) corresponds to LG, the process advances to step S1016; otherwise, the process advances to step S1019.
In step S1016, the demosaic unit 109 determines whether the pixel value of the pixel at the pixel position (i-1,j), (i+1,j), (i,j-1), or (i,j+1) in the color image is saturated. If it is determined
in step S1016 that this pixel value is saturated, the process advances to step S1017; otherwise, the process advances to step S1018.
In step S1017, the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 37. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
when P
(i-1,j) corresponds to LR
(i,j)=(DR(i+1,j)-DG(i+1,j))/2+(DR(i-1,j-2)-DG(i-1,j-2)+DR(i-1,j+2)-DG(- i-1,j+2))/4
(i,j)=(DB(i,j-1)-DR(i,j-1))/2+(DB(i-2,j+1)-DG(i-2,j+1)+DB(i+2,j+1)-DG(- i+2,j+1))/4
when P
(i-1,j) corresponds to DR
(i,j)=(DR(i-1,j)-DG(i-1,j))/2+(DR(i+1,j-2)-DG(i+1,j-2)+DR(i+1,j+2)-DG(- i+1,j+2))/4
(i,j)=(DB(i,j+1)-DG(i,j+1))/2+(DB(i-2,j-1)-DG(i-2,j-1)+DB(i+2,j-1)-DG(- i+2,j-1))/4 [Mathematical 37]
In step S1018, the second interpolation unit 503 performs interpolation calculation using equations presented in mathematical 38. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
when P
(i-1,j) corresponds to LR
when P
(i-1,j) corresponds to DR
(i,j)=(LB(i-1,j)-DB(i+1,j))/2 [Mathematical 38]
In step S1019, the demosaic unit 109 determines using the map information whether the solid-state image sensing element at the position on the image sensor 103, which corresponds to the pixel
position (i,j) in the color image, is the solid-state image sensing element DR. This determination is done in the same way as in step S303 mentioned above. If it is determined that the pixel position
(i,j) corresponds to DR, the process advances to step S1020; otherwise, the process advances to step S1021.
In step S1020, the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 39. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
(i,j)=(DB(i-1,j+1)+DB(i-1,j+1))/2 [Mathematical 39]
In step S1021, the first interpolation unit 502 performs interpolation calculation using equations presented in mathematical 40. This yields DR(i,j) and DB(i,j) of the pixel at the pixel position
As has been described above, according to this embodiment, even when pixels with different sensitivities are arranged on the same sensor, the resolution of a portion incapable of being sampled can be
partially restored by performing interpolation using the correlation between a color filter with a highest resolution and multiple colors.
Also, because pixel value determination which uses two pixels with different sensitivities is performed for an unsaturated pixel value, it is possible to obtain a higher resolution. Moreover, even if
either pixel value is saturated, it is possible to perform pixel interpolation, which improves the resolution.
Other Embodiments [0175]
The present invention can also be practiced by executing the following processing. That is, software (program) which implements the functions of the above-described embodiments is supplied to a
system or apparatus via a network or various kinds of storage media, and read out and executed by a computer (or, for example, a CPU or an MPU) of the system or apparatus.
The present invention is not limited to the above-described embodiments, and various changes and modifications can be made within the spirit and scope of the present invention. Therefore, to apprise
the public of the scope of the present invention, the following claims are made.
This application claims the benefit of Japanese Patent Application Nos. 2010-010366, filed Jan. 20, 2010, and 2010-288556, filed Dec. 24, 2010, which are hereby incorporated by reference herein in
their entirety.
Patent applications by Kimitaka Arai, Yokohama-Shi JP
Patent applications by CANON KABUSHIKI KAISHA
Patent applications in class Color balance (e.g., white balance)
Patent applications in all subclasses Color balance (e.g., white balance)
User Contributions:
Comment about this patent or add new information about this topic: | {"url":"http://www.faqs.org/patents/app/20110234844","timestamp":"2014-04-19T21:25:46Z","content_type":null,"content_length":"119316","record_id":"<urn:uuid:0c17d520-e7c3-4e66-8e5f-79c9464ea35b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Phasor Representation for Three-Phase Power Transmission
Three-phase electric power is a method of alternating-current electric power transmission, which is in worldwide use in electric power distribution grids. The system was invented by Nikola Tesla in
The three conducting wires are commonly colored black, red, and blue. The graphic shows the three correspondingly colored phasors representing voltage and current 120º apart, rotating at 50–60 Hz.
Depending on whether the reactance of the load is inductive or capacitive, the voltage leads or lags the current, respectively. (You can refer to the handy mnemonic "ELI the ICEman".) The power
factor is defined as the ratio , where is the active power (measured in watts), which depends on the resistance, and , the apparent power (measured in volt-amperes), which depends on the total
impedance. The phase angle between the voltage and current phasors is then given by . The power factor is equal to 1 for a pure resistance, but decreases while the phase angle increases for larger
Relay Tech, Tacoma Power, and Wolfram Demonstrations Project | {"url":"http://demonstrations.wolfram.com/PhasorRepresentationForThreePhasePowerTransmission/","timestamp":"2014-04-17T12:51:00Z","content_type":null,"content_length":"44588","record_id":"<urn:uuid:0b32ba48-60dc-46b3-b3d9-7c7a748fe1ca>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number of results: 14,607
can u tell me the rating of this chocolate bar? if it was rated 1st, 2nd etc...
Tuesday, February 8, 2011 at 8:19pm by Monica
that doesnt make sense to me? i need evidence that relates to the claim and the mars bar. how do figuring wut chocolate is made from help?
Tuesday, February 8, 2011 at 8:19pm by Monica
Erika eat 1/3 of a chocolate bar and gave the remaining 2/3 of the chocolate bar to her 8 friends to share equally. How much chocolate bar will each friend get?
Monday, January 21, 2013 at 7:31pm by Cherl
The evidence that it's chocolate can be the listing of ingredients on the wrapper plus the origin of the chocolate, if known.
Tuesday, February 8, 2011 at 8:19pm by Ms. Sue
sophie ruth is eating a 50 gram chocolate bar which is labled 30 % cocoa how many grams of chocolate are in the chocolate bar
Tuesday, February 18, 2014 at 12:03am by ashley
1. A chocolate bar is made up of 12 equal pieces.Tom ate 3/4 of a chocolate bar. Sarah ate 2/3 of the same kind of chocolate bar. Tom said he ate more chocolate than Sarah. Is he correct?
Monday, January 30, 2012 at 7:49pm by Jayda
for the first one perhaps say what chocolate is made from....even include from what country the cacao beans come from
Tuesday, February 8, 2011 at 8:19pm by hw
TO MS. SUE Writing (CHOCOLATE BAR)
FOR THE CARAMEL, ARUSHI, TOLD ME THAT CHOCOLATE AND CARAMEL MAKE A GTEAT COMBO. IS THAT OK WITH U?
Tuesday, February 8, 2011 at 8:53pm by Monica
for the caramel all i wrote was that the chocolate and caramel make a great combo is that ok?
Tuesday, February 8, 2011 at 8:19pm by Arushi
our goal is to pick a candy. My candy i picked was Mars Bar. The teacher told us to make 3 claims of our chocolate and then beside those 3 claims we have to write 3 evidences. The 3 evidences are the
things im having trouble on. I have my 3 claims already but i dont have any ...
Tuesday, February 8, 2011 at 8:19pm by Monica
Could someone please check this and let me know if I'm doing it correctly? Sammy and Sally each carry a bag containing banana, chocolate bar, and a licorice stick. Simutaneously, they take out a
single good item and consume it. The possible food items that Sally and Sammy ...
Saturday, October 28, 2006 at 1:09pm by Mary
WRITING(CHOCOLATE BAR) - correction
Tuesday, February 8, 2011 at 8:19pm by Ms. Sue
i think its great!
Tuesday, February 8, 2011 at 8:19pm by Monica
managerial economics
Robert E. Lee Grade School is contemplating a chocolate bar fund raiser. Weekly sales data from Mrs. Grant's fifth grade class indicate that: Q = 4,000 - 1,000P where Q is chocolate bar sales and P
is price. i) How many chocolate bars could be sold at RM2 each? ii) iii) What ...
Wednesday, May 5, 2010 at 1:16am by genise
TO MS. SUE Writing (CHOCOLATE BAR)
sorry i meant great
Tuesday, February 8, 2011 at 8:53pm by Monica
Jack divided his chocolate bar into 6 equal pieces. He gave 2 piece to Tristan and 1 piece to Emilio Which expression can be used to find the fraction of Jack's chocolate bar that he gave away A.1/
2+1/3 B.1/3+1/6 C.1+(1/3+1/6) D.6-(1/3+1/6) D?
Thursday, November 8, 2012 at 5:51pm by Jerald
say that it is made from delicious cocoa powder, sugar and milk
Tuesday, February 8, 2011 at 8:19pm by hw
The cost of a X chocolate bar is $3.50. The cost of a similar Y bar is $3.20. (a) Find the percentage saving in buying a Y bar instead of the X bar. (b) Find the percentage loss in buying a X bar
instead of the Y bar. Please help...........
Tuesday, February 7, 2012 at 11:11am by running.from.myself
wut do i do for the caramel and the tastes soo good mouthwatering crunchy part?
Tuesday, February 8, 2011 at 8:19pm by Monica
The average number of calories in a 1.5 ounce chocolate bar is 225. Suppose that the distribution of calories is approximately normal with standard deviation =10. Find the probability that a randomly
selected chocolate bar will have a) Between 200 and 220 calories b) Less than...
Wednesday, April 10, 2013 at 8:39pm by Kath
TO MS. SUE Writing (CHOCOLATE BAR)
What is the evidence that chocolate and caramel make a great combo? You could say that your taste buds relish this combination. But other than that, there's no real evidence. However, the evidence
for your statement that the candy has a little bit of caramel is found on the ...
Tuesday, February 8, 2011 at 8:53pm by Ms. Sue
chris buys a chocolate bar and a pack of gum for 1.75 if the chocolate bar costs .25 more than the pack of gum how much does the pack of gum costs?
Monday, July 13, 2009 at 4:16pm by christina
but wut about the caramel part, what do i write about that?
Tuesday, February 8, 2011 at 8:19pm by Monica
I no nothing about rating candy bars. Who does the ratings? Why? What about differences in taste?
Tuesday, February 8, 2011 at 8:19pm by Ms. Sue
I'd say 50 g of chocolate are in the chocolate bar. Now, you probably want to know how much cocoa is there: .30*50 = 15g
Tuesday, February 18, 2014 at 12:03am by Steve
1. Oh, it¡¯s on aisle 6, next to the coffee. 2. Oh, it¡¯s in aisle 6, next to the coffee. 3. Where can I find a dark chocolate bar? It's in Aisle 3. 4. Where can I find a dark chocolate bar? It's on
Aisle 3. (Do we have to use 'in' or 'on' before 'Aisle'? Are both OK? Which ...
Wednesday, November 30, 2011 at 6:42pm by rfvv
Math Probability
a bag of hershey miniatures candies contain 18 milk chocolates, 9 Mr.good bars, 9 krackel bars and 8 hershey dark chocolate. if a candy is chosen at random, find the probability that it is A) A
krackel or a Dark chocolate B)Not a milk Chocolate bar?
Monday, April 16, 2012 at 3:51pm by Antonio
If the government imposes a quantity tax on the consumption of a good, it means that the consumer has to pay for each unit of the good its price plus the tax. For example, if the price of a chocolate
bar is $5 and the government imposes a tax of 20 cents on the consumption of ...
Saturday, February 2, 2013 at 5:29pm by raina
i changed my 3rd claim: its now: there are lots of ways to eat it.. and my evidence is you can eat it as a snack or many people eat it as a treat on birthdays im still having trouble for the caramel
Tuesday, February 8, 2011 at 8:19pm by Monica
Sally bought three chocolate bars and a pack of gum and paid $1.75. jake bought two chocolate bars and four packs of gum and paid $2.00. Find the cost of a chocolate bar nad the cost of a pack of
gum. 3C + 1G= 1.75 2C + 4G=2.00 solve. I know that i have to eliminate the c's or...
Sunday, June 10, 2007 at 11:26am by Diana
Discreet MAth
Because the president saw the chocolate and he said chocolate and that is how its chocolate
Wednesday, May 2, 2012 at 12:47am by Jamie
Could someone please help me to asnwer these tw economics questions? The following table shows the marginal benefits (MB) of consuming chocolate bars. Chocolate Bars (unit) 1 2 3 4 5 MB $10 $8 $6 $4
$2 Suppose that the market price of chocolate bars is $7 per unit and you ...
Thursday, February 6, 2014 at 5:03am by Kaunis
6th grade
a chocolate bar is separated into several equal pieces, if one person eats 1/4 of the piecel; and the second person eats 1/2 of the remaining pieces; there are six pieces left over. into how many
pieces was the orinal bar divided?
Monday, August 31, 2009 at 9:49pm by joshua
6th grade math
A chocolate bar is separated into several equal pieces. If one person eats 1/4 of the pieces, and a second person eats 1/2 of the remaining pieces, there are six pieces left over. Into how many
pieces was the original bar divided?
Thursday, August 27, 2009 at 10:19pm by joey
Does anybody know where I can find Willy Wonka Chocolate Bars? I've never seen it in any of the local grocery stores, I've only seen willy wonka nerds and other fruit candies. Never chocolate. I
thought maybe I should look in the mall, But I looked in the mall directory and ...
Sunday, November 22, 2009 at 11:13am by y912f
Think of a chocolate chip in the cookie dough who suddenly finds himself in a freezer... He came from the cacao fields of Mexico, warm climate expecting to become a chocolate chip cookie
inhabitant... NOT in a freezer. That might be fun to write.
Wednesday, November 14, 2007 at 5:41pm by GuruBlue
S = {Chocolate, Vanilla, Mint} 11) ____D_ A) {Vanilla, Mint}, {Chocolate, Mint}, {Chocolate, Vanilla}, {Chocolate}, {Vanilla}, {Mint} B) {Chocolate, Vanilla, Mint}, {Vanilla, Mint}, {Chocolate,
Mint}, {Chocolate, Vanilla}, {Chocolate}, {Vanilla}, {Mint} C) {Chocolate, Vanilla...
Sunday, February 20, 2011 at 9:46pm by cassie
A bar of aluminum (bar A) is in thermal contact with a bar of iron (bar B) of the same length and area. One end of the compound bar is maintained at Th = 75.5°C while the opposite end is at 30.0°C.
Find the temperature at the junction when the energy flow reaches a steady state.
Saturday, November 27, 2010 at 7:50pm by Chelsea
Thank you for using the Jiskha Homework Help Forum. Doing this research, I learned that white chocolate is not really chocolate! But, I prefer it anyway, even though dark chocolate is best for the
immune system. i need a good lead for a paragraph about white chocolate versus ...
Tuesday, May 22, 2007 at 7:44pm by SraJMcGin
Hello everyone, I need help with this question: A bartender slides a glass of beer down a bar with an initial velocity of 2m/s. The coefficient of kinetic friction of the bar is 0.05 and the length
of the bar is 3m and the height of the bar is 1m. The mas 0.5kg. How far from ...
Tuesday, September 4, 2012 at 7:42pm by Leo
college stats
I am having a lot of trouble with these questions can anyone help me please. 2. On September 2, 2004, a Dan Jones poll based on a random sample of 408 Salt Lake County residents reported that 26% of
the respondents thought that Mayor Nancy Workman should resign. Find a 95% ...
Friday, July 12, 2013 at 12:54am by penelope
6th grade
vallie works at a chocolate factory in town. the store has 6 1/4 lbs of chocolate to be sorted in bags. if she puts 1 7/8 lbs of chocolate in each bag how many bags will she be able to fill
Monday, February 15, 2010 at 8:43pm by cortnei
A waffle cone has a width of 7.6 cm and height of 15.2cm and a chocolate coating that is 2mm thick. What is the volume of the chocolate coating? How many of these cones if they were filled up to the
brim would be needed for 400 gallons of ice cream given 2 gallons is 0.01m^3? ...
Monday, February 14, 2011 at 5:42am by CJ
A cup of hot chocolate has temperature 80 C in a room kept at 20 C.? After a half hour the hot chocolate cools to 60 C. a) what is the temperature of the hot chocolate after another half hour? b)
when will the chocolate have cooled to 40 C? I think I know how to do b, I'm ...
Thursday, October 10, 2013 at 12:49pm by Ezra
a uniform bar of mass is supported by support pivoted at top about which bar can swing like simple pendulum. If force F is applied perpendicular to lower end of the bar.How big F must be in order to
held the bar in equilibrium.
Sunday, January 16, 2011 at 2:47am by bharat narke
{Hello everyone i'm back}:) I have a problem..: Kevin has 5 containers of chocolate. The first container has 110 peices of chocolate, the third container has twice as many peices of chocolate as the
second container and the fourth container has half as many peices as the first...
Thursday, December 2, 2010 at 7:21pm by Sherkyra8456
bar subjected to gravity load (two segments) the bar in the fig has constant cross sectional area A .the top half of the bar is made of material with mass density and youngs module, the bottom half
of the bar made of density and young module total length of bar is 2 when bar ...
Wednesday, May 1, 2013 at 4:49am by mehwish
bar subjected to gravity load (two segments) the bar in the fig has constant cross sectional area A .the top half of the bar is made of material with mass density and youngs module, the bottom half
of the bar made of density and young module total length of bar is 2 when bar ...
Wednesday, May 1, 2013 at 4:50am by mehwish
bar subjected to gravity load (two segments) the bar in the fig has constant cross sectional area A .the top half of the bar is made of material with mass density and youngs module, the bottom half
of the bar made of density and young module total length of bar is 2 when bar ...
Wednesday, May 1, 2013 at 4:50am by mehwish
a box of munchkins contains chocolate and glazed donut holes. if Gloria ate 2 chocolate , then 1/11 of the remianing munchkins would be chocolate. if she added 4 glazed to the box, 1/7 of the box
would chocolate. how many munchkins are in the original box?
Wednesday, September 28, 2011 at 2:17pm by j
Does this poem make sense? Chocolate This is my weakness I must confess The smell of chocolate is so sweet It is so good and a wonderful delight My one and only friend late at night Here to satisfy
all my needs and wants Without I don’t know what I would do Chocolate is my one...
Saturday, June 18, 2011 at 1:28pm by Sheila
A 60-cm-long, 500 g bar rotates in a horizontal plane on an axle that passes through the center of the bar. Compressed air is fed in through the axle, passes through a small hole down the length of
the bar, and escapes as air jets from holes at the ends of the bar. The jets ...
Monday, November 30, 2009 at 2:17pm by Zach
The bar goes in for partial pressure of H2 gas so if we have H2 ==> 2H^+ + 2e then E = Eo - (0.059/2)*log [(H^+)^2/pH2] I'm unclear if the bar has become the standard of if your prof is using 1 atm
for the standard. The difference a. if bar is standard, you simply ...
Monday, March 29, 2010 at 9:10pm by DrBob222
A box containing Munchkins contains chocolate and glazed donut holes. If Gloria ate 2 chocolate Munchkins, then 1/11 of the remaining Munchkins would be chocolate. If Gloria added 4 glazed Munchkins
to the box, 1/7 of the Munchkins would be chocolate. How many Munchkins are in...
Monday, May 23, 2011 at 9:24pm by Joe
A box containing Munchkins contains chocolate and glazed donut holes. If Gloria ate 2 chocolate Munchkins, then 1/11 of the remaining Munchkins would be chocolate. If Gloria added 4 glazed Munchkins
to the box, 1/7 of the Munchkins would be chocolate. How many Munchkins are in...
Monday, May 23, 2011 at 10:36pm by Joe
draw a diagram of this statement. three fifths of the baker 60 cookies were chocolate cookies (a) how many of the baker cookies were chocolate? (b) what percent of the bakers cookies were not
chocolate cookies
Monday, January 7, 2013 at 12:44pm by jasmine
yesterday, when i ate chocolate cereal dry, it taste good, but when I ate the chocolate cereal with milk, after me and my sister finished the cereal, our stomach started to hurt. the chocolate cereal
brand is KRAVE. its a new brand.
Monday, April 30, 2012 at 5:55pm by Celest
Monday, December 3, 2012 at 10:56pm by Ms. Sue
Six people shares 3/4kg of chocolate equally. How much chocolate does each person get?
Tuesday, December 3, 2013 at 6:23pm by Danny
Sandra baked 2 dozen cupcakes. She frosted one half with pink icing. One eighth she covered with yellow icing. The rest were chocolate. How many cupcakes were chocolate? How did you get the answer 24
/2=12 pink 24/8=3 yellow 3+12=15 24-15=9 chocolate 9 chocolate
Monday, April 30, 2007 at 7:00pm by Shannon
Im trying to say chocolate covered strawberries, would it be el chocolate fresas?
Wednesday, March 5, 2008 at 7:30pm by Emily
You could start by eating some chocolate. <g> What do you want to learn about chocolate?
Saturday, October 12, 2013 at 9:59am by Ms. Sue
There's a bar with length of 6.49m and mass of 5.45kg leaning against a frictionless wall. It makes an angle of 66.08 deg with the ground. A block with mass of 44.98 kg hangs from bar at distance d
up from point of contact between bar and floor. The frictional force (us=.371) ...
Wednesday, October 24, 2007 at 1:28pm by Alie
True or False Questions. Statistics' students of a class of Lyceum want to calculate the average number of chocolate pieces in a standard package of biscuits SANTAS. They choose a random sample of
biscuits, measure chocolate pieces in each cookie and calculate the 95% ...
Friday, July 26, 2013 at 5:49am by Andrew
True or False. Help please! Statistics' students of a class of Lyceum want to calculate the average number of chocolate pieces in a standard package of biscuits SANTAS. They choose a random sample of
biscuits, measure chocolate pieces in each cookie and calculate the 95% ...
Sunday, July 28, 2013 at 3:12pm by Andrew
It takes Andy and Ben the same amount of time to eat 1 large piece of chocolate. It takes Charlotte 3 more mins than Andy and Ben to eat the same piece of chocolate. One day, Andy, Charlotte and Ben
share 2 pieces of chocolate among them. It takes them 5 mins to eat 2 pieces ...
Sunday, December 2, 2012 at 5:47pm by Tracy
Math Help
It takes Andy and Ben the same amount of time to eat 1 large piece of chocolate. It takes Charlotte 3 more mins than Andy and Ben to eat the same piece of chocolate. One day, Andy, Charlotte and Ben
share 2 pieces of chocolate among them. It takes them 5 mins to eat 2 pieces ...
Sunday, December 2, 2012 at 7:16pm by Tracy
Susan owns a chocolate store. She sells her chocolate by the gram and needs to compare the weights of two chocolate batches. She randomly selects 100 pieces from each batch. The distrubition of both
samples are normal. Batch 1 - (Mean Weight) 56 grams Standard Deviation - 6 ...
Saturday, March 30, 2013 at 11:33am by Caroline
solid mensuration
A cylindrical mug of hot chocolate measures 9.2 cm in diameter and has a height of 12.6 cm. The top 2.5 cm of the mug is filled with whipped cream; the rest is hot chocolate. Rounding to the nearest
mL, how much hot chocolate is in the mug? (1 cubic centimeter = 1 liter)
Saturday, May 11, 2013 at 8:54am by Anonymous
1. When you hold a piece of chocolate in your hand, why does the chocolate melt? 2. Which is a larger unit of heat: calorie, kilocalorie, Btu, or joule?
Thursday, February 7, 2013 at 4:18pm by alex
Tapered bar with end load The small tapered bar BC has length L=0.1 m and is made of a homogeneous material with Young’s modulus E=10 GPa. The cross sectional area of the bar is slowly varying
between A0=160 mm^2 (at B) and A0/2 (at C), as described by the function: A(x)=A0/(1...
Tuesday, April 30, 2013 at 3:36am by helpless
x d=(x -x bar) (x -x bar)^2 2 -1.2 1.44 6 2.8 7.84 4 .8 0.64 3 0-.2 0.04 1 -2.2 4.84 d bar = 3.2 ∑(x-d bar)^2 =14.8 sd = sqrt((14.8/4)) = 1.92 t = 3.72 p-value = 0 .9898 Degree of freedom = n -1 = 4
Since the computed value of t is less than the table value of t we ...
Sunday, November 17, 2013 at 7:41am by Kuai
Calculus I
A tank in the shape of an inverted right circular cone has height $ 10$ meters and radius $ 8$ meters. It is filled with $ 6$ meters of hot chocolate. Find the work required to empty the tank by
pumping the hot chocolate over the top of the tank. Note: the density of hot ...
Thursday, February 14, 2008 at 2:30pm by Jeff
Smitty's Bar and Grill has brand name recognition of 61% around the world. Assuming we randomly select 2 people. The assumptions of a Bernoulli process are met. What is the probability a)exacatly 5
of the 12 recognize the name of Smitty's Bar and Grill? b)5 or fewer recognize ...
Friday, September 23, 2011 at 11:51pm by dee
1st: half cup of chocolate left -- drank 1/2 cup 2nd : 1/4 cup of chocolate left left --- drank 1 cup 3rd: 1/8 cup of chocolate left left --- drank 1 1/2 cups 4th: 1/16 cup of chocolate left left ---
drank 2 cups
Monday, October 10, 2011 at 10:03pm by Reiny
1st: half cup of chocolate left -- drank 1/2 cup 2nd : 1/4 cup of chocolate left left --- drank 1 cup 3rd: 1/8 cup of chocolate left left --- drank 1 1/2 cups 4th: 1/16 cup of chocolate left left ---
drank 2 cups
Monday, October 10, 2011 at 10:03pm by Thank you Reiny !!!!!
Mechanics _ physics
A uniform bar 1m long is pivoted at the centre has a mass of 3kg. It is acted upon by a conquer which cause to possess an angular acceleration of 6rad/sec^2; calculate a. The moment of inertia of the
bar b. The torque on the bar
Thursday, February 14, 2013 at 2:07pm by Precious
Your formula is correct. The C ratio is the inverse of the delta T ratio, when the M's are the same. delta T for bar 1 = 83 C delta T for bar 2 = 258 C Bar 2 must have 83/258 of the specific heat
capacity of bar 1, which would be 270 J/kg*C
Thursday, December 8, 2011 at 10:28pm by drwls
On a frictionless table, a glob of clay of mass 0.54 kg strikes a bar of mass 1.24 kg perpendicularly at a point 0.27 m from the center of the bar and sticks to it. 1. If the bar is 1.02 m long and
the clay is moving at 6.3 m/s before striking the bar, what is the final speed ...
Wednesday, February 12, 2014 at 11:14am by Neomi
A "Bus" bar is in the form of a slab of copper 2meters long, 1cm wide, and 10 cm thick. a) What is the resistance of the bar at 0 degrees celcius? b) What potential difference is needed to push 5000
amps through the bar?
Sunday, May 16, 2010 at 10:48pm by Tristan
Could you use a bar graph? One bar for eyes and one bar for ears??
Wednesday, February 20, 2013 at 7:43pm by JJ
Carlos had 7 cups of chocolate chips. He used 1 2/3 cups to make a chocolate sauce and 3 1/3 cups to make cookies. How many cups of chocolate chips does Carlos have now?
Friday, November 2, 2012 at 5:31pm by Shion
A nonuniform, horizontal bar of mass 3.43 kg is supported by two massless wires against gravity. The left wire makes an angle 21.9 degrees, with the horizontal, and the right wire makes an angle 61.8
degrees. The bar has length 1.04 m. A) Find the position of the center of ...
Friday, November 26, 2010 at 1:00am by hii
A 42-kg pole vaulter running at 14 m/s vaults over the bar. Her speed when she is above the bar is 1.7 m/s. Neglect air resistance, as well as any energy absorbed by the pole, and determine her
altitude as she crosses the bar.
Wednesday, October 12, 2011 at 3:01pm by ben
A 52-kg pole vaulter running at 12 m/s vaults over the bar. Her speed when she is above the bar is 0.7 m/s. Neglect air resistance, as well as any energy absorbed by the pole, and determine her
altitude as she crosses the bar.
Friday, October 12, 2012 at 12:01am by monic
A 5.80 kg ball is dropped from a height of 14.5 m above one end of a uniform bar that pivots at its center. The bar has mass 9.00 kg and is 6.40 m in length. At the other end of the bar sits another
5.30 kg ball, unattached to the bar. The dropped ball sticks to the bar after ...
Thursday, April 5, 2012 at 10:35am by Joe
13/18 = .72 with a bar over the 2 5/9 = .55 with a bar over the second 5. If two different numerals repeat infinitely, then both need a bar over them. I hope this helps. Thanks for asking.
Friday, October 12, 2007 at 8:03pm by PsyDAG
A uniform bar of iron is supported by a lon, uniform Hooke's Law spring. The spring is cut in half exactly and the two pieces are used to support the same bar. If the whole spring is stretched by 4
cm in the first bar (that is not cut in half) how much would each spring ...
Wednesday, July 20, 2011 at 5:02pm by Capreeca
John bought 2 pounds of chocolate for $7.50. Write expression to find the cost of 3.5 pounds of chocolate?
Monday, April 4, 2011 at 9:44pm by Haley
The chocolate lover bought 6 boxes of chocolate, each containing 12 truffles. How many truffles did she buy?
Thursday, June 13, 2013 at 4:23pm by Ms. Sue
Mrs.dude is buying 2 pounds of chocolate for $7.50 . Write an expression to find the cost of 4.5 pounds of chocolate
Tuesday, April 5, 2011 at 6:17pm by John
A box is lifted with a pry bar, by slipping the bar under the box and lifting up on the bar. If the pry bar is 2m in length and it is grasped at the end, and the box is located at the opposite end,
the fulcrum, with its center of gravity at a distance of 0.5 m from the fulcrum...
Sunday, March 6, 2011 at 11:45pm by DMITRIC
A box is lifted with a pry bar, by slipping the bar under the box and lifting up on the bar. If the pry bar is 2m in length and it is grasped at the end, and the box is located at the opposite end,
the fulcrum, with its center of gravity at a distance of 0.5 m from the ...
Monday, March 7, 2011 at 10:58am by CHRIST
9th grade english
I'm tough like the shell on the outside on the inside I'm as soft as the chocolate I'm as sweet as the chocolate I'm cheerful like all the bright colors
Thursday, October 23, 2008 at 5:48pm by Ashley
2. A sample of an ideal gas underwent an expansion against a constant external pressure of 1.0 bar from 1.0 m3, 20.0 bar, and 273 K to 100.0 m3, 1.0 bar, and 273 K. What is the work done by the
system on the surroundings
Friday, February 28, 2014 at 5:22pm by dhaval
A bar of gold is in thermal contact with a bar of silver of the same length and area. One end of the compound bar is maintained at 80.0°C while the opposite end is at 30.0°C. When the energy transfer
reaches steady state, what is the temperature at the junction?
Thursday, January 30, 2014 at 8:40pm by Brody
7th grade (subject?)
What is the context? "Bar" can have many diverse meanings. A solid rod; a brick-like shape (bar of soap or gold); a unit of pressure; a tavern; a sandbar; the practice of law. Consult a dictionary
for the meaning that makes sense. http://www.thefreedictionary.com/BAR It can ...
Friday, December 11, 2009 at 12:08am by drwls
A 51.8 kg pole vaulter running at 11.5 m/s vaults over the bar. Her speed when she is over the bar is 1.26 m/s. Neglect air resistance, as well as any energy absorbed by the pole, and determine her
altitude as she crosses the bar
Sunday, March 10, 2013 at 7:32pm by Anonymous
Consider two straight bars of uniform cross section made of the same material. Bar 1 has an axial length of and a square cross section with side length . Bar 2 has an axial length of and a round
cross section with diameter. When subjected to axial tension, bar 1 elongates by...
Sunday, April 28, 2013 at 11:14am by asas
English CRT 250
your child is trying to prove that she did not steal chocolate chip cookies from the cookie jar, so she makes this argument: “There are no chocolate stains on my hands, so I couldn’t have stolen the
Monday, November 30, 2009 at 6:16pm by diane
Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>> | {"url":"http://www.jiskha.com/search/index.cgi?query=WRITING(CHOCOLATE+BAR)","timestamp":"2014-04-17T21:55:43Z","content_type":null,"content_length":"41122","record_id":"<urn:uuid:93475e72-81e3-4c28-9458-3cfec462ba84>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
General Education Mathematics
MATHEMATICS 1218
General Education Mathematics
Designed to fulfill general education requirements, and not designed as a prerequisite for any other college mathematics course. Focuses on mathematical reasoning and the solving of real-life
problems, rather than routine skills. Three topics from the following list will be studied in depth: counting techniques and probability, game theory, geometry, graph theory, logic and set theory,
and statistics. The regular use of calculators and/or computers is emphasized.
• Mathematics 0470 (or one year of high-school geometry) and a grade of C or better in the equivalent of Mathematics 0482, OR
• Mathematics 0470 (or one year of high-school geometry) and a grade of C or better in the equivalent of Mathematics 0482, and a qualifying score on the mathematics placement test.
Students are required to come to campus to take exams.
This course is IAI compliant (IAI M1 904) and will readily transfer to schools in the state of Illinois. The course is 3 semester credit hours. | {"url":"http://www.cod.edu/course/information/math1218f5/math1218_p.htm","timestamp":"2014-04-20T18:26:06Z","content_type":null,"content_length":"3559","record_id":"<urn:uuid:2b6e6d5f-78d9-4cc5-9c59-08e43cb6eacd>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00260-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fast way to evaluate a function
Fast way to evaluate a function
What is a fast way to evaluate function. Now, I am evaluating a function
using for-loop and vectors. For example,
1 void evaluate(std::vector <double>& f,std::vector <double>& x){
2 for ( int i = 0; i < x.size(); i++ ){
3 f.push_back(2*x[i]);
4 }
5 }
where I am evaluating function f(x) = 2*x;
But, I am a quite new with C++ so I don't know is this method slow or fast.
Length of the vector x is something between 3 and 15, but I need to evaluate
this many times.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/general/87655/","timestamp":"2014-04-21T02:08:30Z","content_type":null,"content_length":"6303","record_id":"<urn:uuid:70be5750-973a-4160-a28a-8a07ca2737c4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: "Simple" filter design wrong answer
Replies: 1 Last Post: Sep 17, 1996 11:25 AM
Messages: [ Previous | Next ]
"Simple" filter design wrong answer
Posted: Sep 15, 1996 1:25 AM
Here is a little problem that should work but does not. I'm using the Signal
Processing Toolbox 3 on Matlab 4.2c. I tell it to design a second order FIR
notch filter and display the frequency domain transfer function:
b=fir1(2,[.49 .51],'stop');
The correct answer is b = [.5 0 .5], (this really is a notch at 1/2)but the
output of fir1 is completely wrong. If I use n=4 or more it always works
fine. If I try making the points farther apart, such as .25 and .75, it still
fails. If I try fir2 with many different appropriate choices of line
segments, it still always fails for n=2 and always works for n=4 or more.
If I run a firls design it also always fails for n=2 and always works for
n=4 or more.
If I ask for the notch to be some other frequency, I get the same problem.
There are no warnings or errors.
This is very strange. I feel I must be doing something terribly obvious
Does anyone have any suggestions? I am a somewhat experienced user. I
never had this kind of results before, but I guess I never told it to do just
this problem before. I've never been this old before, either.
Thanks for your help.
Dan Babitch
Date Subject Author
9/15/96 "Simple" filter design wrong answer Dan Babitch
9/17/96 Re: "Simple" filter design wrong answer Thomas P. Krauss | {"url":"http://mathforum.org/kb/thread.jspa?threadID=240455","timestamp":"2014-04-16T11:32:24Z","content_type":null,"content_length":"18157","record_id":"<urn:uuid:2d2c8e76-dc72-4484-b7b8-d98c5ef303f5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Comments on The NAG Blog: Easter egg or not?Hi Feyn,
Thanks for your interest in this blog.
If in the second graph, you plot the average o...Jeremy, Mike and Kai, thank you for your interest ...This could due to the poor quality of quasi-random...Is it something to do with the fact that quasi ran...Well, let me take a crack at this egg. The main (...
tag:blogger.com,1999:blog-1352688022620133175.post1994241734843955736..comments2014-04-14T04:30:29.474+01:00Katie O'Harehttp://www.blogger.com/profile/
09366741271809330805noreply@blogger.comBlogger6125tag:blogger.com,1999:blog-1352688022620133175.post-35397241380755251492010-11-29T11:15:46.213Z2010-11-29T11:15:46.213ZHi Feyn,<br /><br />Thanks for
your interest in this blog.<br /><br />In both second and third graphs I plot the prices of Asian options, but only the one where I use pseudorandoms shows a proper distribution. What you say about
lognormal distributions is right, but note that I don't really plot an average of MC simulations, but for every simulation I compute average as in Asian options and this is proper way to do it
(as seen on the 3rd pic).<br /><br />I don't need to fill the bits inbetween with numbers from BB, they are not required. In the first example I move straight from S0 to ST and I don't need
what's between them. If I calculate the prices between, then I have the 3rd example.<br /><br />I'm not sure what you mean by projecting the problem to the 1st axis, could you elaborate more
on that please?<br /><br />And then 252 is not a special dimension of the NAG Sobol generator. It can be an arbitrary number. I tried it with 50 steps, with 252 steps and the diestribution in example
2 (using quasirandoms) is still hairy.Marcin Krzysztofikhttp://www.blogger.com/profile/
18261558624689201066noreply@blogger.comtag:blogger.com,1999:blog-1352688022620133175.post-24491220581590283042010-11-25T10:21:14.000Z2010-11-25T10:21:14.000ZHi,<br />If in the second graph, you plot
the average of Spot, then it is not a log normal distribution (it is well know that sum of log normal random variable is not a log normal random variable).<br /><br />Now suppose that for each path
you generate 252 quasi-random number and take the first one to contruct the S_T (the other is constructed with Brownian Bridge). It is like you consider a 252 dimension problem and consider the
projection to the 1st axis.<br /><br />I guess then 252 is a special dimension of your Sobol manager?<br /><br />FeynJekyllhttp://www.blogger.com/profile/
15068334311443161654noreply@blogger.comtag:blogger.com,1999:blog-1352688022620133175.post-75850541209456743032010-04-12T10:21:01.292+01:002010-04-12T10:21:01.292+01:00Jeremy, Mike and Kai, thank you
for your interest in this post of mine. This wasn't an Easter egg, the problem I encountered was real.<br /><br />Jeremy, you're right that quasirandoms are more evenly distributed, but when
I perform the Monte Carlo using one single step the results are fine. The problem arises in the stepwise MC.<br /><br />Mike, the numbers are indeed not statistically independent, you're right.
Do you have an idea how this could be solved? You can use pseudorandoms of course, but is there a way to this properly with quasirandoms? I actually have an idea how this can be done, but need to try
it out. The answer is to use a scrambled sequence...<br /><br />Kai, I agree with you about the higher dimensions, but in this case I've taken numbers just from the first dimension. There are 20k
simulations and 50 steps in each, altogether 1m quasirandom numbers from the first dimension!Marcin Krzysztofikhttp://www.blogger.com/profile/
18261558624689201066noreply@blogger.comtag:blogger.com,1999:blog-1352688022620133175.post-79585429210170057152010-04-07T12:02:38.920+01:002010-04-07T12:02:38.920+01:00This could due to the poor
quality of quasi-random number beyond a certain dimension (say 100). Even if NAG's sobol generator is able to generate up to 50,000 dimensions, there is no guarantee that projection of any two of
them is evenly distributed over the unit square. <br /><br />The fact is the convergence of low discrepancy number is of order n*(log n)^d where d is dimension. With a large d, it barely has any
advantage of pseudo-random numbers. This has not taken into account the reliability of Sobol' numbers in high dimension.Kai
Zhangnoreply@blogger.comtag:blogger.com,1999:blog-1352688022620133175.post-28779377928757652132010-04-07T10:56:07.320+01:002010-04-07T10:56:07.320+01:00Is it something to do with the fact that quasi
random numbers are not statistically independent from each other?Mike Croucherhttp://
www.walkingrandomly.comnoreply@blogger.comtag:blogger.com,1999:blog-1352688022620133175.post-57389847066530908722010-04-06T15:31:42.669+01:002010-04-06T15:31:42.669+01:00Well, let me take a crack at
this egg. The main (only?) difference between pseudo and quasi random numbers seems to be that the latter are more evenly distributed across the interval of interest than the former (there are more
words about this, along with a rather nice graphic which illustrates the point <a href="http://www.nag.co.uk/industryarticles/usingtoolboxmatlabpart3.asp#g05randomnumber" rel="nofollow">here</a>).
Perhaps this has something to do with why their distribution is closer-to-normal than that of the pseudo-random sequence?Jeremy Waltonhttp://www.blogger.com/profile/ | {"url":"http://blog.nag.com/feeds/1994241734843955736/comments/default","timestamp":"2014-04-16T18:56:57Z","content_type":null,"content_length":"15025","record_id":"<urn:uuid:f354249a-b4cc-44b9-b3bd-2f8c56dea0fe>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Berwyn, PA Prealgebra Tutor
Find a Berwyn, PA Prealgebra Tutor
...I make grammar and writing as fun as it's ever going to get. My pricing and scheduling are both flexible. Not only do I tutor the tests, I take them too!
23 Subjects: including prealgebra, geometry, GRE, algebra 1
...I have experience with after school tutoring from 2003-2006. I was an Enon Tabernacle after school ministry tutor for elementary and high school students 2011-2012. These are just a few
13 Subjects: including prealgebra, chemistry, geometry, biology
...I have five years classroom experience, and have been tutoring on the side since college. I was an employee of West Chester's tutoring center and achieved Master - Level 3 certification from
the College Reading and Learning Association. I am comfortable tutoring all skill, age, and confidence levels from middle school math up through Calculus.
8 Subjects: including prealgebra, calculus, geometry, algebra 1
No one has more experience. No one has more expertise. Over the last 15 years I've worked for several different test-prep companies.
23 Subjects: including prealgebra, English, calculus, geometry
...I have tutored subjects ranging from English to astronomy. I am a friendly and supportive person who gets excellent results by making lessons fun as well as informative. If you are interested
in a tutor with excellent experience of teaching then please contact me.
38 Subjects: including prealgebra, reading, public speaking, economics
Related Berwyn, PA Tutors
Berwyn, PA Accounting Tutors
Berwyn, PA ACT Tutors
Berwyn, PA Algebra Tutors
Berwyn, PA Algebra 2 Tutors
Berwyn, PA Calculus Tutors
Berwyn, PA Geometry Tutors
Berwyn, PA Math Tutors
Berwyn, PA Prealgebra Tutors
Berwyn, PA Precalculus Tutors
Berwyn, PA SAT Tutors
Berwyn, PA SAT Math Tutors
Berwyn, PA Science Tutors
Berwyn, PA Statistics Tutors
Berwyn, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/berwyn_pa_prealgebra_tutors.php","timestamp":"2014-04-21T10:46:29Z","content_type":null,"content_length":"23660","record_id":"<urn:uuid:8f851ddd-5f64-4e71-a01b-a8cdaa100e5a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prime Factors
Number theory has fascinated mathematicians for years. Fundamental to number theory are numbers themselves, and the basic building blocks for numbers are prime numbers. A prime number is a counting
number that only has two factors, itself and one. Counting numbers which have more than two factors (such as six, whose factors are 1, 2, 3 and 6), are said to be composite numbers. The number one
only has one factor and is considered to be neither prime nor composite.
When a composite number is written as a product of all of its prime factors, we have the prime factorization of the number. For example, the number 72 can be written as a product of primes as: 72 =
The expression "prime factorization of 72. The Fundamental Theorem of Arithmetic states that every composite number can be factored uniquely (except for the order of the factors) into a product of
prime factors. What this means is that how you choose to factor a number into prime factors makes no difference. When you are done, the prime factorizations are essentially the same. Examine the two
factor trees for 72 given below.
When we get done factoring using either set of factors to start with, we still have three factors of two and two factors of three or
Knowing the rules for divisibility will be very helpful when seeking to write a number in prime factorization form. Since a number is divisible by two if it ends in either 0, 2, 4, 6, or 8, it should
be noted that two is the only even prime number. Another way to factor a number other than using factor trees is to start dividing by prime numbers, as shown below.
Once again, we can see that 72 =exponents. An exponent tells how many times the base is used as a factor. In the prime factorization of 72 =
When checking to see if a number is prime or not, you need only divide by those prime numbers which when squared remain less than the given number. For example to see if 131 is prime, you need only
check for divisibility by 2, 3, 5, 7, and 11, since 13^2 = 169. If a prime number greater than 13 divided 131, then the other factor would have to be less than 13 and you would have checked those | {"url":"http://www.eduplace.com/math/mathsteps/5/b/index.html","timestamp":"2014-04-21T15:13:37Z","content_type":null,"content_length":"7827","record_id":"<urn:uuid:a1fea3ef-06d7-4b93-8a06-324ad6afce34>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Algebra : Fields, Matrices
July 24th 2009, 09:18 AM
Linear Algebra : Fields, Matrices
I'm in the subject 'Matrices', but there still is a use of the Fields topic.
I was asked to find a system of equations with exactly 49 different solutions. (no further information included)
Now, I thought of the following system:
x,y(belong to)Z7
Z7 is the following field:
By looking at the system of equations, there are infinite possibilities. If we are limited only to Z7, then it means that there are 7 options for x, and 7 options for y, and together - 49.
(0,0), (0,1), (0,2), ... (0,6)
(1,0) . . . (1,6)
(6,0), . . . . . (6,6)
and so on...
Now, this is the subject of matrices, and I'm really not sure that it's right to use such ways to solve this problem.
Can you please help me?
Thank you :)
July 29th 2009, 08:16 AM
I'm in the subject 'Matrices', but there still is a use of the Fields topic.
I was asked to find a system of equations with exactly 49 different solutions. (no further information included)
Now, I thought of the following system:
x,y(belong to)Z7
Z7 is the following field:
By looking at the system of equations, there are infinite possibilities. If we are limited only to Z7, then it means that there are 7 options for x, and 7 options for y, and together - 49.
(0,0), (0,1), (0,2), ... (0,6)
(1,0) . . . (1,6)
(6,0), . . . . . (6,6)
and so on...
Now, this is the subject of matrices, and I'm really not sure that it's right to use such ways to solve this problem.
Can you please help me?
Thank you :)
if there are m variables and the rank of these equations(or matrix) is r,then there is 7^(m-r) solutions.....
July 29th 2009, 10:08 AM
I'm in the subject 'Matrices', but there still is a use of the Fields topic.
I was asked to find a system of equations with exactly 49 different solutions. (no further information included)
Now, I thought of the following system:
x,y(belong to)Z7
Z7 is the following field:
By looking at the system of equations, there are infinite possibilities. If we are limited only to Z7, then it means that there are 7 options for x, and 7 options for y, and together - 49.
(0,0), (0,1), (0,2), ... (0,6)
(1,0) . . . (1,6)
(6,0), . . . . . (6,6)
and so on...
Now, this is the subject of matrices, and I'm really not sure that it's right to use such ways to solve this problem.
Can you please help me?
Thank you :)
I'm not sure what you mean by that. Certainly $Z_7\times Z_7$ contains exactly 49 members but what makes you think they are all solutions to the equations? You mention (1,0) but if x= 1, y= 0,
the equations become 3(1)+ 7(0)= 14 and 6(1)+ 14(0)= 28 which are not true, even in $Z_7$.
In fact, in $Z_7$, 7, 14, and 48 are all congruent to 0, so your equations become just 3x= 0 and 6x= 0 (mod 7). While y can be any number, x must be 0. There are only 7 solutions, not 49.
July 29th 2009, 10:59 AM
Bruno J.
How about the system $x=x$ over the field with 49 elements? (Nod) | {"url":"http://mathhelpforum.com/advanced-algebra/95971-linear-algebra-fields-matrices-print.html","timestamp":"2014-04-18T08:24:12Z","content_type":null,"content_length":"8789","record_id":"<urn:uuid:f87c8a97-5697-49dc-8c20-e07d63ef6099>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
Michael O.
I have both a BA in Mathematics and an MS in Statistics from Rutgers University. I have been tutoring math since 1991 and statistics since 2003. I also have my Certificate of Eligibility for
Secondary School Mathematics from the State of NJ. I tutor part-time but I am actually a biostatistician by profession. I have tutored over 70 different students - anywhere from a few weeks to 5
years in duration. I can supply references if needed.
I believe that effective communication and a supportive environment is crucial to a good teaching/learning relationship. I don't mind questions, in fact I encourage them at any time!I see learning as
a 2-way street. It is the students responsibility to stop me immediately and ask questions if he/she does not understand. It is my responsibility, in response, to answer those questions and find a
new way of explaining the problem so he/she does understand it. I believe a student should be able to "talk" mathematics as well as "write" it. I am picky when it comes to using the correct
terminology, but it is done with a supportive attitude and with good humor! I am very easy going and approachable!
My philosophy regarding math is as follows: it is a fantastic way of looking at the world and also improves necessary critical thinking skills in a student. In my opinion, nearly everyone can learn
math if given the correct support and instructional methods. Let me share this great world with you!
Michael's subjects | {"url":"http://www.wyzant.com/Tutors/NJ/Kendall-Park/7618802/?g=3JY","timestamp":"2014-04-18T03:08:46Z","content_type":null,"content_length":"84940","record_id":"<urn:uuid:47d0a002-4fa0-42e4-9de9-5b53f63af7c2>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Covina Trigonometry Tutor
...Understanding Algebra 2 is quite important since success in College Algebra and/or Pre-Calculus depends on understanding Algebra 2. Also, ACT and SAT Math both test Algebra 2. Having taught
this subject for over 10 years, I can help you whether you are struggling on this math subject or want to ace it.
38 Subjects: including trigonometry, English, reading, writing
...I'd been teaching different areas of chemistry in high school and college level. I graduated with a degree in Bachelor of Science in Chemistry. I also finished my MAT Chemistry, and earned
units in PhD Chemistry Education.
4 Subjects: including trigonometry, chemistry, organic chemistry, CBEST
...I have taken several discrete math courses and I spent a summer solving hard problems in discrete math with a friend. I began programming in high school, so the first advanced math that I did
was discrete math (using Knuth's book called Discrete Mathematics). I have also participated in high sch...
28 Subjects: including trigonometry, Spanish, chemistry, French
...Have tutored students in Algebra 2. Received 5 on Calc BC exam. I have taken several advanced math classes at Caltech since then, and have used Calculus regularly over the course of my physics
15 Subjects: including trigonometry, reading, physics, calculus
...I plan to be a math teacher and am currently employed at a local tutoring center. I have three years of tutoring experience and absolutely love what I do. I am a firm believer in finding out
what method is best for each student.
18 Subjects: including trigonometry, reading, geometry, algebra 1 | {"url":"http://www.purplemath.com/West_Covina_trigonometry_tutors.php","timestamp":"2014-04-16T16:14:00Z","content_type":null,"content_length":"24014","record_id":"<urn:uuid:e14d0b07-53ed-4af9-9320-74487cabfff0>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
CS 188, Fall 2005,
CS 188, Fall 2005, Introduction to Artificial Intelligence
Assignment 6 Part 1, due 12/5, total value 4% of grade
This assignment should be done in pairs. Don't leave it to the last minute! Consult early and often with me and/or your TA.
This assignment comes in two parts. The first part is worth 50 points out of 100 and is mainly intended to help you become familiar with the basics of MDP representations, algorithms, and agents and
with Spider solitaire. It does not involve writing much new code. The second part, to be posted shortly, deals with reinforcement learning.
The first thing you need to do is load the CS188 AIMA code in the usual way: load aima.lisp and then do (aima-load 'search) and (aima-load 'mdps). You should also copy, compile, and load all the lisp
files in this directory.
Be sure to use the latest version from ~cs188. Several things have changed and new code has been added. As always, remember to compile the code.
The AIMA code includes a general facility for defining and using MDPs. Code defining MDPs and all the basic operations on them (including I/O) appears in
The main methods defined on MDPs are as follows:
• (actions mdp state) returns the list of actions legal in state, just as in search problems.
• (results mdp action state) returns an enumerated probability distribution giving each possible outcome state and its probability.
• (reward state1 action state2) gives the reward for doing action in state1 leading to state2. In some MDPs, the reward function depends only on state1 or on state1 and action. The reward-type of
the MDP can be S, SA, or SAS to specify which kind of MDP this is.
The simplest kind of MDP is an enumerated-mdp, in which actions, results, and rewards are stored in hash tables. The MDP methods are defined generically for all such MDPs. An example is the 4x3 MDP
used throughout Chapter 17, which is defined in There are also dynamic programming algorithms (value iteration and policy iteration) for solving MDPs: Value iteration outputs the utility function as
a hash table. For example, try:
>> (hprint (value-iteration *4x3-mdp*))
(1 1): 0.70530814
(2 1): 0.655308
(3 2): 0.660274
(1 3): 0.81155825
(2 3): 0.8678082
(4 1): 0.38792402
(4 3): 1.0
(3 1): 0.6114151
(1 2): 0.7615582
(4 2): -1.0
(3 3): 0.91780823
The function value-iteration-policy does value iteration and converts the result into an optimal policy by one-step lookahead:
>> (hprint (value-iteration-policy *4x3-mdp*))
(1 1): UP
(2 1): LEFT
(3 2): UP
(1 3): RIGHT
(2 3): RIGHT
(4 1): LEFT
(4 3): NIL
(3 1): LEFT
(1 2): UP
(4 2): NIL
(3 3): RIGHT
MDP agents and environments
You will first need to understand the basics of how environments and agents work. See Notice in particular how run-environment works: it invokes the agent program with the current percept, and then
updates the state of the environment based on the action that the agent returns. It also keeps track of the agent's score in the environment by updating the score slot of the agent itself.
Any MDP can be converted into an environment using the mdp->environment function in
This function needs a list of one agent to run in the environment. By default, it uses one constructed by new-simple-mdp-solving-agent, which is defined in Such an agent computes a policy for the MDP
(e.g., by the value-iteration-policy algorithm, see mdps/algorithms/dp.lisp) and then executes it.
Question 1 (5 pts). Use the agent-trial function to measure the average score of the simple-mdp-solving-agent in *4x3-mdp* over 1000 trials. Your result should be close to the true utility of the
initial state (1 1), as shown on AIMA2e p.619.
Question 2 (5 pts). Now let's consider an agent that makes decisions using an approximate utility function and a lookahead search. (While this is unnecessary for the 4x3 world, it is essential for
Spider.) Because an MDP has actions with uncertain outcomes, but just one agent, the search we need is an expectimax search that alternates between choosing maximum-utility actions and calculating
expected outcome values. The algorithm is given in
and an agent that uses it is given in Using the approximate utility function in 4x3-eval.lisp, evaluate depth-1 and depth-2 expectimax agents over 1000 trials on *4x3-mdp*.
Question 3 (15 pts). The expectimax algorithm (and indeed any algorithm using Bellman backups) computes expected values by summing over all possible outcome states. This will not be possible in
Spider, where the number of outcomes for one action can exceed 32 quintillion. Instead of summing over all outcomes, we will have to sum over a small sample. First, write the following methods for
enumerated MDPs (one line each):
• (num-results mdp action state) returns the number of possible outcomes of action in state.
• (random-result mdp action state) returns an outcome state sampled from the distribution over outcomes for action in state. [Hint: something similar is required in mdp-env.lisp.]
Now, write sampling versions of the expectimax functions called sampling-expectimax-cutoff-decision, sampling-expected-cutoff-value, and sampling-max-cutoff-value. These should take an additional
argument specifying the number of samples. The only substantial change to the expectimax code will be in sampling-expected-cutoff-value, which should first check if the actual number of outcomes is
greater than the number of samples allowed. If so, it should generate the samples and average over them; if not, it should compute the exact expectation as before. Now use
new-expectimax-cutoff-mdp-agent to make an agent that uses sampling-expectimax-cutoff-decision with 2 samples and depth-2 lookahead. Test this agent over 1000 trials as before; you should find that
the agent does nearly as well as the depth-2 expectimax agent.
Now we will apply similar techniques to Spider. First, we need to understand the game iself and how it is implemented. You should definitely play a few times; the game is available on Windows systems
and there are many free downloads. The game is also available on the unix cluster under ~cs188/spider and as a java applet. There are two ways to think about Spider.
• Spider is a partially observable MDP problem with certain special characteristics: except for the randomly generated initial state, everything proceeds deterministically although the true state
is not fully visible. Because of this determinism, we can define the Spider POMDP as if it were just an ordinary search problem except that we add a get-percept method to generate the partial
percept from the true state. Thus, Spider is a partially observable problem, or poproblem, and its get-percept method simply removes all identifying information from the hidden cards in the
state. See search/domains/poproblems.lisp for the general definition and search/environments/poproblem-env.lisp for a generic method to convert any poproblem into an environment.
• As discussed in class, any POMDP can be viewed by the agent as a fully observable MDP whose state space is the agent's belief state, i.e., probability distribution over all possible states. Now
of course the Spider state space is huge, so this doesn't seem very promising, except for the fact that a Spider agent's belief state is always a uniform distribution over the locations of the
hidden cards. Furthermore, the Spider percept always tells the agent exactly which cards are hidden -- namely, all the ones it can't see. Hence, the Spider percept itself "represents" the agent's
belief state; there is nothing more to be learned from the percept history and no explicit probability distributions need be written out. Thus, we can define a Spider MDP whose states are just
the possible Spider percepts. The transition model for this MDP is also straightforward: for example, if we flip over a hidden card, it is chosen uniformly at random from the hidden cards and is
no longer hidden.
You can see the definitions for the Spider MDP in spider-mdp.lisp, but probably it's best to look first at the underlying Spider implementation itself in spider.lisp. To make a Spider instance, call
make-spider-problem with suitable parameters. Here's a particularly easy one:
>> (setq ps-easy (make-spider-problem :num-packs 1 :num-suits 1 :num-stacks 10 :num-hidden-rows 2))
>> (setq s0-easy (problem-initial-state ps-easy))
0 ??? ??? ??? AH
1 ??? ??? ??? KH
2 ??? ??? 2H
3 ??? ??? 5H
4 ??? ??? AH
5 ??? ??? JH
6 ??? ??? 6H
7 ??? ??? 9H
8 ??? ??? 2H
9 ??? ??? 4H
Reserve: ....................
Notice that the stacks are numbered from 0 to 9 and that the "top-to-bottom" orientation of stacks in the Windows implementation is replaced here by a "left-to-right" ordering. Notice also that this
is the state of the Spider poproblem, not the percept. The percept looks the same to the naked eye:
>> (setq s0-percept (get-percept ps-easy s0-easy))
0 ??? ??? ??? AH
1 ??? ??? ??? KH
2 ??? ??? 2H
3 ??? ??? 5H
4 ??? ??? AH
5 ??? ??? JH
6 ??? ??? 6H
7 ??? ??? 9H
8 ??? ??? 2H
9 ??? ??? 4H
Reserve: ....................
but in the percept the hidden cards really are hidden:
>> (card-number (second (aref (spider-state-stacks s0-easy) 9)))
>> (card-number (second (aref (spider-state-stacks s0-percept) 9)))
Spider moves specify the number of cards to be moved, the origin stack, and the destination stack:
>> (pprint (actions ps-easy s0-easy))
(NEW-ROW #S(SPIDER-MOVE :K 1 :FROM 4 :TO 8)
#S(SPIDER-MOVE :K 1 :FROM 0 :TO 8) #S(SPIDER-MOVE :K 1 :FROM 3 :TO 6)
#S(SPIDER-MOVE :K 1 :FROM 9 :TO 3) #S(SPIDER-MOVE :K 1 :FROM 4 :TO 2)
#S(SPIDER-MOVE :K 1 :FROM 0 :TO 2))
The new-row action deals out a new row of cards from the reserve. The outcome of a Spider action is defined by the result method:
>> (result ps-easy #S(SPIDER-MOVE :K 1 :FROM 9 :TO 3) s0-easy)
0 ??? ??? ??? AH
1 ??? ??? ??? KH
2 ??? ??? 2H
3 ??? ??? 5H 4H
4 ??? ??? AH
5 ??? ??? JH
6 ??? ??? 6H
7 ??? ??? 9H
8 ??? ??? 2H
9 ??? 10H
Reserve: ....................
See spider.lisp for a complete explanation of exactly which moves are allowed. We have eliminated some redundant moves to make the game a little easier for the computer to play. You should also look
at the goal-test, step-cost, and get-percept methods.
A Spider poproblem instance can be converted into an environment as follows:
>> (setq es-easy (poproblem->environment ps-easy :agents (list (new-random-spider-agent :problem ps-easy))))
You can now type (run-environment es-easy) and watch the random agent playing. It usually wins, which shows that this is an easy class of Spider problems. Look in spider-agents.lisp to see the
definition of new-random-spider-agent. This file also contains an expectimax agent specifically for Spider.
The Spider MDP is defined in spider-mdp.lisp. Like any MDP, this has a results method, but it should be avoided, especially for the new-row action. The file defines num-results and random-result
methods for use with sampling expectimax. We can make a Spider MDP as follows:
>> (setq smdp (make-spider-mdp :problem ps-easy :initial-state s0-easy))
Question 4 (5 pts). Evaluate the random spider agent over 1000 instances of the "easy" Spider game. (Be sure that you generate a new instance each time!)
Question 5 (10 pts). Write a function new-random+history-spider-agent that, like new-random-spider-agent, returns an agent that selects moves randomly; however, these agents should keep a history of
all visited states in a hash table (don't forget to use the state-hash-key!) and should never execute a move that leads to a state already visited. [Hint: you need only check the outcome of moves
that have exactly one outcome.] Does this make your agent worse or better on the easy Spider instances?
Question 6 (5 pts). The file spider-eval.lisp contains an approximate utility function for Spider. Explain why the function includes the spider-suits-completed feature. [Hint: this is not a
completely trivial question.]
Question 7 (5 pts). Evaluate a depth-1 sampling expectimax agent that uses 5 samples on both the "easy" instances and on instances with 2 packs, 1 suit, and 4 hidden rows. (Do as many trials as you
can in a reasonable time, but no more than 1000 in any case.) If cycles are available, evaluate a depth-2 agent as well. | {"url":"http://www.cs.berkeley.edu/~russell/classes/cs188/f05/assignments/a6/a6-part1.html","timestamp":"2014-04-21T00:09:52Z","content_type":null,"content_length":"16090","record_id":"<urn:uuid:fe2db484-39fe-494e-b4bf-1b0bbe96c2e5>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 31 - 40 of 114 matches
Placing a Community: Demographic Contexts part of Examples
This assignment asks students to examine several recent U.S. census tables about Hispanics and educational attainment and write a brief report that details the conclusions they reach.
Subject: American Studies, Sociology
Analyzing Data on American Political Divisions part of Examples
Students conducted data analysis about American political divisions and created two papers from this data analysis. Sutdents were assigned to group projects involving data analysis assigned chapters
in MICROCASE AMERICAN GOVERNMENT, a textbook that includes access to a variety of datasets.
Subject: Political Science
Finding the best water line: the least squares method in action part of Examples
Students experiment with the slope and y-intercept of a line representing a hose used to water several bushes, and try to minimize the total squared error produced by the line.
Subject: Mathematics
Data Rich Economic Policy Brief part of Examples
This assignment asks students to write a data-rich policy brief, showing their ability to apply standard microeconomic models and contextualizing the policy debate with numeric evidence.
Subject: Economics
Calculating Divorce Rates part of Examples
This exercise from a course in family sociology asseses students' ability to interpret divorce rates from provided spreadsheet data and to critically analyze three articles that use divorce rates in
their content.
Subject: Economics, Sociology
Understanding Exponential Growth in the Context of Population Models part of Examples
This set of short assignments gives students practice with exponential models in the context of the growing human population.
Subject: Mathematics, Geography:Human/Cultural
Introducing Introductory Psychology Students to Quantitative Analysis part of Examples
An assignment that involves introductory psychology students in the analysis a data set on personality traits and their relationship to measures of happiness and well-being.
Subject: Psychology | {"url":"https://serc.carleton.edu/sp/library/quantitative_writing/examples.html?results_start=31","timestamp":"2014-04-18T10:56:35Z","content_type":null,"content_length":"35069","record_id":"<urn:uuid:47bfd86f-25dc-4471-8c93-a534526a2f00>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating the ISO week number for a date
Calculating the ISO week number for a date
Calculating the ISO week number for a particular date seems to cause problems for some developers. In actuality, the algorithm is very simple, but what seems to happen is that people try and
implement it in one line of code. My answer is don't; I would doubt that your application's performance profile depends on calculating the week number ultra-quickly. Also, as you'll see, there are
some boundary conditions that can trip you up.
So, how's it done? How do you calculate the ISO week number for a particular date?
What is an ISO week number?
First, we need to review what the ISO week number is. According to the ISO (International Standards Organization) in document ISO 8601, an ISO week starts on a Monday (which is counted as day 1 of
the week), and week 1 for a given year is the week that contains the first Thursday in the year.
Calculating the date of ISO week 1 for a given year
If you play around with the numbers, you'll see how to calculate the date of the Monday for week 1. The first Thursday of the year will be either the 1st, 2nd, all the way up to the 7th of January.
If it were the 1st, week 1 will start on the 29-Dec of the previous year (yes, this is correct: the ISO week 1 for a given year may have dates from the previous year); if the 2nd, week 1 will start
on 30-Dec of the previous year; if the 3rd, 31-Dec; if the 4th, 1-Jan; if the 5th, 2-Jan; if the 6th, 3-Jan; and finally if the first Thursday were the 7th, week 1 would start on 4-Jan.
However, calculating the date of the first Thursday is hard. Well, not hard, but complicated. A better way is to see that the ISO week is so defined that the 4-Jan of every year is in week 1. In
other words, that the first week must contain four or more days from the year (if 1-Jan were a Thursday, 4-Jan would be the Sunday, and hence would form week 1). So we calculate 4-Jan and work out
which day of the week it is. If it's Thursday, week 1 starts three days earlier, if Friday, four days earlier, if Saturday, five days earlier, if Sunday, six days earlier. If it's a Monday, we found
the week 1 start date straight away; if Tuesday, week 1 starts one day earlier, if Wednesday, two days earlier.
public static DateTime GetIsoWeekOne(int Year) {
// get the date for the 4-Jan for this year
DateTime dt = new DateTime(Year, 1, 4);
// get the ISO day number for this date 1==Monday, 7==Sunday
int dayNumber = (int) dt.DayOfWeek; // 0==Sunday, 6==Saturday
if (dayNumber == 0) {
dayNumber = 7;
// return the date of the Monday that is less than or equal
// to this date
return dt.AddDays(1 - dayNumber);
Note that I'm assuming that the DayOfWeek enumeration starts with Sunday. One the one hand this is dodgy programming behavior (the .NET Framework people may change this in the future, so I should use
the actual enumerations in a switch statement), but on the other it's acceptable (it's documented like this and it's unlikely that the .NET Framework people would change it now).
(Also note that, according to Design by Contract principles, I should be validating the year value that's passed in. I'm cheating a little by letting the call to the DateTime constructor take care of
the validation: if the year is out of range, it's this constructor that will throw an exception.)
Calculating the ISO week for an easy date
There are a couple of things to point out right away, I think. First is that 29-Dec, 30-Dec, and 31-Dec of a given year could actually be in the first week of the succeeding year, and second is that
1-Jan, 2-Jan, 3-Jan of a given year could be in the last week of the previous year.
Apart from those exceptional 6 days, it's pretty easy to calculate the week number for a given day: calculate the Monday of week 1 in the same year, subtract it from the date you're given to get the
number of days in between, divide this by 7 (discarding the remainder) and add 1. The result is the week number.
Let's illustrate with a concrete example. This year (2003), week 1 started on 30-Dec-2002 (the first Thursday of 2003 was 2-Jan). Say we were trying to calculate the week number for Mon 3-Feb.
Subtracting 30-Dec-2002 from 3-Feb-2003 gives 35 days. Divide this by 7 gives 5. Add 1 to give 6. 3-Feb is thus in week 6. By looking at a diary or a calendar, you can verify that this is correct.
Another test: Sun 2-Feb. The difference in days is 34. Divide by 7 (discarding the remainder) gives 4. Add 1 to give us the answer that Sun 2-Feb is in week 5. The previous test will show us that
this is true: the Sunday prior to a Monday is in a previous week.
Calculating the ISO week for a hard date
Having solved the problem for 359 (or 360) days of the year, we should now solve it for the 6 problematic days. (In other words a 98% success rate for an algorithm isn't good enough <g>.)
Let's look at the case of 1-Jan in depth. We need to see if it's counted as being in the previous year, so we calculate when week 1 of this year starts. If week 1 starts after 1-Jan then obviously
1-Jan is going to appear in the last week of the previous year. So we calculate the start date for week 1 of the previous year, subtract it from 1-Jan, divide by 7 and add 1. Obviously this algorithm
will also work for 2-Jan and 3-Jan.
Next up, let's think about 31-Dec. This may appear in the first week of the following year. So calculate the start of week 1 of the following year. If 31-Dec is less than this, it will be in the last
week of its year, and we'll use the standard algorithm to calculate it. If 31-Dec is greater than or equal to than the start of week 1 of the following year, it's obviously in that week. Notice that
we initially need to calculate week 1 of the following year for 31-Dec, not week 1 of the current year. This is different from all the other dates. (Of course, the same argument and algorithm will
apply for 29-Dec and 30-Dec.)
Implementing the ISO week calculation
Now we've analyzed the situation completely, we can implement the algorithm in code. Rather than return two values, one for the week number and the other for the year, I implemented the method to
return a single int value of the form YYYYWW.
public static int GetIsoWeek(DateTime dt) {
DateTime week1;
int IsoYear = dt.Year;
if (dt >= new DateTime(IsoYear, 12, 29)) {
week1 = GetIsoWeekOne(IsoYear + 1);
if (dt < week1) {
week1 = GetIsoWeekOne(IsoYear);
else {
else {
week1 = GetIsoWeekOne(IsoYear);
if (dt < week1) {
week1 = GetIsoWeekOne(--IsoYear);
return (IsoYear * 100) + ((dt - week1).Days / 7 + 1);
If you want the actual week number and year rather than this compound return value, use the following code:
IsoWeek = GetIsoWeek(myDate);
int year = IsoWeek / 100;
int week = IsoWeek % 100;
This algorithm turned out to be a case of thinking things through carefully and not losing sight of the boundary cases. Of the implementations I've seen, those that attempt to calculate the week
number of a given date usually get it right, however, they lose sight of the fact that the year for the week may be different than the date's year, meaning that their answer could ultimately be
I did find one example on The Code Project that calculates the week number from first principles (as in, ignoring the date calculation methods in the .NET Framework, such as the ones I used) and gets
the week number right, but always returns the date's year, despite pointing out in the text that the year may be different! | {"url":"http://www.boyet.com/Articles/PublishedArticles/CalculatingtheISOweeknumb.html","timestamp":"2014-04-20T15:50:14Z","content_type":null,"content_length":"11335","record_id":"<urn:uuid:49136b5e-e009-470d-b635-9a3cb96c2534>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
0-Based Counting
Date: 07/05/2003 at 15:37:40
From: Anonymous
Subject: 0-based counting: Why should we abolish 1-based?
I am trying to convince people, especially my friends, that you
should count from 0. I.e. "Page 0, page 1, page 2."
For example, when I numbered a score (musical composition) from 0, a
person complained and argued that 1-based counting is the standard [in
this case]. So I need a little more information, and especially very
convincing proof or statements.
One person suggests that I am going against the way everyone else
does it, and "everyone is taught to count from 1."
I have many reasons why you should count from 0.
- Modulo based systems (like hour:minute:second, century:year,
- Counting from 1 and 0 is inconsistent.
- 0 is the first digit in the number system, so why not start from it?
- Most programming languages have 0-based arrays.
- A lot of math stuff counts from 0, like the Taylor series.
Date: 07/06/2003 at 23:12:59
From: Doctor Peterson
Subject: Re: 0-based counting: Why should we abolish 1-based?
I certainly wouldn't propose that we abolish 1-based counting, because
it is very useful. When you count objects, you are making a
correspondence between the natural numbers and the objects: 1, 2, 3,
... . The last number you name is the number of objects you counted.
That doesn't work if you start at zero.
What you are talking about is not really counting, but naming items in
a sequence, such as pages. Even there, it is very natural to start at
1, so that the numbers you assign to the items correspond to their
ordinals: first, second, third. It would be very confusing to call the
first page zero, when "page 1" and "first page" are synonymous. So
your friends are right when they say that this is the standard way to
label page numbers, etc.; and trying to change that amounts to trying
to change the English language.
But you are right that there are many situations in which zero-based
counting is appropriate, particularly in mathematical settings. Most
sequences and series are easier to express in a zero-based form, since
then the index is the number of steps away from the first term, "term
0," and we can add (for an arithmetic sequence) or multiply (for a
geometric sequence) by the same thing each time, or use x^n in a
series expansion of a function. In programming, similarly, if you use
0-based indexing, the index is the offset from the storage location of
the first element.
Yet when you do this, you introduce problems, since the number of
terms or array elements is not the last index, but one more than that,
which can be confusing. (Basic has a particularly poor way of handling
this; C is just confusing to beginners.) So it would be hard to say
that 0-based counting is the ideal way to assign indexes.
In general, I would say that zero-based counting is appropriate in
technical contexts (computers and math), but one-based counting is
standard in other settings, and to try to change that would be
On the other hand, it is interesting that in many countries, such as
England, a sort of zero-based counting is used for floors in a
building: the bottom floor is called the ground floor, and the term
"first floor" is used for what Americans call the second floor. That
is, "floor" means a floor above the ground. They don't use the term
"zeroth floor." Any other situation where the first of a sequence is
distinct from those that follow might naturally follow the same
scheme, such as "cover page" and "page 1, 2, ...". But if all pages
are similar, then it makes most sense to follow tradition.
I'm not sure what you are saying about "modulo-based systems," by
which you seem to mean systems of units of time. Are you saying we DO
use zero-based counting there, or that it would solve some perceived
problem? I do see that using times like 0:30 rather than 12:30 would
work better, but that is not counting; and if you think that calling
the first century the "zeroth century" would help, what would you call
the first century BC? See this page for some thoughts on that:
The Second Millennium
- Doctor Peterson, The Math Forum
Date: 07/07/2003 at 10:52:45
From: Anonymous
Subject: Thank you (0-based counting: Why should we abolish 1-based?)
Thank you so much for your answer. I've tried searching the Web for
this topic, but the closest thing that I found was the awkwardness in
handling 1-based years (i.e. "2001 is new millennium, not 2000). Once
again, thanks a lot! | {"url":"http://mathforum.org/library/drmath/view/63371.html","timestamp":"2014-04-20T12:00:45Z","content_type":null,"content_length":"9760","record_id":"<urn:uuid:f2e9c2a2-38aa-41d2-adeb-c20ba98cecb4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
math, correction plz
Posted by jas20 on Saturday, March 24, 2007 at 8:15pm.
Can someone correct these for me.PLZ
Directions solve equation
x+4 = 3
My answer: x = 5
Probelm #2
Directions solve equation
(4x+1) + 3 = 0
My answer x = 2
Problem #3
Directions solve equation
My answer y= 1 and y = 9
the symbol a is a radical it came out wrong.
the first one is correct
Use brackets to show which parts are below the square root sign
in the second:
square both sides etc.
You should have been told that if you obtain algebraic answers after "squaring" , all solutions have to be verfied.
If you check your answer of x=2, it does not satisfy the original equation.
So the second equation has no real solution.
The same is true for your third equation.
Even though y=1 and y=9 are algebraic solutions, when you check the answers in the original equation, only y=9 works, the other answer is not valid.
Related Questions
math, beginning algebra - can someone correct this for me. Directions: Solve ...
math,correction,plz - can someone correct this forme solve 7/5 = 35/x My answer...
math,correction - Is this correct or no. I need help in a couple of problems can...
correction of math - can someone correct this for me please. Directions: Solve ...
math,correction - can someone correct this for me plz.... Write the equation of ...
math,correction - Can someone correct these for me plz... Problem#6 Factor. x(x-...
math, beginning algebra - can someone correct this for me: Directions: Solve ...
Math Help plz!!! Really need help. - Use the Systematic Trial to solve this ...
math, algebra - How do I solve for the following: Directions solve each literal ...
Math - Can someone please explain how to solve 3x^2+7x-15=0 by using the ... | {"url":"http://www.jiskha.com/display.cgi?id=1174781703","timestamp":"2014-04-17T16:50:16Z","content_type":null,"content_length":"9014","record_id":"<urn:uuid:78a25e39-3de4-4b15-af1b-8f838e2cd799>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
Messed up factorial formula
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
You are not logged in.
#1 2007-02-26 10:09:19
Messed up factorial formula
[expression manipulation/basic calculus]
Last edited by kylekatarn (2007-02-28 10:44:57)
#2 2007-03-06 10:20:59
Re: Messed up factorial formula
1) The binomial coefficients are a good clue
2) don't try to make any calculations before replacing parts of the expression:)
Last edited by kylekatarn (2007-03-06 10:26:14)
#3 2007-03-06 15:02:33
Legendary Member
Re: Messed up factorial formula
Q: Who wrote the novels Mrs Dalloway and To the Lighthouse?
A: Click here for answer.
#4 2007-03-06 15:18:53
Re: Messed up factorial formula
Seeing the heading, I thought the fomula has something to do with James-Stirling formula! I had been completely misled!
Character is who you are when no one is looking.
#5 2007-03-07 05:21:45
Re: Messed up factorial formula
yes it is, Jane, good work!
Ganesh, I tried to make things as obscure as I could lol
#6 2007-03-07 06:12:48
Legendary Member
Re: Messed up factorial formula
kylekatarn wrote:
Ganesh, I tried to make things as obscure as I could lol
And you certainly succeeded.
Q: Who wrote the novels Mrs Dalloway and To the Lighthouse?
A: Click here for answer.
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
1) The binomial coefficients are a good clue2) don't try to make any calculations before replacing parts of the expression:)
kylekatarn,Seeing the heading, I thought the fomula has something to do with James-Stirling formula! I had been completely misled!
yes it is, Jane, good work! Ganesh, I tried to make things as obscure as I could lol [[]]
kylekatarn wrote:Ganesh, I tried to make things as obscure as I could lol
Ganesh, I tried to make things as obscure as I could lol | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=6189","timestamp":"2014-04-17T12:40:59Z","content_type":null,"content_length":"14716","record_id":"<urn:uuid:9fe7cce1-3d9f-4a9e-8b71-877524849395>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
Possible loss of precision
On 11/14/2013 6:31 PM, Eric Sosman wrote:
> On 11/14/2013 6:03 PM, Gene Wirchenko wrote:
>> On Thu, 14 Nov 2013 20:39:42 +0100, Michael Jung
>> <(E-Mail Removed)> wrote:
(E-Mail Removed)-berlin.de
(Stefan Ram) writes:
>>>> char x = 'C' + 1;
>>>> x = x + 1;
>>>> in a block, one gets an error message
>>>> »possible loss of precision«
>>>> for »x = x + 1«. Ok, fair enough: »x + 1« has type »int«,
>>>> and precision is lost when assigning this to the variable x
>>>> of type char.
>>>> I was somewhat surprised, though, that »char x = 'C' + 1;«
>>>> does not yield an error, even though the type of »'C' + 1«
>>>> is int, too. Possibly, this time, the compiler can figure
>>>> out that the actual value still fits into a char.
>>>> But then, »int i = 2.0;« gives an error, even though the
>>>> compiler should see that »2.0« can be represented as an int.
>>> You are right that x+1 is "promoted" to int and the second assignment
>>> fails. In the other cases the compiler will follow JLS 15.28 => 5.2.
>>> This concerns constant expressions first and the second allows a
>>> narrowing in rare cases, of which the first line is one but the
>>> "int i = 2.0" isn't.
>> Why isn't it though?
> Speculation: The fact that a floating-point expression happens
> to evaluate to a number with no fractional part doesn't mean the
> value "is" an integer. One reasonable view of FP is that a value
> is just a representative of a range of real values, namely, all
> those real values that round to the representative. In this
> view, converting 2.0 to 2 *does* lose precision, specifically,
> the amount of wiggle room.
As a more concrete example,
static final double FOO = 17.316;
static final double BAR = 8.658;
int i = FOO / BAR; // Should javac be silent here?
Neither the numerator nor the denominator can be represented
exactly as stated, yet the quotient is (probably; cook up your
own numbers if mine are faulty) exactly 2.0. Would it be a
good idea to let this construct slide through unremarked? Or
should javac acknowledge the inherent imprecisions in the
operations leading to the exactly-integral result?
Eric Sosman
(E-Mail Removed) | {"url":"http://www.velocityreviews.com/forums/t966058-possible-loss-of-precision.html","timestamp":"2014-04-21T10:10:11Z","content_type":null,"content_length":"67945","record_id":"<urn:uuid:05487753-041d-45d4-8b73-d2fc5a312464>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
This Article
Bibliographic References
Add to:
The Ball-Pivoting Algorithm for Surface Reconstruction
October-December 1999 (vol. 5 no. 4)
pp. 349-359
ASCII Text x
Fausto Bernardini, Joshua Mittleman, Holly Rushmeier, Cláudio Silva, Gabriel Taubin, "The Ball-Pivoting Algorithm for Surface Reconstruction," IEEE Transactions on Visualization and Computer
Graphics, vol. 5, no. 4, pp. 349-359, October-December, 1999.
BibTex x
@article{ 10.1109/2945.817351,
author = {Fausto Bernardini and Joshua Mittleman and Holly Rushmeier and Cláudio Silva and Gabriel Taubin},
title = {The Ball-Pivoting Algorithm for Surface Reconstruction},
journal ={IEEE Transactions on Visualization and Computer Graphics},
volume = {5},
number = {4},
issn = {1077-2626},
year = {1999},
pages = {349-359},
doi = {http://doi.ieeecomputersociety.org/10.1109/2945.817351},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote x
TY - JOUR
JO - IEEE Transactions on Visualization and Computer Graphics
TI - The Ball-Pivoting Algorithm for Surface Reconstruction
IS - 4
SN - 1077-2626
EPD - 349-359
A1 - Fausto Bernardini,
A1 - Joshua Mittleman,
A1 - Holly Rushmeier,
A1 - Cláudio Silva,
A1 - Gabriel Taubin,
PY - 1999
KW - 3D scanning
KW - shape reconstruction
KW - point cloud
KW - range image.
VL - 5
JA - IEEE Transactions on Visualization and Computer Graphics
ER -
Abstract—The Ball-Pivoting Algorithm (BPA) computes a triangle mesh interpolating a given point cloud. Typically, the points are surface samples acquired with multiple range scans of an object. The
principle of the BPA is very simple: Three points form a triangle if a ball of a user-specified radius $\rho$ touches them without containing any other point. Starting with a seed triangle, the ball
pivots around an edge (i.e., it revolves around the edge while keeping in contact with the edge's endpoints) until it touches another point, forming another triangle. The process continues until all
reachable edges have been tried, and then starts from another seed triangle, until all points have been considered. The process can then be repeated with a ball of larger radius to handle uneven
sampling densities. We applied the BPA to datasets of millions of points representing actual scans of complex 3D objects. The relatively small amount of memory required by the BPA, its time
efficiency, and the quality of the results obtained compare favorably with existing techniques.
[1] K. Pulli, T. Duchamp, H. Hoppe, J. McDonald, L. Shapiro, and W. Stuetzle, “Robust Meshes from Multiple Range Maps,” Intl. Conf. Recent Advances in 3D Digital Imaging and Modeling, pp. 205-211,
IEEE CS Press, May 1997.
[2] H. Edelsbrunner and D.P. Mücke, “Three-Dimensional Alpha Shapes,” ACM Trans, Graphics, vol. 13, pp. 43-72, 1994.
[3] J. Abouaf, "The Florentine Pietà: Can Visualization Solve the 450-Year-Old Mystery?," IEEE Computer Graphics and Applications, vol. 19, no. 1, Jan./Feb. 1999, pp. 6-10.
[4] F. Bernardini, C. Bajaj, J. Chen, and D. Schikore, “Automatic Reconstruction of 3D CAD Models from Digital Scans,” Int'l J. Computational Geometry and Applications, vol. 9,nos. 4&5, pp. 327-370,
Aug.-Oct. 1999.
[5] R. Mencl and H. Müller, “Interpolation and Approximation of Surfaces from Three-Dimensional Scattered Data Points,” Proc. Eurographics '98, Eurographics, State of the Art Reports, 1998.
[6] B. Curless and M. Levoy, “A Volumetric Method for Building Complex Models from Range Images,” Proc. SIGGRAPH '96, pp. 303-312, 1996.
[7] W.E. Lorensen and H.E. Cline, “Marching Cubes: A High Resolution 3D Surface Construction Algorithm,” Computer Graphics (SIGGRAPH '87 Proc.), vol. 21, pp. 163-169, 1987.
[8] M. Soucy and D. Laurendeau, "A General Surface Approach to the Integration of a Set of Range Views," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 17, no. 4, pp. 344-358, Apr. 1995.
[9] G. Turk and M. Levoy, “Zippered Polygon Meshes from Range Images,” Proc. SIGGRAPH '94, pp. 311-318, 1994.
[10] C. Dorai, G. Wang, A.K. Jain, and C. Mercer, “Registration and Integration of Multiple Object Views for 3D Model Construction,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20,
no. 1, pp. 83-89, Jan. 1998.
[11] C.L. Bajaj, F. Bernardini, and G. Xu, “Automatic Reconstruction of Surfaces and Scalar Fields from 3D Scans,” Proc. SIGGRAPH '95, pp. 109-118, Aug. 1995.
[12] N. Amenta, M. Bern, and M. Kamvysselis, “A New Voronoi-Based Surface Reconstruction Algorithm,” Proc. SIGGRAPH '98, pp. 415-421, 1998.
[13] J.-D. Boissonnat, “Geometric Structues for Three-Dimensional Shape Representation,” ACM Trans. Graphics, vol. 3, no. 4, pp. 266-286, Oct. 1984.
[14] R. Mencl, “A Graph-Based Approach to Surface Reconstruction,” Proc. EUROGRAPHICS '95, Computer Graphics Forum, vol. 14, no. 3, pp. 445-456, 1995.
[15] A. Hilton et al., "Marching Triangles: Range Image Fusion for Complex Object Modelling," Proc. Int'l Conf. Image Processing, IEEE Computer Soc. Press, Los Alamitos, Calif., 1996.
[16] F. Bernardini and C. Bajaj, “Sampling and Reconstructing Manifolds Using Alpha-Shapes,” Proc. Ninth Canadian Conf. Computational Geometry, pp. 193-198, Aug. 1997. Updated online version
available atwww.qucis.queensu.cacccg97.
[17] N. Amenta and M. Bern, “Surface Reconstruction by Voronoi Filtering,” Proc. 14th Ann. ACM Symp. Computer Geometry, pp. 39-48, 1998.
[18] H. Edelsbrunner, “Weighted Alpha Shapes,” Technical Report UIUCDCS-R-92-1,760, Computer Science Dept., Univ. Illinois, Urbana, IL, 1992.
[19] G. Taubin, "A Signal Processing Approach to Fair Surface Design," Computer Graphics Proc., Ann. Conf. Series, ACM Siggraph, ACM Press, New York, 1995, pp.351-358.
[20] Y. Chiang, C.T. Silva, and W.J. Schroeder, “Interactive Out-of-Core Isosurface Extraction,” Proc. Visualization 1998, pp. 167-174, Oct. 1998.
[21] H. Rushmeier and F. Bernardini, "Computing Consistent Normals and Colors from Photometric Data," Proc. 2nd Int'l Conf. 3D Digital Imaging and Modeling, IEEE Press, Piscataway, N.J., 1999, pp.
[22] A.P. Witkin and P.S. Heckbert, "Using Particles to Sample and Control Implicit Surfaces," Computer Graphics(Proc. Siggraph 94), vol. 28, no. 2, 1994, pp. 269-277.
[23] P. Crossno and E. Angel, “Isosurface Extraction Using Particle Systems,” Proc. IEEE Visualization '97, pp. 495-498, Nov. 1997.
Index Terms:
3D scanning, shape reconstruction, point cloud, range image.
Fausto Bernardini, Joshua Mittleman, Holly Rushmeier, Cláudio Silva, Gabriel Taubin, "The Ball-Pivoting Algorithm for Surface Reconstruction," IEEE Transactions on Visualization and Computer Graphics
, vol. 5, no. 4, pp. 349-359, Oct.-Dec. 1999, doi:10.1109/2945.817351
Usage of this product signifies your acceptance of the
Terms of Use | {"url":"http://www.computer.org/csdl/trans/tg/1999/04/v0349-abs.html","timestamp":"2014-04-18T14:29:41Z","content_type":null,"content_length":"59912","record_id":"<urn:uuid:6ceff0fd-1bef-4661-ad2b-955452fbbbdd>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
vertical spring
a mass is attached to a spring and released. it then oscillates in simple harmonic motion. what is the transformation of energy?
i understand how it works horizontally (max Ee at the 2 ends, max Ek at the equilibrium position), but how does it work vertically now that Eg is also present?
this is how i see it: setting the maximum stretch as h = 0, upon release of the mass, Eg = max (let's set max as 1 unit), Ee = 0, Ek = 0. at the bottom, Ee = 1, Eg = 0, Ek = 0. but at the middle,
which is the equilibrium point, Ek should be max, and Eg and Ee should be 0, but it is half way of the unstretched position, so Ee = 1/2, and Eg is also 1/2 since it is half way between max height
and min height (which i set to 0), so how can there be Ek? this is what's confusing me.
There is actually something interesting and nontrivial in the case of a vertical spring. In calculating [itex] {1 \over 2 } k x^2 [/itex], where is the x measured from? One can measure it from the
equilibrium position corresponding to the length when the spring is horizontal (let's call this the "unstretched" equilibrium position) OR one can measure it from the new equilibrium position when
the spring is vertical (the "stretched eq. pos.). This affects the way one applies conservation of energy.
Usually, most people start working with respect to the new, "stretched" equilibrium position (so that x goes from +A to -A, where A si the amplitude of the motion). In that case, it turns out that
when applying conservation of energy, one does not need to include mgh! One simply uses the potential energy stored in the spring and the kinetic energy.
It's easy to rpove this explicitly and is ultimately due to the fact that the force of the spring is a linear force. Shifting the origin in [itex] {1 \over 2 } k x^2 [/itex] adds a linear term in the
shift and this term is exactly cancelled by mgh. Quite neat to see at work, actually. | {"url":"http://www.physicsforums.com/showthread.php?t=119128","timestamp":"2014-04-18T18:24:29Z","content_type":null,"content_length":"31467","record_id":"<urn:uuid:07fc175c-81dc-46b6-855a-62c56c301639>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometry: Logic Statements
Problem : State the inverse, converse, and contrapositive of the following statement.
p : If the worker is injured, then the family sues.
Inverse: If the worker is not injured, then the family does not sue.
Converse: If the family sues, then the worker is injured.
Contrapositive: If the family does not sue, then the worker is not injured.
Problem : State the inverse, converse, and contrapositive of the following statement.
p : If the enemy retreats, the general will pursue them.
Inverse: If the enemy does not retreat, the general will not pursue them.
Converse: If the general pursues them, then the enemy will retreat.
Contrapositive: If the general does not pursue them, then the enemy will not retreat.
Problem : Given the following statement, decide its truth value, and then decide the truth values of its inverse, converse, and contrapositive.
p : If a polygon has four sides, it is a pentagon.
Statement: F
Inverse: F
Converse: F
Contrapositive: F
Problem : Given the following statement, decide its truth value, and then decide the truth values of its inverse, converse, and contrapositive.
p : If a triangle is obtuse, then it has one obtuse angle.
Statement: T
Inverse: T
Converse: T
Contrapositive: T
Problem : Given the following statement, decide its truth value, and then decide the truth values of its inverse, converse, and contrapositive.
p : If two triangles are congruent, then they are similar.
Statement: T
Inverse: F
Converse: F
Contrapositive: T | {"url":"http://www.sparknotes.com/math/geometry3/logicstatements/problems_2.html","timestamp":"2014-04-20T06:06:15Z","content_type":null,"content_length":"51797","record_id":"<urn:uuid:4f217f47-88b8-46ef-ba75-5989b5e48448>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proceedings Abstracts of the Twenty-Third International Joint Conference on Artificial Intelligence
The Inclusion-Exclusion Rule and Its Application to the Junction Tree Algorithm / 2568
David Smith, Vibhav Gogate
In this paper, we consider the inclusion-exclusion rule – a known yet seldom used rule of probabilistic inference. Unlike the widely used sum rule which requires easy access to all joint probability
values, the inclusion-exclusion rule requires easy access to several marginal probability values. We therefore develop a new representation of the joint distribution that is amenable to the
inclusion-exclusion rule. We compare the relative strengths and weaknesses of the inclusion-exclusion rule with the sum rule and develop a hybrid rule called the inclusion- exclusion-sum (IES) rule,
which combines their power. We apply the IES rule to junction trees, treating the latter as a target for knowledge compilation and show that in many cases it greatly reduces the time required to
answer queries. Our experiments demonstrate the power of our approach. In particular, at query time, on several networks, our new scheme was an order of magnitude faster than the junction tree | {"url":"http://ijcai.org/papers13/Abstracts/378.html","timestamp":"2014-04-20T06:49:02Z","content_type":null,"content_length":"1833","record_id":"<urn:uuid:755f90d1-aac9-4017-8777-68d7a0b2c942>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Adding integers from a file
October 31st, 2012, 10:52 PM #1
Join Date
Sep 2012
Thanked 0 Times in 0 Posts
My program works effectively, but now I have to add up the total of the integers. How would I go about doing this? The program is just reading a simple file in c:/ which it does, and displays the
files text, which it does. How do i add up the sum of these numbers?
* To change this template, choose Tools | Templates
* and open the template in the editor.
package readfile;
import java.io.*;
public class ReadFile {
* @param args the command line arguments
public static void main(String[] args) throws IOException, FileNotFoundException {
public static void processFile (String scores) throws IOException, FileNotFoundException {
FileInputStream fstream = new FileInputStream("c:/scores.txt");
DataInputStream in = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(in));
String strLine;
while ((strLine = br.readLine()) !=null) {
//Closing input stream
Any help would be appreciated. I added a parse to double, so now the string that is read from the file is converted to a double.
My program works effectively,....The program is just reading a simple file in c:/ which it does, and displays the files text, which it does.
but now I have to add up the total of the integers. How would I go about doing this?
What integers?
I added a parse to double, so now the string that is read from the file is converted to a double
If you are adding integers there is no reason to use double. Use int.
To you, you may have provided plenty of information. To people who have no clue what your project is supposed to do, there is much missing...
I am guessing you are reading some type of numbers from a text file and want to get a total of all of the numbers? Anything in the file besides numbers? Are the numbers all integers or what are
Is it one number per line? If it is for an assignment it may be helpful to post the instructions. If not try to explain what the contents of the file are supposed to be, and exactly what you are
supposed to do.
In my textfile, my goal is to add up the sum of all the integers, the text file is in a formal like this;
One per line. That is the only thing in the textfile, integers. I am stuck on how to add up the sum of all these integers that are on their own line.
So far, I have only been able to get the numbers to display.
Lets pretend you are standing in front of me, and I say to you, "Hey add these numbers up" and I start giving you numbers...
How would you add them up?
I say 15.
You say 15.
I say 14.
You say 29.
I say 23.
You say 52.
I say done.
You say the total is 52.
Every time I give you a new number, you add it to a running total. That would be one way.
Another way.
I say 15.
You say 15.
I say 14.
You say 15, 14.
I say 23.
You say 15, 14, 23.
I say done.
You say the total is 52.
Every time I give you a number, you remember every number until the end, and then add them all up.
The first way may be preferable if you only need a total. The second way also works, but may be more preferable in other situations, or possibly this one. Decide on how you will solve the problem
first, and worry about writing code to perform the steps of the solution later.
I could do that, but I do not know how to read each line seperately.
Update: I was able to add up all of the integers, and display what it equals, but now how do i get the mean or avg of these?
For some reason, it is printing out the mean of each seperate integer, but it want it only to calculate once with the total of all integers
Code so far:
* To change this template, choose Tools | Templates
* and open the template in the editor.
package readfile;
import java.io.*;
public class ReadFile {
* @param args the command line arguments
public static void main(String[] args) throws IOException, FileNotFoundException {
public static void processFile (String scores) throws IOException, FileNotFoundException {
FileInputStream fstream = new FileInputStream("c:/scores.txt");
DataInputStream in = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(in));
String strLine;
double doubleValue;
int accumulator = 0;
while ((strLine = br.readLine()) !=null) {
doubleValue = Integer.parseInt(strLine);
accumulator += (doubleValue);
System.out.print ("Total Sum: ");
System.out.print("Mean of Scores: ");
//Closing input stream
Last edited by LoganC; November 1st, 2012 at 03:25 PM.
I could do that, but I do not know how to read each line seperately.
Sure you do. The following statement in your posted code does just that:
strLine = br.readLine()
I was able to add up all of the integers, and display what it equals, but now how do i get the mean or avg of these?
Did you read what I said near the bottom of my other post? Pay more attention to the second example or think of a third way. To get an average you will need to know how many numbers were added
together right? Going to have to count how many numbers you add up then... Figure out a way to do that.
For some reason, it is printing out the mean of each seperate integer, but it want it only to calculate once with the total of all integers
The computer just does what the code tells it to do when it is told to do it. If you find something happening to every number as it goes instead of one time at the end, consider where the line of
code is placed in terms of when and how many times it will run.
November 1st, 2012, 12:05 PM #2
Join Date
Sep 2012
Thanked 0 Times in 0 Posts
November 1st, 2012, 02:15 PM #3
November 1st, 2012, 02:48 PM #4
Join Date
Sep 2012
Thanked 0 Times in 0 Posts
November 1st, 2012, 03:02 PM #5
November 1st, 2012, 03:04 PM #6
Join Date
Sep 2012
Thanked 0 Times in 0 Posts
November 1st, 2012, 04:10 PM #7 | {"url":"http://www.javaprogrammingforums.com/whats-wrong-my-code/18885-adding-integers-file.html","timestamp":"2014-04-18T03:24:30Z","content_type":null,"content_length":"78479","record_id":"<urn:uuid:c6acf875-4981-4bf7-a7d5-ec1dd6bfeecb>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00552-ip-10-147-4-33.ec2.internal.warc.gz"} |
irritating derivative problem
October 19th 2011, 05:05 PM #1
Junior Member
Dec 2010
hi, i have a test on derivatives tomorrow and was going over some problems when I got stumped. I have the problem and the answer, but i want to know how to get the answer. thanks
use definition fprime (a) = (lim as h approaches 0) [f(a+h) - f(a)] / h
to find the derivative of the given function at the indicated point
f(x) = 1/x, a = 2
nvm, brain cramp, i figured it out
Last edited by TacticalPro; October 19th 2011 at 05:21 PM.
Re: irritating derivative problem
$f'(a) = \lim_{h \to 0} \frac{\frac{1}{a+h} - \frac{1}{a}}{h}$
$= \lim_{h \to 0} \frac{\frac{a}{a(a+h)} - \frac{a+h}{a(a+h)}}{h}$
$= \lim_{h \to 0}\left(\frac{1}{h}\right)\left(\frac{-h}{a(a+h)}\right)$
$= \lim_{h \to 0}\frac{-1}{a(a+h)}$
$= \frac{-1}{a^2}$, provided a ≠ 0.
since 2 ≠ 0, f'(2) = -1/4.
Re: irritating derivative problem
hi, i have a test on derivatives tomorrow and was going over some problems when I got stumped. I have the problem and the answer, but i want to know how to get the answer. thanks
use definition fprime (a) = (lim as h approaches 0) [f(a+h) - f(a)] / h
to find the derivative of the given function at the indicated point
f(x) = 1/x, a = 2
$\lim_{h \to 0} \frac{1}{h} \left(\frac{1}{2+h} - \frac{1}{2}\right)$
get a common denominator and combine the fractions in ( ) ... do it right and you'll get that h in the denominator to cancel out ... then determine the limit.
Re: irritating derivative problem
thanks, when i first looked at the problem i completely forgot about rearranging to get the h out of the botom
Re: irritating derivative problem
$\lim_{h \to 0} \frac{\frac{1}{x+h} - \frac{1}{x}}{h}$
$= \lim_{h \to 0} \frac{\frac{x}{x(x+h)} - \frac{x+h}{x(x+h)}}{h}$
$= \lim_{h \to 0}\left(\frac{1}{h}\right)\left(\frac{-h}{x(x+h)}\right)$
$= \lim_{h \to 0}\frac{-1}{x(x+h)}$
$= \frac{-1}{x^2}$, provided x ≠ 0.
since 2 ≠ 0, f'(2) = -1/4.
thanks!!!! I actually plugged in for x before rearranging and I find I like it your way better: rearranging the equation in terms of x, and then putting in 2 for x to get the derivative. !!! yay
October 19th 2011, 05:21 PM #2
MHF Contributor
Mar 2011
October 19th 2011, 05:21 PM #3
October 19th 2011, 05:23 PM #4
Junior Member
Dec 2010
October 19th 2011, 05:25 PM #5
Junior Member
Dec 2010 | {"url":"http://mathhelpforum.com/calculus/190823-irritating-derivative-problem.html","timestamp":"2014-04-16T10:23:39Z","content_type":null,"content_length":"46471","record_id":"<urn:uuid:5f97ccc4-884b-4ae3-9651-c1ae671b6ed1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] behavior of masked arrays
Giorgio F. Gilestro giorgio@gilestro...
Sun Mar 9 12:35:27 CDT 2008
Ok generic functions and a ma.stats specific module sounds very good to
me. Hope is going to happen for ma are a great plus.
Pierre, I did some adjusting to some of the functions in
scipy.stats.stats and more I am planning to do - not all but those I'll
need I am afraid. Is it ok if I send you what I'll have so that you have
a look at it (at your convenience) and maybe integrate it to
For the moment the only issues I met are:
- some functions require to know N, the number of elements on which we
are performing the operation. A simple N.shape[axis] won't work but
there is no native method returning the number of unmasked elements on a
given axis (maybe there should be?). So I am using instead
N = a.shape[axis] - a.mask.sum(axis)
- some functions need to handle float data. The float method on masked
array will raise an exception (why so?) so I am either introducing float
constant where possible
e.g. svar = ((n-1)*v) / float(df) becomes svar = ((n-1.0)*v) / df
or multiply by 1.0
Pierre GM wrote:
> On Friday 07 March 2008 12:25:13 Giorgio F. Gilestro wrote:
>> Ok, I see, thank you Pierre.
>> I thought scipy.stats would have been a widely used extension so I
>> didn't really consider the trivial possibility that simply wasn't
>> compatible with ma yet.
> Partly my fault here, as I should have ported more functions. <rant>Blame the
> fact that working on an open-source project doesn't translate in
> publications, and that my bosses are shortening the leash...</rant>.
> Note that most (all?) of the functions in scipy.stats never supported masked
> arrays in the first place anyway. Now that MaskedArray is just a subclass of
> ndarray, porting the functions should be easier.
>> I had a quick look at the code and it really seems that ma handling can
>> be achieved by replacing np.asarray with np.ma.asarray, and some
>> functions with their methods (like ravel) here and there.
> Yes and no. I'd prefer to use numpy.asanyarray as to avoid converting ndarrays
> to masked arrays, and use methods as much as possible. Of course, there's
> gonna be some particular cases to handle (as when all the data are masked),
> but that should be relatively painless.
> Another issue is where to store the new functions: should we try to ensure
> full compatibility of scipy.stats with masked arrays? Create a new module
> scipy.mstats instead, that we'd fill up with time ? I'd be keener on the
> second approach, as we could move most of the functions currently in
> numpy.ma.m(ore)stats to this new module, and that'd probably less work at
> once...
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-March/031818.html","timestamp":"2014-04-17T14:35:44Z","content_type":null,"content_length":"5837","record_id":"<urn:uuid:2470261d-3c39-41e0-b71b-dc55cdc07da2>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spaceship entering mars!!
okay guys so you have to open this file to read the question accuretely.. i have done some calculations but not sure if they are correct.
So just this as best as i can, fist i figure out my semi-minor axis = a
by 150km +150 KM +230 km =530km (major axis) now to get me semi minor axis i will just divide it by 2 . Giving me 265km!
Focal lenght= 265km - earths disntace to the sun (150 km) =115 km
Now to get my semi minor axis i use the equation b2=(a2-f2) | {"url":"http://www.physicsforums.com/showthread.php?t=220051","timestamp":"2014-04-20T08:34:58Z","content_type":null,"content_length":"19813","record_id":"<urn:uuid:c06e903e-e684-4a69-8264-d7578f3535fa>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex Conjugate Roots and the Quadratic Formula
February 29th 2012, 12:56 PM #1
May 2009
Complex Conjugate Roots and the Quadratic Formula
Hello, there.
I have the quadratic equation $x^2+2x+7=0$
Using $\begin{array}{*{20}c} {x = \frac{{ - b \pm \sqrt {b^2 - 4ac} }}{{2a}}} \\ \end{array}$, I get $\begin{array}{*{20}c} {x = \frac{{ - 2 \pm \sqrt {-24} }}{{2}}} \\ \end{array}$.
Now, I know the answer to this question is two complex conjugate roots, $x = - 1 \pm i \sqrt{6}$, my question is how? The square root of 24 isn't 6. I'm confused.
I guess I'm just asking for the steps to get there.
Thank you in advance.
Re: Complex Conjugate Roots and the Quadratic Formula
so sqrt(-24)=i2sqrt(6)
now you try.
Re: Complex Conjugate Roots and the Quadratic Formula
I'm still not sure I see where the $\sqrt{6}$ comes from..
This is on the tip of my tongue from when I took Algebra, but I'm not sure I remember how to factor radicals.
Edit: Nevermind I get it. $\sqrt{ab} = \sqrt{a} \sqrt{b}$ so $\sqrt{24} = \sqrt{4} \sqrt{6} = 2 \sqrt{6}$
February 29th 2012, 01:19 PM #2
Jan 2011
February 29th 2012, 01:46 PM #3
May 2009 | {"url":"http://mathhelpforum.com/algebra/195502-complex-conjugate-roots-quadratic-formula.html","timestamp":"2014-04-17T01:43:13Z","content_type":null,"content_length":"36060","record_id":"<urn:uuid:a0ff00e2-3fe6-4fe0-ad0c-d16b1dba6d0a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
PHYSTAT 05
All abstracts are in Adobe PDF format
Random walks, a link between statistical, condensed matter, mathematical and particle Physics.
Ali Alavi (Seyed Ali Asghar Alavi)
Application of machine learning tools to particle Physics
P. Bargassa, S. Herrin, S-J Lee, P. Padley, R. Vilalta - Rice University and The University of Houston, USA
A note on Delta ln L = -1/2 errors
Roger Barlow, Physics.org
Asymmetric Errors
Roger Barlow, Physics.org
Bayesian Neural Networks
P.C. Bhat (Fermi, Il USA), H.B. Prosper (Florida State University)
Regularized inversion methods and error bounds for general statistical inverse problems with application to density estimation of young massive cluster luminosities in the Antennae galaxies
Dr. Nicolai Bissantz - University of Göttingen, Germany
Program for evaluation of the significance confidence intervals and limits by direct probabilities calculations
S. Bityukov and various - Institute for high energy physics, Russia
The Bayesian effects in measurement of the astmmetry of Poisson flows
S. Bityukov and various - Institute for high energy physics, Russia
Statistically dual distributions in statistical inference
S. Bityukov and various - Institute for high energy physics, Russia
A new fast track-fit algorithm based on broken lines
Volker Blobel - University of Hamburg, Germany
Sifting data in the real world
M.M. Block - Department of Physics & Astronomy, Northwestern University, IL USA.
Maximal information analysis: I - various Wayne State plots and the most common likelihood principle
G. Bonvicini, Wayne State University, Detroit
Least Squares Approach to the Alignment of the Generic High Precision Tracking System
P. Brückman de Renstrom, S. Haywood - Univeristy of Oxford and Rutherford Appleton Laboratory, UK
Statistics in ROOT
R. Brun, A. Kreshuk - CERN
CEDAR: Combined e-Science Data Analysis Resource
Andy Buckley - CEDAR, Durham, UK
Bias-Free Estimation in Multicomponent Maximum Likelihood Fits with Component-Dependent Templates
P. Catastini (Universita’ di Siena), G. Punzi (Scuola Normale Superiore) - INFN Pisa, Italy
Bayesian analysis at work: troublesome examples
J. Charles & Various - France, Germany.
Restoration of Supersymmetry against arbitrary small quantum corrections using feedforward neural network
Dr Ashish Chaturvedi
Likelihood Ratio Confidence Intervals with Bayesian Treatment of Systematic Uncertainties
J. Conrad (CERN) & F. Tegenfeldt (ISU)
Generalized Frequentist Methods for Particle Physics
Luc Demortier, The Rockefeller University
Bayesian Reference Analysis for Particle Physics
Luc Demortier, The Rockefeller University
Application of a multidimensional wavelet denoising algorithum for the detention and characterization of astrophysical sources of Gamma rays.
S.W.Digel, B.Zhang, J.Chiang, M.Fadili, J.-L.Starck
x^2 test for comparison of weighted and unweighted histograms
N.D. Gagunashvili - University of Akureyri, Iceland
Unfolding with system identification
N.D. Gagunashvili - University of Akureyri, Iceland
How to do Bayes-Optimal Classification with Massive Datasets: Large-Scale Quasar Discovery
A. Gray and Various - School of Computer Science, Carnegie Mello University, USA
Goodness-of-Fit Statistics: Power Comparisons
M. Grazia Pia, B. Mascialino - INFN, Italy
An update on the Goodness-of-Fit Statistical Toolkit
M. Grazia Pia, B. Mascialino - INFN, Italy
The Bayesian Approach to Setting Limits: What to Avoid
Joel Heinrich - University of Pennsylvania
Examining the balance between optimising an analysis for best limit setting and best discovery potential
G. Hill, J. Hodges, B. Hughey and M. Stamatikos - University of Wisconsin, USA.
Likelihood analysis and goodness-of-fit for low count-rate experiments
A. Ianni - Laboratori Nazionali del Gran Sasso/INFN, Italy
Higher Criticism Statistic: Optimality and Applications in Cosmology and Astronomy
Jiashun Jin - Department of Statistics Purdue University, IN USA.
Expected principal component analysis of cosmic microwave background anisotropies
Samuel Leach, SISSA-ISAS, Italy
Additional Information: http://xxx.soton.ac.uk/abs/astro-ph/0506390
New Developments of ROOT Mathematical Software Libraries
Lorenzo Moneta - CERN
Confidence interval construction applied to an unfolding problem
K. Muenich, G. Hill, W. Rhode and H. Geenen
StatPatternRecongnition: A C++ Package for Statistical Analysis of High Energy Physics Data
Ilya Narsky - California Institute of Technology
Optimization of Signal Significance by Bagging Decision Trees
Ilya Narsky - California Institute of Technology
Fitting boundary value problems
Geoff Nicholls & Various - Department of Statistics, University of Oxford
sPLot; a statistical tool to unfold data distribitions
M. Pivk (Cern) and F.R. Le Diberder (LAL Paris University, France)
Ordering Algorithms and Confidence Intervals in the Presence of Nuisance Parameters
Giovanni Punzi - Scuola Normale Superiore and INFN - Pisa
A General Theory of Goodness of Fit in Likelihood Fits
Rajendran Raja - Fermi Nartional Accelerator Lab, Batavia, IL
Calculation of errors in fitted quantities in likelihood fits
Rajendran Raja - Fermi Nartional Accelerator Lab, Batavia, IL
The Boosting Technique for Particle Physics
B.P.Roe, H.Yang and J.Zhu - University of Michigan, USA.
Limits and Confidence Intervals in the Presence of Nuisance Parameters
Dr. Wolfgang Rolke, University of Puerto Rico - Mayaguez
Cosmological applications of Bayesian model selection techniques
Roberto Trotta - Oxford Astrophysics & Royal Astronomical Society
The RooFit toolkit for data modeling
W. Verkereke, NL
Signal Enhancement Using Multivariate Classification Techniques and Physical Constraints
R.Vilalta, P. Sarda and Others - Houston University and Rice University
Goodness-of-fit for sparce distributions in high energy physics.
B.D. Yabsley, University of Sydney, Australia
Maximum Likelihood Parameter Inference from Lowe Statistics Data and Monte Carlo Simulation
Dr Günter Zech, Germany
On Consistent and Calibrated Inference about the Parameters of Sampling Distributions
Tomi Zivko, Jozef Stefan Institute, Ljubljana, Slovenia
Statistical Software
Prof James Linneman, Michigan State University | {"url":"http://www.physics.ox.ac.uk/phystat05/abstracts.htm","timestamp":"2014-04-17T21:25:11Z","content_type":null,"content_length":"21728","record_id":"<urn:uuid:d141fb05-1440-476c-9770-600c82324275>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00372-ip-10-147-4-33.ec2.internal.warc.gz"} |
sketch the graph y=√(16-y^2)
Are you sure it is 16-Y^2? not X^2 anyway i graphed X^2 and it looks like a bowl upside down Roots:-4,4 y int:4
If the equation is $y=\sqrt{16-x^2}$ Then recall that the equation of a circle is $x^2 + y^2 = k^2$ If you re-arrange for y, then you get your equation, but only the positive half.
The OP fixed the mistake by posting again. It's actually \displaystyle \begin{align*} x = \sqrt{16 - y^2} \end{align*}. This is a semicircle centred at the origin of radius 4 units, taking on only
positive values for x. | {"url":"http://mathhelpforum.com/pre-calculus/213691-sketch-graph-y-16-y-2-a-print.html","timestamp":"2014-04-19T00:12:27Z","content_type":null,"content_length":"5062","record_id":"<urn:uuid:0cd92afd-cc22-408f-92f1-c7d4ae26f3a2>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Charles Hutton
Born: 14 August 1737 in Newcastle-upon-Tyne, England
Died: 27 January 1823 in London, England
Click the picture above
to see three larger pictures
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
Charles Hutton's father, who was a supervisor in a colliery, was descended from a respectable Westmoreland family. Charles, the youngest of his parents sons, was born in Percy Street,
Newcastle-on-Tyne. He was brought up in Newcastle where his father died when he was five years old, but his mother remarried the foreman of Old Long Benton pit whose name was Fraim. Charles suffered
an unfortunate injury when he was seven years old. He became involved in a fight with some other children and his left elbow was dislocated. Had young Charles told his parents immediately it is
probable that doctors could have done enough to allow it to heal over time. However, he kept it from his parents for long enough that by the time it was treated it was impossible to save him from a
permanent disability. Without this injury it is almost certain that Charles would have followed in his brothers footsteps and, like his father and his stepfather, would have gone to work at the
colliery. His parents decided that if he could not do manual work then they would send him to school to learn to read and write.
Riddle tells us of Hutton's education in [7]:-
He was taught to read by an old woman who conducted a little school in the neighbourhood, and to write by a schoolmaster named Robson, near Benwell, a village near Newcastle; and he attended
afterwards a school at Jesmond, kept by the Rev. Mr Ivison, a clergyman of the English Church; and on Mr Ivison's removal to a curacy in the county of Durham, Mr Hutton succeeded him in his
school at Jesmond.
In fact Hutton did work for a short time at Old Long Benton colliery between the time Mr Ivison left the school and when Hutton succeeded him. He was so successful in this school in Jesmond that he
soon moved to larger premises in the neighbourhood. At this stage Hutton began studying mathematics at evening classes at Mr James' school in Newcastle. In 1760 Hutton opened the Mathematical School
in Newcastle but he also taught at the main secondary school in the city. He was fortunate to have a number of private pupils from the local land owning families who helped further his career. Riddle
tells us that [7]:-
... his manners, as well as his talents, rendered him acceptable as a private teacher in the families of the higher classes.
On the other hand these "higher class" people benefited greatly from the high quality of teaching that Hutton provided. One such was Robert Shafto, whose children were taught by Hutton, who not only
gave Hutton access to his extensive library but also persuaded him to begin publishing texts.
Hutton had some pupils who went on to become more famous than their schoolmaster. One such was John Scott who was born in Newcastle in 1751 and went on to be lord chancellor of England between 1801
and 1827. King George IV made Scott Viscount Encombe and Earl of Eldon in 1821. Another of Hutton's pupils was Elizabeth Surtees, from an important family of wealthy bankers in Newcastle, who later
became John Scott's wife.
Encouraged by Shafto, Hutton published his first textbook The Schoolmaster's Guide, or a Complete System of Practical Arithmetic at Newcastle in 1764 which he dedicated to Robert Shafto. It was an
elementary arithmetic text which was soon was adopted widely. He now saw his opportunity to educate schoolmasters and provide them with further mathematical training so he advertised in 1766 and 1767
(see [5]):-
Any schoolmaster, in town or country, who is desirous of improvement in any branch of the mathematics, by applying to Mr Hutton, may be instructed.
The next textbook which Hutton published, again at Newcastle, was A Treatise on Mensuration. The book was, in his own words:-
... adapted particularly to the uses of schools, mathematicians and mechanics.
Thomas Bewick, born in 1753, was an apprentice in Newcastle when he undertook the illustrations in Hutton's A Treatise on Mensuration (1767-1770). Bewick rediscovered the technique of wood engraving
which he went on to establish as a major book illustrating technique but Hutton's book was his first assignment. Hutton certainly made an inspired choice in having Bewick illustrate his book. The
fact that 59 schoolmasters from the Newcastle area subscribed to the text before its publication tells us that Hutton had by this time acquired an excellent reputation both as a teacher and as a
writer of mathematical texts. Many later writers borrowed much material from this book.
Not only was Hutton teaching and writing textbooks, but he also undertook a land survey of the area around Newcastle for the mayor and corporation of the city. In 1770 he produced Plan of Newcastle
and Gateshead which is now lodged in Newcastle City Library. Two years later he published The Principles of Bridges, a treatise on the equilibrium of bridges.
Shafto persuaded Hutton to have greater ambitions than being a schoolmaster in Newcastle and when a competition for the position of professor of mathematics at the Royal Military Academy in Woolwich
in London was announced, following the death of Mr Cowley, Hutton became one of the eleven competitors. Maskelyne was one of the panel with the task of choosing the best candidate for the post and
there was little doubt that Hutton showed himself to be a class above the rest in the several days of examinations; he was appointed on 24 May 1773. In the following year, on 16 June, he became a
Fellow of the Royal Society. He later received an honorary degree from the University of Edinburgh.
Hutton became editor of the Ladies' Diary in 1773 and continued to undertake his editorial duties for 45 years until 1818; there is information about his role as editor in [6]. Riddle writes in [7]:-
The editorship of the Ladies' Diary afforded him an opportunity of becoming acquainted with the talents and acquirements of many ingenious individuals, who were inproving themselves in science by
endeavouring to solve the mathematical questions proposed in the Diary; ans as opportunity occurred, many of them were drawn by his kind discrimination from obscurity, and placed in situations in
which they were eminently useful to society.
In 1775 he published five volumes of extracts from the Ladies' Diary dealing with:-
... entertaining mathematical and poetical parts.
He now began to publish interesting papers in the Philosophical Transactions of the Royal Society. In 1776 he published A new and general method of finding simple and quickly converging series and
two year later, in the same Transactions he published The force of fired gunpowder and the velocity of cannon balls. He received the Copley Medal of the Royal Society for this 1778 paper. He also
computed the mean density of the Earth based on Maskelyne's data from the mountain Schiehallion in An Account of the Calculations made from the Survey and Measures taken at Schiehallion in order to
ascertain the mean density of the Earth (1779).
In 1779 Hutton became foreign secretary of the Royal Society. He held this position for four years before being forced to resign in 1783 by Sir Joseph Banks, who was president of the Society from
1778 to 1820. It was an unfortunate affair which led to considerable controversy in the Society. Banks claimed that Hutton had failed to carry out his duties efficiently, but many in the Society
supported Hutton and felt that it was in fact Banks who had failed to manage the affairs of the Society competently. Banks was accused by Fellows of using excessive authority and of being "despotic".
Francis Maseres and Nevil Maskelyne were among Hutton's supporters, while many others wrote anonymous pamphlets in support of Hutton and critical of Banks.
Hutton continued to publish textbooks, treatises and papers. In 1781 he published Mathematical Tables for the Board of Longitude. Further mathematical tables followed, one of which, published in
1785, contains an important historical introduction. He had a stroke of good fortune which was to make him a rich man and we quote the episode as given in [5]:-
In 1786 Hutton began to suffer from pulmonary disorders. The Royal Military Academy was situated near the river and dampness began to affect is chest; his predecessor Simpson had in fact died
from a chest complaint. Hutton decided then to move, and bought land on the hill south of the river overlooking Woolwich. There he built himself a house and also others for letting. No sooner had
he done this than it was decided to move the Academy from the damp riverside to the hilltop. A magnificent new building was erected, but, in the eyes of George III, its attractiveness was spoiled
by the presence of Hutton's houses. These were therefore sold to the crown who promptly demolished them, leaving Hutton with a hefty profit from his speculation, sufficient to guarantee his
financial future. Thus a physical disability turned him to mathematics and ill-health made him rich.
Hutton returned to publishing textbooks. The Compendious Measurer appeared in 1784, The Elements of Conic Sections in 1787 and, in 1795, his most famous work The Mathematical and Philosophical
Dictionary in two volumes. Baron writes in [1]:-
Although it was criticised as unbalanced in content, unduly cautious in tone, and sometimes lacking judgement, the dictionary has served as a valuable source for historians of mathematics.
However Howson writes [5]:-
It is an excellent survey of mathematics, includes biographies of many mathematicians, and is a pioneer contribution to the history of mathematics.
The first volume looks at topics such as: arithmetic including discussion of square and cube roots, arithmetical and geometrical progressions, compound interest, double position and permutations and
combinations; logarithms; algebra including the study of quadratic equations and the Cardan-Tartaglia method for cubic equations; geometry which follows the approach in Euclid's Elements; surveying;
and conic sections. The second volume contains Newton's approach to the differential and integral calculus.
The syllabus which was covered at The Royal Military Academy at Woolwich determined the contents of his textbook A course of mathematics for cadets of the Royal Military Academy published from 1798
to 1801. Hutton's fame as a writer of textbooks was such that even before this work appeared great things were expected of it as is indicated by the following pre-publication report in The Monthly
Magazine of August 1798:-
From Dr Hutton's talents and long experience in his profession, there is every reason to expect that this will not only be a most useful and valuable work, but will completely supersede every
other of the same description.
Written to be a textbook for students at the Academy at Woolwich, it was also adopted by the United States Military Academy at West Point north of New York. This academy opened on 4 July 1802 and
Hutton's book was immediately adopted for the first intake of cadets, remaining the standard text at the Academy until 1823.
Hutton retired from his professorship at Woolwich in 1807 at the age of seventy on a pension of £500 per year and went to live in Bedford Row, London. Shortly before his death he was consulted about
the curves which should be adopted for the arches for the New London Bridge, the proposed structure having five semielliptical stone arches. Construction of the bridge began in 1824, the year after
Hutton's death. He had been married twice and was survived by two daughters and a son. Hutton was buried in the family vault at Charlton in Kent.
Baron gives this assessment of Hutton's contributions in [1]:-
Hutton was an indefatigable worker and his mathematical contributions, if unoriginal, were useful and practical. Throughout his life, he contributed assiduously to scientific periodicals through
notes, problems, criticism, and commentary.
Article by: J J O'Connor and E F Robertson
List of References (7 books/articles)
A Poster of Charles Hutton Mathematicians born in the same country
Honours awarded to Charles Hutton
(Click below for those honoured in this way)
Fellow of the Royal Society 1774
Royal Society Copley Medal 1778
Fellow of the Royal Society of Edinburgh 1786
Cross-references in MacTutor
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
History Topics Societies, honours, etc. Famous curves
Time lines Birthplace maps Chronology Search Form
Glossary index Quotations index Poster index
Mathematicians of the day Anniversaries for the year
JOC/EFR © November 2002 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
The URL of this page is: | {"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Hutton.html","timestamp":"2014-04-17T21:43:38Z","content_type":null,"content_length":"23218","record_id":"<urn:uuid:409ec2d1-7e50-4077-94c9-11c0e48c539b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
Origin of Fujimura set
up vote 4 down vote favorite
If we have 10 coins arranged in an equilateral triangle and we want to know the minimum number of coins we can remove so that none of the remaining coins form an equilateral triangle the remaining
coins form a Fujimura set. See here for more on this problem.
We have been looking at these sets and some generalizations in Polymath1. In the paper "Density Hales-Jewett and Moser numbers" the problem has come up of finding a citation for the original problem.
In Martin Gardners article "“Eccentric Chess and Other Problems” which later appeared in his book Mathematical Circus he cites a a “recent book” of Fujimura. And we are trying to find the cited book.
At least one person has checked Fujimura's book The Tokyo Puzzles and did not find it there.
So the question is if anyone knows of the book by Fujimura where the problem was introduced.
puzzle books co.combinatorics reference-request
add comment
1 Answer
active oldest votes
As far as I know, The Tokyo Puzzles is the only book he ever wrote (at least that was translated into English). However there are several editions of it (1969, 1970, 1976, 1978, 1979,
up vote 2 down 1982). Are you sure this is the same Fujimura?
add comment
Not the answer you're looking for? Browse other questions tagged puzzle books co.combinatorics reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/13580/origin-of-fujimura-set","timestamp":"2014-04-20T03:51:42Z","content_type":null,"content_length":"50656","record_id":"<urn:uuid:4cbbe3e3-1717-4a02-ae24-67a38a4a23ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
equivariant Serre Duality.
up vote 1 down vote favorite
Let $X$ be a nonsingular projective variety of dimension $n$ over a field $k$, and $\omega_X$ be its canonical sheaf. Let $G$ be a finite subgroup of the automorphism group $Aut_k(X)$, and $\mathcal
{F}$ a locally free $G$-equivariant sheaf on $X$. Then $G$ acts on all the cohomology groups $H^i(X, \mathcal{F})$. Is the Serre duality $$ H^i(X, \mathcal{F})\times H^{n-i}(X, \mathcal{F}^\vee\
otimes \omega_X)\to H^n(X, \omega_X)=k$$ a $G$-equivariant perfect pairing? Where can I find a reference to this result?
Thank you.
4 The isomorphism $\mathrm H^n(X, \omega_X) \simeq k$ is universal, that is, invariant under isomorphisms. The universal pairing is functorial under pullbacks. The result follows from this. – Angelo
Mar 20 '13 at 11:14
@Angelo: Perhaps the OP seeks a reference to justify the invariance you mention? Many references on Serre duality make the construction of the trace in a manner that is not sufficiently intrinsic
to render the triviality apparent. It is equivalent to show that the natural composite map $H^n(X,\Omega^n_{X/k}) \rightarrow H^n(X,g^{\ast}(\Omega^n_{X/k})) \rightarrow H^n(X,\Omega^n_{X/k})$ is
the identity (1st step pullback, 2nd step canonical at sheaf level); settling projective spaces "by bare hands" is a bit unpleasant (though easy by using the structure of the automorphism group).
– user28172 Mar 20 '13 at 14:41
@nosr, I tried to prove what you suggested and indeed it is not too difficult. However, I was trying to find a reference to include in a paper. – Jiangwei Xue Mar 26 '13 at 17:24
add comment
1 Answer
active oldest votes
In a restricted situation you may consult
Peskin, Barbara R. On the dualizing sheaf of a quotient scheme. Comm. Algebra 12 (1984), no. 15-16, pp. 1855–1869.
Of course, the general treatment is
Hashimoto, Mitsuyasu Equivariant twisted inverses. in Foundations of Grothendieck duality for diagrams of schemes, pp. 261–478, Lecture Notes in Math., 1960, Springer, Berlin,
up vote 2 down vote 2009.
but it requires a great deal of machinery, derived categories, etc.
Hope this is of some help
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/125051/equivariant-serre-duality","timestamp":"2014-04-21T02:30:57Z","content_type":null,"content_length":"52339","record_id":"<urn:uuid:16077f39-82cd-47a6-a943-2769126b0fe5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Aliso Viejo ACT Tutor
Find an Aliso Viejo ACT Tutor
...I have been commended for my ability to make math more simple and relatable to the student. I have an easy personality that I believe helps make students comfortable and willing to learn. I do
not move on until I feel that the student has sufficiently mastered a skill/topic.
18 Subjects: including ACT Math, chemistry, calculus, physics
...I'm nearly done with my undergraduate degree in Chemistry, but I've been tutoring students and colleagues since high school. My favorite part of tutoring, by far, is the reward of seeing
someone succeed in an area they were struggling with. When I walk in and a student shows me they got an A for the first time on a Chemistry test, it confirms my work and renews my love of the
9 Subjects: including ACT Math, chemistry, calculus, algebra 1
...In addition to being a private tutor, I am the test preparation coordinator for a boutique company in Pasadena. We focus not only on College Admissions exams, but also High School exams
including COOP and HSPT. In addition to being a private tutor, I am the test preparation coordinator for a boutique company in Pasadena.
36 Subjects: including ACT Math, chemistry, Spanish, English
...As a future speech therapist, I am interested in exploring the best ways children pick up language and reading, and as a Physiology major, I think I would be the most helpful with the sciences.
However, I can also help with all of the other subjects listed in my profile, as well as college appli...
25 Subjects: including ACT Math, English, reading, chemistry
...Students love working with me because I take the time to listen and understand where they're coming from and I explain difficult concepts in a simple way. WHAT DO I TEACH? Standardized Test
Prep - SAT exam- ACT exam- SSAT exam- SAT IIs (Math and Science related exams)- AP exams (Math and Scienc...
23 Subjects: including ACT Math, chemistry, physics, calculus | {"url":"http://www.purplemath.com/Aliso_Viejo_ACT_tutors.php","timestamp":"2014-04-20T01:57:37Z","content_type":null,"content_length":"24034","record_id":"<urn:uuid:139892fe-cb3b-4f38-8dc8-5c29f7f6a15e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quick Bayesian Probability Question
September 15th 2010, 07:22 AM #1
Mar 2010
Quick Bayesian Probability Question
Hey all,
I need to check my understanding of this. Let $\theta$ have some prior on it and observe $X_1, ..., X_n, X_{n + 1}$, which are independent conditional on $\theta$, such that $EX_i|\theta = \
theta$. I'm asked to find the posterior mean of $X_{n + 1}$ given $X_1, ..., X_n$. I'll spare the extra details.
My reasoning is
$EX_{n + 1} | X_1, ..., X_n = E\left(EX_{n + 1} | \theta, X_1, ..., X_n \right) | X_1, ..., X_n$
$= E\left(EX_{n + 1} | \theta \right) | X_1, ..., X_n$ (from conditional independnce)
$= E\theta | X_1, ..., X_n$ (since EX|theta = theta)
So the posterior mean of $X_{n + 1}$ is the posterior mean of $\theta$. This seems a little off to me for some reason I can't explain. Does this look okay? It seems like this is working out to an
expectation over the joint distribution of $\theta|X_1, ..., X_n$ and $X_{n + 1} | \theta$ but I guess I'm not sure if that's what is meant by asking for the posterior mean.
More broadly, if I'm asked for anything posterior, should I get something that is free of theta?
Nothing? After thinking about it I think this is fine, but I'd like to get some confirmation that I'm not making some fundamental mistake.
September 15th 2010, 04:37 PM #2
Mar 2010 | {"url":"http://mathhelpforum.com/advanced-statistics/156265-quick-bayesian-probability-question.html","timestamp":"2014-04-16T18:13:33Z","content_type":null,"content_length":"34718","record_id":"<urn:uuid:99f172bc-9713-442d-9d68-542f2f85a308>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
An interesting conversation with Stan Cohen.
I have known Stan (President of SPEAKEASY Computing Corp),
as I have known Jim Goodnight (President of SAS), and as I
recalled in my reply to Gordon Sande in the thread "Time to
Teach Digital Etiquette", I invited both of them to a Session in
an International Symposium in 1978 because of the power,
versatility and USER-FRIENDLINESS of their software in that
era, and even by today's standards.
SAS had 4 employees then, but needs no introduction today,
except perhaps Jim Goodnight whom very few knew that he
is a superb statistician himself, and had many valuable
contributions in the computing software in SAS, for GLM
pseudo-inverses and generalized inverses, and many other
applications of the matrix operator SWEEP.
Stan Cohen is the quiet genius in computing software who
developed SPEAKEASY in 1960!! while he was a physicist
at Argonne Labs. Since the early 1960s, he formed his own
Corporation, and I wasn't aware of Speakeasy until after I
left the University of Chicago, while he was living in the
same Hyde Park neighborhood in Chicago.
SPEAKEASY remains today relatively unknown, but is in
my estimation, still the BEST software product ever
created and continuously improved and adapted to
modern computing environments, from platforms on
various mainframe computers to different PCs, to super-
computers and parallel computers. The SAME Speakeasy
software language applies!
Stan and I hadn't talked to each other since the last
millenium <g> until someone read something posted by
a Reef Fish about SPEAKEASY in the Math Forum (which
apparently shows all the posts in sci.stat.math) and
mentioned it to him. Stan wrote me an email, not knowing
who Reef Fish was and told me about the PC version,
not knowing that I had written a 6-page GLOWING review
of Micro-Speakeasy Delta Version in the American
Statistician in 1987, and that he had known me, and
knew me well, for years.
That was a BENEFIT of being in sci.stat.math that I'll
never forget! Getting re-acquainted to Stan. :-) Stan
immediately gave me the much, much improved THETA
version of 2002, when he learned that the ONLY reason
why I still had a 1990 TI lap top was that it had Speakez
in it, and I had no access to Speakez elsewhere. :-)
Now I have Speakeasy on everyone of my laptops, and
I used it to re-program and do computations with what
I had previously relied on the use of my own system IDA.
Our conversation this afternoon was prompted by my
suggestion to Stan that we should try to CO-AUTHOR
my Data Analysis "textbook" which I had been using
and revising for 30 years but never published. I knew
Stan is very creative with graphics and other fancy
stuff that were non-existent in IDA or in that era, and
I thought he could contribute heavily to the graphics
in the book while I revise and update the statistics --
I also had many Speakeasy programs already in
use for those computations that are not found in other
statistical software, for Advanced topics in Data
So far, it's just the intro of the BACKGROUND of
what was interesting. :-)
I knew that SPEAKEASY (Speakez for short, which
I'll further shorten to EZ) was never considered a
statistical package, but rather a general computing
software that have many statistical capabilities.
However, because of the POWER of that language,
I was able to easily write my own software using the
EZ as my base software.
So, the first thing we talked about was what I told
Stan are the TWO most commonly used graphical
methods in statistics, the Normal probability of qq
plot (the command NORM in IDA) and the PLTS
(PLoT Sequence command in IDA) for validating
the Normality and Independence assumptions in
a regression problem. The most glaring absence
in EZ is the capability to do a p-p or q-q plot.
Stan says, "But Bob, I don't know anything about
statistics!", and I assured him that I could tell him
what a q-q plot is in TWO minutes. :-)
That's where the interesting part begins. It took
over two minutes, but not by much, because Stan
didn't know what a normal quantile is nor what
the EZ function GAUSSINV does. :-) I said,
"Does GAUSSINV(.975) = 1.96 ring any bell?".
He won the "no bell" prize.
So, I was teaching the President of EZ how to do
some things in a package in which those had been
in place for nearly 50 years.
But the most interesting part was that he not only
grasped those simple ideas quickly, but we had
the QQ subroutine written in complete generality,
in TWO LINES of EZ code, within minutes while
we were talking on the phone, and both were
doing the same lines of computing, line by line.
This was how it went:
I told Stan we first have to create the EMPIRICAL
cdf (which he didn't know what it was) by taking
the integers 1 to n (sample size), subtract 1/2 for
correction and divide by n to create the n fractiles:
F = (INTS(n) - .5)/n
We then convert those to the Standard Normal
quantiles by
Q = GAUSSINV (F).
Then we take any set of data X and standardize it to
Z = (X - mean(X))/standdev(X), then order them to form
Q1 = ordered(Z)
finally do GRAPH(q,q1:q) to get the Q-Q plot!
Voila, we did it one line at a time of course, so that we
could both see what the result of each line was. But at
the end, what we had done was in fact the steps it takes
to write an EZ subroutine that looks like this:
1 SUBROUTINE QQ(X)
2 N=NOELS(X);Q=GAUSSINV((INTS(N)-.5)/N)
3 Q1=ORDERED((X-MEAN(X))/STANDDEV(X)); GRAPH(Q,Q1:Q)
4 END
We generated some U(0,1) data to show what its QQ plot looks like by
X = RANDOM (INTS(100)); QQ(X)
We generated N(0,1) data to show what Normal data look like:
Y = NORMRAND(X); QQ(Y)
Recalling my comment to Jack Tomsky in the Afonso thread
that his probability result of -2ln(1-p) for chi-sq with 2 d.f.
reminded me of the method of simulating chi-square r.v.
with 2 d.f. by using -2 ln (1 - U), where U is from U(0,1).
I had forgotten that I had deducted points from my students
for wasting the time in "1 - U" which has exactly the same
distribution as "U". :-)
So, in EZ,
W = -2*ln(X)
would have yielded an array of Chi-square (2) r.v. which
is a member of the exponential distribution as well.
The QQ(W) plot shows a really severe departure from the
diagonal straightline of what a normal sample should look.
I am showing what I had disussed with Stan because he was
really starting from "ground zero" on the subject of Q Q plot,
and yet in a matter of a few minutes, he not only understood
every step of it, but could write his own subroutine, similar
to my two-liner. I mentioned to him that with the added
capabilities of color graphics (in EZ), he could easily soup up
the QQ plot to highlight unusual behavior or outliers.
Then our conversation drifted to Stan telling me some
existing capabilities in EZ he created that nobody ever used. :-)
Those were nifty things he did in high dimensions, and we
immediately struck an accord in our mutual understanding
of the difficulty of representing points (or functions) in
anything over FOUR dimensions, graphically.
Without going into any details about those topics, Just from
a few minutes of that conversation, I could relate some of
what he did with what some of my doctoral students did in
the graphical representation of high dimension data -- and
I told him about the Chernoff Faces, which was ONE of
a dozen or so methods that I knew, for representing
data in dimension more than 4. Moreover, I could also
see that if I were in the days of wanting to publish papers,
I had enough new ideas to write three or four different
papers that are publishable in major statistical journals
on the conversation I had with Stan in one afternoon.
So, I am excited that I'll have the opportunity to with work
with the man who created my favorite software package,
SPEAKEASY, in using EZ and its powerful capabilities to
write routines and do computations on Applied Statistics
the way a Statistician wants, rather than trying as most
folks do, fit everything into the mode of SAS or SPSS,
whether its the right thing to do, or not -- and far often,
they are the WRONG things to do.
-- Reef Fish Bob. | {"url":"http://sci.tech-archive.net/Archive/sci.stat.math/2006-11/msg00064.html","timestamp":"2014-04-19T01:48:08Z","content_type":null,"content_length":"15678","record_id":"<urn:uuid:dcb6ee25-8d7b-4ef9-8b6e-4a7a64444068>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Arc Length on Sunday, August 24, 2008 at 9:11pm.
Find the length of the arc formed by
y = (1/8)(4x^2-2ln(x)) from x=4 to x=8.
I found the derivative of the function and got y'= x-(1/4x)
Where I'm lost now is after plugging it into the arc length equation: integral of sqrt(1+(x-(1/4x))^2). Squaring the derivative yields me sqrt(1+x^2+1/16x-1/2). Help please.
• calculus - Damon, Sunday, August 24, 2008 at 9:47pm
I got sqrt [ 1 + x^2 -(1/2) + 1/(16x^2) ]
which is
sqrt [ x^2 + (1/2) +1/(16x^2) ]
x^2 + (1/2) + 1/(16x^2) = [x+ 1/(4x)]^2
ok ?
• calculus - Arc Length, Sunday, August 24, 2008 at 10:12pm
Oh yes, sorry, I merely had a typo. I had the same result as you. What I'm confused about is what to do from that point on.
• calculus - Damon, Monday, August 25, 2008 at 3:50am
well, the sqrt of that is just
x + (1/4)(1/x)
integral of that is
(1/2) x^2 + (1/4) ln x
Related Questions
derivative - 29) Find the derivative of the function. F(X)=2x^2(3-4x)^2 This is...
Calculus - I know how to do this problem, but I'm stuck at the arc length ...
calculus - Find the first and second derivative - simplify your answer. y=x/4x+1...
Math - find any stationary points of the function g(x) = (2x-3)square root of 5+...
Calculus - Find the arc length given the equation y=(x^4/8)+(1/4x^2) [1,3]
Calculus - Find the arc length given the equation y=(x^4/8)+(1/4x^2) [1,3]
math, Calculus - Find the relative extrema of the functions. f(x)=4x/(x^2+1) f(x...
AP Calculus - Two circles of radius 4 are tangent to the graph of y^2=4x at the ...
calculus - find the derivative of f for the function(x)= x^3-4x+2
Calculus - We are doing problems with the product and quotient rules, but I'm ... | {"url":"http://www.jiskha.com/display.cgi?id=1219626662","timestamp":"2014-04-17T21:57:29Z","content_type":null,"content_length":"9092","record_id":"<urn:uuid:69edd4b7-8fa7-40c6-ae47-df6730fae8c2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
SVM equations from e1071 R package?
up vote 7 down vote favorite
I am interested in test the SVM performance to classify several individuals into four groups/classes. When using the svmtrain LibSVM function from MATLAB, I am able to get the three equations used to
classify those individuals among the 4 groups, based on the values of this equation. An scheme could be as follows:
All individuals (N)*
Group 1 (n1) <--- equation 1 ---> (N-n1)
(N-n1-n2) <--- equation 2 ---> Group 2 (n2)
Group 3 (n3) <--- equation 3 ---> Group 4(n4)
*N = n1+n2+n3+n4
Is there any way to get these equations using the svm function in the e1071 R package?
r machine-learning svm libsvm
add comment
1 Answer
active oldest votes
svm in e1071 uses the "one-against-one" strategy for multiclass classification (i.e. binary classification between all pairs, followed by voting). So to handle this hierarchical setup,
you probably need to do a series of binary classifiers manually, like group 1 vs. all, then group 2 vs. whatever is left, etc.. Additionally, the basic svm function does not tune the
hyperparameters, so you will typically want to use a wrapper like tune in e1071, or train in the excellent caret package.
Anyway, to classify new individuals in R, you don't have to plug numbers into an equation manually. Rather, you use the predict generic function, which has methods for different models
like SVM. For model objects like this, you can also usually use the generic functions plot and summary. Here is an example of the basic idea using a linear SVM:
# Subset the iris dataset to only 2 labels and 2 features
iris.part = subset(iris, Species != 'setosa')
iris.part$Species = factor(iris.part$Species)
iris.part = iris.part[, c(1,2,5)]
# Fit svm model
fit = svm(Species ~ ., data=iris.part, type='C-classification', kernel='linear')
# Make a plot of the model
dev.new(width=5, height=5)
plot(fit, iris.part)
# Tabulate actual labels vs. fitted labels
pred = predict(fit, iris.part)
table(Actual=iris.part$Species, Fitted=pred)
# Obtain feature weights
w = t(fit$coefs) %*% fit$SV
# Calculate decision values manually
iris.scaled = scale(iris.part[,-3], fit$x.scale[[1]], fit$x.scale[[2]])
t(w %*% t(as.matrix(iris.scaled))) - fit$rho
# Should equal...
Tabulate actual class labels vs. model predictions:
up vote 19
down vote > table(Actual=iris.part$Species, Fitted=pred)
accepted Fitted
Actual versicolor virginica
versicolor 38 12
virginica 15 35
Extract feature weights from svm model object (for feature selection, etc.). Here, Sepal.Length is obviously more useful.
> t(fit$coefs) %*% fit$SV
Sepal.Length Sepal.Width
[1,] -1.060146 -0.2664518
To understand where the decision values come from, we can calculate them manually as the dot product of the feature weights and the preprocessed feature vectors, minus the intercept
offset rho. (Preprocessed means possibly centered/scaled and/or kernel transformed if using RBF SVM, etc.)
> t(w %*% t(as.matrix(iris.scaled))) - fit$rho
51 -1.3997066
52 -0.4402254
53 -1.1596819
54 1.7199970
55 -0.2796942
56 0.9996141
This should equal what is calculated internally:
> head(fit$decision.values)
51 -1.3997066
52 -0.4402254
53 -1.1596819
54 1.7199970
55 -0.2796942
56 0.9996141
Thanks for you answer, John. The reason because I want to know these equations is to assess which parameters from the total have more importance when classifying my events. – Manuel
Ramón Oct 19 '11 at 9:53
1 @ManuelRamón Ahh gotcha. Those are called the "weights" for a linear SVM. See edit above for how to calculate from an svm model object. Good luck! – John Colby Oct 19 '11 at 18:22
1 Your example has only two categories (versicolor and virginica) and you got a vector with two coeffcients, one for each variable used to classify the iris data. If I have N
categories I get N-1 vectors from with(fit, t(coefs) %*% SV). What is the meaning of each vector? – Manuel Ramón Oct 21 '11 at 16:45
The length of the weights vector will be equal to the number of features that were actually used to fit the SVM. If you used the formula interface and factor features, your input
features get processed into numeric dummy variables via model.matrix(). Thus, if you have a factor feature with 3 levels, it will get processed into only two final features. That is
probably where your N-1 is coming from. – John Colby Oct 21 '11 at 17:24
Ohh I see...you decided to go with the multi-class mode up front. I see what you're saying - if running on the full iris data, coefs only has two columns, where I would have expected
1 3. rho has 3 values and decision.values has 3 columns as well (for the 3 one vs. one binary classifiers). See above for how to calculate the decision values manually, but so far I
can't reproduce what is stored in decision.values from any combination of those 2 coefs sets and 3 rho values. I'm stumped here at the moment... – John Colby Oct 23 '11 at 6:54
show 1 more comment
Not the answer you're looking for? Browse other questions tagged r machine-learning svm libsvm or ask your own question. | {"url":"http://stackoverflow.com/questions/7390173/svm-equations-from-e1071-r-package","timestamp":"2014-04-18T01:33:58Z","content_type":null,"content_length":"74291","record_id":"<urn:uuid:a5e5985d-70ec-469e-b4d0-35b279fbf2b1>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
Frackville Math Tutor
Find a Frackville Math Tutor
I am a senior in college pursuing a BS in physics and a BS Ed for secondary education. I will be graduating in May. I have passed both the physics and mathematics PRAXIS content knowledge exams,
and will be state certified upon graduation.
20 Subjects: including calculus, geometry, physics, precalculus
I believe every child has a brilliant mind and through the right medium we can accomplish a lot! I have a degree in Finance from the University of Pittsburgh and have an ESL Certificate I used
when living and working in Phuket, Thailand. I have a diverse background of work experience from being employed by large and small banks to an ESL teacher in Thailand to a Playground Counselor.
10 Subjects: including algebra 1, prealgebra, reading, ESL/ESOL
Greetings! My name is Kelly and I have been tutoring pre-K and high school students for the past three years. Among my specialties are math, science, kindergarten prep, and dance.
56 Subjects: including algebra 2, chemistry, prealgebra, geometry
...I am a very patient and easy-going person with an infectious personality. I am constantly teaching people at work, in class, and generally in my life, and so I'd like to take my experience and
pass along my knowledge to current students. I teach based on the student's speed, and I will constantly check up to make sure everything is clear that I am tutoring.
34 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I have recently graduated in December from Wilkes University with a master's in mathematics. I currently tutor at Penn State Hazleton and am looking to expand my tutoring opportunities. I have
been running since I was 13 and now in my 14th year of running.
16 Subjects: including geometry, trigonometry, Russian, logic
Nearby Cities With Math Tutor
Altamont, PA Math Tutors
Aristes Math Tutors
Ashland, PA Math Tutors
Centralia, PA Math Tutors
Cresmont, PA Math Tutors
Cumbola Math Tutors
Englewood, PA Math Tutors
Gilberton Math Tutors
Gordon, PA Math Tutors
Mahanoy Plane Math Tutors
Mahanoy, PA Math Tutors
Ringtown Math Tutors
Shaft, PA Math Tutors
Shenandoah, PA Math Tutors
Turkey Run, PA Math Tutors | {"url":"http://www.purplemath.com/frackville_pa_math_tutors.php","timestamp":"2014-04-20T02:25:16Z","content_type":null,"content_length":"23674","record_id":"<urn:uuid:2e3018cd-4b6a-4f42-9558-82f94b72f62d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/lyssa/asked/1","timestamp":"2014-04-20T06:31:28Z","content_type":null,"content_length":"67556","record_id":"<urn:uuid:78560775-dc43-4471-a9a9-3e7bec34b973>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: Absolute point size in Stata schemes
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
RE: st: Absolute point size in Stata schemes
From "Svend Juul" <SJ@SOCI.AU.DK>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Absolute point size in Stata schemes
Date Thu, 5 Oct 2006 22:49:51 +0200
Bill wrote:
The scale option is very useful. Still it would be nice if it were
possible to choose between absolute and relative sizes when designing
Stata schemes. The scale option does require a little trial and error
to get the precise appearance that one is looking for.
To me it is not a choice between absolute and relative size. This is hardly an exact science, but to get a visually appealing - and readable - graph, I think that:
- for a small graph, use a small absolute, but a large relative text size.
- for a large graph, use a large absolute, but a small relative text size.
Illustrated by these examples:
1. twoway (scatter mpg weight), xsize(6) ysize(4)
2. twoway (scatter mpg weight), xsize(3) ysize(2)
3. twoway (scatter mpg weight), xsize(3) ysize(2) scale(2)
4. twoway (scatter mpg weight), xsize(3) ysize(2) scale(1.5)
Example 1 and 2 have the same relative text and marker size; example 2 is hardly readable.
Example 1 and 3 have the same absolute text and marker size; example 3 is ugly, in my mind.
Example 4 is a decent compromise.
It may be possible to create a formula taking these things into account, and to make a wrapper for graph commands utilizing the formula, but I am not sure that it is worth the effort.
Svend Juul
Institut for Folkesundhed, Afdeling for Epidemiologi
(Institute of Public Health, Department of Epidemiology)
Vennelyst Boulevard 6
DK-8000 Aarhus C, Denmark
Phone, work: +45 8942 6090
Phone, home: +45 8693 7796
Fax: +45 8613 1580
E-mail: sj@soci.au.dk
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2006-10/msg00222.html","timestamp":"2014-04-16T22:08:34Z","content_type":null,"content_length":"7421","record_id":"<urn:uuid:2510f230-8186-40dc-a546-de5d36dcf218>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bala, PA Algebra 2 Tutor
Find a Bala, PA Algebra 2 Tutor
...In PA, I am certified k-6. I have my BS in Elementary Education from West Chester University. I believe in and understand the Core Content Standards as well as the NJ state testing (ASK) for
all grade levels.
12 Subjects: including algebra 2, geometry, algebra 1, trigonometry
...I took the GRE in 2012, after the new GRE had been implemented. I scored in a high percentile, and can help any student to succeed in the math topics that are covered in the GRE. As part of my
civil engineering degree, I gained a firm grasp on mathematical concepts including all of the critical concepts covered by the ACT math section.
21 Subjects: including algebra 2, reading, physics, calculus
...In particular, he was proud to be part of the NASA Space Shuttle program and the development of new-generation jet engines by the General Electric Company. Dr. Peter is always willing to offer
flexible scheduling to suit the client's needs.
10 Subjects: including algebra 2, calculus, GRE, algebra 1
...I have taught in the Philadelphia School System for the past 9 years with a heavy emphasis on Algebra. I have been told by students that they enjoy my teaching and tutoring methods because I
am able to make math seem practical and relevant to their lives. I have learned through the years how to make math seem easy.
11 Subjects: including algebra 2, statistics, geometry, algebra 1
I am teaching math, for over 20 years now, and was awarded four times as educator of the year. I was also mentor of the year twice. I have a variety of experience teaching not only in different
countries, but also teaching here in public school, private school, charter school, and adult continuing education school.
15 Subjects: including algebra 2, geometry, algebra 1, GED
Related Bala, PA Tutors
Bala, PA Accounting Tutors
Bala, PA ACT Tutors
Bala, PA Algebra Tutors
Bala, PA Algebra 2 Tutors
Bala, PA Calculus Tutors
Bala, PA Geometry Tutors
Bala, PA Math Tutors
Bala, PA Prealgebra Tutors
Bala, PA Precalculus Tutors
Bala, PA SAT Tutors
Bala, PA SAT Math Tutors
Bala, PA Science Tutors
Bala, PA Statistics Tutors
Bala, PA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Bala Cynwyd algebra 2 Tutors
Belmont Hills, PA algebra 2 Tutors
Carroll Park, PA algebra 2 Tutors
Center City, PA algebra 2 Tutors
Cynwyd, PA algebra 2 Tutors
Drexelbrook, PA algebra 2 Tutors
Merion Park, PA algebra 2 Tutors
Merion Station algebra 2 Tutors
Merion, PA algebra 2 Tutors
Miquon, PA algebra 2 Tutors
Narberth algebra 2 Tutors
Oakview, PA algebra 2 Tutors
Overbrook Hills, PA algebra 2 Tutors
Penn Valley, PA algebra 2 Tutors
Penn Wynne, PA algebra 2 Tutors | {"url":"http://www.purplemath.com/bala_pa_algebra_2_tutors.php","timestamp":"2014-04-20T11:09:07Z","content_type":null,"content_length":"24010","record_id":"<urn:uuid:8e0619be-82dd-4319-94de-218ac9b3a849>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00086-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pulley on a Spring
1. The problem statement, all variables and given/known data
A pulley containing only a single mass, m, is hanging vertically from a mass with spring constant k. Find the period of vertical oscillation for the mass. Explanation in terms of energy and energy
conservation is preferred.
2. Relevant equations
Ay, there's the rub. You are supposed to find the spring's potential energy in terms of y (the height of the mass) and then use T=2*pi√(m/k) once you find the "true" k that corresponds with the mass
(the second derivative of the potential energy U)
3. The attempt at a solution
None worth noting. I know for sure that all of my attempts are terrible. If you need a closer look its from the lightandmatter.com textbook "Simple Nature", Chapter 2, question #35. | {"url":"http://www.physicsforums.com/showthread.php?p=3617113","timestamp":"2014-04-20T23:40:41Z","content_type":null,"content_length":"39795","record_id":"<urn:uuid:23355358-6921-43ca-9e8c-97f10fc87381>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
User Martin M. W.
bio website
visits member for 4 years, 5 months
seen yesterday
stats profile views 659
10 answered Is there a similar theorem in the partially hyperbolic case?
25 reviewed Reject suggested edit on Degeneration of riemannian metrics with curvature bounds
9 awarded Custodian
3 awarded Custodian
3 reviewed Approve suggested edit on Elementary Embeddings and Relative Constructibility
Oct Dynamical properties of injective continuous functions on $\mathbb{R}^d$
28 comment Thanks! In light of your comment, I added your good point about a stronger downward component and made the wording overall a bit less tentative.
Oct Dynamical properties of injective continuous functions on $\mathbb{R}^d$
28 revised added 264 characters in body
28 answered Dynamical properties of injective continuous functions on $\mathbb{R}^d$
27 awarded Yearling
22 awarded Informed
2 answered How to understand a solenoid?
9 answered Force-directed graph drawing in 1D?
27 awarded Yearling
28 awarded Yearling
26 awarded Enlightened
26 awarded Nice Answer
Oct When is a submanifold of $\mathbf R^n$ given by global equations?
15 comment For $S^1$, what about $f(x, y, z) = (x^2 + y^2 - 1, z)$? But is it possible to do a knot?
Jul Fixed points which are not locally attractive can have distant basins of attraction?
23 comment Under one interpretation of your terms, here's an example. The flow defined by $x' = x^2$ has a non-locally-attracting fixed point at 0, but any open set of negative numbers is attracted
to it. But perhaps you mean something else?
Apr Orthogonal foliations
25 comment +1. This is really nicely written--it's an excellent example of how to give a helpful expository (as opposed to problem-solving) answer.
Apr A model of self-organizing behavior
15 comment This seems related to the literature on "pulse-coupled oscillators," which is inspired partly by models of synchronized firefly flashing or cricket chirping. It may not treat your exact
rule, but if you haven't looked into this already, see for instance: eecs.harvard.edu/~degesys/pulse.html | {"url":"http://mathoverflow.net/users/1227/martin-m-w?tab=activity","timestamp":"2014-04-21T07:52:22Z","content_type":null,"content_length":"44815","record_id":"<urn:uuid:b98705a7-ffa8-4d5e-b860-11f3ba08e724>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00308-ip-10-147-4-33.ec2.internal.warc.gz"} |
LOGO Challenge 11 - More on Circles
Where C is the circumference of a circle and $\pi$(pi) is equal to 3.14159...
In terms of LOGO it means that circles of any diameter, radius or circumference can be drawn. Consider the following procedure:
TO CIRC :C
REPEAT 360 [ FD :C/360 RT 1]
What do you think this is about?
Once decided, trace the procedure through in your mind's eye.
If you can, talk to others about what you think is happening. If in doubt out check your thoughts by typing in the procedure and testing what it does. N.B. Pi is a primitive approximately equal to
Try CIRC 314
For now experiment by changing:
The number of times you repeat the instruction (360) Or the length of the circumference (:C) Or the amount of turn done after each forward movement (1 degree)
Alternatively you might like to consider the next procedure:
TO CIR :D
REPEAT 360 [FD :D*PI/360 RT 1]
Try the following CIR 100
What do you notice now? | {"url":"http://nrich.maths.org/4970/index?nomenu=1","timestamp":"2014-04-20T03:26:20Z","content_type":null,"content_length":"4449","record_id":"<urn:uuid:ffa3ba1d-9629-4f2e-a2d2-af556b67b037>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"} |
The String Coffee Table
Talking to myself
Posted by Robert H.
Since nobody answered my question about how to properly generalize the calibration condition for BPS-branes if the gauge field has curvature I have to do it myself.
Until this very minute, I have been preparing for today’s joint math/physics block seminar in Hamburg where we’re going to find out what Generalized Complex Geomtery is really about (Lubos has
chatted about it in his reference frame). Not to arrive completely clueless I have been reading Gualtieri’s thesis which I can strongly recommend to everybody. It is an excelent read even for
physicists! And there, in chapter 7 my question is answered:
You probably know that this generalized business works by considering the tangent and co-tangent bundles together. Then a generalized complex structure $J$ is maps $T\oplus {T}^{*}$ to itself and
squares to $-1$ and fulfills some integrability condition. It’s easy to see that this condition contains complex, symplectic and Poisson geometry and interpolates between these. Furthermore it is co/
invariant under transformations by closed 2-forms $B$ and can be twisted by closed threeforms $H$, e.g. $\mathrm{dB}$ for not closed $B$.
Now consider a submanifold of this space on which there is a 2-form $F$ with $\mathrm{dF}=H$ (0 without twist). The trick now is to look at the subbundle of $T\oplus {T}^{*}$ on the submanifold such
that the vector component $X$ is tangent to the submanifold and the form component is given by ${i}_{X}F$. The condition for this to be a generalized complex submanifold is now to require that this
bundle is stable under $J$. And, as promised, this generalizes complex, Lagrangian and self-duality for $F$ as BPS conditions. And there is also a spinorial description.
I must say, this story is one of those that is so beautiful that it can really foster your belief that there must be some truth to string theory!
Posted at May 26, 2005 8:23 AM UTC
Re: Talking to myself
Posted by: Urs Schreiber on May 26, 2005 2:59 PM | Permalink | Reply to this
Off Topic
Hi Robert,
I’ve been out of the string business for a while, but I am trying to get back up to speed. As part of that effort I’ve been reading the String Coffee table and am going to Strings 2005. Will I see
you there? I don’t think I’ve seen you since Strings 98.
Posted by: Gavin Polhemus on June 3, 2005 12:26 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/string/archives/000573.html","timestamp":"2014-04-18T14:22:42Z","content_type":null,"content_length":"13806","record_id":"<urn:uuid:9f757270-b084-4789-b3b2-19cc526bca29>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathwords: Exponential Decay
Exponential Decay
A model for decay of a quantity for which the rate of decay is directly proportional to the amount present. The equation for the model is A = A[0]b^t (where 0 < b < 1 ) or A = A[0]e^kt (where k is a
negative number representing the rate of decay). In both formulas A[0] is the original amount present at time t = 0.
This model is used for phenomena such as radioactivity or depreciation. For example, A = 50e^–0.01t is a model for exponential decay of 50 grams of a radioactive element that decays at a rate of 1%
per year.
See also
Exponential growth, half-life, continuously compounded interest, logistic growth, e | {"url":"http://www.mathwords.com/e/exponential_decay.htm","timestamp":"2014-04-18T18:11:38Z","content_type":null,"content_length":"13925","record_id":"<urn:uuid:01ae1693-b3be-448e-911c-6627d28c9936>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
Google Answers: Micro (3-5)
3) a) "demand curve (or demand schedule). A schedule or curve showing
the quantity of a good that buyers would purchase at each price, other
things equal. Normally a demand curve has price on the vertical or
y-axis and quantity demanded on the horizontal or x-axis." Page 734
So, we plot Bert's demand by placing dots at the intersection of $7
and 1, $5 and 2, $3 and 3, and $1 and 4, and then drawing a line
connecting them.
b) " Consumer surplus. The difference between the amount that a
consumer would be willing to pay for a commodity and the amount
actually paid. This difference arises because the marginal utilities
(in dollar terms) of all but the last unit exceed the price. Hence
the monetary equivalent of the total utility of the commodity consumed
may be well above the amount spent. Under rigorous assumptions, the
money value of consumer surplus can be measured (using a demand-curve
diagram) as the area under the demand curve but above the price line."
Page 732
A line drawn across the graph at $4 intersects the demand curve at a
quantity of 2.5, so Bert buys two bottles. Using the definition of
consumer surplus, we find that Bert has obtained a surplus of four
c) a line drawn across the graph at $2 intersects the demand curve at
a quantity of 3.5, so Bert buys three bottles. Using the definition
of consumer surplus, we find the Bert has achieved a surplus of nine
4) a) "supply curve (or supply schedule). A schedule showing the
quantity of a good that suppliers in a given market desire to sell it
each price, holding other things equal." Page 747
So, we plot Ernie's supply by placing dots at the intersection of $1
and 1, $3 and 2, $5 and 3, and $7 and 4, and then drawing a line
connecting them.
b) a line drawn across the graph at $4 intersects the supply curve at
a quantity of 2.5, so Ernie sells two bottles. His producer surplus
is four dollars.
c) a line drawn across the graph at $6 intersects the supply curve at
a quantity of 3.5, so Ernie sells three bottles. His producer surplus
is nine dollars.
5) a) the demand curve and the supply curve intersect at four dollars.
This is the equilibrium point. At two dollars, four dollars, and six
dollars, Bert demands three bottles, two bottles, and one bottle,
respectively. At two dollars, four dollars, and six dollars, Ernie
supplies one bottle, two bottles, and three bottles, respectively.
b) at equilibrium, both the consumer surplus and the producer surplus
are four dollars, making the total surplus eight dollars.
c) if Ernie produces and Bert consumes one fewer bottle of water,
assuming the equilibrium price is maintained at four dollars, each
achieves a three dollar surplus, resulting in a total surplus of six
d) if Ernie produces and Bert consumes one additional bottle of water,
assuming the equilibrium price is maintained at four dollars, each
incurs a deficit of one dollar, yielding a total deficit of two
Source: "Economics" 14th edition by Samuelson & Nordhaus, McGraw-Hill
Inc., 1992 | {"url":"http://answers.google.com/answers/threadview/id/265075.html","timestamp":"2014-04-18T21:30:23Z","content_type":null,"content_length":"11338","record_id":"<urn:uuid:ef81473c-f0a9-46f5-a9f3-3d9d8c04b32a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
John wants to rent a tent to go camping. The cost to rent a tent from Woodland Outfitters is a flat rate of $10.00 plus $0.50 for each night the tent is rented. Part a. Let x represent the number of
nights John keeps the tent and write a cost function, c(x), for the cost to rent a tent from Woodland Outfitters. Part b. How much will John owe if he uses the tent for five nights?
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/507ccab2e4b07c5f7c1fb342","timestamp":"2014-04-19T13:03:06Z","content_type":null,"content_length":"34999","record_id":"<urn:uuid:c42fc39f-a2e3-4acf-83da-754f2c853c36>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
Theory of units and tabulations in allegories
Todd Trimble
Theory of units and tabulations in allegories
In this article we establish a connection between pretabular unitary allegories and bicategories of relations, and also between tabular unitary allegories and regular categories. The material is
entirely adapted from Categories, Allegories; we have merely changed some details of arrangement, notation, and terminology.
We write the composite of morphisms $r:a\to b$, $s:b\to c$ as $sr:a\to c$.
An allegory is a $\mathrm{Pos}$-enriched $†$-category $A$ where each hom-poset has binary meets, and the modular law is satisfied. The modular law takes two forms, whenever the left sides of the
inequalities make sense:
• $rs\wedge t\le r\left(s\wedge {r}^{†}t\right)$
• $rs\wedge t\le \left(r\wedge t{s}^{†}\right)s$
(and of course there are variations, using commutativity of $\wedge$). Each of these forms can be derived from the other, using the $†$-structure.
The $†$-operation, which we henceforth denote by $\left(-{\right)}^{o}$, preserves meets since it preserves order and is an involution.)
For any $r:a\to b$, we have $r\le r{r}^{o}r$.
We have $r\le r{1}_{b}\wedge r\le r\left(1\wedge {r}^{o}r\right)\le r{r}^{o}r$ where the middle inequality uses the modular law.
Recall that a map in an allegory is a morphism $f:a\to b$ such that $f⊣{f}^{o}$. A relation $r:a\to b$ is well-defined if we merely have a counit inclusion $r{r}^{o}\le {1}_{b}$, and is total if we
merely have a unit inclusion ${1}_{a}\le {r}^{o}r$. Clearly maps are closed under composition, as are total relations and well-defined relations.
If $f:a\to b$ is a map, then for any $r,s\in \mathrm{hom}\left(b,c\right)$ we have $\left(r\wedge s\right)f=rf\wedge sf$.
The non-trivial inclusion follows from
$rf\wedge sf\le \left(r\wedge sf{f}^{o}\right)f\le \left(r\wedge s\right)f$r f \wedge s f \leq (r \wedge s f f^o)f \leq (r \wedge s)f
where the first inequality is an instance of the modular law, and the second holds for any well-defined relation $f$.
If $f,g:a\to b$ are maps and $f\le g$, then $f=g$.
Since the dagger operation $\left(-{\right)}^{o}:\mathrm{hom}\left(a,b\right)\to \mathrm{hom}\left(b,a\right)$ preserves order, we have ${f}^{o}\le {g}^{o}$. But also the inclusion $f\le g$ between
left adjoints is mated to an inclusion ${g}^{o}\le {f}^{o}$ between right adjoints. Hence ${f}^{o}={g}^{o}$, and therefore $f=g$.
Domains and coreflexives
Let $r:a\to b$ be a morphism. We define the domain $dom\left(r\right)$ to be ${1}_{a}\wedge {r}^{o}r$. A morphism $e:a\to a$ is coreflexive if $e\le {1}_{a}$; in particular, domains are coreflexive.
$\mathrm{Cor}\left(a\right)$ denotes the poset of coreflexives in $\mathrm{hom}\left(a,a\right)$.
Coreflexives $r:a\to a$ are symmetric and transitive. (Symmetric and transitive imply idempotent.)
We have $r\le r{r}^{o}r\le {1}_{a}{r}^{o}{1}_{a}\le {r}^{o}$ where the first inequality is lemma 1. Of course also $rr\le {1}_{a}r=r$. (If $r$ is symmetric and transitive, then $r\le r{r}^{o}r=rrr\le
rr$ as well.)
For $r,s\in \mathrm{hom}\left(a,b\right)$, we have $dom\left(r\wedge s\right)={1}_{a}\wedge {r}^{o}s$.
One inclusion is trivial:
$dom\left(r\wedge s\right)={1}_{a}\wedge \left(r\wedge s{\right)}^{o}\left(r\wedge s\right)\le {1}_{a}\wedge {r}^{o}s.$\dom(r \wedge s) = 1_a \wedge (r \wedge s)^o (r \wedge s) \leq 1_a \wedge r^o s.
The other inclusion follows from fairly tricky applications of the modular law:
$\begin{array}{ccc}{1}_{a}\wedge {s}^{o}r& \le & {1}_{a}\wedge \left({1}_{a}\wedge \left({1}_{a}\wedge {s}^{o}r\right)\right)\\ & \le & {1}_{a}\wedge \left({1}_{a}\wedge {s}^{o}\left(s\wedge r\right)
\right)\\ & \le & {1}_{a}\wedge \left(\left(s\wedge r{\right)}^{o}\wedge {s}^{o}\right)\left(s\wedge r\right)\\ & \le & {1}_{a}\wedge \left(s\wedge r{\right)}^{o}\left(s\wedge r\right)\\ & =& dom\
left(s\wedge r\right).\end{array}$\array{ 1_a \wedge s^o r & \leq & 1_a \wedge (1_a \wedge (1_a \wedge s^o r)) \\ & \leq & 1_a \wedge (1_a \wedge s^o(s \wedge r)) \\ & \leq & 1_a \wedge ((s \wedge r)
^o \wedge s^o)(s \wedge r) \\ & \leq & 1_a \wedge (s \wedge r)^o (s \wedge r) \\ & = & \dom(s \wedge r). }
For $r:a\to b$ and coreflexives $c\in \mathrm{hom}\left(a,a\right)$, we have $dom\left(r\right)\le c$ if and only if $r\le rc$. In particular, $r\le r\circ dom\left(r\right)$.
If ${1}_{a}\wedge {r}^{o}r\le c$, then
$r\le r{1}_{a}\wedge r\le r\left({1}_{a}\wedge {r}^{o}r\right)\le rc.$r \leq r 1_a \wedge r \leq r(1_a \wedge r^o r) \leq r c.
If $r\le rc$, then
${1}_{a}\wedge {r}^{o}r\le {1}_{a}\wedge {r}^{o}rc\le \left({c}^{o}\wedge {r}^{o}r\right)c\le {c}^{o}c\le c$1_a \wedge r^o r \leq 1_a \wedge r^o r c \leq (c^o \wedge r^o r)c \leq c^o c \leq c
where the antepenultimate inequality uses the modular law, and the last uses lemma 4.
For $r:a\to b$ and $s:b\to c$, we have $dom\left(sr\right)\le dom\left(r\right)$.
By lemma 6, it suffices that $sr\le sr\circ dom\left(r\right)$. But this follows from the last sentence of lemma 6.
An object $t$ is a unit in an allegory if ${1}_{t}$ is maximal in $\mathrm{hom}\left(t,t\right)$ and if for every object $a$ there is $f:a\to t$ such that ${1}_{a}\le {f}^{o}f$ (i.e., $f$ is total).
Of course by maximality we also have $f{f}^{o}\le {1}_{t}$, so such $f$ must also be a map.
We say $A$ is unital (Freyd-Scedrov say unitary) if $A$ has a unit.
Let $t$ be a unit. Then $dom:\mathrm{hom}\left(a,t\right)\to \mathrm{Cor}\left(a\right)$ is an injective order-preserving function.
Order-preservation is clear. If $r,s:a\to t$ and $dom\left(r\right)\le dom\left(s\right)$, then $r\le s$:
$r\le r\circ dom\left(r\right)\le r\circ dom\left(s\right)\le r{s}^{o}s\le s$r \leq r \circ \dom(r) \leq r \circ \dom(s) \leq r s^o s \leq s
where the first inequality uses lemma 6, and the last inequality follows from $r{s}^{o}\le {1}_{t}$ (since ${1}_{t}$ is maximal in $\mathrm{hom}\left(t,t\right)$). Therefore $dom\left(r\right)=dom\
left(s\right)$ implies $r=s$.
For a unit $t$ and any $a$, there is at most one total relation = map $r:a\to t$ (because in that case $dom\left(r\right)={1}_{a}$, and we apply the previous proposition), and this is maximal in $\
mathrm{hom}\left(a,t\right)$. Thus $t$ is terminal in $\mathrm{Map}\left(A\right)$.
Let ${\epsilon }_{a}:a\to t$ denote the maximal element of $\mathrm{hom}\left(a,t\right)$. For any $r:a\to b$, we then have $r\le {\epsilon }_{b}^{o}{\epsilon }_{a}$ since this is mated to the
inequality ${\epsilon }_{b}r\le {\epsilon }_{a}$. Therefore ${\epsilon }_{b}^{o}{\epsilon }_{a}$ is the maximal element of $\mathrm{hom}\left(a,b\right)$.
Tabulations in allegories
Recall from Categories, Allegories that a tabulation of $r:a\to b$ is a pair of maps $f:x\to a$, $g:x\to b$ such that $r=g{f}^{o}$ and ${f}^{o}f\wedge {g}^{o}g={1}_{x}$.
Preliminaries on tabulations
For maps $f:x\to a$, $g:x\to b$, the condition ${f}^{o}f\wedge {g}^{o}g={1}_{x}$ implies $\left(f,g\right)$ is a jointly monic pair in $\mathrm{Map}\left(A\right)$.
Let $h,h\prime :y\to x$ be maps, and suppose $fh=fh\prime$ and $gh=gh\prime$. If ${f}^{o}f\wedge {g}^{o}g={1}_{x}$, then
$h=\left({f}^{o}f\wedge {g}^{o}g\right)h={f}^{o}fh\wedge {g}^{o}gh={f}^{o}fh\prime \wedge {g}^{o}gh\prime =\left({f}^{o}f\wedge {g}^{o}gg\right)h\prime =h\prime$h = (f^o f \wedge g^o g)h = f^o f h \
wedge g^o g h = f^o f h' \wedge g^o g h' = (f^o f \wedge g^o g g)h' = h'
where the second and fourth equations use lemma 2.
Suppose $r:a\to b$ is tabulated by $\left(f:x\to a,g:x\to b\right)$, and suppose $h:y\to a$, $k:y\to b$ are maps. We have $k{h}^{o}\le g{f}^{o}$ if and only if there exists a map $j:y\to x$ such that
$h=fj$ and $k=gj$ (this $j$ is unique by lemma 7).
One direction is easy: if $h=fj$ and $k=gj$ for some map $j$, then
$k{h}^{o}=gj{j}^{o}{f}^{o}\le g{f}^{o}.$k h^o = g j j^o f^o \leq g f^o.
In the other direction: suppose $k{h}^{o}\le g{f}^{o}$. Put $j={f}^{o}h\wedge {g}^{o}k$. First we check that $j$ is a map. We have ${1}_{y}\le {j}^{o}j$ from
${1}_{y}\le {1}_{y}\wedge \left({k}^{o}k\right)\left({h}^{o}h\right)\le {1}_{y}\wedge {k}^{o}g{f}^{o}h\le dom\left({f}^{o}h\wedge {g}^{o}k\right)=dom\left(j\right)$1_y \leq 1_y \wedge (k^o k)(h^o h)
\leq 1_y \wedge k^o g f^o h \leq \dom(f^o h \wedge g^o k) = \dom(j)
using lemma 5. We have $j{j}^{o}\le {1}_{x}$ from
$\left({f}^{o}h\wedge {g}^{o}k\right)\left({h}^{o}f\wedge {k}^{o}g\right)\le \left({f}^{o}h{h}^{o}f\right)\wedge \left({g}^{o}k{k}^{o}g\right)\le \left({f}^{o}f\wedge {g}^{o}g\right)\le {1}_{x}$(f^o
h \wedge g^o k)(h^o f \wedge k^o g) \leq (f^o h h^o f) \wedge (g^o k k^o g) \leq (f^o f \wedge g^o g) \leq 1_x
where the last step uses one of the tabulation conditions. So $j$ is a map.
Finally, we have $fj\le f\left({f}^{o}h\right)\le h$, which implies $fj=h$ (lemma 3). Similarly $gj=k$.
Tabulations are unique up to unique isomorphism.
A diagram in $\mathrm{Map}\left(A\right)$
$\begin{array}{ccc}p& \stackrel{h}{\to }& a\\ {}^{k}↓& & {↓}^{f}\\ b& \underset{g}{\to }& c\end{array}$\array{ p & \stackrel{h}{\to} & a \\ ^\mathllap{k} \downarrow & & \downarrow^\mathrlap{f} \\ b &
\underset{g}{\to} & c }
commutes if and only if $k{h}^{o}\le {g}^{o}f$.
An identity inclusion $gk=fh$ is certainly mated to an inclusion $k{h}^{o}\le {g}^{o}f$. Conversely, an inclusion $k{h}^{o}\le {g}^{o}f$ is mated to $gk\le fh$, and we can use lemma 3.
Given $f:a\to c$ and $g:b\to c$ in $\mathrm{Map}\left(A\right)$, a tabulation $\left(h:p\to a,k:p\to b\right)$ of $r={g}^{o}f$ provides a pullback of $\left(f,g\right)$.
Indeed, by definition of tabulation we then have $k{h}^{o}={g}^{o}f$, so $gk=fh$. For the universality of $\left(h,k\right)$: if $gk\prime =fh\prime$, then $k\left(h\prime {\right)}^{o}\le {g}^{o}f$,
and we can apply proposition 2 to finish.
If $i:a\to b$ is a monomorphism in $\mathrm{Map}\left(A\right)$ and ${i}^{o}i$ has a tabulation, then ${i}^{o}i={1}_{a}$.
The tabulation of ${i}^{o}i$ gives a pullback of the pair $\left(i,i\right)$, but since this pullback is already $\left({1}_{a},{1}_{a}\right)$, the conclusion is clear.
If $r:a\to a$ is coreflexive, then for any tabulation $\left(f,g\right)$ of $r$, we have $f=g$ and $f$ is monic.
The inclusion $g{f}^{o}=r\le {1}_{a}$ is mated to $g\le f$ which must be an identity $g=f$. If $\left(f,g\right)=\left(f,f\right)$ is jointly monic, then $f$ is monic.
Pretabularity and tabularity
An allegory is tabular if every morphism $r:a\to b$ admits a tabulation. A unital allegory is pretabular if for all $a,b$, the maximal morphism ${\epsilon }_{b}^{o}{\epsilon }_{a}\in \mathrm{hom}\
left(a,b\right)$ (see the last sentence of the Units section) admits a tabulation.
It is immediate from corollary 4 that if $A$ is tabular, then $\mathrm{Map}\left(A\right)$ has pullbacks.
Likewise, it is immediate from the preceding corollary that if $A$ is unital and pretabular, then $\mathrm{Map}\left(A\right)$ has products, because we can form the pullback of the maps ${\epsilon }_
{a}:a\to t$, ${\epsilon }_{b}:b\to t$ to the terminal object $t$.
In a tabular allegory $A$, a map $q:a\to e$ is a strong epi in $\mathrm{Map}\left(A\right)$ if $q{q}^{o}={1}_{e}$.
If $q{q}^{o}={1}_{e}$, then it is first of all clear that $q$ is an epi in $\mathrm{Map}\left(A\right)$ because it retracts ${q}^{o}$ in $A$. We show $q$ is orthogonal to monomorphisms in $\mathrm
{Map}\left(A\right)$. That is, consider a commutative diagram
$\begin{array}{ccc}a& \stackrel{q}{\to }& e\\ {}^{f}↓& & {↓}^{j}\\ b& \underset{i}{\to }& x\end{array}$\array{ a & \stackrel{q}{\to} & e \\ ^\mathllap{f} \downarrow & & \downarrow^\mathrlap{j} \\ b &
\underset{i}{\to} & x }
where $i$ is monic. We wish to show there exists a filler map $g:e\to b$ such that $gq=f$ and $ig=j$. The uniqueness of a filler map is clear since $i$ is monic.
Put $g=f{q}^{o}$. We first check that $g$ is a map. Notice that the identity inclusion $if=jq$ is mated to an inclusion $f{q}^{o}\le {i}^{o}j$, so we have $g\le {i}^{o}j$. In that case we have
$g{g}^{o}\le {i}^{o}j{j}^{o}i\le {i}^{o}i\le {1}_{e}$g g^o \leq i^o j j^o i \leq i^o i \leq 1_e
where the last inclusion holds because $i$ is monic (corollary 5). This gives the counit for $gdashv{g}^{o}$. For the unit, use
${1}_{b}=q{q}^{o}\le q{f}^{o}f{q}^{o}={g}^{o}g.$1_b = q q^o \leq q f^o f q^o = g^o g.
So $g=f{q}^{o}$ is a map. We also have
$j=jq{q}^{o}=if{q}^{o}=ig,$j = j q q^o = i f q^o = i g,
and finally we have
$f\le f{q}^{o}q=gq$f \leq f q^o q = g q
so that $f=gq$ by lemma 3. This completes the proof.
If $A$ is tabular, then $\mathrm{Map}\left(A\right)$ has equalizers. Moreover, every map has a (strong epi)-mono factorization, and strong epis are preserved by pullbacks. In short, $\mathrm{Map}\
left(A\right)$ is a locally regular category. If $A$ is moreover unital, then $\mathrm{Map}\left(A\right)$ is regular.
The equalizer of a pair of maps $f,g:a\to b$ may be constructed as a tabulation of the coreflexive arrow $dom\left(f\wedge g\right)$. By lemma 8, the tabulation is of the form $\left(h,h\right)$
where $h$ is monic, and by an application of proposition 2, one sees it is the universal map that equalizes $f$ and $g$.
For any map $f:a\to b$, consider a tabulation of the coreflexive $dom\left({f}^{o}\right)=f{f}^{o}:b\to b$ by a pair of maps $\left(i,i\right)$. Notice that $i:e\to b$ is monic in $\mathrm{Map}\left
(A\right)$, and $i{i}^{o}=f{f}^{o}$. By proposition 2, there exists a unique $q:a\to e$ such that $f=iq$. Following the proof of proposition 2, the map $q$ is constructed as ${i}^{o}f$. We have
${1}_{e}\le {i}^{o}i{i}^{o}i={i}^{o}f{f}^{o}i=q{q}^{o}$1_e \leq i^o i i^o i = i^o f f^o i = q q^o
i.e., $dom\left({q}^{o}\right)={1}_{e}$ (${q}^{o}$ is total). By lemma 9, this means $q$ is a strong epi in $\mathrm{Map}\left(A\right)$. Thus we have factored $f$ into a strong epi followed by a
Now suppose $q:a\to e$ is any strong epi and $g:d\to e$ is a map, and that $\left(g\prime :p\to a,q\prime :p\to d\right)$ is a pullback of $\left(g,q\right)$. Then $\left(g\prime ,q\prime \right)$ is
a tabulation of ${q}^{o}g$, so ${q}^{o}g=g\prime \left(q\prime {\right)}^{o}$. Now the left side of this equation is total, and therefore so is the right: ${1}_{e}\le dom\left(g\prime \left(q\prime
{\right)}^{o}\right)$. But then
${1}_{e}\le dom\left(g\prime \left(q\prime {\right)}^{o}\right)\le dom\left(\left(q\prime {\right)}^{o}\right)$1_e \leq \dom(g' (q')^o) \leq \dom((q')^o)
where the second inequality uses corollary 1. This means $q\prime$ is strong epi, and we are done. | {"url":"http://ncatlab.org/toddtrimble/published/Theory+of+units+and+tabulations+in+allegories","timestamp":"2014-04-17T18:57:33Z","content_type":null,"content_length":"73448","record_id":"<urn:uuid:7bcb2455-1266-4cfc-818a-dc8f4dbee652>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
Edgemont, PA Calculus Tutor
Find an Edgemont, PA Calculus Tutor
...I found little fulfillment in the business world especially because I didn't believe I was having a strong positive impact on society. This is not to say that I did not have a positive impact
because I did have a hand in creating some amazing machines that were used for medical services, civilia...
16 Subjects: including calculus, Spanish, physics, algebra 1
...You will find physics much more understandable when you see how it works in your own life. Prealgebra is a great course to teach because it is easy to bring real life examples into the
teaching. Unlike, say calculus, students will be using the topics in prealgebra in their lives many times a day.
10 Subjects: including calculus, physics, geometry, algebra 1
...Subsequently I became Engineering Vice President for the entire company. Over the last 20 years I have given technical presentations and workshops throughout Europe and North America, to
delegations from China, Russia, and to NATO (North Atlantic Treaty Organization). I retired in 2004 to pursue...
10 Subjects: including calculus, GRE, algebra 1, GED
...My background is in engineering and business, so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain
information. After all, math IS fun!In the past 5 years, I have taught differential equations at a local university.
13 Subjects: including calculus, geometry, statistics, algebra 1
...During my coursework, I took Calc I, Calc II, Calc III and Differential equations. My knowledge of calculus was also applied in my structural engineering coursework during senior year. All
through college I used Excel to present my data, and in the workplace I have done the same.
21 Subjects: including calculus, reading, physics, geometry
Related Edgemont, PA Tutors
Edgemont, PA Accounting Tutors
Edgemont, PA ACT Tutors
Edgemont, PA Algebra Tutors
Edgemont, PA Algebra 2 Tutors
Edgemont, PA Calculus Tutors
Edgemont, PA Geometry Tutors
Edgemont, PA Math Tutors
Edgemont, PA Prealgebra Tutors
Edgemont, PA Precalculus Tutors
Edgemont, PA SAT Tutors
Edgemont, PA SAT Math Tutors
Edgemont, PA Science Tutors
Edgemont, PA Statistics Tutors
Edgemont, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/edgemont_pa_calculus_tutors.php","timestamp":"2014-04-17T13:09:34Z","content_type":null,"content_length":"24107","record_id":"<urn:uuid:42edf6e3-7d23-40f7-b047-e95e2dfa81bf>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
how much would I weigh
Math Is Hard
What the heck is polystyrene anyway? Is that like Styrafoam?
Styrofoam (TM) is a polystyrene product. Polystyrene can be hard like a jewel case for CDs, computer cases, parts for home appliances, etc, can be rubbery, or when mixed with air or carbon dioxide
can be more of a foam like material (only about 5% polystyrene the rest is air or carbon dioxide). The density would vary depending on which form we are talking about.
I think what Bozo is trying to get at is if the planet material was the density of foam polystyrene and could maintain this density through to it's center. Given this material is 95% air, we can
assume that 95% would be the density of air (This also would vary toward the center with heat and pressure, but I think he wants us to ignore this.)
If we place the plastic on just the surface and all the air in the center we would have a balloon planet. Perhaps we could consider his question from this perspective? Basically, what would we weigh
on a balloon the size of the Earth? | {"url":"http://www.physicsforums.com/showthread.php?t=35840","timestamp":"2014-04-18T08:25:29Z","content_type":null,"content_length":"61330","record_id":"<urn:uuid:af8864c6-e365-47ff-95ca-1bd845b84be9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
Excel 2000 Tutorial · FGCU Technology Skills Orientation
« Formatting Cells Formulas and Functions Sorting and Filling »
The distinguishing feature of a spreadsheet program such as Excel is that it allows you to create mathematical formulas and execute functions. Otherwise, it is not much more than a large table for
displaying text. This page will show you how to create these calculations.
Formulas are entered in the worksheet cell and must begin with an equal sign "=". The formula then includes the addresses of the cells whose values will be manipulated with appropriate operands
placed in between. After the formula is typed into the cell, the calculation executes immediately and the formula itself is visible in the formula bar. See the example below to view the formula for
calculating the sub total for a number of textbooks. The formula multiplies the quantity and price of each textbook and adds the subtotal for each book.
Linking Worksheets
You may want to use the value from a cell in another worksheet within the same workbook in a formula. For example, the value of cell A1 in the current worksheet and cell A2 in the second worksheet
can be added using the format "sheetname!celladdress". The formula for this example would be "=A1+Sheet2!A2" where the value of cell A1 in the current worksheet is added to the value of cell A2 in
the worksheet named "Sheet2".
Relative, Absolute, and Mixed Referencing
Calling cells by just their column and row labels (such as "A1") is called relative referencing. When a formula contains relative referencing and it is copied from one cell to another, Excel does not
create an exact copy of the formula. It will change cell addresses relative to the row and column they are moved to. For example, if a simple addition formula in cell C1 "=(A1+B1)" is copied to cell
C2, the formula would change to "=(A2+B2)" to reflect the new row. To prevent this change, cells must be called by absolute referencing and this is accomplished by placing dollar signs "$" within the
cell addresses in the formula. Continuing the previous example, the formula in cell C1 would read "=($A$1+$B$1)" if the value of cell C2 should be the sum of cells A1 and B1. Both the column and row
of both cells are absolute and will not change when copied. Mixed referencing can also be used where only the row OR column fixed. For example, in the formula "=(A$1+$B2)", the row of cell A1 is
fixed and the column of cell B2 is fixed.
Basic Functions
Functions can be a more efficient way of performing mathematical operations than formulas. For example, if you wanted to add the values of cells D1 through D10, you would type the formula "=
D1+D2+D3+D4+D5+D6+D7+D8+D9+D10". A shorter way would be to use the SUM function and simply type "=SUM(D1:D10)". Several other functions and examples are given in the table below:
│Function│Example │Description │
│SUM │=SUM(A1:100) │finds the sum of cells A1 through A100 │
│AVERAGE │=AVERAGE(B1:B10)│finds the average of cells B1 through B10 │
│MAX │=MAX(C1:C100) │returns the highest number from cells C1 through C100 │
│MIN │=MIN(D1:D100) │returns the lowest number from cells D1 through D100 │
│SQRT │=SQRT(D10) │finds the square root of the value in cell D10 │
│TODAY │=TODAY() │returns the current date (leave the parentheses empty) │
Function Wizard
View all functions available in Excel by using the Function Wizard.
1. Activate the cell where the function will be placed and click the Function Wizard button on the standard toolbar.
2. From the Paste Function dialog box, browse through the functions by clicking in the Function category menu on the left and select the function from the Function name choices on the right. As each
function name is highlighted a description and example of use is provided below the two boxes.
3. Click OK to select a function.
4. The next window allows you to choose the cells that will be included in the function. In the example below, cells B4 and C4 were automatically selected for the sum function by Excel. The cell
values {2, 3} are located to the right of the Number 1 field where the cell addresses are listed. If another set of cells, such as B5 and C5, needed to be added to the function, those cells would
be added in the format "B5:C5" to the Number 2 field.
5. Click OK when all the cells for the function have been selected.
Use the Autosum function to add the contents of a cluster of adjacent cells.
1. Select the cell that the sum will appear in that is outside the cluster of cells whose values will be added. Cell C2 was used in this example.
2. Click the Autosum button (Greek letter sigma) on the standard toolbar.
3. Highlight the group of cells that will be summed (cells A2 through B2 in this example).
4. Press the ENTER key on the keyboard or click the green check mark button on the formula bar | {"url":"http://www.fgcu.edu/support/office2000/excel/functions.html","timestamp":"2014-04-21T15:49:43Z","content_type":null,"content_length":"19080","record_id":"<urn:uuid:c87fa48b-5e45-4d7c-9143-424b74e5f1a7>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |
The MX Project: Modelling and Solving Search Problems with Logic
Recent News
• Article in Faculty Newsletter [November 18 2008]
• Internships: We have Undergrad Research Assistant and Programmer positions available. Part-time or as a co-op term are possible. Contact David Mitchell or Eugenia Ternovska by email.
• Internships: Graduate Student and Post-Doctoral MITACS internships are available, from 4 to 8 months duration, to work on several parts of the project.
Project Overview
The main goals of the MX project are to develop theoretical foundations for languages and systems for modelling and solving combinatorial search and optimization problems, and to build and
demonstrate practical systems based on these foundations.
Computationally hard search and optimization problems are ubiquitous in science, engineering and business. Examples include drug design, protein folding, phylogeny reconstruction, hardware and
software design, test generation and verification, planning, timetabling, scheduling and on and on. In rare cases, practical application-specific software exists, but most often development of
successful methods requires hiring specialists, and often significant time and expense, to apply one or more computational approaches. Typical examples are mathematical programming (i.e.,
integer-linear programming, ILP), constraint logic programming (CLP), and development of custom-refined implementations of methods such as simulated annealing, branch-and-bound, reduction to SAT,
One goal of the MX project is to provide another practical technology for solving these problems, but one which would require considerably less specialized expertise on the part of the user, thus
making technology for solving such problems accessible to a wider variety of users. In this approach, the user gives a precise specification of their search (or optimization) problem in a declarative
modelling language. A solver then takes this specification, together with an instance of the problem, and produces a solution to the problem (if there is one).
Other languages and systems are in development with roughly similar goals and approach (e.g., Essence, Answer Set Programming). The MX project is distinguished by being based on a theoretical
foundation in mathematical logic - in particular descriptive complexity theory - and by the project philosophy that practical tools should be build on top of sound theoretical foundations, and formal
tools developed based on practical needs.
In our approach, search problems are formalized as model expansion, which is the logical task of expanding a given structure by new relations. Formally, the user is axiomatizing their problem,
formalized as model expansion, in some extension of classical logic. In an applied setting, the actual language the user works with may be different syntactically from, though equivalent to, such a
logic. For example, a language tailored to mainstream IT workers could be an extension of SQL.
By choosing the logic involved, the framework can be parameterized to capture various complexity classes, including NP. Second-order quantifiers can be used to concisely model problems at higher
complexity levels. First-order quantifiers can be used freely for modelling convenience, without affecting the complexity level which is determined by the second-order quantifiers. Adding inductive
definitions (as in ID-logic) contributes to the convenience of knowledge representation by adding recursion and recursion through negation, but does not change the main complexity properties.
At present, our focus is on problems in the complexity class NP. For this case, the modelling language is based on classical first order logic (FO), and the default mechanism for constructing a
solver is by ``grounding'': given a specification and instance, produce a formula of propositional logic that describes the solutions, and pass this to an engine that solves the propositional
satisfiability problem (SAT solver). A prototype grounder/solver, called MXG, has demonstrated the feasibility of this approach for FO (with some extensions), and grounding to SAT, and also to SAT
extended with cardinality constraints. (Fast SAT solvers are available off-the-shelf, and are a standard tool in, for example, electronic design and verification. For SAT+Card, we use our own solver
In current work, we have developed a new, improved, grounder, and are actively pursuing several directions, all of which have theoretical and practical components:
• Adding arithmetic and other constraints to our langauge
• Extending our method and tools from search to optimization
• Developing more effective grounding techniques
• Developing techniques for ``partial grounding'', to exploit solvers for richer languages than SAT, such as SMT and ILP solvers.
Current Project Team
• Faculty: Eugenia Ternovska, David Mitchell.
• Post-Docs: Gulay Unel.
• PhD Students: Amir Avani, Shahab Tasharrofi.
• MSc Students: Brendan Guild, Sia Bolourani.
Alumni, Collaborators, and other Contributors
Alphabetically (and certainly with some missing): David Bregman (Undergrad RA 2005-, developer of MXC), Arvind Gupta (SFU Faculty), Faraz Hach (MSc 2007), Antonina Kolokolova (Post-Doc 2005-2007, now
at Memorial), Yongmei Liu (Post-Doc 2006, now at HKUT), Toni Mancini (Visitor 2007, University of Rome, La Sapienza), Raheleh Mohebali (MSc 2007, now at Nokia), Nhan Nguyen (Undergrad RA 2006/7, now
at UBC), Murray Patterson (MSc 2006), Nikolay Pelov (Post-Doc 2005, now at Mission Critical), Stella Chui (Undergrad Summer Intern 2006, from U of Toronto). Partial List of Papers
• Declarative programming of search problems with built-in arithmetic. Eugenia Ternovska, David G. Mitchell and Brendan Guild LaSh 2008.
• Expressive Power and Abstraction in Essence. David G. Mitchell, Eugenia Ternovksa, Constraints, 8(3). (A preliminary version appeared as Technical Report: TR 2007-19.)
• Model expansion and the expressiveness of FO(ID) and other logics. Antonina Kolokolova, Yongmei Liu, David G. Mitchell, and Eugenia Ternovska, SFU Computing Science Technical Report: TR 2007-29,
December 2007.
• Faster Phylogenetic Inference with MXG. David G. Mitchell, Faraz Hach, Raheleh Mohebali, LPAR-2007.
• Grounding for Model Expansion with Inductive Definitions. Murray Patterson, Yongmei Liu, Eugenia Ternovska, Arvind Gupta, IJCAI'07. A logic of non-monotone inductive definitions. Eugenia
Ternovska, Marc Denecker, ACM Transactions in Computational Logic (TOCL).
• Constraint Programming with Unrestricted Quantification. David G. Mitchell, Eugenia Ternovska, CP-05 Workshop on Quantification in Constraint Programming.
• Reducing Inductive Definitions to Propositional Satisfiability. Nikolay Pelov, Eugenia Ternovska, ICLP-05.
• A Framework for Solving NP-hard Search Problems. David G. Mitchell, Eugenia Ternovska, AAAI'05.
• A logic of non-monotone inductive definitions and its modularity properties. Eugenia Ternovska, Marc Denecker, LPNMR.
Partial List of Talks
• Declarative Programming for Search Problems with Built-in Arithmetic
• Modelling Languages, Model Expansion and MXG
• An MX-Based Front End for SAT
• Talk on Complexity of MX and Related Tasks
• Slides for invited talks at Microsoft Research and Newton Institute, Cambridge, UK
Some Photos
• MX Project Dinner, July 2008:
□ Raheleh, Faraz, Sia, Amir, Shahab
□ Faraz, Sia, Amir
□ Raheleh, Faraz
□ Shahab, Brendan, Tamira, David
□ Sia, Amir, Shahab, Tamira, Brendan
□ Brendan, David, Eugenia
• Mini-workshop with SFU MX group and KU Leuven group.
• David Bregman with his SATRace-2006 tropy.
• Eugenia, David and Victor Marek August 2006, at the FLoC banquet after the LaSh Workshop.
• April 2005, Dagstuhl Workshop on Nonmonotinic Reasoning, Answer Set Programming and Constraints. (David gave a `plenary' talk on SAT solving, and Eugenia presented MX)
Old News: | {"url":"http://www.cs.sfu.ca/research/groups/mxp/","timestamp":"2014-04-18T02:59:38Z","content_type":null,"content_length":"16606","record_id":"<urn:uuid:efcf5c7d-eb55-4072-8aba-58f396239f3d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Talk:1124: Law of Drama
Explain xkcd: It's 'cause you're dumb.
Regarding the transcript: I don't think you have enough data to characterize this short curve as exponential. What does "slightly exponential" mean, anyway? In any case, it looks like it becomes
linear as the x values increase. --Prooffreader (talk) 11:21, 22 October 2012 (UTC)
I think Randall thought about the shape of this curve. You see how it becomes linear as both drama and anti-drama declaration increase? At low values, there is a residual amount of drama even when
there is little anti-drama declaration, but the marginal increase eventually becomes constant. --Prooffreader (talk) 11:28, 22 October 2012 (UTC)
I was trying to figure out how the title text could make sense grammatically, but now I think it was just written in the form of a vague, 'dramatic', facebook post. Is it just me? Alanthecowboy (talk
) 13:32, 23 October 2012 (UTC)
As someone that has been through several drama classes, as well as a high school club, I've always found the phrases "causing drama" and "too much drama" to be really irritating. You can never have
too much drama! (You can have too much comedy, though.) 76.122.5.96 19:41, 23 October 2012 (UTC)
The title is probably influenced by the concept of dharma from Indian philosophy- the "natural law". 87.57.147.173 11:01, 27 October 2012 (UTC) mb
If we can figure out who "They" are, we'll have this solved in a jiffy. David.windsor (talk) 21:36, 6 December 2013 (UTC) | {"url":"http://www.explainxkcd.com/wiki/index.php/Talk:1124:_Law_of_Drama","timestamp":"2014-04-16T07:54:40Z","content_type":null,"content_length":"28528","record_id":"<urn:uuid:a90066a9-860d-4e36-8389-4ad345d1ce97>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00176-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Physicists prove Heisenberg's intuition correct
Replies: 5 Last Post: Nov 6, 2013 3:02 PM
Messages: [ Previous | Next ]
Re: Physicists prove Heisenberg's intuition correct
Posted: Oct 24, 2013 1:31 AM
On 10/23/13, 11:02 PM, Tom Potter wrote:
>If the "heat death" of the universe is a fact...
Matter will probably cease to exist first.
Date Subject Author
10/24/13 Re: Physicists prove Heisenberg's intuition correct Tom Potter
10/24/13 Re: Physicists prove Heisenberg's intuition correct Brian Q. Hutchings
10/24/13 Re: Physicists prove Heisenberg's intuition correct Sam Wormley
10/24/13 Re: Physicists prove Heisenberg's intuition correct Brian Q. Hutchings
11/6/13 Re: Physicists prove Heisenberg's intuition correct deneb | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2603206&messageID=9311978","timestamp":"2014-04-19T15:31:18Z","content_type":null,"content_length":"20842","record_id":"<urn:uuid:477c3988-d15c-4dde-9b1b-8630f726d6be>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
Turbine Modelling
Model Scale Turbine Testing for Numerical Simulations Validation
In order to validate the empirical life model for the FVM, a set of experiments was conducted using NACA 63018 blades at different toe angles, tip speed ratios and free stream velocities.
Because of the complex nature of the problem, an empirical lift model based on a theoretical foundation in approximating the known asymptotic limit values was developed to account for the dynamic
stall behavior.
Turbine Wake Analyses
• An Acoustic Doppler Velocimeter (ADV) is used in the tow tank to measure the 3-dimensional flow field in the turbine wake.
• The data will be used to determine the dissipation with respect to the distance from rotational axis and solidity ratio.
• The model will be used to predict the turbine wakes and will be compared to experimental ADV data.
• This information will be implemented in existing high resolution ocean circulation models used to evaluate the environmental impact and optimize array parameters. | {"url":"http://umaine.edu/mtpi/turbine-modelling/","timestamp":"2014-04-19T22:23:44Z","content_type":null,"content_length":"13305","record_id":"<urn:uuid:7b298fc2-e081-42ac-8c25-818c622a9479>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sunnyvale, CA Statistics Tutor
Find a Sunnyvale, CA Statistics Tutor
...Over the past three years, I have the privilege to tutor many students in the Cupertino School District, and all of them achieved their GPA goals, i.e., became straight A students. All my
private SAT students earned 790-800 in math and physics, one ACT student earned a perfect score 36, and one ...
15 Subjects: including statistics, calculus, geometry, physics
...I am a patient, engaging tutor with an easy-going personal style. I am a semi-retired business professional with a great love for and interest in mathematics. I have Master's degrees in
Mathematics (Stanford) and Economics (University of Santa Clara). I enjoy working with students and their teachers to ensure maximum benefit from our mutual investment in each student's success.
22 Subjects: including statistics, calculus, geometry, accounting
I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra,
trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years.
11 Subjects: including statistics, calculus, geometry, algebra 1
...I have taught classes at San Jose State University that are based on Unix systems. I have an MS degree in Computer Engineering from Case Western Reserve University. I have done serious
industry grade programming for over 25 years in various fields like computer image processing, bank wire transfers, medical imaging for diagnostics.
23 Subjects: including statistics, calculus, physics, algebra 2
...I tutored for symbolic logic, via the introduction to advanced math class that was a requirement for the math major. I've written many proofs both in math and philosophy and taught how to do
so. I received an A in marketing in my college class.
35 Subjects: including statistics, reading, calculus, geometry
Related Sunnyvale, CA Tutors
Sunnyvale, CA Accounting Tutors
Sunnyvale, CA ACT Tutors
Sunnyvale, CA Algebra Tutors
Sunnyvale, CA Algebra 2 Tutors
Sunnyvale, CA Calculus Tutors
Sunnyvale, CA Geometry Tutors
Sunnyvale, CA Math Tutors
Sunnyvale, CA Prealgebra Tutors
Sunnyvale, CA Precalculus Tutors
Sunnyvale, CA SAT Tutors
Sunnyvale, CA SAT Math Tutors
Sunnyvale, CA Science Tutors
Sunnyvale, CA Statistics Tutors
Sunnyvale, CA Trigonometry Tutors
Nearby Cities With statistics Tutor
Campbell, CA statistics Tutors
Cupertino statistics Tutors
Fremont, CA statistics Tutors
Hayward, CA statistics Tutors
Los Altos statistics Tutors
Los Altos Hills, CA statistics Tutors
Menlo Park statistics Tutors
Mountain View, CA statistics Tutors
Palo Alto statistics Tutors
Pleasanton, CA statistics Tutors
Redwood City statistics Tutors
San Jose, CA statistics Tutors
San Mateo, CA statistics Tutors
Santa Clara, CA statistics Tutors
Union City, CA statistics Tutors | {"url":"http://www.purplemath.com/Sunnyvale_CA_Statistics_tutors.php","timestamp":"2014-04-16T07:31:44Z","content_type":null,"content_length":"24390","record_id":"<urn:uuid:8a896d0b-abfe-4ef0-be12-3fb4299e3538>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westchester Math Tutor
Find a Westchester Math Tutor
...I have one year's worth of experience working with students with disabilities, including Autism, and I have practice Whole Brain Teaching by Chris Biffle, et al. This is my current method of
teaching/tutoring. It is highly interactive with continuous student feedback and whole body involvement.
19 Subjects: including algebra 1, geometry, precalculus, ACT Math
...I was an advanced math student, completing the equivalent of Algebra 2 before high school. I continued applying algebraic skills in high school, where I was a straight A student and completed
calculus as a junior. I tutored math through college to stay fresh.
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have completed undergraduate coursework in the following math subjects - differential and integral calculus, advanced calculus, linear algebra, differential equations, advanced differential
equations with applications, and complex analysis. I have a PhD. in experimental nuclear physics. I hav...
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I taught C++ to high school students to prepare them for the AP exam in Computer Science. I have written several programs in this language as well as in Pascal, Basic and Visual Basic. I have
taught computer applications including word processing, spreadsheet, database and mail applications in ...
14 Subjects: including algebra 1, algebra 2, Microsoft Excel, ACT Math
...I am open to travelling to any location convenient for my student as long as we both agree it is convenient for studying. I have had tutoring done in public library in the past. I look forward
to your email to discuss an opportunity to help you achieve the success you desire and deserved.I have experience teaching Algebra 1 to a group of students and one on one basis for over 10 years.
9 Subjects: including algebra 1, algebra 2, geometry, prealgebra | {"url":"http://www.purplemath.com/westchester_il_math_tutors.php","timestamp":"2014-04-19T23:25:58Z","content_type":null,"content_length":"23907","record_id":"<urn:uuid:da26232f-8c86-440a-9724-ba153009a5ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wilson's Realty has total assets of $46,800, net fixed assets of $37,400, current liabilities of $6,
Wilson's Realty has total assets of $46,800, net fixed assets of $37,400, current liabilities of $6,100, and long-term liabilities of $24,600. What is the total debt ratio?
a) 0.66
b) 0.86
c) 0.41
d) 0.78
e) 0.60
Which one of the following is the abbreviation for the U.S. government coding system that classifies a firm by its specific type of business operations?
a) BID
b) SED
c) SIC
d) SBC
e) BEC
Which one of the following is a measure of long-term solvency?
a) Equity multiplier
b) Receivables turnover
c) Profit margin
d) Quick ratio
e) Price-earnings ratio
Aardvaark & Co. has sales of $291,200, cost of goods sold of $163,300, net profit of $11,360, net fixed assets of $154,500, and current assets of $89,500. What is the total asset turnover rate?
a) 1.08
b) 1.19
c) 1.24
d) 1.28
e) 1.11
Freedom Health Centers has total equity of $861,300, sales of $1.48 million, and a profit margin of 5.2 percent. What is the return on equity?
a) 5.82 percent
b) 7.18 percent
c) 6.49 percent
d) 8.94 percent
e) 8.68 percent
A firm has net income of $5,890 and interest expense of $2,130. The tax rate is 34 percent. What is the firm's times interest earned ratio?
a) 5.38
b) 5.67
c) 5.19
d) 4.82
e) 6.33
The Saw Mill has a return on assets of 6.1 percent, a total asset turnover rate of 1.8, and a debt-equity ratio of 1.6. What is the return on equity?
a) 12.28 percent
b) 4.26 percent
c) 19.03 percent
d) 15.86 percent
e) 9.76 percent
Healthy Foods has total assets of $129,800, net fixed assets of $71,500, long-term debt of $52,000, and total debt of $78,700. If inventory is $31,800, what is the current ratio?
a) 0.46
b) 2.18
c) 0.84
d) 0.33
e) 1.18
Delmont Movers has a profit margin of 6.2 percent and net income of $48,900. What is the common-size percentage for the cost of goods sold if that expense amounted to $379,000 for the year?
a) 23.50 percent
b) 48.05 percent
c) 41.06 percent
d) 12.90 percent
e) 33.25 percent | {"url":"http://onlinesolutionproviders.com/tutorial/25860/wilson-s-realty-has-total-assets-of-46-800-net-fixed-assets-of-37-400-current-liabilities-of-6","timestamp":"2014-04-17T12:29:50Z","content_type":null,"content_length":"26595","record_id":"<urn:uuid:404a1eab-43d0-4f48-a968-168f68617515>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00040-ip-10-147-4-33.ec2.internal.warc.gz"} |