content
stringlengths
86
994k
meta
stringlengths
288
619
Resolving forces Could someone please show me how to draw the diagram for the following question as I am really stuck. I would really appreciate any help. I have already drawn a diagram but must have labelled the forces incorrectly as my subsequent answers do not match the ones in the back of the textbook. Forces P, Q, R and S act on a particle at O in the plane of the coordinate axes Ox, Oy, making angles p, q, r, s respectively with Ox, each angle being measured in the anticlockwise sense. Find the magnitude of the resultant and the angle it makes with Ox when: (a) P = 2N, Q = 3N, R = 4N, S = 5N p = 0, q = 40, r = 100, s = 150 Thank you.
{"url":"http://www.physicsforums.com/showthread.php?t=187839","timestamp":"2014-04-16T10:23:06Z","content_type":null,"content_length":"28244","record_id":"<urn:uuid:ba51c926-2404-4392-bea7-cecaa06ce01e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Encinitas Science Tutor Find an Encinitas Science Tutor ...I am a biology student at UCSD and will be graduating in June. I passed by IB Biology HL exam while in high school, and received an A in my honors biology course in high school as well. I received 5's in both of the AP calculus tests, and am a UCSD biology student and so use Calculus on a regular basis in my classes. 42 Subjects: including ACT Science, reading, biology, writing ...The CBEST tests basic Mathematical knowledge like multiplication in word problem format. I know how to take a student and have them be fluent in their Mathematical skills and pass the test. I have an Electrical Engineering degree (BSEE) from University of California Irvine. 28 Subjects: including chemistry, astronomy, golf, baseball ...My duties of which have included attending an IEP meeting and modifying and implementing lessons to meet the child's needs. As part of my Teaching Credential program, I have completed and passed (grade of A+) the course of Inclusive Educational Practices, which satisfies the Commission on Teache... 25 Subjects: including psychology, physical science, English, reading ...I recently just retook calculus 1-3 and received an A as well as reviewed this subject for the GRE. I have run study groups and have tutored other math subjects. My major is currently in math, and I'm working to become a math teacher My strength is breaking down what seems like big concepts and relating it to stuff students have seen before. 13 Subjects: including organic chemistry, statistics, chemistry, calculus Hello Learners! Growing up as a student always presented challenges and distractions. Family, friends, sports, random distractions, you name it! ...Most distractions, like family and friends, are not intended to get in the way of our studies because they are the most valued and highest prioritized. 37 Subjects: including physical science, anatomy, psychology, philosophy
{"url":"http://www.purplemath.com/encinitas_science_tutors.php","timestamp":"2014-04-20T20:58:34Z","content_type":null,"content_length":"23939","record_id":"<urn:uuid:09608346-831c-4fe2-b2fc-b8d1d909510d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Reenigne blog It is impossible to travel faster than light, and certainly not desirable, as one's hat keeps blowing off. -- Woody Allen Q1: Who is this page aimed at? A1: It's aimed at people who are happy with the basic concepts of classical motion such as speed equals distance divided by time, but who know nothing about relativity (except maybe $E=mc^2$ and that you can't go faster than the speed of light) and wish to know/understand more. Q2: What's the deal with relativity, then? A2: Relativity was invented to account for a peculiar experimental result - that the speed of light is the same no matter how fast you are moving with respect to the source. Q3: How do you do this experiment? A3: Suppose you have an accurate timer, which is stopped when a pulse of light goes past it. Then you have another exactly the same. You bring them close together, synchronize them, and then move them slowly apart (Why slowly? You'll find out later). Now, fire a laser beam along the line connecting the two timers. By looking at the difference of the times recorded by the timers and dividing the distance between the two timers by this, you can measure the speed of light, which we'll call $c$ for short from now on. (It's exactly 299,792,458 meters per second if you do the experiment in a perfect vacuum). Now, repeat the experiment but move the laser towards (or away from) the timers at speed $v$ whilst you're firing it. You'll notice that your estimate of the speed of light equals $c$, not $c+v$ or $c-v$ as you would expect if you know nothing about relativity. Q4: No, how do you really do this experiment? A4: Unfortunately it's too difficult to do the experiment so directly in real life, so you have to do it more indirectly. For details, look up "The Michaelson-Morley Experiment" in any elementary textbook about special relativity. Q5: Isn't this result because $c$ is very large whilst $v$ is very small, so $c+v$ and $c-v$ are roughly the same as $c$? A5: Nope, even if $v$ is 99.9999% of $c$, you'll still get the same result. The speed of light is an absolute constant (that's why it's called $c$ for constant.) Q6: What does this mean? A6: It means that almost everything you thought you knew about space, time, speed and motion is wrong - they break down at high speeds (of the order of magnitude of $c$). Q7: Why can't you go faster than $c$? A7: The kinetic energy of a particle of mass $m$ moving at speed $v$ is not $E_c=\frac{1}{2}mv^2$ as they tell you in physics lessons in secondary school. The correct formula is $\displaystyle E_r=mc ^2\left(1-\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}\right)$, which is approximately $\frac{1}{2}mv^2$ if $v$ is much less than $c$. Here is a graph of the classical kinetic energy $E_c$ per unit mass and the relativistic kinetic energy $E_r$ per unit mass, plotted against speed: As you can see from this graph, as the velocity approaches $c$, the energy approaches infinity, so it requires an infinite amount of energy for any object with non-zero mass to even reach $c$, let alone go faster. In fact, no information can travel faster than $c$, even if that information carries no mass. Q8: But light goes at $c$. How come? A8: Particles of light do not have any rest mass - the $m$ in the above equation equals zero. Q9: So light has no energy? A9: No. Because it goes at $c$, you can't use the equation from question 7 to figure out the energy of a photon (a particle of light). The above equation gives zero times infinity, which is undefined. In fact, a photon can have any amount of energy, depending on it's wavelength or frequency. The energy of a photon $E$ equals $hf$ where $f$ is the frequency (oscillations per second) and $h$ is Planck's constant (about 6.626x10^-34 Joule-seconds). Q10: I've heard about "solar sails" - the idea that you can propel a spaceship using the momentum of light. But if the speed of light is finite and the mass of light is zero, then the momentum of light $p=mv=0$. So how does the solar sail work? A10: The equation for momentum $p=mv$ is another of those classical equations that are just plain wrong (well, not so much plain wrong as an approximation that only holds for velocities much less than $c$.) The correct equation is $\displaystyle p=\frac{mv}{\sqrt{1-\frac{v^2}{c^2}}}$ for particles with mass, or $p=hcf$ for photons. Q11: Okay, I'll accept for a moment that nothing can go faster than the speed of light. Suppose you're on a train that is moving at $0.95c$ with respect to the ground, and you're skateboarding down the aisle of the train on a jet propelled skateboard at $0.1c$. Now I'm going at $1.05c$ with respect to the ground. What's the explanation of this apparent paradox? A11: There is no paradox here. There's nothing in physics that says you can't have a train moving at $0.95c$. According to relativity the laws of physics are the same in any reference frame so there's nothing special about $0.05c$ or any other speed (except $c$) aboard the train. In fact, relativity says that you can't even tell how fast the train is moving by performing any experiment that doesn't rely on the outside of the train. The problem here is that if $A$ is moving relative to $B$ with speed $X$ and if $B$ is moving relative to $C$ with speed $Y$, the speed of $A$ relative to $C$ is not $X+Y$ as you think it is. In fact, it is $\displaystyle \frac{X+Y}{1+\frac{XY}{c^2}}$, which is always less than $c$ as long as $X$ and $Y$ are, and this is approximately $X+Y$ when $X$ and $Y$ are much smaller than $c$. Q12: I heard about this thing called time dilation. What's that all about? And why did the timers have to be moved apart slowly in question 3? A12: Suppose you have a set of twins. One of the twins stays on the Earth, the other goes on a round trip on a spaceship at a speed close to $c$. Because of the bizarre things that happen at speeds near $c$, when the travelling twin returns, he will not have aged as much (will have experienced less time than) his brother who stayed on the Earth. Q13: But surely from the point of view of the twin on the spaceship, it was the earth which went on the relativistic round trip, and to him it should be the Earth-bound brother who should end up A13: No, the two brothers do not experience the same things. The one on the spaceship experienced an acceleration at the far point of its journey as he stopped moving away from the Earth and started moving back towards it. His reference frame was not "inertial" (did not move at a constant speed) so is not equivalent to the reference frame of his brother. Q14: So just before the acceleration period, which brother is older? A14: It may seem strange, but the question is not meaningful. You can't compare the ages of the brothers when they are a long distance apart. You can't compare times over a long distance and you can't compare distances over a long period of time. This is because relativistically, time and distance are two sides of the same coin. When moving at speed, time and distance "change places" to a certain extent - this is the source of time dilation and it's partner, length contraction. Q15: Length contraction? What's that? A15: When you're moving relative to something, say a plank of wood, that plank will be shorter (from your point of view) the faster you are moving relative to it, compared to the length it was when you weren't moving. Q16: Suppose there's a 1 metre wide hole and a 2 metre wide plank. Suppose the plank is moving sufficiently fast that, from the point of view of the hole, the plank is length dilated to 1 metre. Then suppose that at the time the plank is passing over the hole, it goes through the hole. From the point of view of the plank, it's the hole that's length dilated (to 0.5m) so now the plank is too long to go through the hole. What's going on? A16: The problem here is that the concept of rigidity is a classical one and has no equivalent in relativity. Think of the speed of sound - this is how fast mechanical signals travel through a material. In an ideal rigid body the speed of sound is infinite, but since no information can travel faster than $c$ you cannot have a relativistic rigid body. So the simple answer to this question is that the the plank bends. Q17: Hang on a sec. The plank bends in the reference frame of the plank, but not in the reference frame of the hole? A17: Exactly. The fundamental thing here is the relativity of simultaneity. If two events happen simultaneously in one frame of reference, they do not necessarily happen simultaneously in another. This is why you cannot say what the difference in the age of the twins is when they are a long way away - the answer depends on your frame of reference. Q18: What is the Cherenkov effect? A18: Cherenkov radiation is a bluish light emitted when a particle moves faster than the speed of light. Q19: WHAT!?!?!??!!! A19: Notice that I said "speed of light", not $c$ as I have mostly been using in the rest of this document. $c$ is the speed of light in a vacuum, the speed of light in materials is lower, and depends on the material. The speed of light isn't the absolute speed limit, $c$ is. Q20: How do you actually do calculations with this stuff? It seems like all the starting points I've been taking for granted - space, time, velocity - aren't really fundamental any more. A20: You can define a basis for space and time, however it will depend on your velocity, so you'll need a different basis for every reference frame you use. Fortunately, there is a simple formula for converting between reference frames, the Lorentz transform. You can read about this in any elementary special relativity textbook. Q21: (From Gregg) If an object's mass increases as it's speed increases, where does this mass come from? A21: From whatever accelerated the object. Mass and energy are the same thing. So when you increase the object's (kinetic) energy by speeding it up, you also increase it's mass. Now, energy can't be created or destroyed, so whatever gave the object this kinetic energy has lost some energy (and, therefore, mass) itself. Note that there isn't any transfer of matter going on in the acceleration process - the accelerated and accelerating objects have the same number of atoms (electrons, quarks...) in them that they started with, but the masses of these particles have changed. Q22: (From Colin) Mass and energy are interchangeable. Has mankind managed to turn any energy into mass yet? A22: Oh yes, physicists are doing this every day in particle accelerators. As the particles are accelerated they gain mass, then when they smash into each other they break up into many particles, some of which may well be the same particles that were originally accelerated. The particles that are "created" are effectively the result of turning energy into mass. Q23: (From Colin) Mass attracts mass (gravity). Mass attracts energy (gravitational lensing etc). Has it been shown practically that energy attracts energy, or energy attracts mass? A23: Not directly (in a lab) because the amounts of energy we can work with are too small to exert any gravitational attraction. The finest gravitational experients that have been done require masses of the order of a few grams, 1 gram of mass could power 80,000 homes for a year. However, there are very good reasons to believe that energy does attract mass (and energy). Much of what makes up the "mass" in everyday substances is in fact energy (binding energy holding the protons and neutrons in the nuclei together). So if this energy didn't contribute to the gravitational force, we would expect that different substances (which have different ratios of mass to binding energy) would accelerate differently under gravity (because they would have different ratios of inertial mass to gravitational mass). Accurate experiments (to many significant figures) have been done measuring this ratio for many different substances, and no difference has been found between any of them. So if there is a difference in gravitational attraction between fermions ("matter" particles) and gauge bosons (the virtual "particles" responsible for "energies" of various sorts) it's very small (too small to be detected in any experiments anybody has devised so far). Q24: (From Rachel) Something about Einstein's theory of relativity bothers me, specifically about the issue of time dilation. According to what I read (pls correct me if I'm wrong), the stronger the gravity the slower the pace of time. This was proven by experiments with clocks that seem to run faster when farther from the Earth, as well as with experiments wherein time delays for radio waves near a sufficiently dense body (such as the Sun) were observed. Now, I understand that space distortions can be caused by sufficiently dense masses (similar to a rubber sheet weighed down in one part by a small yet heavy stone). But the reasoning regarding time doesn't convince me well. The experiments used to prove time dilation (as far as I know) had to make use of speed (i.e. a relationship between distance and time). This was the case for the experiments using clocks and radio waves. So I wonder: What if... the apparent (take note: apparent) slowing down of time and the delays were caused, not by the true slowing down of time, but by the "stretching" of distances due to the presence of dense masses in space (much like the stone-on-rubber sheet again)? If so, then aging will occur at the same pace regardless of whether a person experiences high or low gravity. Are there any experiments that disprove my assumption? A24: The Pound-Rebka experiment verifies that time passes slower in a stronger gravitational field. By "makes use of speed" do you mean "assumes that the speed of light is the same no matter how strong gravity is"? I don't think any other speeds are involved. The constancy of the speed of light has been verified by other experiments. Gravity bends space and it bends time, but it bends both in such a way that the speed of light remains constant (if only space were bent and time remained the same, the speed of light would have to change in proportion to the stretching of space). The stone-on-rubber sheet image is a neat way to visualize matter bending space, but don't confuse the visualization with the physics - that model has some serious oversimplifications, especially where time is concerned. If you have a question about relativity, email me (or comment below) and I might put it up here. I'm not going to do your homework for you, though. If you think relativity is strange, just wait until you find out about quantum mechanics.
{"url":"http://www.reenigne.org/blog/2000/07/","timestamp":"2014-04-20T00:37:56Z","content_type":null,"content_length":"104299","record_id":"<urn:uuid:47db6112-c4d9-477a-8a7c-8a4716bb75aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
o 2 (2001) Discrete Mathematics & Theoretical Computer Science Volume 4 n° 2 (2001), pp. 247-254 author: Vince Grolmusz title: A Degree-Decreasing Lemma for (MOD[q]-MOD[p]) Circuits keywords: Circuit complexity, modular circuits, composite modulus, Constant Degree Hypothesis abstract: Consider a (MOD[q],MOD[p]) circuit, where the inputs of the bottom MOD[p] gates are degree-d polynomials with integer coefficients of the input variables (p, q are different primes). Using our main tool --- the Degree Decreasing Lemma --- we show that this circuit can be converted to a (MOD[q],MOD[p]) circuit with linear polynomials on the input-level with the price of increasing the size of the circuit. This result has numerous consequences: for the Constant Degree Hypothesis of Barrington, Straubing and Thérien, and generalizing the lower bound results of Yan and Parberry, Krause and Waack, and Krause and Pudlák. Perhaps the most important application is an exponential lower bound for the size of (MOD[q],MOD[p]) circuits computing the n fan-in AND, where the input of each MOD[p] gate at the bottom is an arbitrary integer valued function of cn variables (c<1) plus an arbitrary linear function of n input If your browser does not display the abstract correctly (because of the different mathematical symbols) you can look it up in the PostScript or PDF files. reference: Vince Grolmusz (2001), A Degree-Decreasing Lemma for (MOD[q]-MOD[p]) Circuits, Discrete Mathematics and Theoretical Computer Science 4, pp. 247-254 bibtex: For a corresponding BibTeX entry, please consider our BibTeX-file. ps.gz-source: dm040213.ps.gz (38 K) ps-source: dm040213.ps (195 K) pdf-source: dm040213.pdf (81 K) The first source gives you the `gzipped' PostScript, the second the plain PostScript and the third the format for the Adobe accrobat reader. Depending on the installation of your web browser, at least one of these should (after some amount of time) pop up a window for you that shows the full article. If this is not the case, you should contact your system administrator to install your browser correctly. Due to limitations of your local software, the two formats may show up differently on your screen. If eg you use xpdf to visualize pdf, some of the graphics in the file may not come across. On the other hand, pdf has a capacity of giving links to sections, bibliography and external references that will not appear with PostScript. Automatically produced on Sat Jun 19 18:28:32 CEST 2004 by gustedt
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/145/407","timestamp":"2014-04-19T18:31:37Z","content_type":null,"content_length":"16406","record_id":"<urn:uuid:2cdf1633-7656-46c8-aaf7-2b46908c1a85>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum computing forges ahead Updating Magic Universe So what’s a Majorana fermion then? A news item in today’s Nature reminds me that last week it was all happening with quantum computing at a meeting of the American Physical Society. IBM announced a breakthrough in the technology, predicting practical computers of unimaginable power within 10 or 15 years. And in Nature Eugenie Samuel Reich discusses what seems to be a discovery of cosmic importance by a team in Delft, announced at the APS meeting. I’ll sum up two strands of progress in a brief update. In Magic Universe the last section of the story called “BITS AND QUBITS: the digital world and its quantum shadow looming” reads so far: Towards quantum computers For a second revolution in information technology, the experts looked to the spooky behaviour of electrons and atoms known in quantum theory. By 2002 physicists in Australia had made the equivalent of Shannon’s relays of 65 years earlier, but now the switches offered not binary bits, but qubits, pronounced cue-bits. They raised hopes that the first quantum computers might be operating before the first decade of the new century was out. Whereas electric relays, and their electronic successors in microchips, provide the simple on/off, true/false, 1/0 options expressed as bits of information, the qubits in the corresponding quantum devices will have many possible states. In theory it is possible to make an extremely fast computer by exploiting ambiguities that are present all the time, in quantum theory. If you’re not sure whether an electron in an atom is in one possible energy state, or in the next higher energy state permitted by the physical laws, then it can be considered to be both states at once. In computing terms it represents both 1 and 0 at the same time. Two such ambiguities give you four numbers, 00, 01, 10 and 11, which are the binary-number equivalents of good old 0, 1, 2 and 3. Three ambiguities give eight numbers, and so on, until with fifty you have a million billion numbers represented simultaneously in the quantum computer. In theory the machine can compute with all of them at the same time. Such quantum spookiness spooks the spooks. The world’s secret services are still engaged in the centuries-old contest between code-makers and code-breakers. There are new concepts called quantum one-time pads for a supposedly unbreakable cipher, but some experts suspect that a powerful enough quantum computer could crack anything. Who knows what developments may be going on behind the scenes, like the secret work on digital computing by Alan Turing at Bletchley Park in England during the Second World War? The Australians were up-front about their intentions. They simply wanted to beat the rest of the world in developing a practical machine, for the sake of the commercial payoff it would bring. The Centre for Quantum Computer Technology was founded in January 2000, with federal funding, and with participating teams in the Universities of New South Wales, Queensland and Melbourne. The striking thing was the confidence of project members about what they were attempting. A widespread opinion at the start of the 20th Century held that quantum computing was beyond practical reach for the time being. It was seen as requiring exquisite delicacy in construction and operation, with the ever-present danger that the slightest external interference or mismanagement could cause the whole multiply parallel computation to cave in, like a mistimed soufflé. The qubit switches developed in Australia consist of phosphorus atoms implanted in silicon using a high-energy beam aimed with high precision. Phosphorus atoms can sustain a particular state of charge for longer than most atoms, thereby reducing the risk of the soufflé effect. A pair of phosphorus atoms, together with a transistor for reading out their state, constitutes one qubit. Unveiling the first example at a meeting in London, Robert Clark of New South Wales said, ‘This was thought to be impossible just a few years ago.’ Update March 2012 – subject to confirmation of the Majorana fermion Ten years later, when many others had joined in a prolonged experimental quest for quantum computing, IBM researchers at Yorktown Height s claimed to be within sight of a practical device within 10 or 15 years. Dogging all the experimenters was a problem called decoherence – would the qbits survive long enough to be checked for possible errors? In 2012 Matthias Steffen of IBM told a reporter, “In 1999, coherence times were about 1 nanosecond. Last year, coherence times were achieved for as long as 1 to 4 microseconds. With [our] new techniques, we’ve achieved coherence times of 10 to 100 microseconds. We need to improve that by a factor of 10 to 100 before we’re at the threshold [where] we want to be. But considering that in the past ten years we’ve increased coherence times by a factor of 10,000, I’m not scared.” Then it would be a matter of scaling up from devices handling one or two qbits to an array with, say, 250 qubits., That would contain more ordinary bits of information than there are atoms in the entire universe and it would be capable of performing millions of computations simultaneously. No existing code could withstand its probing, which probably explains why the US Army funded IBM’s work. A by-product of quantum computing research was the discovery of a new particle in the cosmos. In 1937 the Italian physicist Ettore Majorana adapted a theory by the British Paul Dirac to predict a particle that is its own antiparticle – a very strange item indeed! It would be electrically neutral and exhibit peculiar behaviour. A team led by Leo Kouwenhoven at Delft University of Technology in the Netherland, tested experimentally a suggestion from 2010 about how to create a pair of these particles. At a very low temperature and in a magnetic field, you touch a superconductor with an extremely fine semiconducting wire. As the signature of the presence of “Majorana fermions”, confirmed by the experimental team, the resistance in the wire becomes very low at zero voltage. The Majorana particle opened a new route to quantum computing, because of its special ability to remember if it swaps places with a sibling. It was expected to be particularly resistant to the decoherence that plagued other techniques. So the Delft discovery promised a new research industry. Steffen quoted by Alex Knapp in Forbes 28 February 2012 http://www.forbes.com/sites/alexknapp/2012/02/28/ibm-paves-the-way-towards-scalable-quantum-computing/ IBM Press Release 28 February 2012 http://www-03.ibm.com/press/us/en/pressrelease/36901.wss Nature News 8 March 2012: http://www.nature.com/news/a-solid-case-for-majorana-fermions-1.10174 Nature News 28 Feb 2012 http://www.nature.com/news/quest-for-quirky-quantum-particles-may-have-struck-gold-1.10124 “A suggestion from 2010”: paper by Lutchyn et al. in PRL available at arXiv:1002.4033v2 3 Responses to Quantum computing forges ahead 1. “If you’re not sure whether an electron in an atom is in one possible energy state, or in the next higher energy state permitted by the physical laws, then it can be considered to be both states at once.” Thanks for this article. The quantum computing idea depends on intrinsic indeterminism, the single wavefunction of Schrodinger’s equation. This gives a spread of probabilities for the energy state, until the wavefunction is “collapsed” by an actual measurement. The quantum computing question is whether the single wavefunction (1st quantization quantum mechanics) mathematical model is an accurate, experimentally justified model. It’s non-relativistic, and in 1929 Dirac showed that the Hamiltonian in Schroedinger’s equation needs to be replaced by an SU(2) spinor to make it relativistic, which quantizes the field. This is Feynman’s path integral (2nd quantization, or QFT), where there is no single wavefunction amplitude. Instead, each path has a separate wavefunction amplitude, and apparent indeterminist is just multipath interference from the virtual particles (similar to multipath interference of old HF radio waves due to partial reflection by different charged layers in the ionosphere). Feynman explains this fact clearly in his 1985 book QED, stating that Heisenberg’s uncertainty principle is unnecessary. All indeterminism is multipath interference, a physical mechanism. So if Feynman is right, there is no real mathematical magic, and the 1st quantization single wavefunction states at the heart of quantum computing research are a delusion. The Majorana fermions news is very interesting, but again is a spin story. The “pair of Majorana fermions” described in the paper referenced by the Nature article (R. M. Lutchyn et al. http:// arxiv.org/abs/1002.4033; 2010) is simply an electron and a semi-conductor “hole” at the interface between a superconductor and a semiconducting nanowire. The hole behaves as a fermion, and is electrically like a positron. So this Majorana pair is electrically neutral, and with entangled wavefunctions would prove useful for quantum computing. But according to Feynman, the only entangled wavefunctions are from the 1st quantization non-relativistic model. Aspect’s experiments alleging quantum entanglement, and others, are fully explained by Feynman’s 2nd quantization multipath interference mechanism in path integrals, which simply isn’t included in Bell’s equality (a statistical test of 1st quantization). There is no discrimination between 1st and 2nd quantization in these experiments. Experimental spin correlation is assumed to be the entanglement of single wavefunctions. They simply ignore the path integral’s multipath interference mechanism. The use of statistical hypothesis testing is fiddled with a false selection of explanations: it is assumed that the experiments are a test of whether 1st quantization is right or wrong. Of course, under this assumption, it appears correct. A more scientific version of Bell’s inequality would include a third possibility, namely Feynman’s path integral where all indeterminism is due to multipath interference, so there are no single wavefunctions to begin with. Supposed pairs of spin-correlated particles actually follow all paths, most of which cancel one another. There is no single wavefunction; instead, Aspect’s two apparently correlated wavefunctions (one for each detected particle) are each the sum of wavefunction amplitudes for all the virtual paths taken. This provides the physical mechanism for what is actually taking place. 2. [...] Copy of my comment to Calder’s blog post on Quantum Computing: “If you’re not sure whether an electron in an atom is in one possible energy state, or in the next higher energy state permitted by the physical laws, then it can be considered to be both states at once.” [...] 3. For Majorana particle’s place in my alternative standard Model of Quantum FFF theory.see;
{"url":"http://calderup.wordpress.com/2012/03/08/quantum-computing-forges-ahead/","timestamp":"2014-04-16T16:36:08Z","content_type":null,"content_length":"84314","record_id":"<urn:uuid:b156667c-e539-419b-a64e-4e5570de5282>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US5452104 - Adaptive block size image compression method and system In order to facilitate digital transmission of HDTV signals and enjoy the benefits thereof, it is necessary to employ some form of signal compression. In order to achieve such high definition in the resulting image, it is also important that high quality of the image also be maintained. The discrete cosine transform (DCT) techniques have been shown to achieve a very high compression. One such article which illustrates the compression factor is that entitled "Scene Adaptive Coder" by Wen-Hsiung Chen et al., IEEE Transactions on Communications, Vol. Com-32, No. 3, March, 1984. However, the quality of reconstructed pictures is marginal even for video conferencing applications. With respect to the DCT coding techniques, the image is composed of pixel data which is divided into an array of non-overlapping blocks, N in size. Strictly for black and white television images each pixel is represented by an 8-bit word whereas for color television each pixel may be represented by a word comprised of up to 24-bits. The blocks in which the image is divided up to is typically a 16 N=16. A two-dimensional N DCT is a separable unitary transformation, a two-dimensional DCT is performed typically by two successive one-dimensional DCT operations which can result in computational savings. The one-dimensional DCT is defined by the following equation: ##EQU1## For television images, the pixel values are real so that the computation does not involve complex arithmetic. Furthermore, pixel values are non-negative so that the DCT component X(0) is always positive and usually has the most energy. In fact, for typical images, most of the transform energy is concentrated around DC. This energy compaction property makes the DCT such an attractive coding It has been shown in the literature that the DCT approaches the performance of the optimum Karhunen-Loeve Transform (KLT), as evidenced by the article entitled "Discrete Cosine Transform" by N. Ahmed et al., IEEE Transactions on Computers, January 1974, pages 90-93. Basically, the DCT coding performs a spatial redundancy reduction on each block by discarding frequency components that have little energy, and by assigning variable numbers of bits to the remaining DCT coefficients depending upon the energy content. A number of techniques exist that quantize and allocate bits to minimize some error criterion such as MSE over the block. Typically the quantized DCT coefficients are mapped into a one-dimensional string by ordering from low frequency to high frequency. The mapping is done according to diagonal zig-zag mapping over the block of DCT coefficients. The locations of the zero (or discarded) coefficients are then coded by a run-length coding technique. In order to optimally quantize the DCT coefficient, one needs to know the statistics of the transform coefficients. Optimum or sub-optimal quantizers can be designed based on the theoretical or measured statistics that minimize the over-all quantization error. While there is not complete agreement on what the correct statistics are, various quantization schemes may be utilized, such as that disclosed in "Distribution of the Two-Dimensional DCT Coefficients for Images" by Randall C. Reininger et al., IEEE Transactions on Communications, Vol. 31, No. 6, June 1983, Pages 835-839. However, even a simple linear quantizer has been utilized which has provided good results. Aside from deciding on a quantization scheme, there are two other methods to consider in order to produce the desired bit rate. One method is to threshold the DCT coefficient so that the small values are discarded and set to zero. The other technique is to linearly scale (or normalize) the coefficients to reduce the dynamic range of the coefficients after floating point to integer conversion for coding. Scaling is believed to be superior to thresholding in retaining both the subjective as well as objective signal to noise ratio quality. Therefore the main variable in the quantization process will be the coefficient scale factor which can be varied to obtain the desired bit rate. The quantized coefficients usually are coded by Huffman codes designed from the theoretical statistics or from the measured histogram distribution. Most of the coefficients are concentrated around the low values so that Huffman coding gives good results. It is believed that Huffman codes generated from a measured histogram performs very close to theoretical limits set by the entropy measure. The location of the zero coefficients are coded by run-length codes. Because the coefficients are ordered from low to high frequencies, the runs tend to be long such that there is a small number of runs. However, if the number of runs in terms of length were counted, the short runs dominate so that Huffman coding the run-lengths reduces the bit rate even more. An important issue that concerns all low bit-rate compression schemes is the effect of channel bit error on the reconstruction quality. For DCT coding, the lower frequency coefficients are more vulnerable especially the DC term. The effect of the bit error rate (BER) on the reconstruction quality at various compression rates has been presented in the literature. Such issues are discussed in the article entitled "Intraframe Cosine Transfer Image Coding" by John A. Roese et al., IEEE Transactions on Communications, Vol. Com-25, No. 11, November 1977, Pages 1329-1339. The effect of BER becomes noticeable around 10.sup.-3 and it becomes significant at 10.sup.-2. A BER of 10.sup.-5 for the transmission subsystem would be very conservative. If necessary, a scheme can be devised to provide additional protection for lower frequency coefficients, such as illustrated in the article "Hamming Coding of DCT-Compressed Images over Noisy Channels" by David R. Comstock et al., IEEE Transactions on Communications, Vol. Com-32, No. 7, July 1984, Pages 856-861. It has been observed that most natural images are made up of blank or relatively slow varying areas, and busy areas such as object boundaries and high-contrast texture. Scene adaptive coding schemes take advantage of this factor by assigning more bits to the busy area and less bits to the blank area. For DCT coding this adaptation can be made by measuring the busyness in each transform block and then adjusting the quantization and bit allocation from block to block. The article entitled "Adaptive Coding of Monochrome and Color images" by Wen-Hsiung Chen et al., IEEE Transactions on Communications, Vol. Com-25, No. 11, November 1977, Pages 1285-1292, discloses a method where block energy is measured with each block classified into one of four classes. The bit allocation matrix is computed iteratively for each class by examining the variance of the transform samples. Each coefficient is scaled so the desired number of bits result after quantization. The overhead information that must be sent are the classification code, the normalization for each block, and four bit allocation matrices. Utilization of this method has produced acceptable results at1 and 0.5 bits per Further bit rate reduction was achieved by Chen et al in the previously mentioned article "Scene Adaptive Coder" where a channel buffer is utilized to adaptively scale and quantize the coefficients. When the buffer becomes more than half full, a feedback parameter normalizes and quantizes the coefficients coarsely to reduce the bits entering the buffer. The converse happens when the buffer becomes less than half full. Instead of transmitting the bit allocation matrices, they run-length code the coefficient locations and Huffman code the coefficients as well as the run-lengths. Such an implementation has shown good color image reconstructions at 0.4 bits per pixel. Although these results look very good when printed, the simulation of the system shows many deficiencies. When images are viewed under normal to moderate magnification smoothing and blocking effects are visible. In the image compression method and system disclosed herein, intraframe coding (two-dimensional processing) is utilized over interframe coding (three-dimensional processing). One reason for the adoption of intraframe coding is the complexity of the receiver required to process interframe coding signals. Interframe coding inherently require multiple frame buffers in addition to more complex processing circuits. While in commercialized systems there may only be a small number of transmitters which contain very complicated hardware, the receivers must be kept as simple as possible for mass production purposes. The second most important reason for using intraframe coding is that a situation, or program material, may exist that can make a three-dimensional coding scheme break down and perform poorly, or at least no better than the intraframe coding scheme. For example, 24 frame per second movies can easily fall into this category since the integration time, due to the mechanical shutter, is relatively short. This short integration time allows a higher degree of temporal aliasing than in TV cameras for rapid motion. The assumption of frame to frame correlation breaks down for rapid motion as it becomes jerky. Practical consideration of frame to frame registration error, which is already noticeable on home videos become worse at higher resolution. An additional reason for using intraframe coding is that a three-dimensional coding scheme is more difficult to standardize when both 50 Hz and 60 Hz power line frequencies are involved. The use of an intraframe scheme, being a digital approach, can adapt to both 50 Hz and 60 Hz operation, or even to 24 frame per second movies by trading off frame rate versus spatial resolution without inducing problems of standards conversion. Although the present invention is described primarily with respect to black and white, the overhead for coding color information is surprisingly small, on the order of 10 to 15% of the bits needed for the luminance. Because of the low spatial sensitivity of the eye to color, most researchers have converted a color picture from RGB space to YIQ space, sub-sample the I and Q components by a factor of four in horizontal and vertical direction. The resulting I and Q components are coded similarly as Y (luminance). This technique requires 6.25% overhead each for the I and Q components. In practice, the coded Q component requires even less data than the I component. It is envisioned that no significant loss in color fidelity will result when utilizing this class of color coding In the implementation of DCT coding techniques, the blocking effect is the single most important impairment to image quality. However, it has been realized that the blocking effect is reduced when a smaller sized DCT is used. The blocking effect becomes virtually invisible when a 2 is used. However, when using the small-sized DCT, the bit per pixel performance suffers somewhat. However, a small-sized DCT helps the most around sharp edges that separate relatively blank areas. A sharp edge is equivalent to a step signal which has significant components at all frequencies. When quantized, some of the low energy coefficients are truncated to zero. This quantization error spreads over the block. This effect is similar to a two-dimensional equivalent of the Gibbs phenomenon, i.e. the ringing present around a step pulse signal when part of the high frequency components are removed in the reconstruction process. When adjacent blocks do not exhibit similar quantization error, the block with this form of error stands out and creates the blocking effect. Therefore by using smaller DCT block sizes the quantization error becomes confined to the area near the edge since the error cannot propagate outside the block. Thereby, by using the smaller DCT block sizes in the busy areas, such as at edges, the error is confined to the area along the edge. Furthermore, the use of the small DCT block sizes is further enhanced with respect to subjective quality of the image due to the spatial masking phenomena in the eye that hides noise near busy areas. The adaptive block size DCT technique implemented in the present invention may be simply described as a compare-and-replace scheme. A b 16 pixel data array or block of the image is coded as in the fixed block size DCT techniques, however, block and sub-block sizes of 16 8 number of bits to code the block by using four 2 the 4 sub-blocks is smaller than the bits needed to code it as a 4 block, the 4 each of the 8 be replaced by four 4 previous stage. Similarly, the 16 if it can be replaced by four 8 the previous stage. At each stage the optimum block/sub-block size is chosen so that the resulting block size assignment is optimized for the 16 Since 8-bits are used to code the DC coefficients regardless of the block size, utilization of small blocks results in a larger bit count. For this reason, 2 count. The resulting sub-block structure can be conveniently represented by an inverted quadtree (as opposed to a binary tree), where the root corresponding to the 16 branches corresponding to four sub-blocks. An example of a possible inverted quadtree structure is illustrated in FIG. 3b. Each decision to replace a block with smaller sub-blocks requires one bit of information as overhead. This overhead ranges from one bit for a 16 sub-blocks are used everywhere within in the 16 overhead is also incorporated into the decision making process to ensure that the adaptive block size DCT scheme always uses the least number of bits to code each 16 Although block sizes discussed herein as being N envisioned that various block sizes may be used. For example an N block size may be utilized where both N and M are integers with M being either greater than or lesser than N. Another important aspect is that the block is divisible into at least one level of sub-blocks, such as N/i integers. Furthermore, the exemplary block size as discussed herein is a 16 coefficients. It is further envisioned that various other integer such as both even or odd integer values may be used, e.g. 9 Due to the importance of these overhead bits for the quadtree, these bits need to be protected particularly well against channel errors. One can either provide an extra error correction coding for these important bits or provide and error recovery mechanism so that the effect of channel errors is confined to a small area of the picture. The adaptive block size DCT compression scheme of the present invention can be classified as an intraframe coding technique, where each frame of the image sequence is encoded independently. Accordingly, a single frame still picture can be encoded just as easily without modification. The input image frame is divided into a number of 16 encoding performed for each block. The main distinction of the compression scheme of the present invention resides in the fact that the 16 block is adaptively divided into sub-blocks with the resulting sub-blocks at different sizes also encoded using a DCT process. By properly choosing the block sizes based on the local image characteristics, much of the quantization error can be confined to small sub-blocks. Accordingly small sub-blocks naturally line up along the busy area of the image where the perceptual visibility of the noise is lower than in blank areas. In review, a conventional or fixed block size DCT coding assigns a fixed number of bits to each block such that any quantization noise is confined and distributed within the block. When the severity or the characteristics of the noise between adjacent blocks are different, the boundary between the blocks become visible with the effect commonly known as a blocking artifact. Scene adaptive DCT coding assigns a variable number of bits to each block thereby shifting the noise between fixed sized blocks. However, the block size is still large enough, usually 16 blocks contain both blank and busy parts of the image. Hence the blocking artifact is still visible along image detail such as lines and edges. Using smaller block sizes such as 8 reduce the blocking artifact, however, at the expense of a higher data rate. As a result, the coding efficiency of DCT drops as the block size gets smaller. In the embodiment in which the present invention is described an adaptive block size DCT technique is used in which optimal block size is chosen such that smaller blocks are used only when they are needed. As a result, the blocking artifact is greatly reduced without increasing the data rate. Although a number of different methods can be devised that determine block size assignment, an exemplary illustration of an embodiment is provided which assigns block sizes such that the total number of bits produced for each block is minimized. Using the DQT transform of the present invention in combination with the adaptive block size technique a further reduction in the data rate, on the order of 5% or greater, can be achieved. FIGS. 1 and 2 illustrate an exemplary implementation of the adaptive block size DCT transform image signal compression scheme for converting N for purposes of illustration N=16. FIG. 1 illustrates the implementation of the DCT transform and block size determination elements. FIG. 2 illustrates the DCT coefficient data block selection according to the block size determination along with composite DCT coefficient data block bit coding. In FIG. 1, an image signal as represented by a 16 digitized pixel data is received from the frame buffer (not shown). The pixel data may be either 8 bit black and white image data or 24 bit color image data. The 16 two-dimensional discrete cosine transform (DCT) element 10a. The 16 8 DCT element 10c, and as sixty-four 2 element 10d. DCT elements 10a-10d may be constructed in integrated circuit form as is well known in the art. The 16 input is also provided in parallel to a DQT subsystem as discussed later herein with reference to FIG. 9. DCT elements 10a-10d perform two-dimensional DCT operations on each respectively sized input block of pixel data. For example, DCT element 10a performs a single 16 performs four 8 4 2 element 10a-10d to a respective quantizer look up table 12a-12d. Quantizer lookup tables 12a-12d may be implemented in conventional read only memory (ROM) form with memory locations containing quantization values. The value of each transform coefficient is used to address a corresponding memory location to provide an output data signal indicative of a corresponding quantized transform coefficient value. The output of quantizer lookup table 12a, indicated by the reference signal QC16, is a 16 quantizer lookup table 12b, indicated by the reference signal QC8, is comprised of a data block of four 8 coefficient values. The output of quantizer lookup table 12c, indicated by the reference signal QC4, is comprised of a data block of sixteen 4 output of quantizer lookup table 12d, indicated by the reference signal QC2, is comprised of a data block of sixty-four 2 quantized DCT coefficient. Although not illustrated, the DC (lowest frequency) coefficients of each transform may be optionally treated separately rather than through the corresponding quantizer lookup table. The outputs of quantizer lookup tables 12a-12d are respectively input to code length lookup tables 14a-14d. The quantized DCT coefficient values are each coded using variable length code, such as a Huffman code, in order to minimize the data rate. Code words and corresponding code lengths are found in the form of code length look up tables 14a-14d. Each of the quantized DCT coefficients QC2, QC4, QC8, and QC16 are used to look up in the code length tables the corresponding number of bits required to code each coefficient. Code length lookup tables 14a-14d may be implemented in read only memory form with the DCT coefficients addressing memory locations which contain respective code length values. The number of bits required to code each block or sub-block is then determined by summing the code lengths in each block and sub-block. In the basic implementation of the adaptive block size coding scheme the code lengths for the DC and AC coefficients of each block and sub-block is used in determining the number of bits to code the respective block or sub-block. However in the case where the DQT subsystem is utilized, the value corresponding to the DC DCT coefficients output from code length lookup tables 14a-14d is replaced with a similar value from the DQT subsystem. Multiplexers 15a-15d are used to permit the DQT coefficient code length values output from the DQT subsystem to be provided to the respective code length summer 16a-16d. Multiplexers 15a-15d also permit the AC DCT coefficient code length values output from code length lookup tables 14a-14d to be provided to the respective code length summer 16a-16d. The 256 code length values from code length lookup table 14a, comprised of 1 DC coefficient code length values and 255 AC coefficient code length values, are provided to multiplexer 15a. A DQT coefficient code length value is also provided to multiplexer 15a from the DQT subsystem. Multiplexer 15a is responsive to a control signal M.sub.a so as to provide the DQT coefficient code length value from the DQT subsystem to code length summer 16a in place of the DC DCT coefficient code length value from code length lookup table 14a. However, the 255 AC coefficient code length values are provided via multiplexer 15a to code length summer 16a. In code length summer 16a the number of bits required to code the 16 code lengths for the block. Therefore for the 16 length summer 16a sums the 255 AC coefficient code length values along with the 1 DQT coefficient code length value. The output from code length summer 16a is the signal CL16, a single value indicative of the number of bits required to code the 16 coefficients. The 256 code length values from code length lookup table 14b, comprised of a total of 4 DC coefficient code length values and 252 AC coefficient code length values, are provided to multiplexer 15b. Each of the four 8 blocks is comprised of 1 DC coefficient code length value and 63 AC coefficient code length values. For each DC DCT coefficient code length value provided to multiplexer 15b, a corresponding DQT coefficient code length value is provided to multiplexer 15b from the DQT subsystem. Multiplexer 15b is responsive to a control signal M.sub.b so as to provide the DQT coefficient code length value from the DQT subsystem to code length summer 16b in place of each of the 4 DC coefficient code length values from code length lookup table 14b. However, the 252 AC coefficient code length values from code length lookup table 14b are provided via multiplexer 15b to code length summer 16b. For each of the four 8 blocks, code length summer 16b sums the 63 AC coefficient code length values along with the DQT coefficient code length value so as to determine the number of bits required to code each 8 sub-block. The output of code length summer 16b is four values indicated by the reference signal CL8 with each value corresponding to the sum of the code lengths in each of the four 8 Similarly, the 256 code length values from code length lookup table 14c, comprised of a total of 16 DC coefficient code length values and 240 AC coefficient code length values, are provided to multiplexer 15c. Each of the sixteen 4 value and 15 AC coefficient code length values. For each DC DCT coefficient code length value provided to multiplexer 15c, a corresponding DQT coefficient code length value is provided to multiplexer 15c from the DQT subsystem. Multiplexer 15c is responsive to a control signal M.sub.c so as to provide the DC coefficient code length value from the DQT subsystem to code length summer 16c in place of each of the 16 DC coefficient code length values from code length lookup table 14c. However, the 240 AC coefficient code length values from code length lookup table 14c are provided via multiplexer 15c to code length summer 16b. For each of the sixteen 4 coefficient code length values along with the DQT subsystem DC coefficient code length value so as to determine the number of bits required to code each 4 summer 16c is sixteen values indicated by the reference signal CL4 with each value corresponding to the sum of the code lengths in each of the sixteen 4 Finally, the 256 code length values from code length lookup table 14d, comprised of a total of 64 DC coefficient code length values and 192 AC coefficient code length values, are provided to multiplexer 15d. Each of the sixty-four 2 length value and 3 AC coefficient code length values. For each DC DCT coefficient code length value provided to multiplexer 15d, a corresponding DQT coefficient code length value is provided to multiplexer 15d from the DQT subsystem. Multiplexer 15d is responsive to a control signal M.sub.d so as to provide the DC coefficient code length value from the DQT subsystem to code length summer 16d in place of each of the 64 DC coefficient code length values from code length lookup table 14d. However, the 192 AC coefficient code length values from code length lookup table 14d are provided via multiplexer 15d to code length summer 16d. For each of the sixty-four 2 coefficient code length values along with the DQT subsystem DC coefficient code length value so as to determine the number of bits required to code each 2 summer 16d is sixty-four values indicated by the reference signal CL2 with each value being the sum of the code lengths in each of the sixty-four 2 The values CL8, CL4, and CL2 are also identified with block position orientation indicia for discussion later herein. The position indicia is a simple x-y coordinate system with the position indicated by the subscript (x,y) associated with the values CL8, CL4, and CL2. The block size assignment (BSA) is determined by examining values of CL2, CL4, CL8 and CL16. Four neighboring entries of CL2.sub.(x,y) are added and the sum is compared with the corresponding entry in CL4.sub.(x,y). The output of CL2.sub.(x,y) from code length summer 16d is input to adder 18 which adds the four neighboring entries and provides a sum value CL4'.sub.(x,y). For example, the values representative of blocks CL2.sub.(0,0), CL2.sub.(0,1), CL2.sub.(1,0), and CL2.sub.(1,1) are added to provide the value CL4'.sub.(0,0). The value output from adder 18 is the value CL4'.sub.(x,y) which is compared with the value CL4.sub.(x,y) output from code length summer 16c. The value CL4'.sub.(x,y) is input to comparator 20 along with the value CL4.sub.(x,y). Comparator 20 compares the corresponding input values from adder 18 and code length summer 16c so as to provides a bit value, P, that is output to a P register (FIG. 2) and as a select input to multiplexer 22. In the example as illustrated in FIG. 1, the value CL4'.sub.(0,0) is compared with the value CL4.sub.(0,0). If the value CL4.sub.(x,y) is greater than the summed values of CL4'.sub.(x,y), comparator 20 generates a logical one bit, "1", that is entered into the P register. The "1" bit indicates that a corresponding 4 coded more efficiently using four 2 zero bit, "0", is entered into the P register, indicating that the 4 4 The output of code length summer 16c and adder 18 are also provided as data inputs to multiplexer 22. In response to the "1" bit value output from comparator 20, multiplexer 22 enables the CL4'.sub. (x,y) value to be output therefrom to adder 24. However should the comparison result in a "0" bit value being generated by comparator 20, mutiplexer 22 enables the output CL4.sub.(x,y) from code length summer 16c to be input to adder 24. Adder 24 is used to sum the data input therefrom, as selected from the comparisons of the values of CL4.sub.(x,y) and CL4'.sub.(x,y) . The result of the sixteen comparisons of the CL4.sub.(x,y) and the CL4'.sub.(x,y) data is added in adder 24 to generate a corresponding CL8'.sub.(x,y) value. For each of the sixteen comparisons of the CL4.sub.(x,y) and CL4'.sub.(x,y) values, the comparison result bit is sent to the P register. The next stage in the determination of block size assignment is similar to that discussed with respect to the generation and comparison of the values CL4 and CL4'. The output of CL8'.sub.(x,y) is provided as an input to comparator 26 along with the output CL8.sub.(x,y) from code length summer 16b. If the corresponding entry in CL8.sub.(x,y) is greater than the summed value CL8'.sub.(x,y), comparator 26 generates a "1" bit which is output to the Q register (FIG. 2). The output of comparator 26 is also provided as a selected input to multiplexer 28 which also receives the values CL8.sub.(x,y) and CL8'.sub.(x,y) respectively from code length summer 16b and adder 24. Should the value output from comparator 26b be a "1" bit, the CL8'.sub.(x,y) value is output from multiplexer 28 to adder 30. However, should the value CL8'.sub.(x,y) be greater than the value CL8.sub.(x,y), comparator 26 generates a "0" bit that is sent to the Q register and also to the select input of multiplexer 28. Accordingly, the value CL8.sub.(x,y) is then input to adder 30 via multiplexer 28. Comparison results of comparator 26 are the Q values sent to the Q register. Again a "1" bit indicates that the corresponding 8 of DCT coefficients may be more efficiently coded by smaller blocks such as all 4 optimally determined by the smaller block comparisons. A "0" bit indicates that the corresponding 8 efficiently coded than any combination of smaller blocks. The values input to adder 30 are summed and provided as an output value CL16' input to comparator 32. A second input is provided to comparator 32 as the value CL16 output from by code length summer 16a. Comparator 32 preforms a single comparison of the value CL16 and CL16'. Should the value CL16 be greater than the value CL16' a "1" bit is entered into the R register (FIG. 3). A "1" bit input to the R register is indicative that the block may be coded more efficiently using sub-blocks rather than a single 16 the value CL16, comparator 32 outputs a "0" bit to the R register. The "0" bit in the R register is indicative that the block of DCT coefficients may be coded more efficiently as a 16 Comparator 32 is also provides the output R bit as a select input to multiplexer 34. Multiplexer 34 also has inputs for receiving the CL16 and CL16' values respectively provided from code length summer 16a and adder 30. The output from multiplexer 34 is the value CL16 should the R output bit be a "0" while the value CL16' is output should the R output bit be a "1". The output of multiplexer 34 is a value indicative of the total bits to be transmitted. It should be noted that the overhead bits vary from one bit to up to twenty-one bits (1+4+16) when 4 everywhere within the 16 In FIG. 2, the P value output from comparator 20 (FIG. 1) is input serially to a sixteen-bit register, P register 40. Similarly, the output from comparator 26 is input serially to a four-bit register, Q register 42. Finally, the output from comparator 32 is input serially to a one-bit register, R register 44. The output from P register 40 is provided as a P output to the select input of multiplexer 46. Multiplexer 46 also has inputs as the QC2 and QC4 values respectively output from Quantizer lookup tables 12d and 12c. The output of multiplexer 46 is provided as an input to multiplexer 48, which also has as a second input for the QC8 values as output from quantizer lookup table 12b. A select input to multiplexer 48 is provided from the output of Q register 42. The output of multiplexer 48 is coupled as one input to multiplexer 50. The other input of multiplexer 50 is coupled to the output of quantizer lookup table 12a for receiving the values QC16. The select input of multiplexer 50 is coupled to the output of R register 44 so as to receive the output bit R. As illustrated in FIG. 2, P register 40 includes a sequence of bit positions, 0-15, with corresponding bit values as determined by the comparison process as discussed with reference to FIG. 1. Similarly Q register 42 and R register 44 respectively have bit position 0-3 and 0 with corresponding data as determined with reference to FIG. 1. The data in the P, Q and R registers as illustrated in FIG. 2 is merely for the purpose of illustration. As illustrated in FIG. 2, the value of P register 40 bit is used to select via multiplexer 46, QC2 data (four 2 coefficients) or the corresponding QC4 data (a 4 quantized transform coefficients). Multiplexer 48, in response to the value of the bit output from Q register 42 selects between the output of multiplexer 46 and the value QC8 data. When the Q register bit value is a "1" bit, the output of multiplexer 46 as input to multiplexer 48 is selected for output of multiplexer 48. When the Q register bit value is a "0" bit, the output of multiplexer 48 is the QC8 value. Therefore, the output bit value of Q register 42 is used to select between four QC4 blocks or sub-blocks of QC2 values as output from multiplexer 46 or a corresponding single 8 upper left hand blocks as output from multiplexer 46 include four 2 bit of the Q register being a "0" bit, multiplexer 48 selects the 8 replacement scheme. The output of multiplexer 48 is coupled as an input to multiplexer 50. The other input of multiplexer 50 is provided with the Q16 data, the 16 lookup table 12a. The select input to multiplexer 50 is the output bit of the R register. In the example illustrated in FIG. 2, the bit output from R register 44 is a "1" bit thus selecting data as output from multiplexer 50 that which was provided from multiplexer 48. Should the R register 44 output bit value be a "0" bit, multiplexer 50 would output the QC16 data. The multiplexing scheme as illustrated in FIG. 2 utilizes the block assignments to multiplex coefficient sub-blocks QC2, QC4, QC8, QC16 values into a composite block of DCT coefficients QC. In essence this step is accomplished by three stages. The first stage conditionally replaces a 4 content of the P register. The second stage conditionally replaces an 8 previous stage according to the content of the Q register. The third stage conditionally replaces the 16 the previous stages if the R register contains a "1" bit. FIGS. 3a and 3b respectively illustrate the exemplary P, Q and R register data and corresponding and BSA bit pattern, and the corresponding inverted quadtree corresponding thereto. The level of hierarchy involved is that should the bit stored in the R register be a "1", a condition exists which is indicative that the image block may be more efficiently coded using smaller blocks. Similarly, should the Q register contain any "1" bits it further indicates that the corresponding 8 efficiently coded by smaller blocks. Similarly, should the P register contain any "1" bits it further indicates that the corresponding 4 block may be more efficiently coded using four 2 any of the registers contain a "0" bit, this indicates that the block or sub-block may be coded more efficiently by using the size block related thereto. For example, the value of the bit in the P register bit 0 position, a "1" bit, indicates that this 4 using four 2 positions indicate that the three 4 using corresponding 2 register indicates that the four 4 of four 2 efficiently coded by a single 8 data would override the P register data. Once the P register data was overridden by the Q register 0 position bit, data in the P register bit positions 0-3, need not be transmitted as part of the block size assignment (BSA) data. However, should a bit position in a higher register be a "1" bit, such as bit position 1 of the Q register, the corresponding P register bits are provided as part of the BSA data. As illustrated in FIG. 3a, the Q register bit position 1 is a "1" bit and therefore the corresponding P register bits 4-7 are provided in the BSA data. On a higher level, since the R register bit is a "1" bit each of the Q register bits are provided in the BSA data. Returning to FIG. 2, the composite block QC contains many zero coefficient values which can be more efficiently coded by run-length codes. The number of consecutive zeros or runs are sent instead of the code words for each zero. In order to maximize the efficiency of the run-length coding, the coefficients are ordered in a predetermined manner such that the occurrence of short runs is minimized. Minimization is done by encoding the coefficients which are likely to be non-zeros first, and then encoding the coefficients that are more likely to be zeros last. Because of the energy compaction property of DCT towards low frequency, and because diagonal details occur less often than horizontal or vertical details, diagonal scan or zig-zag scan of the coefficients is preferred. However, because of the variable block sizes used, the zig-zag scan has to be modified to pick out the low frequency components from each sub-block first, but at the same time follow the diagonal scanning for coefficients of similar frequency, technically when the sum of the two frequency indices are the same. Accordingly, the output composite block QC from multiplexer 50 is input to zig-zag scan serializer 52 along with the BSA data (P, Q and R). FIG. 4a illustrates the zig-zag ordering of the block data within blocks and corresponding sub-blocks. FIG. 4b illustrates the ordering in the serialization between blocks and sub-blocks as determined by the BSA data. The output of zig-zag scan serializer 52, comprised of the ordered 256 quantized DCT coefficients of the composite block QC, is input to coefficient buffer 54 where they are stored for run-length coding. The serialized coefficients are output from coefficient buffer 54 to run-length coder 56 where run-length coding is preformed to separate out the zeros from the non-zero coefficients. Run-length as well as the non-zero coefficient values are separately provided to corresponding lookup tables. The run-length values are output from run-length coder 56 as an input of run-length code lookup table 58 where the values are Huffman coded. Similarly, the non-zero coefficient values are output from run-length coder 56 as an input to non-zero code lookup table 60 where the values are also Huffman coded. Although not illustrated it is further envisioned that run-length and non-zero code look up tables may be provided for each block size. The Huffman run-length coded values along with the Huffman non-zero coded values are respectively output from run-length lookup code table 58 and non-zero code lookup table 60 as inputs to bit field assembler 62. An additional input to bit field assembler 62 is the BSA data from the P, Q and R registers. Bit field assembler 62 disregards the unnecessary bits provided from the P, Q and R registers. Bit field assembler 62 assembles the input data with BSA data followed by the combined RL codes and NZ codes. The combined data is output from bit field assembler 62 to a transmit buffer 64 which temporarily stores the data for transfer to the transmitter (not shown). When the DQT subsystem is employed the coded DC DCT coefficients are omitted from transfer to transmit buffer 64. Instead the DQT coefficients provided by the DQT subsystem, as an input to bit field assembler 62, are transmitted. The formatting of data in this embodiment is typically a data packet comprised in sequence of sync, BSA, DQT and DCT data bits. Furthermore, the packet may also include end of block code bits following the DCT bits. FIGS. 5a-5d illustrates an alternate scan and serialization format for the zig-zag scan serializer 52. In FIGS. 5a-5d, the quantized DCT coefficients are mapped into a one-dimensional string by ordering from low frequency to high frequency. However in the scheme illustrated in FIGS. 5a-5d the lower order frequencies are taken from each block prior to taking the next higher frequencies in the block. Should all coefficients in a block be ordered, during the previous scan, the block is skipped with priority given to the next block in the scan pattern. Block to block scanning, as was done with the scanning of FIGS. 4a-4c is a left-to-right, up to down scan priority. As previously mentioned the present invention implements a new and previously undisclosed transform identified herein as the differential quadtree transform (DQT). The basis for this transform is the recursive application of the 2 sub-blocks. At the bottom of the inverted quadtree, for example the quadtree illustrated in FIG. 3b, the 2 and the node is assigned the DC value of the 2 nearest nodes are gathered and another 2 process is repeated until a DC value is assigned to the root. Only the DC value at the root is coded at a fixed number of bits, typically 8-bits, while the rest are Huffman coded. Because each 2 nothing more than a sum and a difference of numbers, no multiplications are required, and all coefficients in the quadtree, except DC, represent the differences of two sums, hence the name DQT. Theoretically this type of transform cannot exceed the performance of 16 However the implementation of the DQT transform has the advantage of requiring seemingly simple hardware in addition to naturally implementing the adaptive block size coding. Furthermore the quadtree structure allows the coding of the zero coefficients by simply indicating the absence of a subtree when all sub-blocks under the subtree contain only zeros. FIG. 6 illustrates an exemplary implementation of a DQT subsystem. In FIG. 6, the same 16 subsystem of FIGS. 1 and 2 such that the DQT processing is accomplished in parallel to the adaptive block size data processing. The input block of pixel data is provided to 2 sixty-four 2 DCT element 70 is comprised of a 16 block 71. For each 2 70, the corresponding output is comprised of one DC DCT coefficient (DC.sub.2) and three AC coefficients (AC.sub.2). The 16 provided to selector 72. Selector 72 removes the AC DCT coefficients from the input block of DCT coefficients and provides an output 8 of comprised of DC DCT coefficients (DC.sub.2) only. The 8 of DC DCT coefficients (DC.sub.2), block 73, is provided to 2 element 74. DCT element 74 performs sixteen 2 the input block of DC DCT coefficients (DC.sub.2). The output of DCT element 74 is comprised of an 8 each 2 element 74, the corresponding output is comprised of one DC DCT coefficient (DC.sub.4) and three AC DCT coefficients (AC.sub.4). As illustrated in FIG. 6, the circled DC DCT coefficients (DC.sub.4) and AC DCT coefficients (AC.sub.4) in block 75 replace a corresponding DC DCT coefficients (DC.sub.2)in the 8 output from selector 72. The 8 provided to selector 76. Selector 76 provides the AC DCT coefficients (AC.sub.4) from the input block of DCT coefficients as an input to multiplexer 80 while providing the DC DCT coefficients (DC.sub.4) as four 2 DCT element 78. DCT element 78 performs four 2 the input block of DC DCT coefficients (DC.sub.4). The output of DCT element 78 is comprised of an 4 79. For each of the four 2 (DC.sub.4) processed by DCT element 78, the corresponding output is comprised of one DC DCT coefficient (DC.sub.8) and three AC DCT coefficients (AC.sub.8). The output DC DCT coefficient (DC.sub.8) and AC DCT coefficients (AC.sub.8) are provided as another input to multiplexer 80. Multiplexer 80 normally provides an output of the AC DCT coefficients (AC.sub.4) from DCT element 74 (via separator 76). Multiplexer 80 is responsive to a control signal N.sub.a for providing the DC DCT coefficient (DC.sub.8) and the AC coefficients (AC.sub.8) from DCT element 78. Multiplexer 80 creates a composite 8 coefficients, block 81. Block 81 is identical in arrangement with respect to the AC DCT coefficients (AC.sub.4) as output from DCT element 74 in block 75. However in block 81 the DC DCT coefficients (DC.sub.8) and AC DCT coefficients (AC.sub.8) replace the DC DCT coefficients (DC.sub.4) in block 75 as it was output from DCT element 74. As illustrated in FIG. 6, the circled DC DCT coefficients (DC.sub.8) and AC DCT coefficients (AC.sub.8) in block 81 replace a corresponding DC DCT coefficients (DC.sub.4) in the 8 since there is only one DC coefficient for every fifteen AC coefficients, the composite block is considered as four 4 The four 4 provided to selector 82. Selector 82 provides the AC DCT coefficients (AC.sub.4) and (AC.sub.8) from the input block of DCT coefficients as an input to multiplexer 86 while providing the DC DCT coefficients (DC.sub.8)as a 2 83, to 2 DCT element 84 performs a 2 input block of DC DCT coefficients (DC.sub.8). The output of DCT element 78 is comprised of a 2 the 2 element 84, the output is comprised of one DC DCT coefficient (DC.sub.16) and three AC DCT coefficients (AC.sub.16). The output DC DCT coefficient (DC.sub.16) and AC DCT coefficients (AC.sub.16) are provided as another input to multiplexer 86. Multiplexer 86 normally provides an output of the AC DCT coefficients (AC.sub.4) and (AC.sub.8) from multiplexer 80 (via separator 82). Multiplexer 86 is responsive to a control signal N.sub.b for providing the DC DCT coefficient (DC.sub.16) and the AC coefficients (AC.sub.16) from DCT element 84. Multiplexer 86 creates a composite 8 and AC DCT coefficients, block 87. Block 87 is identical in arrangement with respect to the AC DCT coefficients (AC.sub.4) and (AC.sub.8) as output from multiplexer 80 in block 81. However in the block 87 the DC DCT coefficient (DC.sub.16) and AC DCT coefficients (AC.sub.16) replace the DC DCT coefficients (DC.sub.8) in block 81 as output from multiplexer 80. As illustrated in FIG. 6, the circled DC DCT coefficient (DC.sub.16) and AC DCT coefficients (AC.sub.16) in block 87 replace a corresponding DC DCT coefficients (DC.sub.8) in the 8 block 87 since there is only one DC coefficient for every fifteen AC coefficients, the composite block is considered as one 8 The 8 what are considered Discrete Quadtree Transform (DQT) coefficients. The DQT coefficients are then quantized by providing the coefficient values to quantizer look table 88. Quantizer lookup table 88 may also be implemented in conventional read only memory (ROM) form with memory locations containing quantization values. The value of each transform coefficient is used to address a corresponding memory location to provide an output data signal indicative of a corresponding quantized transform coefficient value. The output of quantizer lookup table 88, indicated by the reference signal QC16, is an 8 illustrated, again the DC coefficient (DC.sub.16) of the DQT transform operation may be optionally treated separately rather than through the corresponding quantizer lookup table. The DQT coefficients are provided as an output of quantizer lookup table 88 to code length lookup table 90. The quantized DQT coefficient values are each coded using variable length code, such as a Huffman code, in order to minimize the data rate. Code words and corresponding code lengths are found in the form of code length look up table 90. Each of the quantized DQT coefficients DC.sub.16, AC.sub.16, AC.sub.8 and AC.sub.4, are used to look up in the code length table the corresponding number of bits required to code each coefficient. Code length lookup table 90 may also be implemented in read only memory form with the DQT coefficients addressing memory locations which contain respective code length values. The DQT coefficient code length values are provided to multiplexers 15a-15d of FIG. 1 for replacing the corresponding DC DCT coefficient code length value for each block and sub-block in the block size determination as was discussed with reference to FIGS. 1 and 2. In FIG. 7 the 64 quantized DQT coefficient values of block 91 are selected for replacement of the DC DCT coefficient values for the block sizes as determined by the block size determination made in accordance as discussed with reference to FIGS. 1 and 2. The values stored in P, Q and R registers 40, 42 and 44 are used in selecting the DQT coefficient values for replacement in the block and sub-blocks of the DC DCT coefficient values. The DQT coefficient values are provided from quantizer lookup table 88 (FIG. 6) to one input to multiplexer 92. At the other input of multiplexer 92 a dummy value x is provided. Multiplexer 92 is responsive to the bits of P register 40 for providing an output of the DQT coefficient values for the entire 2 position. When the value in P register 40 bit position is a "0", only the DQT coefficient value which corresponds to the DC coefficient value of the sub-block is output with the remaining values output being the dummy values x. The value x is used merely to maintain the arrangement within the sub-block with these values eventually discarded. Using the data in the sixteen bit position of P register 40 as provided in the example as given with reference to FIGS. 1 and 2, composite block 93 is formed. The sub-blocks of DQT coefficients and dummy values x output from multiplexer 92 are provided as one input to multiplexer 94. At the other input of multiplexer 94 the dummy value x is again provided. Multiplexer 94 is responsive to the bits of Q register 42 for providing an output of the DQT coefficient values and dummy values x for an entire 4 sub-block as provided from multiplexer 92 when a "1" is in the Q register 42 bit position. When the value in Q register 42 bit position is a "0", only the DQT coefficient value which corresponds to the DC coefficient value of the sub-block is output with the remaining values output being the dummy values x. The value x is again used merely to maintain the arrangement within the sub-block with these values eventually discarded. Using the data in the four bit positions of Q register 42 as provided in the example as given with reference to FIGS. 1 and 2, composite block 95 is formed. The sub-blocks of DQT coefficients and dummy values x output from multiplexer 94 are provided as one input to multiplexer 96. At the other input of multiplexer 96 the dummy value x is again provided. Multiplexer 96 is responsive to the bits of R register 44 for providing an output of the DQT coefficient values and dummy values x for an entire 16 block as provided from multiplexer 94 when a "1" is in the R register 44 single bit position 0. When the value in R register 42 bit position 0 is a "0", only the DQT coefficient value which corresponds to the DC coefficient value of the block is output with the remaining values output being the dummy values x. The value x is again used merely to maintain the arrangement within the block with these values eventually discarded. Using the data in single bit position of R register 44 as provided in the example as given with reference to FIGS. 1 and 2, composite block 97 is formed. A comparison of block 97 with the representative block for the values QC of FIG. 2 reveals that for each 8 encoded a single DQT coefficient value exists. Similarly for each 4 DQT coefficient value exists. Although in the example given the block was not encoded as a 16 value would have been generated in the composite block. The DQT coefficient values are output from multiplexer 96 to zig-zag scan serializer which arranges the DQT coefficients and dummy values x in a manner that was discussed with reference to FIGS. 4a and 4b. The zig-zag scan serialized data is provided to value removal logic 99 which removes the dummy values x using information identifying the position of the dummy values x based upon the P, Q and R register data. It should be understood that the use of the dummy values x may be eliminated using a more elaborate multiplexing scheme. The DQT coefficient values output from logic 99 is provided to code lookup table 66 the values are coded also preferably using a Huffman code. The coded DQT coefficient values are output from code lookup table 66 to bit field assembler 62 of FIG. 2. Bit field assembler 62 as previously discussed in arranging the data for transmission provides the coded DQT coefficients along with the coded AC DCT coefficients, while removing the coded DC DCT coefficients from the data output. FIG. 8 illustrates the implementation of a receiver for decoding the compressed image signal generated according to the parameters of FIGS. 1 and 2. In FIG. 8, the coded word is output from the receiver (not shown) to a receive buffer 100. Receive buffer 100 provides an output of the code word to separator 102. Received code words include by their nature the BSA data, the DQT coded coefficients, and the coded DCT coefficients in the form of the RL codes and NZ codes. All received code words obey the prefix conditions such that the length of each code word need not be known to separate and decode the code words. Separator 102 separates the BSA codes from the the DQT coded coefficients and the coded DCT coefficients since the BSA codes are transmitted and received first before this data. The first received bit is loaded into an internal R register (not shown) similar to that of FIG. 2. An examination of the R register determines that if the bit is a "0", the BSA code is only one bit long. Separator 102 also includes Q and P registers that are initially filled with zeros. If the R register contains a "1" bit, four more bits are taken from the receive buffer and loaded into the Q register. Now for every "1" bit in the Q register, four more bits are taken from the receive buffer and loaded into the P register. For every "0" in the Q register, nothing is taken from the receive buffer but four "0" are loaded into the P register. Therefore, the possible lengths of the BSA code is 1, 5, 9, 13, 17 and 21 bits. The decoded BSA data is output from separator 102. Separator 102 also separates the the DQT coded coefficients from the coded DCT coefficients. Separator 102 outputs the DQT coded coefficients to the DQT decoder subsystem illustrated in further detail in FIG. 9. Separator 102 also outputs the coded DCT coefficients in the form of RL codes and NZ codes respectively to RL decode lookup table 104 and NZ decode lookup table 106. Lookup tables 104 and 106 are essentially inverse lookup tables with respect to lookup tables 58 and 60 of FIG. 2. The output of lookup table 104 are values corresponding to run-length and are input to run-length decoder 108. Similarly the non-zero coefficient values output from lookup table 106 are also input to run-length decoder 108. Run-length decoder 108 inserts the zeros into the decoded coefficients and provides an output to coefficient buffer 110 which temporarily stores the coefficients. The stored coefficients are output to inverse zig-zag scan serializer 112 which orders the coefficients according to the scan scheme employed. Inverse zig-zag scan serializer 112 receives the BSA signal from separator 102 to assist in proper ordering of the block and sub-block coefficients into a composite coefficient block. The block of coefficient data is output from inverse zig-zag scan serializer 112 and respectively applied to a corresponding inverse quantizer lookup table 114a-114d. In each of inverse quantizer lookup tables 114a-114d an inverse quantizer value is applied to each coefficient to undo the quantization. Inverse quantizer lookup tables 114a-114d may be employed as ROM devices which contain the quantization factors from that of quantizer lookup tables 12a-12d. The coefficients are output from each of inverse quantizer lookup tables 114a-114d to a corresponding input of multiplexers 115a-115d. The other input of each of multiplexers 115a-115d is coupled to the DQT decoder subsystem of FIG. 9. Multiplexers 115a-115d are each responsive to a respective control signal Y.sub.a -Y.sub.d for providing an output of the AC DCT coefficient values provided from inverse quantizer lookup tables 114a-114d and the DQT coefficients values, which replace the DC DCT coefficient values. The DQT/DCT coefficients are output respectively from multiplexers 115a-115d to inverse discrete cosine transform (IDCT) elements 116a-116d. IDCT element 116a forms from the 16 a 16 118. Similarly, DCT 116b transforms respective 8 coefficients, if present, to 8 IDCT element 116b is provided to sub-block combiner 118. IDCT elements 116c and 116d respective transform the 4 blocks, if present, to corresponding pixel data blocks which are provided to sub-block combiner 118. Sub-block combiner 118 in addition to receiving the outputs from IDCT elements 116a-116d also receives the BSA data from separator 102 so as to reconstruct the blocks of pixel data into a single 16 output to a reconstruction buffer (not shown) for ultimate transfer to the display system. FIG. 9 illustrated in further detail the structure of the DQT decoder subsystem wherein separator 102 provides the coded DQT coefficients to decode lookup table 120. Lookup table 120 is essentially an inverse lookup table with respect to lookup table 66 of FIG. 7, and as such is a Huffman decoder lookup table. The output of lookup table 120 are values corresponding to the decoded DQT coefficients and are input to value insertion logic 122. Logic 122 also receives the BSA data which is in essence the values for the P, Q, and R registers. Logic 122 reconstructs the block/sub-blocks with the DQT data and dummy values x to produce a composite serialized block of DQT and dummy values corresponding to a zig-zag scan serialized version of block 97 of FIG. 7. The DQT coefficients and dummy values are output from logic 122 to inverse zig-zag scan serializer 124 which orders the coefficients according to the scan scheme employed. Inverse zig-zag scan serializer 124 also receives the BSA signal from separator 102 to assist in proper ordering of the block and sub-block coefficients into a composite coefficient block identical to that of block 97 of FIG. 7. The block of coefficient data is output from inverse zig-zag scan serializer 124 to inverse quantizer lookup table 126. In inverse quantizer lookup table 126 an inverse quantizer value is applied to each coefficient to undo the quantization. Inverse quantizer lookup table 126 may also be employed as a ROM device which contain the quantization factors from that of quantizer lookup table 88. The coefficients are output from each of inverse quantizer lookup tables 126 to separator 128. Separator 128 provides the DC DQT coefficient (DC.sub.16), along with the AC DQT coefficients (AC.sub.16), to multiplexer 115a. IDCT element 116a therefore receives the DC DQT coefficient (DC.sub.16) via multiplexer 115a. It should be noted that although these values are sent to multiplexer 115a, the AC coefficients (AC.sub.16) are not output from the multiplexer. With respect to the DC coefficient (DC.sub.16) this value is output from the multiplexer but may ultimately be disregarded should it belong to a block size which is not selected according to the block size assignment. In an alternate embodiment the ultimate unused value may not be sent to multiplexer 115a or inhibited thereat using the BSA data. In FIG. 7 this value is not graphically illustrated as an input to sub-block combiner 118. Separator 128 also provides the DC and AC DQT coefficients (DC.sub.16) and (AC.sub.16) to 2 130 while providing all of the other values of the 8 block as an input to multiplexer 132. Illustrated in block 129 are the relevant values provided to IDCT element 130. IDCT element 130 performs one 2 on the 2 (DC.sub.16) and (AC.sub.16 's), so as to produce four resultant DC DQT coefficients (DC.sub.8) which are provided as an input to multiplexer 132. The DC DQT coefficients (DC.sub.8) are also provided to multiplexer 115b. In the example provided, two of the DC DQT coefficients (DC.sub.8), circled in block 135, are used in the final block data in place of the DC DCT coefficient that was not sent. As was discussed with reference to the values provided to multiplexer 115a, in the present example the other two of these values are also unused in the final block assignment data. In FIG. 7 these unused computation values are not graphically illustrated as an input to sub-block combiner 118. Multiplexer 132 provides all of the values from separator 128 as an output to separator 134. Multiplexer 132 also receives the DC DQT coefficient values (DC.sub.8) from IDCT element 130 and in response to a control signal Z.sub.1 the DC DQT coefficients (DC.sub.8) are output to separator 134 at the corresponding place in the composite block for the DC and AC DQT coefficients, (DC.sub.16) and (AC.sub.16). Separator 134 provides the DC and AC DQT coefficients (DC.sub.8) and (AC.sub.8) to 2 136 while providing all of the other values of the 8 block as an input to multiplexer 138. Illustrated in block 135 are the relevant values provided to IDCT element 136. IDCT element 136 performs four 2 on each of the four 2 DQT coefficients, (DC.sub.8) and (AC.sub.8 's) so as to produce sixteen resultant DC DQT coefficients (DC.sub.4) which are provided as an input to multiplexers 138 and 115c. In the provided example the IDCT computation is performed in each of the two left hand side sub-blocks of block 135 using the DC DQT coefficient (DC.sub.8) and three dummy values for the three AC DQT coefficients (AC.sub.8). Since the result of the IDCT computations using dummy values is not an actual DC DQT coefficient these values may be transmitted to multiplexer 115c but will remain unused. The DC DQT coefficients (DC.sub.4) are as output from IDCT element 136 provided to multiplexer 115c. Again for the example provided, six of the eight DC DQT coefficients (DC.sub.4), circled in block 139, are used in the final block data in place of the DC DCT coefficient that was not sent. As was discussed with reference to the values provided to multiplexers 115a and 115b, in the present example, the other two of these values are also unused in the final block assignment data. In FIG. 7 these unused values along with the dummy value computation results are not graphically illustrated as an input to sub-block combiner 118. Multiplexer 138 provides the values output from separator 134 as an input to 2 Multiplexer 138 also receives the DC DQT coefficient values (DC.sub.4) from IDCT element 136 and in response to a control signal Z.sub.2 the DC DQT coefficients (DC.sub.4) are output to IDCT element 140 at the corresponding place in the composite block for the DC and AC DQT coefficients, (DC.sub.8) and (AC.sub.8). Illustrated in block portion 141 are the relevant values provided to IDCT element 140. IDCT element 140 performs sixteen 2 transform on the 2 DQT coefficients, (DC.sub.4) and (AC.sub.4 's), so as to produce sixty-four resultant DC DQT coefficients (DC.sub.2) which are provided as an input to multiplexer 115d. In the provided example the IDCT computation is performed in each coefficient sub-block in block 139 including those without AC DQT coefficients (AC.sub.4) by using three dummy values for the three AC DQT coefficients (AC.sub.4). Since the result of the IDCT computations using dummy values is not an actual DC DQT coefficient these values may be transmitted to multiplexer 115d but will remain unused. Again for the example provided, eight of the sixty-four DC DQT coefficients (DC.sub.2), circled in block 141, are used in the final block data in place of the DC DCT coefficient that was not sent. As was discussed with reference to the values provided to multiplexer 115a and 115b, in the present example, the other two of these values are also unused in the final block assignment data. In FIG. 7 these unused values along with the dummy value computation results are not graphically illustrated as an input to sub-block combiner 118. As was discussed with reference to FIG. 8, the DQT coefficients which were transmitted as replacements for the omitted DC DCT coefficients are used as the DC coefficient for each relevant block size. The DQT implementation is readily adaptable to the adaptive block size image compression scheme as disclosed herein. The DQT subsystem reduces the data rate by providing a reduction in bits that need to be sent. Furthermore, a DQT subsystem implementation has no effect on the number of overhead bits in the adaptive block size image compression scheme. In fact the use of the DQT processing scheme results in a reduction in the number of bits that need be transmitted for those cases in where the number of smaller blocks is great. FIG. 10 illustrates in block diagram form a flow chart for signal compression of the present invention. FIG. 10 briefly illustrates the steps involved in the processing as discussed with reference to FIGS. 1, 2, 6 and 7. Similarly, FIG. 11 illustrates the decompression process of transmitted compressed image data to result in the output pixel data. The steps illustrated in FIG. 11 are previously discussed with reference to FIG. 9 and 10. The use of DQT processing techniques furthers the improved image quality provided by the adaptive block size processing scheme without making any sacrifice in the bit per pixel ratio. It is also believed that a bit per pixel ratio of about "1" and even substantially less than this level would provide substantial improvement in image quality sufficient for HDTV applications when using the techniques disclosed herein. It is envisioned that many variations to the invention may be readily made upon review of the present disclosure. The previous description of the preferred embodiments is provided to enable any person skilled in the art to make or use the present invention. The various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without the use of the inventive faculty. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. The features, objects, and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout and wherein: FIG. 1 is a block diagram illustrating the processing elements of the adaptive block size image compression system for providing DCT coefficient data and block size determination; FIG. 2 is a block diagram illustrating the further processing elements for the adaptive block size image compression system for selecting block sizes of DCT coefficient data so as to generate a composite block of DCT coefficient data and the encoding of the composite block for transmission; FIGS. 3a and 3b respectively illustrate exemplarily register block size assignment data and the block selection tree corresponding thereto; FIGS. 4a and 4b are graphs respectively illustrating in graphical form the selected block zig-zag scan serialization ordering sequence within the sub-blocks and between sub-blocks for an exemplary composite block of DCT coefficient data whose block size selection was made according to the block size assignment data of FIG. 3a; FIGS. 5a-5d respectively illustrate in graphical form an alternate zig-zag scan serialization format; FIG. 6 is a block diagram illustrating the DQT coefficient processing elements of the DQT subsystem of the present invention used in accompaniment with the image compression system of FIGS. 1 and 2; FIG. 7 is a block diagram further illustrating the DQT coefficient block replacement processing elements of the DQT subsystem of the present invention used in accompaniment with the image compression system of FIGS. 1, 2 and 6 FIG. 8 is a block diagram illustrating a decoder for reconstructing an image from a received signal generated by the processing elements of FIGS. 1 and 2; FIG. 9 is block diagram illustrating the DQT subsystem of the present invention used in accompaniment with the decoder of FIG. 7; FIG. 10 is a flow chart illustrating the processing steps involved in compressing and coding image data as performed by the processing elements of FIGS. 1 and 2; and FIG. 11 is a flow chart illustrating the processing steps involved in decoding and decompressing the compressed signal so as to generate pixel data. I. Field of the Invention More particularly, the present invention relates to a novel and improved method and system for data compression in an image signal compression scheme utilizing adaptively sized blocks and sub-blocks of encoded discrete cosine transform (DCT) coefficient data. II. Description of the Related Art In the field of transmission and reception of television signals, various improvements are being made to the NTSC (National Television Systems Committee) System. Developments in the field of television are commonly directed towards a high definition television (HDTV) System. In the development of HDTV, system developers have merely applied the Nyquist sampling theorem and low pass filtering design with varying degrees of success. Modulation in these systems amounts to nothing more than a simple mapping of an analog quantity to a value of signal amplitude or frequency. It has most recently been recognized that it is possible to achieve further improvements in HDTV systems by using digital techniques. Many of the proposed HDTV transmission formats share common factors. These systems all involve digital processing of the video signal, which necessitates analog-to-digital (A/D) conversion of the video signal. An analog transmission format is then used thereby necessitating conversion of the digitally processed picture back to analog form for transmission. The receiver/processor must then reverse the process in order to provide image display. The received analog signal is therefor digitized, stored, processed and reconstructed into a signal according to the interface format used between the receiver/processor and the HDTV display. Furthermore the signal is most likely converted back to analog form once more for display. It is noted however that the proposed HDTV formats utilize digital transmission for transmission of control, audio and authorization signals. Many of the conversion operations mentioned above, however, may be avoided using a digital transmission format which transmits the processed picture, along with control, audio and authorization signals, using digital modulation techniques. The receiver may then be configured as a digital modem with digital outputs to the video processor function. Of course, the modem requires an A/D function as part of operation, but this implementation may only require a 4-bit resolution device rather than the 8-bit resolution device required by analog format receivers. Digital transmission is superior to analog transmission in many ways. Digital transmissions provide efficient use of power which is particularly important to satellite transmission and in military applications. Digital transmissions also provide a robustness of the communications link to impairments such as multipath and jamming. Furthermore digital transmission facilitates ease in signal encryption, necessary for military and many broadcast applications. Digital transmission formats have been avoided in previous HDTV system proposals primarily because of the incorrect belief that they inherently require excessive bandwidth. Therefore in order to realize the benefits of digital transmission, it is necessary to substantially compress the HDTV signal. HDTV signal compression must therefor be achieved to a level that enables transmission at bandwidths comparable to that required by analog transmission formats. Such levels of signal compression coupled with digital transmission of the signal will enable a HDTV system to operate on less power with greater immunity to channel impairments. It is therefore an object of the present invention to provide a novel and improved method and system for enhancing the compression of HDTV signals so as to enable digital transmission at bandwidths comparable to that of analog transmissions of conventional TV signals. The present invention is a novel and improved method and system for further compressing image data for transmission and for reconstruction of the image data upon reception. The image compression system includes a subsystem for generating from a block of input pixel data a corresponding composite block of discrete cosine transform (DCT) data optimized for encoding for a minimized transmission data rate. An additional subsystem is utilized to replace certain DCT coefficients with Discrete Quadtree Transform (DQT) coefficients in order to further reduce the data rate. In the present invention a transform means receives an input block of pixel data and performs a discrete cosine transform (DCT) operation on the block of pixel data and on at least one predetermined level of constituent sub-blocks thereof. The transform means provides an output of corresponding block and sub-blocks of DC and AC DCT coefficient values. An additional transform means also receives the input block of pixel data and performs a discrete quadtree transform (DQT) operation thereupon so as to generate a block of DQT coefficient values. A block size assignment means receives, for the block and each sub-block, AC DCT coefficient values and a DQT value which in replacement of the DC DCT coefficient value. The block size assignment means determines, for the block and each corresponding group of constituent sub-blocks of DQT/DCT coefficient values, a bit count value corresponding to a number of bits required to respectively encode the block and each corresponding group of constituent sub-blocks of DQT/DCT coefficient values according to a predetermined coding format. The block assignment means further determines, from the bit count values, ones of the block and group of constituent sub-blocks of DQT/DCT coefficient values requiring a lesser number of bits to encode according to the predetermined coding format, and providing an output of a corresponding selection value. A DCT selection means receives the selection value and the block and sub-blocks of DCT coefficient values and selects the block of DCT coefficient values or ones of the DCT coefficient values sub-blocks in accordance with the selection value. The DCT selection means provides an output of a corresponding composite block of DCT coefficient values formed from the selected block or sub-blocks of DCT coefficient values. A DQT selection means also receives the selection value and the block of DQT coefficient values and selects ones of the DQT coefficient values in accordance with the selection value. The each selected DQT coefficient values corresponds to a DC DCT coefficient values of the selected block or sub-block. A DCT ordering means receives and orders the composite block of DCT coefficient values according to a predetermined ordering format. The ordering means provides an output of the ordered DCT coefficient values to an encoder means that codes the ordered DCT coefficient values according to a predetermined coding format. The encoder means provides an output of the coded ordered DCT coefficient values. A DQT ordering means receives selected DQT coefficient and orders the selected DQT coefficients in a format such that each maintains corresponds with a respective one of the DC DCT coefficients in the coded ordered coefficient values. The DQT ordering means provides an output of the ordered DQT coefficient values. An assembler means receives the coded ordered DCT coefficient values, the ordered DQT values and the selection value. The assembler means generates an coded image value by removing the DC coefficient values in the ordered coded DCT coefficient values while combining the selection value with the remaining AC DCT coefficients in the ordered coded DCT coefficient values and the ordered DQT coefficient values. The coded image value is representative of the input block of pixel data and is of a reduced bit count with respect to a bit count of the input block of pixel data. The assembler means provides an output of the coded image value for transmission. The present invention also provides for a novel and improved method for reconstructing from each received coded image value a corresponding block pixel of pixel data. The present invention further envisions a novel and improved method for compressing an image signal as represented by a block of pixel data and for reconstructing the image signal from the compressed image signal. This application is a continuation of application Ser. No. 08/004,213, filed Jan. 13, 1993, abandoned, which is a continuation of application Ser. No. 07/710,216, filed Jun. 4, 1991, now abandoned, which is a continuation-in-part application of U.S. patent application Ser. No. 487,012 filed Feb. 27, 1990, now U.S. Pat. No. 5,021,891 issued Jun. 4, 1991, and as such relates to image processing.
{"url":"http://www.google.com/patents/US5452104?ie=ISO-8859-1","timestamp":"2014-04-23T10:35:58Z","content_type":null,"content_length":"259500","record_id":"<urn:uuid:ea56ed07-d0ec-4e7f-a9dd-17da8ce0675b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Greenbank Math Tutor Find a Greenbank Math Tutor ...I'd studied Chinese for over ten years, before I moved to the U.S. I graduated from University of Washington with a B.A. degree in business. So I'm fluent in English as well. 13 Subjects: including algebra 1, algebra 2, general computer, geometry ...Please don't hesitate to contact me for further information! I look forward to each new student and opportunity to make a difference in their academic lives! I have received my B.S. in Zoology from Texas A&M University. 36 Subjects: including geometry, probability, SAT math, statistics ...GailMy experience in teaching and tutoring Spanish comes from studying and earning college credits, along with spending time as an exchange student in Mexico, and visiting Madrid, Spain for 2 months. I have 9 years of experience teaching grades k-10, depending on the student count each year, fo... 10 Subjects: including algebra 1, reading, Spanish, grammar ...I've had to write a plethora of my own papers and revise much work for others, so I'm well-versed in composition and grammar. Additionally, I've worked with students to prepare them for the SSAT, so I'm quite familiar with standardized testing format and content. I also helped prepare students when I taught overseas for their Cambridge O-Level exams. 39 Subjects: including algebra 1, algebra 2, grammar, linear algebra ...Contact Steve, your answer to math anxiety.I've taught Statistics at Lightworks Institute and ITT Tech in Everett. I have also been coaching Statistics for over 7 years and have a unique ability to blend knowledge and humor into my teaching style. Descriptive and Inferential statistics can be understood by most students, with the right teacher and the right motivation. 19 Subjects: including calculus, physics, differential equations, public speaking
{"url":"http://www.purplemath.com/Greenbank_Math_tutors.php","timestamp":"2014-04-17T07:41:21Z","content_type":null,"content_length":"23655","record_id":"<urn:uuid:a9a1f422-c1e1-4a9e-8316-757876502dc1>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Posterior Perry-bility: A (semi-serious) Bayesian update of Rick Perry As we continue to adjust the trims on our Who Will Be the Nominee© model, let’s assess Rick Perry’s chances of the GOP nomination, given his track record in the first four debates. If we go with the traditional political science wisdom that campaigns don’t matter that much (good review of topic here), then new information we receive is not affecting the outcome so much as just revealing the true state of the world. And that means it’s clearly time to whip out some Bayesian analysis! Let’s get right to the priors. Priors (as of 9/6, before first debate) Probability (Perry is GOP nominee) = 39% = .39 [estimated from intrade] Probability (Perry is not GOP nominee (i.e. Loser Perry)) = 61% = .61 [inverse of above] Probability (Nominee Perry has four mediocre-at-best debates) = 20% = .20 [feels about right] Probability (Loser Perry has four mediocre-at-best debates ) = 40% = .40 [again, feels about right] Perry has had four mediocre-at-best-debates. At the Reagan Library (/7), he failed to impress and looked unprepared on Social Security. At the Tea Party Express debate (9/12), he was hammered on HPV and did not recover well. In Orlando at the Fox debate (9/22), his performance was described by the Weekly Standard as “disqualifying.” At Dartmouth (10/11), he announced he had no jobs plan, gave the impression he had disappeared, and then spent his post-debate meet and greet at Beta Theta Pi riffing on America’s 16th century revolution. Given this observed event, what is our updated probability that Perry is the GOP nominee? Use Bayesian inference. Probability Perry is GOP nominee given four mediocre-at-best-debates = ((Probability he has four mediocre debates if he is the nominee)(Probability he is the nominee)) / (((Probability he has four mediocre debates given he is the nominee) (Probability he is the nominee)) + ((Probability he has four mediocre debates if he is not the nominee) (Probability he is not the nominee))) Written in simple notation: P(Nom |E[1]) = ( P(E [1]| Nom) P(Nom) ) / (( P(E[1 ]| Nom) P(Nom) ) + (P(E[1 ]| Loser) P(Loser))) P(Nom|E1) = (.20)(.39 ) / ((.20) (.39) + (.4) (.61)) P(Nom|E1) = .24 Updated probability Perry is GOP nominee given four mediocre debate performances = 24% Still viable, confirming some recent wisdom. But looking a lot less like the GOP nominee than he did six weeks ago. Given that he is currently trading around 11% on Intrade, the above analysis suggests the market might be overreacting, creating an opportunity for some value investing. Obviously, you can quibble with the estimated priors. Who the hell am I to assign specific percentages to debate-failure probabilities? And that’s the key theoretical question raised by the analysis: what’s the true gap between the probability of the nominee having four bad debates and the probability of a non-nominee having four bad debates. My priors say it’s about twice as likely. Yours may differ, even substantially. But if you accept the priors as reasonable — that nominee Perry would have about a 1 in 5 chance of four mediocre debates, and Loser Perry about double that chance — then the conclusion is that Perry still has a reasonable likelihood of being the nominee, but the probability has decreased a fair amount. Someone check my math. Everyone assess my priors.
{"url":"http://www.mattglassman.com/?p=1475","timestamp":"2014-04-20T05:43:50Z","content_type":null,"content_length":"23101","record_id":"<urn:uuid:5d5bdaed-37e5-450c-b8d5-0f49c8c59fc1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
Main Page Namespace List Class Hierarchy Alphabetical List Compound List File List Namespace Members Compound Members File Members Related Pages MODEL Documentation Welcome to MODEL, the Modest Ordinary Differential Equation Library Michael Peeters MODEL is a numeric simulation library written during my PhD to simulate systems of rate equations describing Vertical Cavity Surface Emitting Lasers (VCSELs). I decided to write one myself after looking at existing libraries and deciding that they were either too complicated to use (having a target user base of mathematicians) or too opaque (i like to know what the code is doing, exactly). Since most of my programming before has been done in C/C++, what you see here is a C++ library composed of various interacting classes which have the following main functionalities: • deterministic integration of any system of well-behaved differential equations. • stochastic integration of these systems, with the possibility of specifying the correlations present in the noise • nonlinear rootfinder, to find stationary solutions • eigenvalue determination, for stability analysis • easy time modulation of input parameters • diverse data collecting classes for data analysis • Small signal analysis (first order) The following will be added RSN: • Fourier transforms, for spectral analysis (although this can be done in an external program) • Periodic solution finder It furthermore provides a numerical vector class, vectorfunction classes, LU solver and random generators. As I developed it using publicly available resources, GNU/Linux and other GPL'ed software, I decided that it should be GPL as well. However (Oh no, a "however" ! Let's hope it does not invalidate the copyleft), I would very much appreciate it if you let me know if you have used MODEL in any of your applications/simulations/research and provide a reference (this way, I can refer to your work, At the moment, MODEL has the rather arbitrary version number 1.0. Meaning it is useful. Period. Some interfaces (especially the stochastics) might still change, and I would like to add some ieee floating point exception trapping to avoid silly numerical errors. Assuming you downloaded a full .tar of checked out the full CVS tree, you first have to make sure everything is configured correctly on your system: run ./configure to generate the correct configuration. Then a simple make or gnumake (on some systems) will do the trick. If you wish to change the option submitted to the compiler, define the environment variable CXXFLAGS to contain those you need (I know there must be a better way to do this). To get it to compile on the Alpha cluster here (www.vub.ac.be/bfucc , I have to export CXXFLAGS="-mieee-malpha-as" before running the configuration script. Once the make process has ended, You should have a libModel.a in the model directory and a singlemode executable in /tutorial. Run it to see if all went well (make test should do the trick). Learn to use gnuplot :-): plot "stepmodulation.dat" u 1:3 w l and admire the relaxation oscillations. If you want to generate the introduction, API and tutorial documentation, use make docs. This assumes you have a full teTex distribution, Doxygen and a copy of lgrind to prettyprint the code, however. Your mileage may vary. You can also regenerate the README file by typing make README. YMMVAL. You can generate an introductory PDF file using make pdf. To get the sources or tarballs, please go to SourceForge or you can use the CVS repository. More Info? Michael Peeters. Also, check our research website: www.alna.vub.ac.be Last update: June 2002.
{"url":"http://model.sourceforge.net/?source=navbar","timestamp":"2014-04-16T10:09:54Z","content_type":null,"content_length":"6103","record_id":"<urn:uuid:6b926712-6a25-44e8-ba53-667b8a8f3649>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Monte Carlo Statistical Methods Results 1 - 10 of 730 , 2004 "... Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for ..." Cited by 612 (11 self) Add to MetaCart Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for approaching these problems, and indeed many of the models developed by researchers in these applied fields are instances of the general graphical model formalism. We review some of the basic ideas underlying graphical models, including the algorithmic ideas that allow graphical models to be deployed in large-scale data analysis problems. We also present examples of graphical models in bioinformatics, error-control coding and language processing. Key words and phrases: Probabilistic graphical models, junction tree algorithm, sum-product algorithm, Markov chain Monte Carlo, variational inference, bioinformatics, error-control coding. - IEEE J. Selected Areas in Comm , 2005 "... Abstract—Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment ..." Cited by 543 (0 self) Add to MetaCart Abstract—Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: • highly reliable communication whenever and wherever needed; • efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This paper also discusses the emergent behavior of cognitive radio. Index Terms—Awareness, channel-state estimation and predictive modeling, cognition, competition and cooperation, emergent behavior, interference temperature, machine learning, radio-scene analysis, rate feedback, spectrum analysis, spectrum holes, spectrum management, stochastic games, transmit-power control, water filling. , 2003 "... This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of ..." Cited by 222 (2 self) Add to MetaCart This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons. , 2002 "... In this paper, we propose a general algorithm to sample sequentially from a sequence of probability distributions known up to a normalizing constant and de ned on a common space. A sequence of increasingly large arti cial joint distributions is built; each of these distributions admits a marginal ..." Cited by 141 (24 self) Add to MetaCart In this paper, we propose a general algorithm to sample sequentially from a sequence of probability distributions known up to a normalizing constant and de ned on a common space. A sequence of increasingly large arti cial joint distributions is built; each of these distributions admits a marginal which is a distribution of interest. To sample from these distributions, we use sequential Monte Carlo methods. We show that these methods can be interpreted as interacting particle approximations of a nonlinear Feynman-Kac ow in distribution space. One interpretation of the Feynman-Kac ow corresponds to a nonlinear Markov kernel admitting a speci ed invariant distribution and is a natural nonlinear extension of the standard Metropolis-Hastings algorithm. Many theoretical results have already been established for such ows and their particle approximations. We demonstrate the use of these algorithms through simulation. - Sequential Monte Carlo Methods in Practice , 2000 "... Bayesian estimation problems where the posterior distribution evolves over time through the accumulation of data arise in many applications in statistics and related fields. Recently, a large number of algorithms and applications based on sequential Monte Carlo methods (also known as particle filter ..." Cited by 140 (11 self) Add to MetaCart Bayesian estimation problems where the posterior distribution evolves over time through the accumulation of data arise in many applications in statistics and related fields. Recently, a large number of algorithms and applications based on sequential Monte Carlo methods (also known as particle filtering methods) have appeared in the literature to solve this class of problems; see (Doucet, de Freitas & Gordon, 2001) for a survey. However, few of these methods have been proved to converge rigorously. The purpose of this paper is to address this issue. We present a general sequential Monte Carlo (SMC) method which includes most of the important features present in current SMC methods. This method generalizes and encompasses many recent algorithms. Under mild regularity conditions, we obtain rigorous convergence results for this general SMC method and therefore give theoretical backing for the validity of all the algorithms that can be obtained as particular cases of it. Keywords: , 2002 "... Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closed-form expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the o ..." Cited by 133 (4 self) Add to MetaCart Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closed-form expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the optimal filtering problem numerically. The posterior distribution of the state is approximated by a large set of Dirac-delta masses (samples/particles) that evolve randomly in time according to the dynamics of the model and the observations. The particles are interacting; thus, classical limit theorems relying on statistically independent samples do not apply. In this paper, our aim is to present a survey of recent convergence results on this class of methods to make them accessible to practitioners. - Bayesian Analysis , 2005 "... Abstract. Dirichlet process (DP) mixture models are the cornerstone of nonparametric Bayesian statistics, and the development of Monte-Carlo Markov chain (MCMC) sampling methods for DP mixtures has enabled the application of nonparametric Bayesian methods to a variety of practical data analysis prob ..." Cited by 128 (16 self) Add to MetaCart Abstract. Dirichlet process (DP) mixture models are the cornerstone of nonparametric Bayesian statistics, and the development of Monte-Carlo Markov chain (MCMC) sampling methods for DP mixtures has enabled the application of nonparametric Bayesian methods to a variety of practical data analysis problems. However, MCMC sampling can be prohibitively slow, and it is important to explore alternatives. One class of alternatives is provided by variational methods, a class of deterministic algorithms that convert inference problems into optimization problems (Opper and Saad 2001; Wainwright and Jordan 2003). Thus far, variational methods have mainly been explored in the parametric setting, in particular within the formalism of the exponential family (Attias 2000; Ghahramani and Beal 2001; Blei et al. 2003). In this paper, we present a variational inference algorithm for DP mixtures. We present experiments that compare the algorithm to Gibbs sampling algorithms for DP mixtures of Gaussians and present an application to a large-scale image analysis problem. , 2001 "... Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filter ..." Cited by 122 (11 self) Add to MetaCart Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem. Our algorithms combine sequential importance sampling, a selection scheme, and Markov chain Monte Carlo methods. They use several variance reduction methods to make the most of the statistical structure of JMLS. Computer - Journal of the American Statistical Association , 1999 "... This paper deals with both exploration and interpretation problems related to posterior distributions for mixture models. The specification of mixture posterior distributions means that the presence of k! modes is known immediately. Standard Markov chain Monte Carlo techniques usually have difficult ..." Cited by 111 (12 self) Add to MetaCart This paper deals with both exploration and interpretation problems related to posterior distributions for mixture models. The specification of mixture posterior distributions means that the presence of k! modes is known immediately. Standard Markov chain Monte Carlo techniques usually have difficulties with well-separated modes such as occur here; the Markov chain Monte Carlo sampler stays within a neighbourhood of a local mode and fails to visit other equally important modes. We show that exploration of these modes can be imposed on the Markov chain Monte Carlo sampler using tempered transitions based on Langevin algorithms. However, as the prior distribution does not distinguish between the different components, the posterior mixture distribution is symmetric and thus standard estimators such as posterior means cannot be used. Since this is also true for most non-symmetric priors, we propose alternatives for Bayesian inference for permutation invariant posteriors, including a cluster... - Journal of Machine Learning Research , 2007 "... Multi-task learning (MTL) is considered for logistic-regression classifiers, based on a Dirichlet process (DP) formulation. A symmetric MTL (SMTL) formulation is considered in which classifiers for multiple tasks are learned jointly, with a variational Bayesian (VB) solution. We also consider an asy ..." Cited by 98 (9 self) Add to MetaCart Multi-task learning (MTL) is considered for logistic-regression classifiers, based on a Dirichlet process (DP) formulation. A symmetric MTL (SMTL) formulation is considered in which classifiers for multiple tasks are learned jointly, with a variational Bayesian (VB) solution. We also consider an asymmetric MTL (AMTL) formulation in which the posterior density function from the SMTL model parameters, from previous tasks, is used as a prior for a new task; this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Comparisons are also made to simpler approaches, such as single-task learning, pooling of data across tasks, and simplified approximations to DP. A comprehensive analysis of algorithm performance is addressed through consideration of two data sets that are matched to the MTL problem.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=63579","timestamp":"2014-04-20T06:29:36Z","content_type":null,"content_length":"38914","record_id":"<urn:uuid:d3af535a-2e5c-4ab2-b5c1-00b617e74646>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
{-# LANGUAGE GeneralizedNewtypeDeriving, PackageImports #-} {- | This module is an alternative version of "Control.Monad.Par" in which the `Par` type provides `IO` operations, by means of `liftIO`. The price paid is that only `runParIO` is available, not the pure `runPar`. This module uses the same default scheduler as "Control.Monad.Par", and tasks scheduled by the two can share the same pool of worker module Control.Monad.Par.IO ( ParIO, P.IVar, runParIO -- And instances! -- import qualified Control.Monad.Par as P -- import qualified Control.Monad.Par.Scheds.Trace as P -- import qualified Control.Monad.Par.Scheds.TraceInternal as TI import qualified Control.Monad.Par.Scheds.DirectInternal as PI import qualified Control.Monad.Par.Scheds.Direct as P import Control.Monad.Par.Class import Control.Applicative import "mtl" Control.Monad.Trans (lift, liftIO, MonadIO) -- | A wrapper around an underlying Par type which allows IO. newtype ParIO a = ParIO { unPar :: PI.Par a } deriving (Functor, Applicative, Monad, ParFuture P.IVar, ParIVar P.IVar) -- | A run method which allows actual IO to occur on top of the Par -- monad. Of course this means that all the normal problems of -- parallel IO computations are present, including nondeterminsm. -- A simple example program: -- > runParIO (liftIO$ putStrLn "hi" :: ParIO ()) runParIO :: ParIO a -> IO a runParIO = P.runParIO . unPar instance MonadIO ParIO where liftIO io = ParIO (PI.Par (lift$ lift io))
{"url":"http://hackage.haskell.org/package/monad-par-0.3.4.3/docs/src/Control-Monad-Par-IO.html","timestamp":"2014-04-20T06:34:15Z","content_type":null,"content_length":"7708","record_id":"<urn:uuid:f10a18ab-954b-471f-a05b-4580ee0a76fb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Oakton Algebra Tutor Find an Oakton Algebra Tutor ...I am familiar with many of the student oriented educational sites available on the web, and have worked with students on the Jefferson Labs, NovaNET and Khanacademy computer assisted learning systems. At this stage of my tutoring career, I have assisted students covering the range from top-of-... 13 Subjects: including algebra 1, algebra 2, physics, chemistry ...As a private tutor, I have accumulated over 750 hours assisting high school, undergraduate, and returning adult students. And as a research scientist, I am a published author and have conducted research in nonlinear dynamics and ocean acoustics. My teaching focuses on understanding concepts, connecting different concepts into a coherent whole and competency in problem solving. 9 Subjects: including algebra 1, algebra 2, physics, geometry ...I took classes covering many periods of music including Ancient Greek, medieval, renaissance, baroque, classical, romantic, and modern/20th century. I took classes studying and analyzing significant works in many of these eras. I also took chamber music and opera history and literature courses. 11 Subjects: including algebra 1, algebra 2, public speaking, writing ...Overall, I feel as if I have obtained intermediate proficiency in the language, and I would feel comfortable tutoring a beginning level student. As a self-employed college counselor, I have worked intensively with high school students throughout all stages of the college search and application p... 16 Subjects: including algebra 1, reading, English, French Throughout my life, I have held a strong commitment to academics. After attending an elite undergraduate college, I continued on to earn a Master's of Science at an Ivy League university. I love to learn, and I love to share my passion for learning with others. 14 Subjects: including algebra 1, reading, writing, GRE
{"url":"http://www.purplemath.com/Oakton_Algebra_tutors.php","timestamp":"2014-04-18T16:27:59Z","content_type":null,"content_length":"23795","record_id":"<urn:uuid:bc87bd34-43c6-43d9-b7ff-f731308c692c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Cloning quantum information from the past January 8, 2014 It is theoretically possible for time travelers to copy quantum data from the past, according to three scientists in a recent paper in Physical Review Letters. It all started when David Deutsch, a pioneer of quantum computing and a physicist at Oxford, came up with a simplified model of time travel to deal with the Grandfather paradox*. He solved the paradox originally using a slight change to quantum theory, proposing that you could change the past as long as you did so in a self-consistent manner. “Meaning that, if you kill your grandfather, you do it with only probability one-half,” said PRL co-author Mark Wilde, an LSU assistant professor with a joint appointment in the Department of Physics and Astronomy and with the Center for Computation and Technology. “Then, he’s dead with probability one-half, and you are not born with probability one-half, but the opposite is a fair chance. You could have existed with probability one-half to go back and kill your grandfather.” No-cloning theorem But the Grandfather paradox is not the only complication with time travel. Another problem is the “no-cloning theorem,” or the no “subatomic Xerox-machine” theorem, known since 1982. This theorem, which is related to the fact that one cannot copy quantum data at will, is a consequence of Heisenberg’s Uncertainty Principle, by which one can measure either the position of a particle or its momentum, but not both with unlimited accuracy. According to the Uncertainty Principle, it is thus impossible to have a subatomic Xerox-machine that would take one particle and spit out two particles with the same position and momentum — because then you would know too much about both particles at once. “We can always look at a paper, and then copy the words on it. That’s what we call copying classical data,” Wilde said. “But you can’t arbitrarily copy quantum data, unless it takes the special form of classical data. This no-cloning theorem is a fundamental part of quantum mechanics — it helps us reason how to process quantum data. If you can’t copy data, then you have to think of everything in a very different way.” But what if a Deutschian closed timelike curve did allow for copying of quantum data to many different points in space? Deutsch suggested in his paper that it should be possible to violate the fundamental no-cloning theorem of quantum mechanics. Now, Wilde and collaborators at the University of Southern California and the Autonomous University of Barcelona have advanced Deutsch’s 1991 The new approach allows for a particle, or a time traveler, to make multiple loops back in time — something like Bruce Willis’ travels in the Hollywood film “Looper.” “That is, at certain locations in spacetime, there are wormholes such that, if you jump in, you’ll emerge at some point in the past,” Wilde said. “To the best of our knowledge, these time loops are not ruled out by the laws of physics. But there are strange consequences for quantum information processing if their behavior is dictated by Deutsch’s model.” A single looping path back in time, a time spiral of sorts, behaving according to Deutsch’s model, for example, would have to allow for a particle entering the loop to remain the same each time it passed through a particular point in time. In other words, the particle would need to maintain self-consistency as it looped back in time. “In some sense, this already allows for copying of the particle’s data at many different points in space,” Wilde said, “because you are sending the particle back many times. It’s like you have multiple versions of the particle available at the same time. You can then attempt to read out more copies of the particle, but the thing is, if you try to do so as the particle loops back in time, then you change the past.” To be consistent with Deutsch’s model, which holds that you can only change the past as long as you can do it in a self-consistent manner, Wilde and colleagues had to come up with a solution that would allow for a looping curve back in time, and copying of quantum data based on a time traveling particle, without disturbing the past. “That was the major breakthrough, to figure out what could happen at the beginning of this time loop to enable us to effectively read out many copies of the data without disturbing the past,” Wilde said. “It just worked.” However, there is still some controversy over interpretations of the new approach, Wilde said. In one instance, the new approach may actually point to problems in Deutsch’s original closed timelike curve model. “If quantum mechanics gets modified in such a way that we’ve never observed should happen, it may be evidence that we should question Deutsch’s model,” Wilde said. “We really believe that quantum mechanics is true, at this point. And most people believe in a principle called Unitarity in quantum mechanics. But with our new model, we’ve shown that you can essentially violate something that is a direct consequence of Unitarity. To me, this is an indication that something weird is going on with Deutsch’s model. However, there might be some way of modifying the model in such a way that we don’t violate the no-cloning theorem.” Other researchers argue that Wilde’s approach wouldn’t actually allow for copying quantum data from an unknown particle state entering the time loop because nature would already “know” what the particle looked like, as it had traveled back in time many times before. Consequences of being able to copy quantum data from the past But whether or not the no-cloning theorem can truly be violated as Wilde’s new approach suggests, the consequences of being able to copy quantum data from the past are significant. Systems for secure Internet communications, for example, will likely soon rely on quantum security protocols that could be broken or “hacked” if Wilde’s looping time travel methods were correct. “If an adversary, if a malicious person, were to have access to these time loops, then they could break the security of quantum key distribution,” Wilde said. “That’s one way of interpreting it. But it’s a very strong practical implication because the big push of quantum communication is this secure way of communicating. We believe that this is the strongest form of encryption that is out there because it’s based on physical principles.” Physicists and computer scientists are working on securing critical and sensitive communications using the principles of quantum mechanics. Such encryption is believed to be unbreakable — that is, as long as hackers don’t have access to Wilde’s looping closed timelike curves. “This ability to copy quantum information freely would turn quantum theory into an effectively classical theory in which, for example, classical data thought to be secured by quantum cryptography would no longer be safe,” Wilde said. “It seems like there should be a revision to Deutsch’s model which would simultaneously resolve the various time travel paradoxes but not lead to such striking consequences for quantum information processing. However, no one yet has offered a model that meets these two requirements. This is the subject of open research.” * In the Grandfather paradox, a time traveler faces the problem that if he kills his grandfather back in time, then he himself is never born, and consequently is unable to travel through time to kill his grandfather, and so on. Some theorists have used this paradox to argue that it is actually impossible to change the past. The question is, how would you have existed in the first place to go back in time and kill your grandfather?” Abstract of Physical Review Letters paper We show that it is possible to clone quantum states to arbitrary accuracy in the presence of a Deutschian closed timelike curve (D-CTC), with a fidelity converging to one in the limit as the dimension of the CTC system becomes large—thus resolving an open conjecture [Brun et al., Phys. Rev. Lett. 102, 210402 (2009)]. This result follows from a D-CTC-assisted scheme for producing perfect clones of a quantum state prepared in a known eigenbasis, and the fact that one can reconstruct an approximation of a quantum state from empirical estimates of the probabilities of an informationally complete measurement. Our results imply more generally that every continuous, but otherwise arbitrarily nonlinear map from states to states, can be implemented to arbitrary accuracy with D-CTCs. Furthermore, our results show that Deutsch’s model for closed timelike curves is in fact a classical model, in the sense that two arbitrary, distinct density operators are perfectly distinguishable (in the limit of a large closed timelike curve system); hence, in this model quantum mechanics becomes a classical theory in which each density operator is a distinct point in a classical phase Comments (16) 1. January 11, 2014 by cybervigilante Time is like a filmstrip. Each frame does not Cause the next one. It just seems to, due to the velocity of the film. Since there is really no causation, the grandfather paradox is perfectly 2. January 10, 2014 by timetc Hold on! all this granfather paradox and linear time streams isnt it nulified if the many worlds theory is corect? kill your grandfather and you have a new world line created where he did not exist. Just like the world line where Hitler won the war because the RAF did not bomb Berlin by accident in 1940 □ January 10, 2014 by Gorden Russell But the many worlds hypothesis begins with every choice you make now in the present. If you travel back in time to change something, that negates a decision that was made, thereby erasing that choice of a world. 3. January 8, 2014 by nfordkai Time is not an entity. It is a system of measuring the rate of change of entities. Just like math is not an entity but a system of defining characteristics of things. You cannot travel through a system of measurement; you can only travel through the things being measured. So when one talks about traveling back (say, 10 years exactly) in time, every particle in the universe would have to revert back to exactly the location, direction of change, etc., that it had at exactly 10 years ago. The laws in our universe do not allow that. □ January 8, 2014 by J2014A57b That seems to make sense. On top of that I thought we were dealing with only 6 percent of the universe, you see, like some say my brain—if we change the parameters of things measured by time and known space will we also change those of things we are just learning about and know next to nothing of like dark energy and matter? We may be even changing the state of consciousness itself of which we know so little, even if it is or not embedded in that 94 percent of the universe we almost totally ignore if we really want to think about it. Quantum encryption I hope is safe from this angle at least. Besides Quantum encryption has one gigantic flaw like any encryption- who or what is at either end of what’s encrypted. That would be a hell of a lot easier to □ January 8, 2014 by Exovane Your thinking of Time/Space as a giant pool that must all be changed at once in order to achieve the goal of Teleportation through it. Consider the effect gravity has on the Universe. If you create a point in the universe that has the maximum amount of density possible in a single space, do you shrink the rest of the universe around that object? Only slightly, but yes. What if you could control approximately HOW the universe shrank around the object? Down to energy state of the particles. In effect you have CHANGED the entire universe to whatever state you wanted it to be. Are the calculations incredibly complex and nearly impossible? – yes. But that doesn’t make it any less real. □ January 10, 2014 by beephatron People used to think that, but if I start walking, the rest of the universe’s actually changes, very slightly, in my timeframe. If time is a system of measuring, it’s a very fluid and flexible system, that changes drastically depending on the relative velocity of the measurer, and the measured. ☆ January 10, 2014 by nfordkai Time (a system of measuring) is a very fluid and flexible system because it is whatever we define it and agree on it to be, whether it is the speed with which the earth circles the sun or the a rate of radioactive decay. If you travel east, the sun appears to move faster than if you travel west and your “day” becomes shorter. You can even travel back across the international date line where the day becomes “yesterday”, but that does not mean that you are literally travelling back in time to meet yourself, right? ○ January 11, 2014 by beephatron In absolute time terms you will be younger than everyone else when you get back from your trip, because time moved faster for them than it did for you. How fast time passes for you depends on your velocity. You will believe you travelled into the future. Noone knows if you can travel into the past, or if it even exists. 4. January 8, 2014 by cloudswrest How about copying some deceased’s biological image from the past, from shortly before he died (or his last neurologically healthy point) and then reinstantiate him in the present. No more need for cryogenic preservation. Just reach back in time and grab a couple snapshots! □ January 8, 2014 by Gorden Russell I’ve been talking about that for some time, cloudswrest. You need to scan your loved one’s connectome and take a DNA sample and then bring it all back to clone a new body while installing the old mind. Years ago I called this a “chron-scanned clone” and I can’t think of anything better to call it now that I’m old. 5. January 8, 2014 by dougw659 If you read Deutsch’s books “The Fabric of Reality” and “The Beginning of Infinity” it will help explain a lot. They are both aimed at ‘non-physicists’, i was able to get through them easily, and have to say there is a lot to like in his theories. Basically, for this issue, what Deustch says (apologies for the layman’s interpretation…) is that there are a huge number (not infinite) of universes where the various states or outcomes of Quantum probability exist, and you can use probability to determine how often specific states occur. So, in the case of the Grandfather killing, in half of the resulting Universes your grandfather would be killed, but in half he would live, perhaps because in those universes you never existed to go back and kill him or maybe you just ‘missed’ when you tried to kill him, etc. The big idea behind what Deustch proposes is that we have actual real ‘proof’ even today that these other universes exist. That proof can be seen in the interference between multiple quantum state particles (i.e. the photon in the dual-slit experiment is clearly interacting with another photon….so where is that photon?) and in the proven use of quantum computers (as small as that has been to date), where calculations are actually completed using fewer bits that exist in our universe than possible (so where are the bits that perform the calculation?). □ January 8, 2014 by pablof Thank you Dougw659 that was very helpful. At least now I have an idea of what the article meant and some reading suggestions! Thanks! □ January 10, 2014 by beephatron you could explain wave-particle duality without creating all these universes. It could just be that the psi field randomly coalesces around a point when u measure it like a pool of water forming a dew drop, or some other mundane hidden effect. We just don’t have the tools to come to any conclusions. 6. January 8, 2014 by pablof Since I’m not a physicist I need some help understanding much of this…can someone explain this in layman’s terms?: “Meaning that, if you kill your grandfather, you do it with only probability one-half, Then, he’s dead with probability one-half, and you are not born with probability one-half, but the opposite is a fair chance. You could have existed with probability one-half to go back and kill your grandfather.” □ January 8, 2014 by Gorden Russell That’s like the T-shirt with the wanted poster that reads: “Wanted Schrodinger’s Cat…Dead and Alive”
{"url":"http://www.kurzweilai.net/time-warp-lsu-researcher-shows-possibility-of-cloning-quantum-information-from-the-past","timestamp":"2014-04-19T09:57:29Z","content_type":null,"content_length":"55374","record_id":"<urn:uuid:3690b914-c69e-4034-8f72-795c1482cb98>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Avon, MA Statistics Tutor Find an Avon, MA Statistics Tutor ...I am so effective that many of my students call me the "miracle tutor"! That is why I am one of the busiest tutors in Massachusetts and the United States (top 1% across the country). I provide 1-on-1 instruction in all levels of math and English, including test preparation (SAT, GMAT, LSAT, GR... 67 Subjects: including statistics, reading, English, calculus I am a motivated tutor who strives to make learning easy and fun for everyone. My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. 16 Subjects: including statistics, French, elementary math, algebra 1 ...I specifically successfully taught mathematics through Algebra 1 to students with neurochemical disorders (schizophrenia, depression, anxiety), as well as kids with Asperger's and ADD/ADHD, the latter including my own son, who is an honor student in college now. I believe that every child can su... 90 Subjects: including statistics, English, reading, writing I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment... 14 Subjects: including statistics, calculus, SPSS, probability ...I help struggling students learn more than they ever believed possible with extra help tailored to their needs. Students with Asperger’s and limited English skills thrive in my classroom. I challenge gifted students to keep them motivated. 17 Subjects: including statistics, geometry, precalculus, trigonometry Related Avon, MA Tutors Avon, MA Accounting Tutors Avon, MA ACT Tutors Avon, MA Algebra Tutors Avon, MA Algebra 2 Tutors Avon, MA Calculus Tutors Avon, MA Geometry Tutors Avon, MA Math Tutors Avon, MA Prealgebra Tutors Avon, MA Precalculus Tutors Avon, MA SAT Tutors Avon, MA SAT Math Tutors Avon, MA Science Tutors Avon, MA Statistics Tutors Avon, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/avon_ma_statistics_tutors.php","timestamp":"2014-04-17T10:47:40Z","content_type":null,"content_length":"24081","record_id":"<urn:uuid:c8be6af0-1f30-4169-88e8-3e4791a9426b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Java Binary Tree (Beginners)? June 12th, 2011, 10:57 PM Java Binary Tree (Beginners)? I'm suppose to add up all the number that are in a node of a binary tree. Is this correct??? This is just the algorithm::: Code : static int weight(BinaryTree t, Node v) if t.size == 0 then //if BinaryTree t is an empty tree return 0 if t.hasLeft(v) then /** if Node v of BinaryTree t doesn't have a left children, it means Node v is an external node. */ return v.element return weight(t, t.left(v)) + weight(t, t.right(v)) June 13th, 2011, 12:36 AM Re: Java Binary Tree (Beginners)? Code Pseudo: if t.hasLeft(v) then /** if Node v of BinaryTree t doesn't have a left children, it means Node v is an external node. */ return v.element return weight(t, t.left(v)) + weight(t, t.right(v)) It looks like you're assuming that if a node doesn't have a left child, it must not have a right child. That assumption is not necessarily true for binary trees. I would suggest implementing a true in-order traversal: 1. Check to see if there's a left child. If there is, get the weight of the left child. Set the calculated weight of this node to that value. Otherwise, set the calculated weight of this node to 2. Add the value of the current node to the calculated weight of this node. 3. Check to see if there's a right child. If there is, add the weight of the right child to the calculated weight of this node. 4. return the calculated weight. June 13th, 2011, 04:16 PM Re: Java Binary Tree (Beginners)? I would suggest implementing a true in-order traversal: 1. Check to see if there's a left child. If there is, get the weight of the left child. Set the calculated weight of this node to that value. Otherwise, set the calculated weight of this node to 2. Add the value of the current node to the calculated weight of this node. 3. Check to see if there's a right child. If there is, add the weight of the right child to the calculated weight of this node. 4. return the calculated weight. I was about to do it your way, but I notice that in my book, there's no method t.hasRight(v) and t.right(v) . It only checks for the left children. So, I guess I have to stick to my original code. Is my code on my first post correct? Thank you for your help. June 13th, 2011, 06:31 PM Re: Java Binary Tree (Beginners)? Only if the assumption I said holds. Actually, the assumption you're making is: If and only if node v has a left child, then there must be a right child. I can think of several scenarios where this is not true. Consider the tree with 2 elements: There is no possible way the above assumption could be true, since one of these elements must be the root node, and the other must be either the left child or the right child, but not both. I would suggest thoroughly reading your book to make sure that this kind of scenario will not happen. If this scenario can happen, then your book is wrong (don't worry, it happens). In the real world, I can tell you that this assumption is almost always a bad assumption. June 18th, 2011, 04:34 PM Re: Java Binary Tree (Beginners)? I think you're right. So, I changed my code so that the base case meets your requirement. So my code right??? Code : // returns the weight of node v static int weight(BinaryTree t, Node v) if t.size == 0 then // base case: if BinaryTree t is an empty tree return 0 if not(t.hasLeft(v) or t.hasRight(v)) then // base case: if Node v has no left child and right child, it means Node v is an external node. return v.element return weight(t, t.left(v)) + weight(t, t.right(v)) So my code right??? June 18th, 2011, 06:01 PM Re: Java Binary Tree (Beginners)? mmm, not quite. Now you're making the assumption that the node must either not have any children, or it must have both a left and right child. You must check for each child individually and add the weight from each node separately. The pseudo-code I provided is basically a line-by-line of what the code should do. June 19th, 2011, 10:28 AM Re: Java Binary Tree (Beginners)? Code : if t.hasALeft(v) then left = t.left(v) return weight(t, left) + weight(t, right) if t.hasARight(v) then right = t.Right(v) Would this be right? It look right to me.
{"url":"http://www.javaprogrammingforums.com/%20algorithms-recursion/9291-java-binary-tree-beginners-printingthethread.html","timestamp":"2014-04-19T09:28:55Z","content_type":null,"content_length":"12057","record_id":"<urn:uuid:53e6c775-34ae-4174-acb4-310abe0c1f55>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: In a rescue scene during the 911 crisis, a helicopter of mass 6000kg accelerates upward at the rate of 0.50ms^-1 while lifting a 2500kg piece of concrete what is the tension in the cable attaching the concrete to the helocopter... • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50f435ace4b0abb3d8705633","timestamp":"2014-04-21T15:56:02Z","content_type":null,"content_length":"146383","record_id":"<urn:uuid:59820e30-6da0-45db-97cc-6a73721e1f98>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the closure of the orbits of the mean curvature flow compact for a finite time? up vote 2 down vote favorite Question: Suppose $N$ is an $n$ dimensional immersed submanifold in a complete, non-compact manifold $M$, with bounded geometry. Suppose the mean curvature flow $F:N\times [0,q)\rightarrow M$ satisfies $$ \begin{split} \frac{d}{dt}F&=\vec{H};\\ F(p,0)&=F_{0}; \end{split} $$ If $F(p,t)$ exists for $t\in [0,q)$, is there any possible the closure of $F(N\times[0,t))$ unbounded in $M$. In other words, can the mean curvature flow go to infinity for a finite time? I didn't find the contra_example. I hope the answer is yes. In Euclidean space, definitely this is ture? Is any one know any related reference about this question? You are very appreciated if you can offer some opinions or information. add comment 1 Answer active oldest votes I guess you assume that $F_0$ is bounded. In this case the answer is "yes". Fix a point $p$. Fix small $\varepsilon>0$ so that principle curvatures of $\varepsilon$ spheres in $M$ are uniformly bounded. (It is possible since $M$ has bounded geometry.) Denote by $S_r$ the sphere with center $p$ and radius $r$ and let $\Sigma_r$ be the inward $\varepsilon$-equidistant to $S_r$. up vote 2 down vote Note that for $r>2\cdot\varepsilon$ there is a fixed upper bound for principle curvatures of $\Sigma_r$. Therefore $\ell(t)=\max_{x\in F_t}\{\mathop{\rm dist}_px\}$ grows at most P.S. I used that $M$ has bounded geometry in an essential way, but I am not sure if it is necessary. add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/115353/is-the-closure-of-the-orbits-of-the-mean-curvature-flow-compact-for-a-finite-t?sort=newest","timestamp":"2014-04-18T18:53:10Z","content_type":null,"content_length":"51724","record_id":"<urn:uuid:56bc4823-2972-46ab-97cb-f98a7e65a586>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
probability Mr. Smith will miss work April 13th 2013, 02:54 AM #1 Junior Member Jan 2012 Can you hep me with this? Mr. Smith can drive to work along the shortest path or bypass. He drives the shortest path in 36% cases. The probabilty he will get stuck in traffic jam on the shortest road is 20%, in addition in the case of traffic jam he can miss the work with the probability of 10%. The probability to get stuck in traffic on bypass road is 8%, in addition after getting out of traffic jam, he can miss work with the probability of 24%. a) what is the probability, Mr. Smith will miss the work? b) If Mr. Smith misses work, what is the probability, he was driving bypass road? Do I have to use full probability, Bayse formula or something else? Re: probability Mr. Smith will miss work Can you hep me with this? Mr. Smith can drive to work along the shortest path or bypass. He drives the shortest path in 36% cases. The probabilty he will get stuck in traffic jam on the shortest road is 20%, in addition in the case of traffic jam he can miss the work with the probability of 10%. The probability to get stuck in traffic on bypass road is 8%, in addition after getting out of traffic jam, he can miss work with the probability of 24%. a) what is the probability, Mr. Smith will miss the work? b) If Mr. Smith misses work, what is the probability, he was driving bypass road? Do I have to use full probability, Bayse formula or something else? Law of total probability is sufficient. You would like $P(\overline{W})$ where W is the event that Mr.Smith shows to work. Let S be the event that Mr.Smith takes the shortest path. $P(\overline{W}) = P(\overline{W} | S) P(S) + P(\overline{W} | \overline{S}) P(\overline{S})$ Now, $P(\overline{W} | S) = 0.2(0.1)$ and $P(\overline{W} | \overline{S}) = 0.08(0.24)$, and I'll let you take it from here. Re: probability Mr. Smith will miss work Last edited by Kiiefers; April 13th 2013 at 03:49 AM. Re: probability Mr. Smith will miss work Law of total probability is sufficient. You would like $P(\overline{W})$ where W is the event that Mr.Smith shows to work. Let S be the event that Mr.Smith takes the shortest path. $P(\overline{W}) = P(\overline{W} | S) P(S) + P(\overline{W} | \overline{S}) P(\overline{S})$ Now, $P(\overline{W} | S) = 0.2(0.1)$ and $P(\overline{W} | \overline{S}) = 0.08(0.24)$, and I'll let you take it from here. So it will be like this? Re: probability Mr. Smith will miss work Yes. That looks correct. April 13th 2013, 03:22 AM #2 Mar 2013 BC, Canada April 13th 2013, 03:46 AM #3 Junior Member Jan 2012 April 13th 2013, 03:56 AM #4 Junior Member Jan 2012 April 13th 2013, 07:42 AM #5 Mar 2013 BC, Canada
{"url":"http://mathhelpforum.com/advanced-statistics/217372-probability-mr-smith-will-miss-work.html","timestamp":"2014-04-16T20:15:21Z","content_type":null,"content_length":"41544","record_id":"<urn:uuid:e8ffa008-c2cf-4ccd-813d-a6756dad287c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
Topology question? waht, you seem to need to practice reading definitions and using them directly. this problem is almost completely trivial logically, so not getting it may mean you have a gap in understanding doing you might want to consult some elementary proof books, like An introduction to mathematical thinking, or Principles of mathematics, or even the logic chapter of Harold Jacobs excellent high school book Geometry, since many high schools no longer offer such fine geometry training including proofs. I am also puzzled that you find yourself in a course like Munkres without this training. Does your school offer a proofs course which you may have missed? If so, you might try that first. Or do they just think plunging right into Munkres is sufficient practice in making proofs? You really need basic rules of mathematical logic first. best wishes.
{"url":"http://www.physicsforums.com/showthread.php?t=83711","timestamp":"2014-04-19T09:37:49Z","content_type":null,"content_length":"35074","record_id":"<urn:uuid:25a2cfd4-ec83-40fd-af6d-3b2868110be2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Walnut Park, CA Algebra Tutor Find a Walnut Park, CA Algebra Tutor ...One thing I always practice in my classroom is starting from a level where the students can understand, and moving forward from there. I believe that building a good foundation is the key to success in any subject. I have come up with many different tricks for helping students remember key idea... 11 Subjects: including algebra 2, algebra 1, physics, geometry ...Thanks and feel free to contact me and ask questions!As a dual citizen of South Korean and the United States, I have lived in both countries and have both English and Korean as my first language. I travel to Korea frequently as I have family there. I also have experience teaching Korean to American business executives and believe that I am qualified to to teach Korean. 25 Subjects: including algebra 1, English, reading, writing I consider myself a natural teacher having inherited the love of teaching from my parents, both educators. I am very patient, and good at explaining difficult concepts to those unfamiliar with them. I am also good with young children. 26 Subjects: including algebra 1, algebra 2, reading, English ...In high school, I was awarded a four-year full-tuition scholarship to The Catholic University of America, where I earned an M.A. in rhetoric and composition and a B.A. in literature. I graduated magna cum laude and was designated a University Scholar for my excellence in the honors program. I a... 25 Subjects: including algebra 1, algebra 2, English, geometry Hello everyone, I'm new to the tutoring business. I've had experience teaching my college sister math so I felt like I can help other students with math or history or fitness since I enjoy all three. I as an individual feel like I can help any one from K-9th grade since I'm still a currently a senior in my last year in high school. 23 Subjects: including algebra 1, algebra 2, Spanish, reading Related Walnut Park, CA Tutors Walnut Park, CA Accounting Tutors Walnut Park, CA ACT Tutors Walnut Park, CA Algebra Tutors Walnut Park, CA Algebra 2 Tutors Walnut Park, CA Calculus Tutors Walnut Park, CA Geometry Tutors Walnut Park, CA Math Tutors Walnut Park, CA Prealgebra Tutors Walnut Park, CA Precalculus Tutors Walnut Park, CA SAT Tutors Walnut Park, CA SAT Math Tutors Walnut Park, CA Science Tutors Walnut Park, CA Statistics Tutors Walnut Park, CA Trigonometry Tutors Nearby Cities With algebra Tutor August F. Haw, CA algebra Tutors Baldwin Hills, CA algebra Tutors Bicentennial, CA algebra Tutors Broadway Manchester, CA algebra Tutors Dockweiler, CA algebra Tutors Firestone Park, CA algebra Tutors Green, CA algebra Tutors Greenmead, CA algebra Tutors Hancock, CA algebra Tutors Hollyglen, CA algebra Tutors Huntington Park algebra Tutors Lennox, CA algebra Tutors Rosewood, CA algebra Tutors Sanford, CA algebra Tutors South Gate algebra Tutors
{"url":"http://www.purplemath.com/Walnut_Park_CA_Algebra_tutors.php","timestamp":"2014-04-19T19:45:26Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:2f612960-f971-46ec-9277-5251de1e8883>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Binomial_coefficient 0: 1 1: 1 1 2: 1 2 1 3: 1 3 3 1 4: 1 4 6 4 1 5: 1 5 10 10 5 1 6: 1 6 15 20 15 6 1 7: 1 7 21 35 35 21 7 1 8: 1 8 28 56 70 56 28 8 1 Row number n contains the numbers $tbinom n k$ for k = 0,…,n. It is constructed by starting with ones at the outside and then always adding two adjacent numbers and writing the sum directly underneath. This method allows the quick calculation of binomial coefficients without the need for fractions or multiplications. For instance, by looking at row number 5 of the triangle, one can quickly read off that (x + y)^5 = 1 x^5 + 5 x^4y + 10 x^3y^2 + 10 x^2y^3 + 5 x y^4 + 1 y^5. The differences between elements on other diagonals are the elements in the previous diagonal, as a consequence of the recurrence relation (3) above. In the 1303 AD treatise Precious Mirror of the Four Elements, Zhu Shijie mentioned the triangle as an ancient method for evaluating binomial coefficients indicating that the method was known to Chinese mathematicians five centuries before Pascal. Combinatorics and statistics Binomial coefficients are of importance in combinatorics, because they provide ready formulas for certain frequent counting problems: □ There are $tbinom n k$ ways to choose k elements from a set of n elements. See Combination. □ There are $tbinom \left\{n+k-1\right\}k$ ways to choose k elements from a set of n if repetitions are allowed. See Multiset. □ There are $tbinom \left\{n+k\right\} k$strings containing k ones and n zeros. □ There are $tbinom \left\{n+1\right\} k$ strings consisting of k ones and n zeros such that no two ones are adjacent. □ The Catalan numbers are $frac \left\{tbinom\left\{2n\right\}n\right\}\left\{n+1\right\}.$ □ The binomial distribution in statistics is $tbinom n k p^k \left(1-p\right)^\left\{n-k\right\} !.$ □ The formula for a Bézier curve. Formulas involving binomial coefficients When n is an integer $tbinom n k= tbinom n \left\{n-k\right\},qquadqquad\left(4\right)$ This follows from (2) by using (1 + x)^n = x^n·(1 + x^−1)^n. It is reflected in the symmetry of Pascal's triangle. Another formula is $sum_\left\{k=0\right\}^n tbinom n k = 2^n, qquadqquad\left(5\right)$ it is obtained from (2) using x = 1. This is equivalent to saying that the elements in one row of Pascal's triangle always add up to two raised to an integer power. A combinatorial interpretation of this fact is given by counting subsets of size 0, size 1, size 2, and so on up to size n of a set S of n elements. Since we count the number of subsets of size i for 0 ≤ i ≤ n, this sum must be equal to the number of subsets of S, which is known to be 2^n. The formula $sum_\left\{k=1\right\}^n k tbinom n k = n 2^\left\{n-1\right\} qquad\left(6\right)$ follows from (2), after differentiating with respect to x and then substituting x = 1. $sum_j tbinom m j tbinom\left\{n-m\right\}\left\{k-j\right\} = tbinom n k qquad \left(7a\right)$ is found by expanding (1 + x)^m (1 + x)^n−m = (1 + x)^n with (2). As $tbinom n k$ is zero if k > n, the sum is finite for integer n and m. Equation (7a) generalizes equation (3). It holds for arbitrary, complex-valued $m$ and $n$, the Chu-Vandermonde identity. A related formula is $sum_m tbinom m j tbinom \left\{n-m\right\}\left\{k-j\right\}= tbinom \left\{n+1\right\}\left\{k+1\right\}. qquad \left(7b\right)$ While equation (7a) is true for all values of m, equation (7b) is true for all values of j. From expansion (7a) using n=2m, k = m, and (4), one finds $sum_\left\{j=0\right\}^m tbinom m j ^2 = tbinom \left\{2m\right\} m. qquad \left(8\right)$ Denote by F(n + 1) the Fibonacci numbers. We obtain a formula about the diagonals of Pascal's triangle $sum_\left\{k=0\right\}^n tbinom \left\{n-k\right\} k = F\left(n+1\right). qquad \left(9\right)$ This can be proved by induction using (3). Also using (3) and induction, one can show that $sum_\left\{j=k\right\}^n tbinom j k = tbinom \left\{n+1\right\}\left\{k+1\right\}. qquad \left(10\right)$ Again by (3) and induction, one can show that for k = 0, ... , n−1 $sum_\left\{j=0\right\}^k \left(-1\right)^jtbinom n j = \left(-1\right)^ktbinom \left\{n-1\right\}k qquad\left(11\right)$ as well as $sum_\left\{j=0\right\}^n \left(-1\right)^jtbinom n j = 0 qquad\left(12\right)$ which is itself a special case of the result that for any integer k = 1, ..., n − 1, $sum_\left\{j=0\right\}^n \left(-1\right)^jtbinom n j j^k = 0 qquad\left(13\right)$ which can be shown by differentiating (2) k times and setting x = −1. The infinite series $sum_\left\{j=0\right\}^infty frac 1 \left\{tbinom \left\{n+j\right\}n\right\}=frac n\left\{n-1\right\}qquad\left(14\right)$ is convergent for n ≥ 2. It is the limiting case of the finite sum $sum_\left\{j=0\right\}^k\left\{tbinom \left\{n+j\right\}n\right\}^\left\{-1\right\}=\left(1-n^\left\{-1\right\}\right)^\left\{-1\right\}\left(1-\left\{tbinom \left\{n+k\right\}\left\{n-1\ This formula is proved by mathematical induction on k. Combinatorial identities involving binomial coefficients Some identities have combinatorial proofs: $sum_\left\{k=q\right\}^n tbinom n k tbinom k q = 2^\left\{n-q\right\}tbinom n qqquad\left(15\right)$ for $\left\{n\right\} geq \left\{q\right\}.$ The combinatorial proof goes as follows: the left side counts the number of ways of selecting a subset of $\left[n\right]$ of at least q elements, and marking q elements among those selected. The right side counts the same parameter, because there are $tbinom n q$ ways of choosing a set of q marks and they occur in all subsets that additionally contain some subset of the remaining elements, of which there are $2^\left\{n-q\right\}.$ This reduces to (6) when $q=1.$ The identity (8) also has a combinatorial proof. The identity reads $sum_\left\{k=0\right\}^n tbinom n k ^2 = tbinom \left\{2n\right\} n.$ Suppose you have $2n$ empty squares arranged in a row and you want to mark (select) n of them. There are $tbinom \left\{2n\right\}n$ ways to do this. On the other hand, you may select your n squares by selecting k squares from among the first n and $n-k$ squares from the remaining n squares. This gives $sum_\left\{k=0\right\}^ntbinom n ktbinom n\left\{n-k\right\} = tbinom \left\{2n\right\} n.$ Now apply (4) to get the result. Generating functions The binomial coefficients can also be derived from the labelled case of the Fundamental Theorem of Combinatorial Enumeration. This is done by defining $C\left(n, k\right)$ to be the number of ways of partitioning $\left[n\right]$ into two subsets, the first of which has size k. These partitions form a combinatorial class with the specification $mathfrak\left\{S\right\}_2\left(mathfrak\left\{P\right\}\left(mathcal\left\{Z\right\}\right)\right) =$ mathfrak{P}(mathcal{Z}) mathfrak{P}(mathcal{Z}). Hence the exponential generating function B of the sum function of the binomial coefficients is given by $B\left(z\right) = exp\left\{z\right\} exp\left\{z\right\} = exp\left(2z\right),.$ This immediately yields $sum_\left\{k=0\right\}^\left\{n\right\} \left\{n choose k\right\} = n! \left[z^n\right] exp \left(2z\right) = 2^n,$ as expected. We mark the first subset with $mathcal\left\{U\right\}$ in order to obtain the binomial coefficients themselves, giving $mathfrak\left\{P\right\}\left(mathcal\left\{U\right\} ; mathcal\left\{Z\right\}\right) mathfrak\left\{P\right\}\left(mathcal\left\{Z\right\}\right).$ This yields the bivariate generating function $B\left(z, u\right) = exp uz exp z,.$ Extracting coefficients, we find that $\left\{n choose k\right\} = n! \left[u^k\right] \left[z^n\right] exp uz exp z =$ n! [z^n] frac{z^k}{k!} exp z frac{n!}{k!} [z^{n-k}] exp z = frac{n!}{k! , (n-k)!}, again as expected. This derivation closely parallels that of the Stirling numbers of the first and second kind, motivating the binomial-style notation that is used for these numbers. Divisors of binomial coefficients The prime divisors of $tbinom n k$ can be interpreted as follows: if p is a prime number and p^r is the highest power of p which divides $tbinom n k$, then r is equal to the number of natural numbers j such that the fractional part of k/p^j is bigger than the fractional part of n/p^j. In particular, $tbinom n k$ is always divisible by n/gcd(n,k). A somewhat surprising result by David Singmaster (1974) is that any integer divides almost all binomial coefficients. More precisely, fix an integer d and let f(N) denote the number of binomial coefficients $tbinom n k$ with n < N such that d divides $tbinom n k$. Then $lim_\left\{Ntoinfty\right\} frac\left\{f\left(N\right)\right\}\left\{N\left(N+1\right)/2\right\} = 1.$ Since the number of binomial coefficients $tbinom n k$ with n < N is N(N+1) / 2, this implies that the density of binomial coefficients divisible by d goes to 1. Bounds for binomial coefficients The following bounds for $tbinom n k$ hold: $left\left(frac\left\{n\right\}\left\{k\right\}right\right)^k le \left\{n choose k\right\} le frac\left\{n^k\right\}\left\{k!\right\} le left\left(frac\left\{ncdot e\right\}\left\{k\right\}right\ Generalization to multinomials Binomial coefficients can be generalized to multinomial coefficients. They are defined to be the number: $\left\{nchoose k_1,k_2,ldots,k_r\right\} =frac\left\{n!\right\}\left\{k_1!k_2!cdots k_r!\right\}$ While the binomial coefficients represent the coefficients of (x+y)^n, the multinomial coefficients represent the coefficients of the polynomial (x[1] + x[2] + ... + x[r])^n. See multinomial theorem. The case r = 2 gives binomial coefficients: $\left\{nchoose k_1,k_2\right\}=\left\{nchoose k_1, n-k_1\right\}=\left\{nchoose k_1\right\}= \left\{nchoose k_2\right\}$ The combinatorial interpretation of multinomial coefficients is distribution of n distinguishable elements over r (distinguishable) containers, each containing exactly k[i] elements, where i is the index of the container. Multinomial coefficients have many properties similar to these of binomial coefficients, for example the recurrence relation: $\left\{nchoose k_1,k_2,ldots,k_r\right\} =\left\{n-1choose k_1-1,k_2,ldots,k_r\right\}+\left\{n-1choose k_1,k_2-1,ldots,k_r\right\}+ldots+\left\{n-1choose k_1,k_2,ldots,k_r-1\right\}$ and symmetry: $\left\{nchoose k_1,k_2,ldots,k_r\right\} =\left\{nchoose k_\left\{sigma_1\right\},k_\left\{sigma_2\right\},ldots,k_\left\{sigma_r\right\}\right\}$ where $\left(sigma_i\right)$ is a permutation of (1,2,...,r). Generalization to negative integers If $k geq 0$, then $\left\{n choose k\right\} = frac\left\{n\left(n-1\right) dots \left(n-k+1\right)\right\}\left\{1 . 2 dots k\right\}= \left(-1\right)^k \left\{-n+k-1 choose k\right\}$ extends to all $n$. The binomial coefficient extends to $k leq 0$ via {n choose k}= begin{cases} (-1)^{n-k} {-k-1 choose n-k} quad mbox{if } n geq k, (-1)^{n-k} {-k-1 choose -n-1} quad mbox{if } n leq -1. end{cases} Notice in particular, that $\left\{n choose k\right\}=0 quad mbox\left\{iff \right\}$ begin{cases} n geq 0 mbox{ and } n < k, n geq 0 mbox{ and } k < 0, n < 0 mbox{ and } n < k < 0. end{cases} This gives rise to the Pascal Hexagon or Pascal Windmill. Generalization to real and complex argument The binomial coefficient $\left\{zchoose k\right\}$ can be defined for any complex number z and any natural number k as follows: $\left\{zchoose k\right\} = prod_\left\{n=1\right\}^\left\{k\right\}\left\{z-k+nover n\right\}= frac\left\{z\left(z-1\right)\left(z-2\right)cdots \left(z-k+1\right)\right\}\left\{k!\right\}. qquad \left(14\right)$ This generalization is known as the generalized binomial coefficient and is used in the formulation of the binomial theorem and satisfies properties (3) and (7). Alternatively, the infinite product $\left(-1\right)^k \left\{z choose k\right\}= \left\{-z+k-1 choose k\right\} = frac\left\{1\right\}\left\{Gamma\left(-z\right)\right\} frac\left\{1\right\}\left\{\left(k+1\right)^\left\{z+1\ right\}\right\} prod_\left\{j=k+1\right\} frac\left\{\left(1+frac\left\{1\right\}\left\{j\right\}\right)^\left\{-z-1\right\}\right\}\left\{1-frac\left\{z+1\right\}\left\{j\right\}\right\}$ may be used to generalize the binomial coefficient. This formula discloses that asymptotically $\left\{z choose k\right\} approx frac\left\{\left(-1\right)^k\right\}\left\{Gamma\left(-z\right) k^ \left\{z+1\right\}\right\}$ as $k to infty$. For fixed k, the expression $f\left(z\right)=\left\{zchoose k\right\}$ is a polynomial in z of degree k with rational coefficients. f(z) is the unique polynomial of degree k satisfying f(0) = f(1) = ... = f(k − 1) = 0 and f(k) = 1. Any polynomial p(z) of degree d can be written in the form $p\left(z\right) = sum_\left\{k=0\right\}^\left\{d\right\} a_k \left\{zchoose k\right\}.$ This is important in the theory of difference equations and finite differences, and can be seen as a discrete analog of Taylor's theorem. It is closely related to Newton's polynomial. Alternating sums of this form may be expressed as the Nörlund-Rice integral. In particular, one can express the product of binomial coefficients as such a linear combination: $\left\{xchoose m\right\} \left\{xchoose n\right\} = sum_\left\{k=0\right\}^m \left\{m+n-kchoose k,m-k,n-k\right\} \left\{xchoose m+n-k\right\}$ where the connection coefficients are multinomial coefficients. In terms of labelled combinatorial objects, the connection coefficients represent the number of ways to assign m+n-k labels to a pair of labelled combinatorial objects of weight m and n respectively, that have had their first k labels identified, or glued together, in order to get a new labelled combinatorial object of weight m+n-k. (That is, to separate the labels into 3 portions to be applied to the glued part, the unglued part of the first object, and the unglued part of the second object.) In this regard, binomial coefficients are to exponential generating series what falling factorials are to ordinary generating series. Newton's binomial series Newton's binomial series, named after Sir Isaac Newton, is one of the simplest Newton series: $\left(1+z\right)^\left\{alpha\right\} = sum_\left\{n=0\right\}^\left\{infty\right\}\left\{alphachoose n\right\}z^n = 1+\left\{alphachoose1\right\}z+\left\{alphachoose 2\right\}z^2+cdots.$ The identity can be obtained by showing that both sides satisfy the differential equation (1+z) f'(z) = α f(z). The radius of convergence of this series is 1. An alternative expression is $frac\left\{1\right\}\left\{\left(1-z\right)^\left\{alpha+1\right\}\right\} = sum_\left\{n=0\right\}^\left\{infty\right\}\left\{n+alpha choose n\right\}z^n$ where the identity $\left\{n choose k\right\} = \left(-1\right)^k \left\{k-n-1 choose k\right\}$ is applied. The formula for the binomial series was etched onto Newton's gravestone in Westminster Abbey in 1727. Two real or complex valued arguments The binomial coefficient is generalized to two real or complex valued arguments using gamma function or Beta function via $\left\{x choose y\right\}:= frac\left\{Gamma\left(x+1\right)\right\}\left\{Gamma\left(y+1\right) Gamma\left(x-y+1\right)\right\}= frac\left\{1\right\}\left\{\left(x+1\right) Beta\left This definition inherits these following additional properties from $Gamma$: $\left\{x choose y\right\}= frac\left\{sin \left(y pi\right)\right\}\left\{sin\left(x pi\right)\right\} \left\{-y-1 choose -x-1\right\}= frac\left\{sin\left(\left(x-y\right) pi\right)\right\} \left\{sin \left(x pi\right)\right\} \left\{y-x-1 choose y\right\};$ $\left\{x choose y\right\} cdot \left\{y choose x\right\}= frac\left\{sin\left(\left(x-y\right) pi\right)\right\}\left\{\left(x-y\right) pi\right\}$. Generalization to q-series The binomial coefficient has a q-analog generalization known as the Gaussian binomial. Generalization to infinite cardinals The definition of the binomial coefficient can be generalized to infinite cardinals by defining: $\left\{alpha choose beta\right\} = | \left\{ B subseteq A : |B| = beta \right\} |$ where A is some set with cardinality $alpha$. One can show that the generalized binomial coefficient is well-defined, in the sense that no matter what set we choose to represent the cardinal number $alpha$, $\left\{alpha choose beta\right\}$ will remain the same. For finite cardinals, this definition coincides with the standard definition of the binomial coefficient. Assuming the Axiom of Choice, one can show that $\left\{alpha choose alpha\right\} = 2^\left\{alpha\right\}$ for any infinite cardinal $alpha$. Binomial coefficient in programming languages The notation $\left\{n choose k\right\}$ is convenient in handwriting but inconvenient for typewriters and computer terminals. Many programming languages do not offer a standard subroutine for computing the binomial coefficient, but for example the J programming language uses the exclamation mark: k ! n . Naive implementations, such as the following snippet in C: int choose(int n, int k) { return factorial(n) / (factorial(k) * factorial(n - k)); are prone to overflow errors, severely restricting the range of input values. A direct implementation of the first definition works well: unsigned long long choose(unsigned n, unsigned k) { if (k > n) return 0; if (k > n/2) k = n-k; // faster long double accum = 1; for (unsigned i = 1; i <= k; i++) accum = accum * (n-k+i) / i; return accum + 0.5; // avoid rounding error See also □ This article incorporates material from the following PlanetMath articles, which are licensed under the Text of the GNU Free Documentation License: Binomial Coefficient, Bounds for binomial coefficients, Proof that C(n,k) is an integer, Generalized binomial coefficients □ Knuth, Donald E. (1997). The Art of Computer Programming, Volume 1: Fundamental Algorithms. Third Edition, Addison-Wesley. □ Singmaster, David (1974). "Notes on binomial coefficients. III. Any integer divides almost all binomial coefficients". J. London Math. Soc. (2) 8 555–560. □ Bryant, Victor (1993). Aspects of combinatorics. Cambridge University Press.
{"url":"http://www.reference.com/browse/wiki/Binomial_coefficient","timestamp":"2014-04-16T07:54:56Z","content_type":null,"content_length":"121463","record_id":"<urn:uuid:3b9215ff-00cf-4b20-ba48-05cd2029a437>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Factoring Quadratic Equations -- Splitting the Middle Term August 9th 2011, 03:33 PM #1 Is there a good way to always know how to split the middle term of a quadratic equation like x^2-9x+20=0? It turns out that -5x and -4x "works" but is there a better way to split the term that is less tedious? Re: Factoring Quadratic Equations -- Splitting the Middle Term Note that the constant term is $+$, that means the two factors must have the same sign. But it must be $-$ because the middle term is negative. So what are they? Re: Factoring Quadratic Equations -- Splitting the Middle Term I see! Negative 4 and 5 equal 9 when added, and 20 when multiplied. There are a few cases (like ax^2 + bx + c) where it is not so obvious where to split the middle term. That is especially where I get stumped, since c does not always equal the product of the "split" term. Thats actually what I meant to ask about. Re: Factoring Quadratic Equations -- Splitting the Middle Term BUT $\frac{c}{a}$ is the product and $-\frac{b}{a}$ is the sum. Re: Factoring Quadratic Equations -- Splitting the Middle Term Thanks! Very helpful. August 9th 2011, 03:38 PM #2 August 9th 2011, 03:43 PM #3 August 9th 2011, 03:51 PM #4 August 9th 2011, 04:50 PM #5
{"url":"http://mathhelpforum.com/algebra/185885-factoring-quadratic-equations-splitting-middle-term.html","timestamp":"2014-04-21T10:55:32Z","content_type":null,"content_length":"45862","record_id":"<urn:uuid:bc93916c-215d-42da-b73d-3a3f7320df9a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] ontology Harvey Friedman friedman at math.ohio-state.edu Fri Jan 16 23:06:59 EST 2004 On 1/14/04 1:14 PM, "Thomas Forster" <T.Forster at dpmms.cam.ac.uk> wrote: > An essay question: > Are the finite ordinals the same mathematical objects as the finite > cardinals? Give reasons... [\aleph_0 marks] This is a particular case of an old and heavily discussed issue in f.o.m. It is all too easy to say mundane things about this matter, merely reproducing what has been interminably discussed in the literature and elsewhere for many decades. So let me try to suggest something nonmundane, or arguably nonmundane, about this. Three main approaches to f.o.m., relevant to this matter, come to 1. Make the ontology of mathematics as minimal as possible. Take the position - at least theoretically - that every mathematical object has an immutable objective reality independently of context. (I.e., avoid such approaches as "categorical foundations" which, in its most ambitious forms, view mathematical objects as meaningful only in the context of structures, etc.) All mathematical objects are to be forced to be one of the primitive objects. Define appropriate notions of isomorphism between structures, and prove that simple conditions on structures imply the existence of isomorphisms (often unique isomorphisms). NOTE: In this approach, systems, like all mathematical objects, must themselves be one of the primitive objects, and the simple conditions placed on structures that imply the existence of isomorphisms (often unique) correspond directly to the use of these systems in actual mathematics, and their motivation for being introduced in mathematics. 2. Make the ontology of mathematics as diverse as at all reasonable. Still take the position that every mathematical object has an immutable objective reality independently of context. The idea here is that just about all fundamental mathematical entities are not sets, but rather urelements. So we have a huge number of extra unary predicates (and associated multivariate predicates and operations). One can go to an extreme, where even positive integers are treated not as a kind of integer, but rather as another kind of urelemente. I think that such a wild extreme can be distinguished from more reasonable control of the proliferation of notions. 3. Take the idea that mathematical structures are paramount, and individuals are not. I have never seen this worked out coherently and productively. Where do the structures come from, and where do the objects in the structures come from? Most working mathematicians like this approach at the working level, but readily see that a fully autonomous development is not only unnecessary, but cumbersome to the point of it being quite unclear just how to proceed. So they readily accept, say, the definition of a category as a set of objects together with another set of objects called arrows, etcetera. So they are happy with a set theoretic underpinning of mathematics, and don't want to be bothered with foundational issues, assuming, confidently, that they have been resolved - or at least resolved sufficiently clearly for their purposes. 4. Other approaches, that have not been considered systematically to my knowledge, and which I won't go into until I have something productive to say about them. Obviously 1 is the most common approach to f.o.m., and it serves us very well. There is something particularly attractive - and simplifying!! - about immutable objects of objective character, with one nonlogical relation. I.e., sets and membership (in addition to equality, connectives, I have never seen a fully developed treatment of approach 2, which should include at least working criteria for when we should introduce a new unary relation symbol (new kind of urelement) and when we should not. On the FOM, I had previously discussed a limited form of 2 by taking the ordered ring of real numbers as primitive, and rewriting the axioms of ZFC incorporating this new primitive. I could have instead used the ordered group of real numbers, with a constant for 1, and prove that there is exactly one multiplication operation that makes this into an ordered ring (I think I previously mentioned this on the FOM). Here is what I wrote in >We have variables of only one sort, but with the following 7 >nonlogical symbols (in addition to the logical symbols not, and, or, >implies, iff, forall, therexists, =). >Sets. (Unary predicate symbol). >Membership. (Binary relation symbol). >Ordered pairing (Binary function symbol). >Real numbers. (Unary predicate symbol). >0,1. (Constant symbols). > <. (Binary relation symbol for ordering of reals). >+. (Ternary relation symbol for addition on reals). >1. Everything is exactly one of: a set, an ordered pair, or a real number, >2. Only sets can have an element. >3. If two sets have the same elements then they are equal. >4. <x,y> = <z,w> iff (x = z and y = x). >5. 0,1 are distinct real numbers. >6. +(x,y,z) implies x,y,z are reals. >7. x < y implies x,y are reals. >8. Usual axioms that reals are an ordered group with 0,1,+,<. >9. Every nonempty set of reals bounded above has a least upper bound. >10. The set of all reals numbers exists. >11. Pairing, union, power set, separation, replacement, foundation, choice. >Rationals, integers, natural numbers, are all defined as certain real >numbers. Functions are sets of ordered pairs. >One proves that every sentence is provably equivalent to a sentence >that mentions only epsilon. Note that I didn't treat ordered pairs as urelements. If one wishes to adopt approach 2, one would want finite tuples to be urelements, with appropriate axioms. For that matter, it seems that one would also want infinite sequences as urelements, also with appropriate axioms. So the question is: just what should a fully formalized treatment of 2 look like? It seems that any two competent people would create at least somewhat different fully formalized treatments of 2. *Whereas, any two fully competent people would create the same fully formalized treatments of 1 (at least after some well known further investigations).* How do we establish the robustness of the formalized treatments of 2? Of course, we at least expect that all such systems based on ZFC would be conservative extensions of ZFC. But we would want much sharper relationships between different full formalizations of 2. In the case at hand, we would treat ordinals as urelements, and cardinals also as urelements. We certainly would not have any axioms that imply that any ordinals are cardinals. Do we want to add axioms that imply that no ordinal is a cardinal? That depends on just what general approach we want to take towards a full formalization of 2. Thus we may want to focus only on various full formalizations of 2 such that #any isomorphism between the purely set theoretic parts of any two models extends to an isomorphism between the two models.# Under this idea, the natural thing would be to add an axiom that states that no ordinal is a cardinal. Let me close by saying that I am dubious about any connection between the questions like those raised by Forster and Platonism. Harvey M. Friedman More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-January/007797.html","timestamp":"2014-04-19T17:03:09Z","content_type":null,"content_length":"9981","record_id":"<urn:uuid:d643f2ff-bc37-4bf4-9c20-60a9c578355c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Heat kernel estimates for Laplace operators on metric trees Seminar Room 1, Newton Institute We consider the integral kernel of the semigroup generated by a differential Laplace operator on certain class of infinite metric trees. We will show how the time decay of the heat kernel depends on the geometry of the tree. (This is a joint work with Rupert Frank). The video for this talk should appear here if JavaScript is enabled. If it doesn't, something may have gone wrong with our embedded player. We'll get it fixed as soon as possible.
{"url":"http://www.newton.ac.uk/programmes/AGA/seminars/2010072811151.html","timestamp":"2014-04-20T01:20:07Z","content_type":null,"content_length":"6021","record_id":"<urn:uuid:f937d371-8c0f-45c7-b6c2-78feed0b1d6f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Keep Track of Time in Excel Enter the Current Time To quickly enter the current time in Excel, you can use the keyboard shortcut: Ctrl + Shift + ; • When you start a task, use the shortcut to enter the start time in one cell. • When you finish working on a task, use the shortcut in an adjacent cell, to enter the end time. In the screen shot below, the Task start times are entered in column B and the end times are entered in column C. Calculate the Elapsed Time Based on the start and end times, you can calculated the elapsed time, by subtracting the start time from the end time. In this example, I'll calculate the time in cell D2, using this formula: Format the Elapsed Time There is a 15 minute difference in the start and end times in this example, but the result cell is showing a time – 12:15 AM To see the result as hours and minutes of elapsed time, we'll change the formatting: 1. Select cell D2, where the elapsed time is calculated 2. Press Ctrl + 1 to open the Format Cells dialog box 3. On the Number tab, click the Time category 4. Click on the 37:30:55 format, then click OK This format displays an overall total of hours, minutes and seconds. Cell D2 now shows the elapsed time of 15 minutes. Formula Error When No End Time The formula works well if a start time and an end time have been entered. But there is a problem if only the start time has been entered. The result in cell D3 is a negative number, because the start time is subtracted from zero (the value of empty cell C3). Excel displays negative dates and times as #####. To avoid this problem, we'll change the formula in cell D2, and copy it down to cell D3: Now, the formula checks cell C2, and if it is empty, the formula result is zero. Formula Error After Midnight If you're burning the midnight oil on those client projects, you'll run into another error. In the screen shot below, the start time for Task 3 is 10:30 PM, and I worked past midnight – ending at 12:30 AM. The formula result is a negative number, because the end time is smaller than the start time. To fix this problem, we'll change the formula one last time. If the start time is greater than the end time, we'll assume that the task ended the next day. In that case, we'll add 1 to the end time, which is the equivalent of adding a full day to the end time. That makes the end time greater than the start time, and the calculation will work correctly. Add the Task Times To get the total time for all your tasks, use the SUM function. It should automatically format the total cell in the 37:30:55 format, but if not, you can format the cell manually. Share and Enjoy I'm actually working on a time card in Excel right now. It's pretty close to being finished. If you're interested I can post when it is completed. I'll be offering it up for free, it's helping me to learn how to do different things in Excel with VB.NET. It's been fun so far! It should come out pretty nice. • Hi Jon, I'm about to start working on a time sheet, and was seeing if I could take a look at your project. Thanks! • Hi Jon, You posted you would offer your time sheet project. I am doing something similar where I work and would love to see how you did yours. I haven't gotten into the VB part of Excel and would like to see some completed projects to back my way in. I like to see the end results and then decipher how it was accomplished. If you could email me, it would be greatly appreciated. Negative time is not possible in Excel (other than by formatting, or number to text). This makes it especially difficult to make a bellshaped barcart chart around zero time deviation ----0++++ - from a pivottable. I tried both formulas =IF(A10>=0;"";"-")&TEXT(ABS(A10);"tt:mm"), and formatting [>=1]D\d tt:mm;[<0]-0,00;tt:mm but Excel 2010 Chart has an error so it doesn't work. Any other options? Its good indeed, but any one can change the timings as per their wish. I mean to say if we working on sharing work book, so I can make changes in others time. so is there any way to track the time with out further corrections by any one. Need advise. I love the "after midnight" solution...have been trying to work this out for some time. But the formatting will not enable me to take the sum total and multiply it by a payrate (as in a timesheet application). Any help? Beautiful! Thanks, Debra! My timesheet is working great with one exception. When I enter the time 12:00 a.m., nothing shows up in the field...it's blank. This is the only "time figure" this happens for. The total hours worked are still calculated correctly and populate into the timesheet...but I have a blank for the end time field (I don't get the same problem with 12:00 p.m.). Any suggestions? Thanks! Many, many thanks! I applaud your Excel expertise. Thanks for the super quick response Debra "burning the midnight oil" ;) I would like to use excel to capture the time for an event. The starting time is known but the finish times will vary. Finish time needs to be in h,mm,ss. Would also like to use the space bar to capture the finish times. Any help out there? I'm not an advanced excel user.
{"url":"http://blog.contextures.com/archives/2012/02/23/keep-track-of-time-in-excel/","timestamp":"2014-04-25T02:35:42Z","content_type":null,"content_length":"112683","record_id":"<urn:uuid:18e9a27f-a0ba-4e7f-9a10-7716ec1c4100>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
algebraic expressions What is the proper way to work this probelm...4cd =14...c=1/2d=7 What is the right way to wok this problem ? Quote: What is the proper way to work this probelm...4cd =14...c=1/2d=7 What is the right way to wok this problem ? yup darkness9375 is right there are 1 equation and 2 unknown variable we have to assume 1 variable to get the value of another variable Hi Leona, I'm not sure that you have stated your question properly. Try and be a little clearer. Use parentheses, separate lines, anything to set apart what it is you are trying to convey. Looks like your original equation is $4cd=14$ Then, the second part is ambiguous. Dose $c=\frac{1}{2}d$ and does $\frac{1}{2}d=7$? If that's the case, then substitute 7 for c and solve for d. But I'm not sure that's the case. Ok, Leona, I think you intend to state the problem this way. $4cd=14$ $c=\frac{1}{2}$ Substituting, we get $4\left(\frac{1}{2}\right)d=14$ $2d=14$ $\frac{2d}{2}=\frac{14}{2}$ $d=7$ Checking, we substitute the values we found for c and d back into the original equation and get: $4\left(\frac{1}{2}\right)(7)=14$ $4(3.5)=14$ $14=14$
{"url":"http://mathhelpforum.com/algebra/68791-algebraic-expressions-print.html","timestamp":"2014-04-20T18:46:28Z","content_type":null,"content_length":"17166","record_id":"<urn:uuid:16b42441-3431-453e-9f60-35af551f74ca>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Rising Fawn Math Tutor Find a Rising Fawn Math Tutor ...I have taught pre-algebra for almost 20 years of the 25 I have been a classroom teacher. I have worked with kids of every level of competency and I am Highly Qualified in Tennessee and Georgia. All of my classroom experience has been dealing with middle school students and I look forward to helping you. 6 Subjects: including algebra 1, prealgebra, ADD/ADHD, special needs ...I have tutored children in all subjects from ages 6 through 15, and on up into adulthood. I get along great with children, and am a respected babysitter in my community. I have helped teach students in a Physically & Mentally Handicapped class (grades 2 through 5) alongside my mother at a school in Florida. 52 Subjects: including linear algebra, calculus, chemistry, geometry My experience as a teacher and tutor dates back to 1984 when I taught English as a Second Language in China and later in Taiwan. I earned my Bachelor's degree in International Studies from Rhodes College in Memphis, Tennessee. I have studied Mandarin Chinese from that time to the present day. 42 Subjects: including algebra 2, SAT math, English, algebra 1 ...I would love to help students understand how to excel in this area of math. I would love to help students understand how to excel in this area of math. I would love to help students understand how to excel in this area of math. 13 Subjects: including discrete math, linear algebra, logic, algebra 1 ...I have previously worked as a teacher's assistant in a Research Methods class, which involved basic statistics and the use of SPSS. I also have my PhD, during which I was required to analyze the data I collected using basic and more advanced statistical analyses. I have a Bachelor's degree in Nutrition and Dietetics as well as a PhD specializing in Nutrition. 9 Subjects: including statistics, algebra 1, SPSS, anatomy Nearby Cities With Math Tutor Bridgeport, AL Math Tutors Flat Rock, AL Math Tutors Guild, TN Math Tutors Henagar Math Tutors La Fayette, GA Math Tutors Lookout Mountain, GA Math Tutors New Hope, TN Math Tutors Pisgah, AL Math Tutors Rock Spring Math Tutors South Pittsburg Math Tutors Stevenson, AL Math Tutors Sylvania, AL Math Tutors Trion Math Tutors Valley Head, AL Math Tutors Whiteside, TN Math Tutors
{"url":"http://www.purplemath.com/rising_fawn_math_tutors.php","timestamp":"2014-04-21T10:58:38Z","content_type":null,"content_length":"23895","record_id":"<urn:uuid:97ef7c64-dbd1-4dd5-af71-d5af24e9ab63>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Have you ever thought of slope connected with books and reading? I know it sounds interesting. Take a look at this dilemma. Five of the students in Mrs. Henderson’s class have been tracking the number of books that they have read and have been comparing their results. During the first week, all five finished one book. After the second week, all five had finished two books. During the third week, all five had finished three books. After ten weeks, all five had finished ten books. What is their rate? If you were to draw the rate of change on a graph comparing books to weeks, what would the graph look like? To understand this problem, you have to understand slope and graphs. All of this will be shown in this Concept. At the end, you will understand how to show these values on a graph. Have you ever been skiing? Even if you haven't, you may know a little bit about what it might be like to learn to ski. When someone first learns to ski, he or she usually starts on a slope that is not very steep. Sometimes that slope is called a beginners' slope. After mastering the basic skills for skiing, a person may begin to try slopes that are steeper and more challenging. In mathematics, the term slope has a different meaning. In mathematics, the slope of a line describes the steepness of a line. However, thinking of a ski slope can help you remember that the slope of a line tells how steep it is. It is helpful to think of the slope of the line as the “rise-over-run.” That is, the slope is the ratio of the vertical (up and down) rise of a line to its horizontal (left to right) run. To help us understand this ratio, let's look at line $AB$ Imagine placing your finger on point $A$$A$$B$ rise of 5 units up and a run of 6 units to the right. So, $\text{slope} = \frac{rise}{run} = \frac{5}{6}$ The slope of line $AB$$\frac{5}{6}$ Line $AB$$\frac{5}{6}$ Notice that line $AB$ Knowing some basic information about the slope of a line can tell you about its slant. • A line that slants up from left to right has a positive slope. • A line that slants down from left to right has a negative slope. Determine if the slope of each line shown below is positive or negative. Consider the line in $a$ The line slants down from left to right, so its slope is negative. Consider the line in $b$ The line slants up from left to right, so its slope is positive. You should also know about the slopes of horizontal and vertical lines. • A horizontal line has a run, but does not have a rise. $\text{slope} = \frac{rise}{run} = \frac{0}{n} = 0.$ So, the slope of a horizontal line is zero. • A vertical line has a rise, but does not have a run. $\text{slope} = \frac{rise}{run} = \frac{n}{0} = undefined.$ Any fraction with a zero in the denominator is undefined. So, the slope of a vertical line is undefined. Identify the slope of each line shown below. Consider the line in $a$ The line is vertical, so its slope is undefined. Consider the line in $b$ The line is horizontal, so its slope is zero. Now it is your turn to think about slope. Answer the following questions. Example A What is the slope of a horizontal line? Solution: 0 Example B What is the slope of a vertical line? Solution: Undefined Example C What is the slope of a line that goes up from left to right? Solution: Positive Here is the original problem once again. Five of the students in Mrs. Henderson’s class have been tracking the number of books that they have read and have been comparing their results. During the first week, all five finished one book. After the second week, all five had finished two books. During the third week, all five had finished three books. After ten weeks, all five had finished ten books. What is their rate? If you were to draw the rate of change on a graph comparing books to weeks, what would the graph look like? If you read the problem carefully you’ll see that the rate of the students is one book per week. Let’s list their weeks and books in a table. week books Now, let’s create a graph to show these results. These are the vocabulary words in this Concept. the slant of a line or the steepness of a line. It is represented on a graph by a ratio of rise over run. the vertical measurement of a line. the horizontal measurement of a line. Positive Slope a slope that goes up from left to right. Negative Slope a slope that goes down from right to left. Guided Practice Here is one for you to try on your own. Is this slope positive, negative or undefined? This line is vertical, therefore the slope of the line is undefined. Video Review Here is a video for review. - This is a Khan Academy video on the slope of a line. Directions: For each graph, tell if the slope of the line shown is positive, negative, zero, or undefined. Directions : Answer each question. 11. Does a positive slope have to contain positive numbers? 12. True or false. A horizontal line is undefined. 13. True or false. A negative slope goes down from right to left. 14. True or false. A vertical line has an undefined slope. 15. True or false. You can figure out any slope as long as the line has some slant to it. Files can only be attached to the latest version of Modality
{"url":"http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts---Grade-7/r4/section/5.7/","timestamp":"2014-04-23T13:16:48Z","content_type":null,"content_length":"161013","record_id":"<urn:uuid:c1f98bee-0c29-46c1-8074-95b44edd6b10>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Measure the Slope of a Road Trigonometry functions have everyday applications. For example, you can use a trigonometry function to measure the slope of the surface you are standing on. Have you ever noticed a worker along the road, peering through an instrument, looking at a fellow worker holding up a sign or flag? Haven’t you ever wondered what they’re doing? Have you wanted to get out and look through the instrument, too? With trigonometry, you can do just what those workers do — measure distances and angles. Land surveyors use trigonometry and their fancy equipment to measure things like the slope of a piece of land. The slope of land downward is sort of like an angle of depression. Slopes, angles of depression, and angles of elevation are all interrelated because they use the same trig functions. It’s just that in slope applications, you’re solving for the angle rather than being given it. To solve problems involving slope, you can use the trig ratios and right triangles. One side of the triangle is the distance from one worker to the other; the other side is the vertical distance from the ground to a point on a pole. You form a ratio with those measures and determine the angle — voilà! Suppose that Dan and Stan are making measurements for the road-paving crew. They need to know how much the land slopes downward along a particular stretch of road. Dan walks 80 feet from Stan and holds up a long pole, perpendicular to the ground, that has markings every inch along it. Stan looks at the pole through a sighting instrument. Looking straight across, parallel to the horizon, Stan sights a point on the pole 50 inches above the ground — call it point A. Then Stan looks through the instrument at the bottom of the pole, creating an angle of depression. The preceding figure shows a diagram of this situation. What is the angle of depression, or slope of the road, to where Dan is standing? 1. Identify the parts of the right triangle that you can use to solve the problem. The values you know are for the sides adjacent to and opposite the angle of depression. Call the angle measure x. 2. Determine which trig function to use. The tangent of the angle with measure x uses opposite divided by adjacent. 3. Write an equation with the trig function, then input the values that you know. In this problem, you need to write the equation with a common unit of measurement — either feet or inches. Changing 80 feet to inches makes for a big number; changing 50 inches to feet involves a fraction or decimal. Whichever unit you choose is up to you. This example converts feet to inches. 80 feet = 80 12 inches = 960 inches Substituting in the values, you get the tangent of some angle with a measure of x degrees: 4. Solve for the value of x. An angle of 2.9 degrees has a tangent of 0.0507, and a 3-degree angle has a tangent of 0.0524. The 3-degree angle has a tangent that’s closer to 0.05208333, so you can estimate that the road slopes at a 3-degree angle between Dan and Stan.
{"url":"http://www.dummies.com/how-to/content/how-to-measure-the-slope-of-a-road.html","timestamp":"2014-04-19T17:38:33Z","content_type":null,"content_length":"54075","record_id":"<urn:uuid:e4a73a6a-0efd-45f9-a057-b65c5a8ed98d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic Combinations The modal response is determined by means of the following formula. n - Number of modes e[ ij ] - Correlation coefficients R[ i ], R[ j ] - Spectral response to the modes i and j The following types of quadratic combinations are available in the program. For SRSS method, the correlation coefficients equal: e[ij] = 1 for i=j, e[ij ]= 0 for i≠j, For CQC method, the correlation coefficients are calculated based on the following formula. ζ[ i ], ζ[ j ] - Damping coefficients for the modes i and j (relative values) r = Min (T[ i ]/T[ j ]; T[ j ]/T[ i ]) ≤ 1 T[ j ], T[ i ] - Vibration periods for the modes i and j. The above formula is used when the Include damping in calculations option is selected in the dialog with modal analysis parameters. If this option is deselected, one damping value is applied to all modes and the formula above assumes the following form. where R^dir1 is the representative maximum value of a particular response of a given element to a given component of an earthquake, defined as a dir1; R[k]^dir1 is a peak value of the element response due to the k-th mode. where R^dir1 is the representative maximum value of a particular response of a given element to a given component of an earthquake, defined as a dir1; R[k]^dir1 is a peak value of the element response due to the k-th mode (k <-> s); N is a number of modes. τ denotes duration of an earthquake ξ[k] is a damping coefficient for the k-th mode ω[k] is a pulsation for the k-th mode (k <-> s).
{"url":"http://docs.autodesk.com/RSA/2013/ENU/filesROBOT/GUID-9A9795B9-E2B1-4F70-BC47-FF61A312699D.htm","timestamp":"2014-04-20T10:54:55Z","content_type":null,"content_length":"12414","record_id":"<urn:uuid:e95a1e12-385f-4118-a586-62ce6704f2ba>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help March 12th 2010, 03:04 PM #1 Sep 2009 related rates two cars approach an intersection at a certain time, each car is 5km from it. One car travels west at 90km/h and the other south at 80km/h. How fast is the distance between the two cars decreasing after 2min? I got an answer of -55 km/h but the answer in my text says 119 km/h..... Can anyone verify if my answer is wrong or correct and explain why. Thanks two cars approach an intersection at a certain time, each car is 5km from it. One car travels west at 90km/h and the other south at 80km/h. How fast is the distance between the two cars decreasing after 2min? I got an answer of -55 km/h but the answer in my text says 119 km/h..... Can anyone verify if my answer is wrong or correct and explain why. Thanks So the distance between them is defined as $\sqrt{x\left( t \right)^{2}\; +\; y\left( t \right)^{2}}$ where x(t) is given by: $x\left( t \right)\; =\; 5km\; -\; \left( \frac{3}{2}t \right)\frac{km}{\min }$ and y(t) similarly is given by: $y\left( t \right)\; =\; 5km\; -\; \left( \frac{4}{3}t \right)\frac{km}{\min }$ You then derive the expression with respect to t and you should get: $\frac{d\left( dis\tan ce \right)}{dt}\; =\; \frac{x\left( t \right)x'\left( t \right)\; +\; y\left( t \right)y'\left( t \right)}{\sqrt{x\left( t \right)^{2}\; +\; y\left( t \right)^{2}}}$ or if you want it only in terms of t you can insert the function x(t), y(t), x'(t), and y'(t) in their respective places to get: $\frac{d\left( dis\tan ce \right)}{dt}\; =\; \frac{\frac{145}{36}t\; -\; \frac{25}{2}}{\sqrt{\left( \frac{145}{36} \right)t^{2}\; -\; \frac{85}{3}t\; +\; 50}}$ and plug in t = 2 to get: $\frac{d\left( dis\tan ce \right)}{dt}\; =\; -\frac{8}{3}\sqrt{\frac{5}{17}}$ I believe that is right, but that answer certainly is not pretty. It would be even more useful if you would explain what you did to get that answer. The easterer lives at (5 km - 90km/h * t min, 0) = (f(t),0) The northerner lives at (0, 5 km - 80km/h * t min) = (0,g(t)) The distance between them is $h(t) = \sqrt{f^{2}+g^{2}}$ Now what? Ok that's that formula that I did use. I differentiated and substitued t = 1/30 h into the equation and found that dD/dt = -55 km/h.. so I was wondering if my answer was correct So you have the distance in terms of two functions in t and you can plug those functions of t into your distance formula, then you derive it and plug in the value of 2 mins. (keep in mind that your rate is in hours and your t value is in minutes so I converted 90 km/h to 3/2 km/min by dividing by 60 and similarly converting 80km/h to 4/3 km/min. I guess and easier approach would be to plug in before deriving so you have: distance = $\sqrt{\left( 5\; -\; \frac{3}{2}t \right)^{2}\; +\; \left( 5\; -\; \frac{4}{3}t \right)^{2}\; }$ then simplifying you get: distance = $\sqrt{\frac{145}{36}t^{2}\; -\; \frac{85}{3}t\; +\; 50\; }$ Derive that with respect to t to get: D(distance) = $\left( \frac{1}{2} \right)\frac{\left( \frac{145}{18}t\; -\; \frac{85}{3} \right)}{\sqrt{\frac{145}{36}t^{2}\; -\; \frac{85}{3}t\; +\; 50}}$ I derived that using first the power rule $d\left( x^{p} \right)\; =\; px^{p-1}$ and the chain rule: $d\left( f\left( g\left( x \right) \right) \right)\; =\; f'\left( g\left( x \right) \right)g'\ left( x \right)\; dx$ Then plug in your value of t = 2 min to get the answer. So you have the distance in terms of two functions in t and you can plug those functions of t into your distance formula, then you derive it and plug in the value of 2 mins. (keep in mind that your rate is in hours and your t value is in minutes so I converted 90 km/h to 3/2 km/min by dividing by 60 and similarly converting 80km/h to 4/3 km/min. I guess and easier approach would be to plug in before deriving so you have: distance = $\sqrt{\left( 5\; -\; \frac{3}{2}t \right)^{2}\; +\; \left( 5\; -\; \frac{4}{3}t \right)^{2}\; }$ then simplifying you get: distance = $\sqrt{\frac{145}{36}t^{2}\; -\; \frac{85}{3}t\; +\; 50\; }$ Derive that with respect to t to get: D(distance) = $\left( \frac{1}{2} \right)\frac{\left( \frac{145}{18}t\; -\; \frac{85}{3} \right)}{\sqrt{\frac{145}{36}t^{2}\; -\; \frac{85}{3}t\; +\; 50}}$ I derived that using first the power rule $d\left( x^{p} \right)\; =\; px^{p-1}$ and the chain rule: $d\left( f\left( g\left( x \right) \right) \right)\; =\; f'\left( g\left( x \right) \right)g'\ left( x \right)\; dx$ Then plug in your value of t = 2 min to get the answer. I did it the way the person who replied to my post did it and I got -55.2 km/h. When I did it using your method, I got -1.8 something.. It's still not correct with the answer in my textbook. Any two cars approach an intersection at a certain time, each car is 5km from it. One car travels west at 90km/h and the other south at 80km/h. How fast is the distance between the two cars decreasing after 2min? I got an answer of -55 km/h but the answer in my text says 119 km/h..... Can anyone verify if my answer is wrong or correct and explain why. Thanks Let x be the distance of the westbound car from the intersection, y be the distance of the southbound car from the intersection, and r be the distance between the two cars. Then $x^2 + y^2 = r^2$ $2x\frac{dx}{dt}+2y \frac{dy}{dt}=2r\frac{dr}{dt}$ You are given $\frac{dx}{dt}=-90$ and $\frac{dy}{dt}=-80$ After 2 minutes, $t=\frac{1}{30},x=2,y=\frac{7}{3},r=\sqrt{\frac{85} {9}}$ Solve for $\frac{dr}{dt}$ March 12th 2010, 03:29 PM #2 Junior Member Mar 2010 March 12th 2010, 03:30 PM #3 MHF Contributor Aug 2007 March 12th 2010, 03:43 PM #4 Sep 2009 March 12th 2010, 03:47 PM #5 Junior Member Mar 2010 March 12th 2010, 04:03 PM #6 Sep 2009 March 12th 2010, 04:46 PM #7 Feb 2010
{"url":"http://mathhelpforum.com/calculus/133525-related-rates.html","timestamp":"2014-04-19T04:13:57Z","content_type":null,"content_length":"55975","record_id":"<urn:uuid:63b382cb-8770-4614-ac05-6aa107f80828>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Soviet Calculators History The first Soviet calculators The habitual language used today when working with calculators only appeared at the beginning of the 70's. In general, the first models of calculators had their own operational language, and the user had to learn the specific procedures related to each calculator. Let's take, for example, the C3-07, the first calculator of the Series "C" manufactured by the Leningrad factory "Svetlana." By the way, as a parentheses, it is interesting to note that all calculators produced by the factory "Svetlana" were independent of other Russian electronic appliances. All electronic calculators manufactured during those years received the common designation "B3". The desktop electronic clocks received the code "B2", electronic watches - "B5" (for example, B5-207), desktop electronic devices with vacuum display were identified with codes "B6," "B7," and so on. The "B" is the first letter of "Home appliances" in Russian. Svetlana's calculators where the only ones identified with a letter "C" - Svetlana means the light of an electric lamp (CBETLAHA - SVET LAmpochki NAkalivaniya) and is also a popular women's name in Russia. Here is the keyboard of the C3-07 calculator. This was a very surprising calculator, especially because of its keyboard and display. As it is can be seen in the image, the calculator combined not only the functions [+=] and [-=], but also the multiply-divide functions [X -:-]. Try to guess how to multiply and to divide in this calculator. A hint: the calculator does not recognize two sequential keystrokes on the same key, only one keystroke is possible for each key. The answer is no less than surprising: to multiply, say 2 by 3, it is necessary to press the following keys [2] [X-:-] [3] [+=], while to divide 2 by 3, the following sequence is applied: [2] [X-:-] [3] [-=]. The addition and subtraction is made in a similar way (as the one applied in the B3-04 calculator) that is, to perform the difference 2 - 3 the following sequence is used: [2] [+=] [3] [-=]. Another surprise is the eight elements used to build a number in the display as shown in the figure at the left. Starting with this model, all simple calculators made by the Svetlana's factory operated with exponential numbers up to 10e16-1, even when the display had only a capacity of eight or twelve digits. If the result exceeded 8 or 12 digits (depending on the model), the decimal comma disappeared and the display showed the first 8 or 12 digits of the number. Speaking about the operational language of early calculators, it is necessary to mention that in the B3-02, B3-05 and C3-07 calculators of the type "Iskra", the result of the calculations used all digits of the display filling with zeroes the unused positions. It was certainly inconvenient to find on such calculators the first (and last) significant digit. By the way, in model C3-07, which was mentioned before, there was an attempt to lessen a little bit this problem by applying an unusual method - on this calculator the zero has half of the height. Also, these calculators had a very inconvenient, but quite explicable for early calculators, feature: the required accuracy of the calculations was set by the number of significant digits entered on the first number. For example, to calculate the quotient of division 23 by 32 to three decimal digits, the number 23 had to be entered with three decimal digits: |23,000| [-:-] |32| [=] (0.718). So long as the operator didn't press the reset button, all subsequent calculations were made with three decimal digits, and the decimal point would remain fixed in the same position all the time. These calculators, by the way, were referred to as "fixed point" calculators. Later calculators, in which the point moved on the display, were referred to as "floating point" calculators. Now, the terminology has changed, and "floating point" is used to describe displays where a number is represented by a mantissa at the left and the exponent order at the right. These calculators could work with a power unit, or with four (B3-09M, B3-14M) or three AA batteries (B3-14). Although the three calculators used the same chip, they had different functionality. In general, "removing" some functions was a typical practice in many models of Soviet calculators. For example, the B3-09M calculator did not have square root function, and the B3-14M was not good for percent calculations. As an additional feature, the decimal point took the place of a full digit. This made easier to read the information, but the last sign digit was lost. Before starting an operation (after turning the power on) it was necessary to press the "C" key in order to clear the registers. The first soviet engineering calculator The next huge step in the history of Soviet calculators was the development, completed by the end of 1975, of the B3-18, the first engineering calculator. As stated in the article "Fantastic Electronics" published in "Science and Life" magazine, No. 10, 1976: "...this calculator has crossed the Rubicon of arithmetic, its mathematical capability has stepped into trigonometry and algebra. "Elektronika B3-18" is able to raise instantly a square and extract a square root, it can raise any number to any degree in just two steps within the limits of eight digits, can convert dimensions, calculate the logarithms, antilogarithms, and trigonometric functions ... It is difficult to understand the huge amount of work that this machine performs in few seconds while it folds huge numbers to perform an algebraic or trigonometric operation before lighting the result in the display..." And this was true, a huge amount work was made. To make this possible, 45,000 transistors, resistors, condensers and conductors were packed in a uniform crystal with the size of 5x5.2 mm. This was equivalent to fifty TV sets of those years pushed into the square of an arithmetic exercise-book! However, the price of such calculator was considerable - 220 roubles in 1978. As an example, in those years the salary of an engineer who just graduated from a technical institute was 120 roubles per month. But it worth to purchase one. The logarithmic slide rule was no longer necessary, and the margin of error was no longer a concern. Now it was possible to throw the tables of logarithms into the shelf. By the way, a prefix function key "F" was used for the first time in this calculator. Nevertheless it was not possible to include all the desired functionality into the microcircuit K145IP7 of the B3-18 calculator. For example, in order to evaluate a function in which the Taylor decomposition of a number was required, the working register was cleared, and therefore the previous result of the operation was erased. In this context it was impossible to make sequential calculations such as 5 + sin 2. For this purpose it was necessary first to find the sine of 2, and only then add the result to 5. So the main effort was made, and the result was a good but very expensive calculator. In order to make the calculator accessible to the mass segments of the population, it was decided make a cheaper model based on the B3-18A. To avoid reinventing the wheel, engineers took the easiest way: removing the prefix key "F" and all the function keys from the calculator. So the calculator became a simple calculator and was named "B3-25A." Only the developers and calculator repairmen knew about the secret alteration made to produce the B3-25A... The further development of calculators After the B3-18, the B3-19M calculator was developed with the participation of engineers from the Soviet Union and the German Democratic Republic (GDR). This calculator used RPN (Reverse Polish Notation). Once the first number is entered, pressing the input key pushes the number into the stack In 1977, another very powerful engineering calculator was introduced, the C3-15. This calculator had increased calculation accuracy (up to 12 numbers), worked with exponents up to 9.999999999e99, had three registers of memory, but most remarkable: worked with algebraic logic. That is, to calculate the expression 2 + 3 * 5 it was no longer necessary to calculate first 3 * 5, and then add 2 to the result. This expression could be written down in a "natural" way: [2] [+] [3] [*] [5] [=]. Besides, the calculator supported up to eight levels of brackets. This calculator, together with its desktop brother MK-41 were the only ones having a "/p/" key. This key was used for calculations under the formula sqrt (x ^ 2 + y ^ 2) Based on the B3-26 calculator, the B3-23 (with percents), the B3-23A (with square root) and the B3-24G (with memory) were made. By the way, priced at 18 roubles, the B3-23A calculator subsequently became the cheapest Soviet calculator. The B3-26 was soon named as MK-26 and so was its brother MK-57 and the MK-57A, which had similar functions. Svetlana's factory launched model C3-27, which in reality did not have success, and soon was replaced by the very popular and cheap model C3-33 (MK-33). One more direction in the development of microcalculators were the engineering calculators B3-35 (MK-35) and B3-36 (MK-36). B3-35 differed from B3-36 by having a simpler design and costing five roubles less. These calculators were able to convert degrees into radians and vice versa, multiply and divide numbers in memory, and also calculate a factorial. It was very interesting the way these calculators calculated a factorial - simple sort out. The calculation of the factorial for the maximum value of 69 took more than five seconds on the B3-35 These calculators were very popular in the USSR, although they had, on my opinion, a defect: they displayed too few significant figures, as many as the precision guaranteed in the manual. They usually had five to six digits for transcendental functions. The desktop variant MK-45 was based on these calculators. By the way, many pocket engineering calculators had their desktop counterparts, for example: EPOS 73A (B3-26), MK-41 (C3-15), MKSCH-2 (B3-30), and MK-45 (B3-35, B3-36). The calculator MKSCH-2 - became the standard "school" calculator - Except for some demonstration units, it was produced by the Soviet industry for exclusive use at schools. This calculator, as well as the non-RPN B3-32 calculator (shown at the left) was able to calculate the roots of a quadratic equation and find the roots of a system of two equations with two unknown variables. On its appearance this calculator is completely identical to the B3-14 calculator. All key inscriptions follow the western standards. For example, the key to record a number in memory was designated "STO" instead of "P" or "x - > P". The key to recall a number from memory was designated "RCL," and so on. Despite the capability to handle numbers with large exponents, this calculator used the same eight-digit display of the B3-14 calculator. The developers decided to display floating point numbers with the mantissa and the exponent, leaving room only for five significant digits. To address this problem, the calculator was provided with a "CN" key. For example, if the result of a calculation was the number 1.2345678e-12, the display showed 1.2345-12. By pressing [F] [CN], the display showed 12345678. The decimal point was omitted. The first Soviet programmable calculator The first Soviet programmable calculator B3-21 (shown at the right) was developed by the end of 1977 and sold at the beginning of 1978. It was one large step forward. Before, users had to repeat calculations many times, and calculators had a maximum of three memory registers. Now users were able to write programs and store instructions and numbers in memory. The term "programmable calculator" caused awe and some shivering of voice. It was a very expensive calculator - it cost the whole 350 roubles! Soon calculators were conferred a mark of quality. The first models of the Elektronika B3-21 had a red LED display. The comma used one full position in the display. Later the display was changed to green fluorescent but this made its operation slower by 20 %. The calculator worked with Reverse Polish Notation, this required to enter first the two numbers and then the operator. After entering the first number it was necessary to press the upward arrow key The calculator had two operation prefix keys - "F" and "P." The "F" key was black and the "P" key was red. Prefix keys were also used to store and recall numbers from the registers. The "P" key was used to store, while the "F" key was used to recall. But still the main feature of the B3-21 calculator has not been mentioned yet - the ability to program! The calculator supported 60 steps of program, and the addresses were named with a module of six, therefore the addresses had the following order: 00, 01, 02, 03, 04, 05, 10, 11 and so on. Each key had an operation code. The calculator had functions for unconditional transfer, transfer to subroutines, and also conditional branching. The branching keys used two memory locations on the calculator - one cell to store the operation code, and another to maintain the branch address. The required transfer address was equal to the code corresponding to the transfer key minus 1. For example, in order to jump to address 33, it was necessary to press keys [BP] and [3] (code 34). The operation codes were taken from a table. The most popular Soviet calculator The first programmable calculators B3-21, MK-46 and MK-64, although worked under the control of a program, had only two operational registers X and Y, and working with the circular stack was very inconvenient. This was changed in 1980 by the programmable calculator B3-34, with fluorescent display and priced at 85 roubles. It was another step forward! It had a stack based on four registers, 98 steps of program memory, 14 registers of memory instead of the seven available on the B3-21, and most importantly - the capability to organize cycles and work with index registers. It was a pleasure to work with this calculator. Soon, in 1982, appeared its analogues, the B3-34 and MK-54, with fluorescent display and a more beautiful design, and costing on 20 roubles cheaper at the expense of using a power supply of different type. The desktop variant MK-56 was also developed. One behind another, the most popular scientific and technical magazines, such as "Science and Life", "Engineering - youth" and "Chemistry and Life," started to teach how to work with the calculator. "Science and Life", started in October 1983 a special section named "Man with the calculator", talking about how to work with the B3-34, and including plenty of useful and game programs. The magazine "Engineering - Youth", beginning in 1985 included a column on programming the B3-34 under the name "The Calculator - your assistant" This calculator worked under the Reverse Polish Notation system, therefore, after entering the first number, the In programming mode the code for each command takes one cell of memory. Branching commands (transfers, loops, conditional transfers) take two cells. One cell for the operation code , and a second for the transfer address. In contrast with the B3-21, the transfer address can now be entered directly, instead of finding the correspondent operation code in a table. For example, to enter a transfer command to address 33 with the B3-21 it was necessary to enter [BP] [3] (the 3 key corresponded to code 34), in the B3-34 calculator it was only necessary to enter [BP] [3] [3]. Although now one more keystroke is required, it is no longer necessary to look for the operation code in a table. More details on how to work with the B3-34 calculator, are described on the special page devoted to the use of the B3-34 located here. However, the most interesting aspect of the B3-34 calculator and its analogues is the availability of undocumented features. These were useful not only to write programs, but also to build special display messages. There are so many undocumented features that they could deserve writing an additional article. The B3-34 calculator and its analogues, the MK-54 and the desktop MK-56, became so popular, that the developers from the "Crystal" Kiev factory decided to continue this line. In 1985 the new models MK-61 and MK-52 were introduced. They had one more memory register, 5 programs of 97 steps each, and ten additional functions. In addition, the MK-52 calculator had 512 bytes of permanent memory, which was not erased when the power was disconnected. This memory was able to store both programs and data. The MK-52 calculator also had special sockets for the connection of available program modules known as BRP (blocks of memory expansion). When designing the BRP blocks the developers again killed two rabbits at once by soldering in one block the matrix for two sets of programs. By connecting a jumper, say, in rule 1, one had the block BRP-3 with a mathematical set of the programs, then re-soldering the jumper to rule 2 - the block became the BRP-2 with astro-navigational functions. Of course, this implied to lose the manufacturer warranty since to do that it was necessary to remove a sealed screw. This was divulged in one of the issues of "Science and Life" magazine by a reader who in turn was told by one of the "Crystal" developers. I can imagine what would happen to this developer. By the way, the MK-52 flew to the space in the "Soyuz TM-7", where it was supposed to compute the landing trajectory in case the onboard computer would fail. Late models of calculators Early calculators consumed a lot of energy from its batteries, providing a maximum of two hours of independent work. 220 volts were not always available, and replacement batteries where only available in large cities. Therefore, engineers and developers began to develop calculators with less power requirements. By that time, displays based on liquid crystals with low power consumption had already been invented. The B3-30 (shown at the left) became the second calculator based on liquid crystals after the B3-04. Developed in 1978 and consuming only 8 mW (for comparison, the B3-26 calculator consumed 600 mW), this calculator had a function, unusual for Soviet calculators, to return the inverse of a number. This function is now available practically in all modern simple calculators. To calculate 1/5, the following sequence was used: [5] [-:-] [=]. In 1979, the B3-30 calculator was replaced by the B3-39 model, in which the microchip used a new low-level logic. The power consumption was decreased by eight times to only one mW. This allowed to build this calculator without a voltage converter. One year after, for the Moscow Olympiads of 1980, the MK-53 calculator was manufactured with an onboard watch, an alarm clock, and a stop watch. This calculator required one less battery than the B3-39. This became possible at the expense of using an even lower level microcircuit, the K145VV3-2, which was considered to be "Bodiless". A new milestone in the development of calculators was the MK-60 which was powered by a solar cell. In general, this was a simple calculator with one memory register, nothing special except for the solar batteries. The creativity of the engineers didn't rest, and deciding that miniaturization was important, they developed in 1979 a new super-small, but very clever calculator, the B3-38. It included all the last achievements of micro-electronics. Its dimensions were the smallest available at the time - 91 x 55 x 5.5 mm. It was able to perform not only scientific, but also statistical calculations. This calculator had two prefix keys - F1 and F2. A similar calculator was introduced in 1982, but with larger size, the MK-51. Soon it became very popular, although it had a basic defect - the worst power switch ever made. Engineers had decided to include a mechanism consisting of a semicircular toddler, which closed the contacts on a wiring attached directly to the printed circuit board. Certainly, with the pass of time the contact points got rusted and became defective. These calculators used for the first time the "digit by digit" (CORDIC) method for the calculation of transcendental functions which has replaced the Taylor finite-series approximations of a number. CORDIC was the standard in almost all modern calculators all over the world, except at the USSR. In two words, the "digit by digit" method allows to calculate an attribute by iteration and tabulation. It is characterized by the simplicity in the execution of operations (algebraic addition and shift), the significant similarity of the algorithms applied for various functions and, most importantly, for the high speed and accuracy of the calculations. The margin of error in calculations for an 8-digit argument was at most +/ - 1 in the seventh or eighth digit. Finally, one of the latest models among engineering calculators was the MK-71 standard calculator powered by solar components. As a matter of fact, it was a continuation of the series B3-38 and MK-51. As opposed to the B3-38 and MK-51 models, this calculator, as well as the C3-15, used an algebraic logic with five levels of brackets for calculations. It also worked with simple fractions, and could display the results in degrees, minutes and seconds. It had hyperbolic functions, and a mechanism to round-off the result to a required accuracy. In addition, it was a ten-digit calculator. There is one more direction in the development of calculators - the demonstration calculators. As a matter of fact, these were normal calculators wired to large displays and magnetic buttons. A hand magnetic pointer was used to activate the keys. I only have one photo of the demonstration calculator made on the basis of the MK-36. On one occasion I attended a demonstration in my school with a calculator compatible with the MK-54 measuring 1.5 meters, but by the end of August it had been thrown out on the rubbish dump...
{"url":"http://www.xnumber.com/xnumber/russian_calcs.htm","timestamp":"2014-04-16T11:05:02Z","content_type":null,"content_length":"64329","record_id":"<urn:uuid:ff952bc0-854b-439f-9715-f5bdde90b8aa>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Hungarian mathematicians Abraham Wald Abraham Wald (Hungarian: Wald Ábrahám, –) was a mathematician born in Cluj, in the then Austria–Hungary (present-day Romania) who contributed to decision theory, geometry, and economet... Alfred Tauber Alfred Tauber was an Austrian mathematician who was born in Bratislava, and died in the Theresienstadt concentration camp. Alfréd Haar Alfréd Haar (Haar Alfréd; 11 October 1885, Budapest – 16 March 1933, Szeged) was a Jewish Hungarian mathematician. Alfréd Rényi Alfréd Rényi (20 March 1921 – 1 February 1970) was a Hungarian mathematician who made contributions in combinatorics, graph theory, number theory but mostly in probability theory. Andrew Kalotay Andrew Kalotay (born 1941) is a Hungarian-born finance professor, Wall Street quant and chess master. Andrew Vázsonyi Andrew Vázsonyi, also known as Endre Weiszfeld and Zepartzatt Gozinto was a mathematician and operations researcher. András Frank András Frank (born 3 June 1949) is a Hungarian mathematician, working in combinatorics, especially in graph theory, and combinatorial optimisation. András Gyárfás András Gyárfás is a Hungarian mathematician who specializes in combinatorics and graph theory. András Hajnal András Hajnal is an emeritus professor of mathematics at Rutgers University and a member of the Hungarian Academy of Sciences known for his work in set theory and combinatorics. András Kornai András Kornai (born 1957 in Budapest), son of economist János Kornai, is a well-known mathematical linguist. András Prékopa András Prékopa (born September 11, 1929) is a Hungarian mathematician, a member of the Hungarian Academy of Sciences. András Sárközy András Sárközy (born 16 January 1941, in Budapest) is a Hungarian mathematician, working in analytic and combinatorial number theory, although his first works were in the fields of geometry and ... Aurel Wintner Aurel Friedrich Wintner (8 April 1903 – 15 January 1958) was a mathematician noted for his research in mathematical analysis, number theory, differential equations and probability theory. Bálint Tóth Bálint Tόth (born 1955) is a Hungarian mathematician whose work concerns probability theory. Béla Bollobás Béla Bollobás FRS (born 3 August 1943) is a Hungarian-born British mathematician who has worked in various areas of mathematics, including functional analysis, combinatorics, graph theory, and p... Béla Kerékjártó Béla Kerékjártó (October 1, 1898, Budapest–June 26, 1946, Gyöngyös) was a Hungarian mathematician who wrote numerous articles on topology. Cornelius Lanczos Cornelius (Cornel) Lanczos (Lánczos Kornél,) (until 1906: Löwy (Lőwy) Kornél) was a Hungarian mathematician and physicist, who was born on February 2, 1893, and died on June 25, 1974. Dénes Kőnig Dénes Kőnig (September 21, 1884 – October 19, 1944) was a Jewish Hungarian mathematician who worked in and wrote the first textbook on the field of graph theory. Endre Szemerédi Endre Szemerédi (born August 21, 1940) is an Hungarian-American mathematician, working in the field of combinatorics and theoretical computer science. Endre Süli Endre Süli is Professor of Numerical Analysis in the Mathematical Institute, University of Oxford, Fellow and Tutor in Mathematics at Worcester College, Oxford, and Supernumerary Fellow of Linac... Erno Lendvai Ernő Lendvai (19251993) was one of the first theorists to write on the appearance of the golden section and Fibonacci series and how these are implemented in Bartók's music. Ernő Lendvai Ernő Lendvai (19251993) was one of the first theorists to write on the appearance of the golden section and Fibonacci series and how these are implemented in Bartók's music. Ernő Lendvaï Ernő Lendvai (19251993) was one of the first theorists to write on the appearance of the golden section and Fibonacci series and how these are implemented in Bartók's music. Ervin Feldheim Ervin Feldheim was a Hungarian mathematician working on analysis. Esther Szekeres Esther Szekeres (born Klein Eszter 20 February 1910 – 28 August 2005) was a Hungarian–Australian mathematician. Eugene Lukacs Eugene Lukacs (Hungarian: Lukács Jenő, 14 August 1906 – 21 December 1987) was a Hungarian statistician born in Szombathely, notable for his work in characterization of distributions, stabili... Eugene Wigner Eugene Paul "E. P." Wigner (Wigner Jenő Pál; November 17, 1902 – January 1, 1995), was a Hungarian American theoretical physicist and mathematician. Farkas Bolyai Farkas Bolyai (February 9, 1775 – November 20, 1856; also known as Wolfgang Bolyai in Germany) was a Hungarian mathematician, mainly known for his work in geometry. Frigyes Riesz Frigyes Riesz (Riesz Frigyes, January 22, 1880 – February 28, 1956) was a Hungarian mathematician who made fundamental contributions to functional analysis. Gabriel Andrew Dirac Gabriel Andrew Dirac (March 13, 1925 – July 20, 1984) was a mathematician who mainly worked in graph theory. George Pólya George Pólya (Pólya György; December 13, 1887 – September 7, 1985) was a Hungarian mathematician. George Szekeres George Szekeres AM (29 May 1911 – 28 August 2005) was a Hungarian–Australian mathematician. Gyula Farkas (natural scientist) Farkas Gyula, or Julius Farkas (March 28, 1847, Sárosd, Fejér County - December 27, 1930, Pestszentlőrinc) was a Hungarian mathematician and physicist. Gyula J. Obádovics Gyula J. Obádovics (born 1927) is a Hungarian mathematician, Dr. Techn., Dr. Rer. Gyula Kőnig Gyula Kőnig (16 December 1849 – 8 April 1913) was a Hungarian mathematician. Gyula O. H. Katona Gyula O. H. Katona (born March 16, 1941, Budapest) is a Hungarian mathematician known for his work in combinatorial set theory, and especially for the Kruskal–Katona theorem and his beautiful an... Gyula Pál Gyula Pál (1881– September 6, 1946) was a noted Hungarian-Danish mathematician. Gyula Vályi Gyula Vályi (5 January 1855, Marosvásárhely/Târgu Mureş – 13 October 1913, Kolozsvár/Cluj) was a Hungarian mathematician and theoretical physicist, a member of the Hungarian Academy of Sciences,... Gyula Y. Katona Gyula Y. Katona (December 4, 1965 - ) is a Hungarian mathematician, the son of mathematician Gyula O. H. Katona. György Elekes György Elekes (–) was a Hungarian mathematician and computer scientist who specialized in Combinatorial geometry and Combinatorial set theory. György Hajós György Hajós was a Hungarian mathematician who worked in group theory, graph theory, and geometry. Gábor N. Sárközy Gábor N. Sárközy (Gabor Sarkozy) is a Hungarian-American mathematician, the son of noted mathematician András Sárközy. Gábor Tardos Gábor Tardos (born 11 July 1964) is a Hungarian mathematician, currently a professor and Canada Research Chair at Simon Fraser University. Géza Fodor (mathematician) Géza Fodor (Szeged, 6 May, 1927–Szeged, 28 September, 1977) was a Hungarian mathematician, working in set theory. Géza Grünwald Géza Grünwald (Budapest, October 18, 1910 – September 7, 1943) was a Hungarian mathematician who worked on analysis. Géza Ottlik Géza Ottlik was a Hungarian writer, translator, mathematician, and bridge theorist. Imre Bárány Imre Bárány (Mátyásföld, 7 December 1947) is a Hungarian mathematician, working in combinatorics and discrete geometry. Imre Csiszár Imre Csiszár is a Hungarian mathematician with contributions to information theory and probability theory. Imre Izsak Imre Gyula Izsák was a Hungarian mathematician, physicist, astronomer, and celestial mechanician. Imre Lakatos Imre Lakatos (Lakatos Imre November 9, 1922 – February 2, 1974) was a Hungarian philosopher of mathematics and science, known for his thesis of the fallibility of mathematics and its 'methodolo... Imre Z. Ruzsa Imre Z. Ruzsa (born 23 July 1953) is a Hungarian mathematician specializing in number theory. István Fenyő István Fenyő (1917 March 5 Budapest – 1987 July 28 Budapest) was a Hungarian mathematician best known for his publications of applied mathematics in fields like chemistry and technology. István Fáry István Fáry was a Hungarian-born mathematician known for his work in geometry and algebraic topology. István Gyöngy István Gyöngy (born 1951) is a Hungarian mathematician working in the fields of stochastic differential equations, stochastic partial differential equations and their applications to nonlinear f... István Hatvani István Hatvani (1718–1786) was a Hungarian mathematician. István Vincze (mathematician) István Vincze (— 1999) was a Hungarian mathematician, known for his contributions to number theory, non-parametric statistics, empirical distribution, Cramér–Rao inequality, and informatio... Janos Galambos Janos Galambos is a mathematician affiliated with Temple University in Philadelphia, Pennsylvania, USA. Jenő Egerváry Jenő Egerváry (or Eugene Egerváry) (April 16, 1891 – November 30, 1958) was a Hungarian mathematician. Jenő Hunyady Jenő Hunyady (April 28, 1838, Pest – December 26, 1889, Budapest) was a Hungarian mathematician noted for his work on conic sections and linear algebra, specifically on determinants. Jenő Szép Jenő Szép (13 January 1920 - 18 October 2004) was a Hungarian mathematician. John G. Kemeny John George Kemeny (Kemény János György) (May 31, 1926 – December 26, 1992) was a Hungarian American mathematician, computer scientist, and educator best known for co-developing the BASIC ... John George Kemeny John George Kemeny was a Jewish-Hungarian American mathematician, computer scientist, and educator best known for co-developing the BASIC programming language in 1964 with Thomas E. Kurtz. John Harnad John Harnad (born Hernád János, Budapest) is a Hungarian-born mathematical physicist. John Kemeny (computer scientist) John George Kemeny was a Jewish-Hungarian American mathematician, computer scientist, and educator best known for co-developing the BASIC programming language in 1964 with Thomas E. Kurtz. John von Neumann John von Neumann (December 28, 1903 – February 8, 1957) was a Hungarian-American pure and applied mathematician, physicist, and polymath. János Apáczai Csere János Apáczai Csere (June 10, 1625 – January 31, 1659) was a Transylvanian Hungarian polyglot and mathematician, famous for his work The Hungarian Encyclopedia, the first textbook to be written in János Bolyai János Bolyai (15 December 1802 – 27 January 1860) or Johann Bolyai, was a Hungarian mathematician, one of the founders of non-Euclidean geometry — a geometry that differs from Euclidean ge... János Kollár János Kollár (born June 7, 1956) is a Hungarian mathematician, specializing in algebraic geometry. János Komlós (mathematician) János Komlós (Budapest, 23 May 1942) is a Hungarian-American mathematician, working in probability theory and discrete mathematics. János Pach János Pach (born May 3, 1954) is a mathematician and computer scientist working in the fields of combinatorics and discrete and computational geometry. János Pintz János Pintz is a Hungarian mathematician working in analytic number theory. József Beck József Beck (Budapest, Hungary, February 14, 1952) is a Harold H. Martin Professor of Mathematics at Rutgers University. József Kürschák József Kürschák (14 March 1864 – 26 March 1933) was a Hungarian mathematician noted for his work on trigonometry and for his creation of the theory of valuations. Karl Zsigmondy Karl Zsigmondy (27 March 1867 - 14 October 1925) was an Austrian mathematician of Hungarian ethnicity. Károly Bezdek Károly Bezdek (born May 28, 1955 in Budapest, Hungary), is a Hungarian-Canadian mathematician with main interests in geometry and specializing in combinatorial, computational, convex, and discre... Károly Hadaly Károly Hadaly (1743, Kolárovo – 1834, Budapest) was a Hungarian mathematician. Ladislaus Chernac Ladislaus Chernac was a Hungarian scientist who moved to Deventer in the Netherlands. Lajos Jánossy Lajos Jánossy (Budapest, 1912 March 2 – Budapest, 1978 March 2) was a Hungarian physicist, astrophysicist and mathematician and a member of the Hungarian Academy of Sciences. Lajos Pukánszky Lajos Pukánszky was a Hungarian and American mathematician noted for his work in representation theory of solvable Lie groups. Lajos Pósa (mathematician) Lajos Pósa (Budapest, December 9, 1947) is a Hungarian mathematician working in the topic of combinatorics, and one of the most prominent mathematics educators of Hungary, best known of his math... Lajos Takács Lajos Takács is a Hungarian mathematician, known for his contributions to probability theory and in particular, queueing theory. Lipót Fejér Lipót Fejér (or Leopold Fejér, 9 February 1880 – 15 October 1959) was a Hungarian mathematician. Ludwig Schlesinger Ludwig Schlesinger (Slovak Ľudovít Schlesinger, (Hungarian: Lajos Schlesinger), born November 1, 1864 in Trnava, December 15, died 1933 in Giessen) was a German mathematician known for the resea... László Babai László (Laci) Babai (born July 20, 1950 in Budapest) is a Hungarian professor of mathematics and computer science at the University of Chicago. László Fejes Tóth László Fejes Tóth (Fejes Tóth László Szeged, 12 March 1915 – Budapest, 17 March 2005) was a Hungarian mathematician who specialized in geometry. László Filep László Filep was a Hungarian mathematician who specialized in history of mathematics. László Fuchs László Fuchs is a Hungarian-American mathematician, the Evelyn and John G. Phillips Distinguished Professor Emeritus in Mathematics at Tulane University. László Kalmár László Kalmár (March 27, 1905, Edde – August 2, 1976, Mátraháza) was a Hungarian mathematician and Professor at the University of Szeged. László Lempert László Lempert is a Hungarian-American mathematician, working in the analysis of multiple complex variables. László Lovász László Lovász (born March 9, 1948) is a Hungarian–American mathematician, best known for his work in combinatorics, for which he was awarded the Wolf Prize and the Knuth Prize in 1999, and the K... László Pyber László Pyber (born 8 May 1960 in Budapest) is a Hungarian mathematician. László Rátz László Rátz, born April 9, 1863 in Sopron, died September 30, 1930 in Budapest, was a Hungarian mathematics high school teacher best known for educating such people as John von Neumann and Nobel... László Rédei László Rédei (Rákoskeresztúr, 15 November, 1900—Budapest, 21 November, 1980) was a Hungarian mathematician. Marcel Grossmann Marcel Grossmann (Grossmann Marcell, April 9, 1878 – September 7, 1936) was a mathematician of Jewish ancestry, and a friend and classmate of Albert Einstein. Marcel Riesz Marcel Riesz (Riesz Marcell, 16 November 1886 – 4 September 1969) was a Hungarian-born mathematician, known for work on summation methods, potential theory, and other parts of analysis, as well... Marianna Csörnyei Marianna Csörnyei (born October 8, 1975 in Budapest) is a Hungarian mathematician. Mario Szegedy Mario Szegedy is a Hungarian-American computer scientist, professor of computer science at Rutgers University. Michael Fekete Michael (Mihály) Fekete (מיכאל פקטה; July 19, 1886 – May 13, 1957) was an Israeli-Hungarian mathematician. Michael Makkai Michael Makkai (Makkai Mihály, 24 June 1939, Budapest, Hungary) is Canadian mathematician of Hungarian origin, specializing in mathematical logic. Miklós Ajtai Miklós Ajtai (born 2 July 1946) is a computer scientist at the IBM Almaden Research Center, USA. In 2003, he received the Knuth Prize for his numerous contributions to the field, including a classic sorting network algorithm (developed jointly with J. Komlós and Endre Szemerédi)... Miklós Bóna Miklós Bóna (born on October 6, 1967, in Székesfehérvár) is an American mathematician of Hungarian origin. Miklós Laczkovich Miklós Laczkovich (born 21 February 1948) is a Hungarian mathematician mainly noted for his work on real analysis and geometric measure theory. Miklós Simonovits Miklós Simonovits (4 September 1943, Budapest) is a Hungarian mathematician who currently works at the Rényi Institute of Mathematics in Budapest and is a member of the Hungarian Academy of Sciences. Otto Szász Otto Szász (11 December 1884, Hungary – 19 December 1952, Cincinnati, Ohio) was a Hungarian mathematician who worked on real analysis, in particular on Fourier series. Paul Erdős Paul Erdős (Erdős Pál 26 March 1913 – 20 September 1996) was a Hungarian mathematician. Peter Lax Peter David Lax (born 1 May 1926) is an American mathematician working in the areas of pure and applied mathematics. Peter Ozsváth Peter Steven Ozsváth (born October 20, 1967) is a professor of mathematics at Princeton University. Pál Turán Paul (Pál) Turán (18 August 1910 – 26 September 1976) was a Hungarian mathematician who worked primarily in number theory. Péter Frankl Péter Frankl (born 26 March 1953 in Kaposvár, Somogy county, Hungary) is a Hungarian mathematician and street performer. Péter Kiss (mathematician) Péter Kiss was a Hungarian mathematician, Doctor of Mathematics, and professor of mathematics at Eszterházy Károly College, who specialized in number theory. Péter Komjáth Péter Komjáth is a Hungarian mathematician, working in set theory, especially combinatorial set theory. Raoul Bott Raoul Bott, ForMemRS (September 24, 1923 – December 20, 2005) was a Hungarian mathematician known for numerous basic contributions to geometry in its broad sense. Rudolf E. Kalman Rudolf (Rudy) Emil Kalman (in Hungarian Kálmán Rudolf Emil; born May 19, 1930) is a Hungarian-American electrical engineer, mathematical system theorist, and college professor, who was ed... Rudolf E. Kálmán Rudolf (Rudy) Emil Kálmán (in Hungarian Kálmán Rudolf Emil; born May 19, 1930) is a Hungarian-born American electrical engineer, mathematician, and inventor, who was educated in the United... Róbert Szelepcsényi Róbert Szelepcsényi (born 19 August 1966, Žilina) was a Slovak student of Hungarian descent and a member of the Faculty of Mathematics, Physics and Informatics of Comenius University in Bratislava. Rózsa Péter Rózsa Péter (orig.: Politzer) (17 February 1905 – 16 February 1977) was a Hungarian mathematician. Sandor Lehoczky Sandor Lehoczky is an American amateur mathematician of Hungarian descent. Simon Sidon Simon Sidon or Simon Szidon (1892 in Versec, Kingdom of Hungary – 27 April 1941, Budapest, Hungary) was a reclusive Hungarian mathematician who worked on trigonometric series and orthogona... Steven Gaal Steven Alexander Gaal (born ca. 1923), (also known as István Sándor Gál or I. S. Gál) is a Hungarian-American mathematician and emeritus Professor of Mathematics at the University of... Steven Vajda Steven Vajda (August 20, 1901 - December 10, 1995) played an important role in the development of mathematical programming and operational research for more than fifty years. Sámuel Mikoviny Sámuel Mikoviny (1700 – 23 March 1750) was a renowned Hungarian mathematician, engineer, cartographer, and professor. Tamás Hausel Tamás Hausel (born 1972) is a Hungarian mathematician working in the areas of combinatorial, differential and algebraic geometry and topology. Tamás Szőnyi Tamás Szőnyi (July 23, 1957, Budapest) is a Hungarian mathematician, doing research in finite geometry. Tibor Gallai Tibor Gallai (born Tibor Grünwald, 15 July 1912 – 2 January 1992) was a Hungarian mathematician. Tibor Radó Tibor Radó (June 2, 1895 – December 29, 1965) was a Hungarian mathematician who moved to the USA after World War I. He was born in Budapest and between 1913 and 1915 attended the Polytech... Tibor Szele Tibor Szele Hungarian mathematician, working in combinatorics and abstract algebra. Vavrinec Benedikt of Nedožery Vavrinec Benedikt z Nedožier (Vavřinec Benedikt z Nudožer (Nedožer, Nedožier, Nudožerinus), Benedicti M. Lőrinc; 1555, Nedožery (Nedozser, Nádasér-Berzseny), Kingdom of Hungary, now Slovak... Vera T. Sós Vera T. Sós (born September 11, 1930) is a Hungarian mathematician, specializing in number theory and combinatorics. Vilmos Totik Vilmos Totik (Mosonmagyaróvár, March 8, 1954) is a Hungarian mathematician, working in classical analysis, harmonic analysis, orthogonal polynomials, approximation theory, potential theory. Zoltán Füredi Zoltán Füredi (Budapest, Hungary, 21 May 1954) is a Hungarian mathematician, working in combinatorics, mainly in discrete geometry and extremal combinatorics. Zoltán Pál Dienes Zoltán Pál Dienes (anglicized as Zoltan Paul Dienes) (1916- January 11, 2014) was a Hungarian mathematician whose ideas on education (especially of small children) have been popular in som... Zoltán Szabó Zoltán Szabó (November 24, 1965, Budapest, Hungary) is a professor of mathematics at Princeton University. Zoltán Tibor Balogh Zoltán "Zoli" Tibor Balogh (December 7, 1953 – June 19, 2002) was a Hungarian-born mathematician, specializing in set-theoretic topology. Zoárd Geöcze Zoárd Geöcze (1873-1916) was a Hungarian mathematician famous for his theory of surfaces (Horváth 2005:219ff). Zoárd Geőcze Zoárd Geőcze (1873–1916) was a Hungarian mathematician famous for his theory of surfaces (Horváth 2005:219ff). Zsolt Baranyai Zsolt Baranyai (June 23, 1948 in Budapest – April 18, 1978) was a Hungarian mathematician, working in combinatorics. Ákos Császár Ákos Császár (born 26 February 1924, Budapest) is a Hungarian mathematician, specializing in general topology and real analysis. Éva Tardos Éva Tardos (born 1957) is a Hungarian mathematician, winner of the Fulkerson Prize (1988), and professor of Computer Science at Cornell University.
{"url":"http://duckduckgo.com/c/Hungarian_mathematicians","timestamp":"2014-04-20T10:51:21Z","content_type":null,"content_length":"72947","record_id":"<urn:uuid:3c5f5097-9611-4836-89a7-5224bbb7687f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: A downward force of 8.4N is exerted on a -8.8uC charge. What are the magnitude and direction of the electric field? • one year ago • one year ago Best Response You've already chosen the best response. \[E=F/q\] E= 8.4N/ -8.8 X 10^6 C= Upward Best Response You've already chosen the best response. what would be the unit? Best Response You've already chosen the best response. thank you very much Best Response You've already chosen the best response. its unit will be Newton per coulomb Best Response You've already chosen the best response. E=F/q E=8.4 N/-8.8*10^6=-9.5*10^-7 N/C -ve sign indicates the direction of Electric field is vertically upward. Best Response You've already chosen the best response. if the sign is positive the direction will be downward? Best Response You've already chosen the best response. Best Response You've already chosen the best response. okay now i finally got the problem thank you. but this one is different: An electron is released from the rest in a uniform electric field and accelerates to the north at the rate of 115m/s^2. What are the magnitude and direction of electric field (E)? Best Response You've already chosen the best response. we know that F=ma, so on putting the value of m and a you will get force... and we know that the charge on electron...simply put all the values in eqn. E=F/q=ma/q you will get ur ans Best Response You've already chosen the best response. the acceleration would be_115m/s^2 right? what would be the mass? and our q would be the electrons charge? Best Response You've already chosen the best response. mass of electron is 9.1*10^-31kg and charge on electron is -1.6*10^-19C Best Response You've already chosen the best response. ans will be -6.5*10^-48 N /C Best Response You've already chosen the best response. this: (115m/s^2) ( 9.1*10^-31kg) divided by -1.6*10^-19C ? Best Response You've already chosen the best response. Best Response You've already chosen the best response. thank you so much Best Response You've already chosen the best response. how did you get this ( 9.1*10^-31kg) ? Best Response You've already chosen the best response. it is calculated value..you may learn it. u may also learn the charges and masses of another sub atomic particles Best Response You've already chosen the best response. is it constant? can u show me how did u get it? Best Response You've already chosen the best response. yes it is constant... and for further information please concern with wikipedia here u will get ur ans... Best Response You've already chosen the best response. how did u get the exponent -48? i got -10 ? Best Response You've already chosen the best response. yes, -10 Best Response You've already chosen the best response. how about this: What are the magnitude and direction of the electric force on an electron in a uniform electric field of strength 2360N/C that points due East. the charge of electron is 1.6 x10^ (-19) ? how come? Force=electric field x charge electric field=2360 Best Response You've already chosen the best response. what are u asking alfers101? Best Response You've already chosen the best response. The charge of an electron is always the same, it is a given ( and it should be negative though). You have to plug in. Pay attention to the units, write the units of the given. You have to still determine the direction. F= qE; plug in ; and since it is negative it should be in the opposite direction. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4fe50f40e4b06e92b8737a3e","timestamp":"2014-04-17T12:45:11Z","content_type":null,"content_length":"82926","record_id":"<urn:uuid:bf28fa07-a5e6-4d1a-aadc-5b1bf2d6c682>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
A graph is symmetric with respect to the x-axis if whenever a point is on the graph the point is also on the graph. The following graph is symmetric with respect to the -axis. The mirror image of the blue part of the graph in the -axis is just the red part, and vice versa. This graph is that of the curve . If you replace with , the result is , which mathematically shows that this graph is symmetric about the x-axis. Was this information helpful? Please add your Comment (Optional) E-mail Address (Optional) What is This question helps us to combat spam
{"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=MathApps/SymmetriesOfAGraph","timestamp":"2014-04-19T14:39:00Z","content_type":null,"content_length":"97038","record_id":"<urn:uuid:406659b4-2d78-40e7-972f-2e866526eda7>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Title: OPTIMAL RECOVERY OF SIGNALS FROM LINEAR MEASUREMENTS AND PRIOR KNOWLEDGE (EXTRAPOLATION, DETERMINISTIC, BAND-LIMITED, SPECTRAL, ESTIMATION) Author: CABRERA GARCIA, SERGIO DAVID The problem of band-limited extrapolation is studied in a general framework of estimation of a signal in an ellipsoidal signal class from the value of a linear transformation. The dissertation deals with finite-length sequences and consequently with the Discrete Fourier Transform for our frequency domain. An algorithm is proposed for defining the signal class from the data. Optimal Recovery theory is described for estimating the value of a desired linear transformation from a given linear transformation and a bound on the norm in a Hilbert space. The optimal estimation procedure requires that we find the minimum norm signal that satisfies the linear measurements. With additive errors, we require a regularized solution to the minimum norm problem. A filter class is an ellipsoidal signal class defined for band-limited sequences with a weighted frequency domain norm. This weight is the squared magnitude of a filter Abstract: function that defines the class. The minimum norm signal in the filter class that satisfies a given set of samples is the signal estimate. It wil usually have frequency contents that resembles that of the filter function. We next develop a procedure to define the filter from the given samples in a recursive manner. The estimate found at one iteration is used to define the filter of the class that is used to estimate at the next iteration. The new filter is a windowed version of the previous estimate, where the window is placed in the region of the given samples. At each iteration, this provides a smoothing of the previously estimated spectrum as well as a dependence of the filter on the data. A convergence analysis for the case where no windowing is done shows a tendency to obtain narrow-band spectra. The extension to two-dimensional signals is described and examples to illustrate this signal class modification algorithm as an interpolator/extrapolator and as a spectral estimator are provided. Citation: CABRERA GARCIA, SERGIO DAVID. (1985) "OPTIMAL RECOVERY OF SIGNALS FROM LINEAR MEASUREMENTS AND PRIOR KNOWLEDGE (EXTRAPOLATION, DETERMINISTIC, BAND-LIMITED, SPECTRAL, ESTIMATION)." Doctoral Thesis, Rice University. http://hdl.handle.net/1911/19052. URI: http://hdl.handle.net/1911/19052 Date: 1985
{"url":"http://scholarship.rice.edu/handle/1911/19052","timestamp":"2014-04-20T03:33:58Z","content_type":null,"content_length":"10263","record_id":"<urn:uuid:798e62ad-5a91-4048-825a-e5310a532293>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: how linear algebra is connected with projective geometry ?? • one year ago • one year ago Best Response You've already chosen the best response. Projective geometry is an extension of 'ordinary' geometry in the sense that points 'at infinity' are allowed to be represented. Normally we exclude, implicitly or otherwise, infinity from calculations that don't involve limiting procedures. It turns out that there are many shades of infinity. So if I say to you that "I am infinitely far away", you might reply "well OK, but in which direction?" With that in mind then for projective geometry as applied to 3D space you have four components representing vectors: \[V = (x,y,z,w)\]with rules available to determine the meaning of that representation depending on what 'w' is. The 'x', 'y' and 'z' have the standard sense. If w = 0 then V is understood as the point at infinity in the direction indicated by (x, y, z). If w is non-zero then V is understood to indicate the point at \[(\frac{ x}{w }, \frac{ y }{w }, \frac{ z }{w })= (X, Y, Z)\]and if you think in terms of limits then as w-> 0 each of x/w, y/w and z/w will increase without bound BUT retaining their original proportions amongst themselves. That is X/Y, X/Z and Y/Z - if any of those ratios have meaning - are constant regardless of w. You could visualise this limiting process ( w-> 0) as a vector in the direction (x, y, z) progressively stretching - as (X,Y,Z) - to an arbitrarily large length. With that construct in mind then it is clear that there is an infinite number of points at infinity in 3D space - one for each possible direction from the origin. Now linear algebra as we are studying has concepts of angle and distance, and these don't ( usefully ) translate to projective geometry - which allows, say, parallel lines to meet. So if I have three distinct points all at ( different ) infinities, then what does the triangle inequality mean ? Or an inner product, and thus orthogonality? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5107aff8e4b08a15e7845b7e","timestamp":"2014-04-21T02:34:05Z","content_type":null,"content_length":"29417","record_id":"<urn:uuid:9e3b05d3-9ab0-453d-9f80-b7526e21d00b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
References for References for Lev Pontryagin Version for printing 1. P S Aleksandrov, V G Boltyanskii, R V Gamkrelidze and E F Mishchenko, Lev Semenovich Pontryagin (on his sixtieth birthday) (Russian), Uspekhi Mat. Nauk 23 (6) (1968), 187-196. 2. P S Aleksandrov, V G Boltyanskii, R V Gamkrelidze and E F Mishchenko, Lev Semenovich Pontryagin (on his sixtieth birthday), Russian Math. Surveys 23 (6) (1968), 143-152. 3. E J McShane, The Calculus of Variations from the beginning through Optimal Control Theory, SIAM Journal on Control and Optimization 27 (5) (1989), 916-939. 4. Pontryagin, Notices of the American Mathematical Society 35 (1988), 1002. JOC/EFR January 1999 The URL of this page is:
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/References/Pontryagin.html","timestamp":"2014-04-20T05:45:24Z","content_type":null,"content_length":"1778","record_id":"<urn:uuid:432f3f7e-8bff-46db-a243-4677edd30d43>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Adding and Subtracting Variables Examples Page 1 Solve the equation - x - 2 = - 2x + 1. We would like to have one x all by itself on one side of the equation. If we add x to either side of the equation, we find that - 2 = - x + 1 Since we would like to get a positive x on one side of the equation—we made a New Year's resolution to be more upbeat—we add another x to either side: - 2 + x = 1 All that is left is to add 2 to each side and write x = 3 We could also have added 2x to each side from the start:
{"url":"http://www.shmoop.com/equations-inequalities/adding-subtracting-variables-examples.html","timestamp":"2014-04-16T04:23:28Z","content_type":null,"content_length":"37193","record_id":"<urn:uuid:1304284a-02f2-4431-9927-4a18950b34c3>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
View topic - So how important is algebra? Homeschool World Forum Read thousands of forum posts on topics such as homeschool law, getting started, curriculum, special needs, homeschool vs public school, and much, much more! Author Message lisalinnay Posted: Tue Oct 16, 2007 4:14 pm Post subject: So how important is algebra? User I have a 14 year old son who hates math. He can do it, but he hates it. He's not at all likely to go into a career where he'll need algebra. He's very adventurous, loves the outdoors, loves to fight for justice, and he's an awesome writer. He writes like a novelist. My husband is in law enforcement, and he'll likely follow in his footsteps or become a conservation officer, or writer, or writer about nature, etc. But, he's only 14, so who knows? Joined: 14 Jul 2007 Anyway, my question is - how important is algebra? Posts: 32 _________________ Location: Psalm 86:11 Teach me your way, O Lord, Michigan and I will walk in your truth. . . Academic Software and LeapFrog Toys at Discount prices ncmom Posted: Tue Oct 16, 2007 10:47 pm Post subject: User In my opinion if you can do basic algebra...fill in the missing number type stuff that is all you need. All the other math you will need is real world type math and I don't mean the type where they give you the word problems about mixing coffee beans. I mean to cook, sew, build something, read a tape measure, things like that. I took algebra in high school and college and have yet to actually use what I was taught. I have my own way of solving the problems I come across in life and it is not by using the formulas I had to memorize in class. I guess it Joined: 13 really depends on what you end up doing in life as to how important it is. If he can function in life and in the career he chooses doesn't require he have it then I wouldn't worry about Jul 2007 it. However, if he is going to college he will need to pass it either by testing out of it or taking it as a course as it is required for all the degrees that I have ever read about. So if Posts: 321 college is in his future he may want to at least go through alg. 1 so he isn't lost when he gets to college. Eastern NC Theodore Posted: Wed Oct 17, 2007 1:41 am Post subject: Moderator Algebra is quite useful for chemistry and physics, and it's probably going to be a requirement just for graduating from high school, never mind making it through college. Given, he won't need algebra much in real life, but it's just one of those things he'll have to grind through anyway, just like I had to work through a bunch of useless humanities credits (I was a comp sci major). Will I ever need to know anything about poetry? Not in a million years, but it was a requirement and I did it anyway. Joined: 06 _________________ Oct 2005 Homeschool Articles - Events - Support Groups Ginia Posted: Mon Oct 22, 2007 6:37 pm Post subject: User Whether you feel that Algebra will be useful in daily life or not, learning and practicing the higher level maths does a couple of very important things for the student: (1) It teaches the student to think things through logically, step-by-step, to get to the final answer. (2) It teaches him to do persevere through an entire problem until he reaches a solution. Joined: 22 Oct 2007 When my daughter was in Algebra II, it was clear she would never use this math in her chosen career. Yet we persevered. She learned how to think things through. These critical thinking Posts: 6 skills helped her tremendously in writing essays for college. And in almost any 4-year college, math is a requirement for the degree. On another note: I have a nephew who struggled with math. He didn't want to go to college. He wanted to become a carpenter, and when he applied to be apprenticed, he had to take - guess what! - a math test. Ginia - Author of the online program: Preparing Your Student to Win College Scholarships, a blueprint for parents. mark_egp Posted: Wed Oct 24, 2007 9:22 am Post subject: User Algebra is essential. Not everyone will solve equations for a living. But you need to be able to think in terms of "functions" - meaning how one thing varies as a result of a change to another thing on which the first thing is dependent. To have an intuitive sense of this is priceless. Algebra trains this type of thinking. "Functional" thinking works in politics, relationships, science, finances, etc - really in all of life. Algebra is just learning to convert various functions into different forms so they can be understood and manipulated more Joined: 14 easily. This idea of "transforming" one problem into another type is also essential. Some problems are truly unsolveable as presented, but clever rethinking may show how it can be solved Aug 2007 from another perspective - another essential life skill! Posts: 57 Location: Algebra is wonderful to train proper thinking, as there is always an "objectively" correct answer - not a subjective "I think/you think" impasse common to so many of the "soft" sciences. Austin, _________________ Texas USA Mark - http://www.everygoodpath.net/ Homeschool ideas http://www.everygoodbook.com/ Classic Book lists easy to search/sort for history, literature, and reading lesson plans Bob Hazen Posted: Tue Dec 25, 2007 11:47 am Post subject: the training asepcts of ALL higher math Moderator As a former home school dad, as a current public school teacher, and as a contributor to the PHS magazine and forum, I'm here to say that ALL higher math can be of great help for students in several ways. Joined: 28 I'm going to skip the "pragmatic" approach - "Well, you'll use algebra when you..." or "You'll need trig when you..." While I don't dismiss the pragmatic aspects of higher math (and it is Oct 2005 pragmatic), I want to emphasize the "training" aspect of higher math. Posts: 28 I tell my students that higher math is marvelous for three things: 1. to learn how to pay attention to detail (Was that exponent a 3 or a 5? Was that a positive or a negative? Is this sine or cosine?); 2. to learn to keep the big picture in mind (Wait - the square root of a negative number has no real number answer); 3. to learn how to solve problems. Then I tell my high school kids, "I'll break the news for you: Many of you after you leave this precalculus class will NEVER use trigonometry again, ever. BUT.... but... but... if you realize that math is a training program to do #1 and #2 and #3 above, then I've got another bit of news for you. ALL OF YOU indeed WILL spend the rest of your lives needing to pay attention to detail (Was that bill due on the 14th or the 24th?); needing to keep the big picture in mind (Wait - call the customer - nobody could possibly want to order 14,000 cupcakes); and needing to solve problems (One of the four boxes on the shipping order isn't marked - how do we figure out which one? What's the shortest route for me to take to deliver these orders in as little time as possible? as little mileage as possible?). Math is a training program in which you have trememdous opportunity to train your eyes, your brain, and your will to pay attention to detail, to keep the big picture in mind, and to solve problems. Hope this helps! Bob Hazen Bob Hazen's Algebra for Kids All times are GMT - 6 Hours (CST) You cannot post new topics in this forum You cannot reply to topics in this forum You cannot edit your posts in this forum You cannot delete your posts in this forum You cannot vote in polls in this forum
{"url":"http://www.home-school.com/forums/viewtopic.php?p=11717","timestamp":"2014-04-17T15:32:41Z","content_type":null,"content_length":"53252","record_id":"<urn:uuid:f768b93b-58fd-4793-a3c6-3e72d5f97444>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
show the expressions are rational numbers March 5th 2008, 08:39 PM show the expressions are rational numbers give a, b and c are rational numbers, show the expressions are rational. i.) a^2 + b^2 + c ^2 ii.) (5a^3 +6b^4)/(a^2+b^2) iii.) a + a^2 + a^3 + ... +a^10 I had alot of trouble proving this, and it was not enough work. please explain these. March 6th 2008, 09:25 AM Write a, b and c as ratio of integers, then substitute these into the expressions and rearrange to show that they are the ratios of integers. For example let $a=e/f$, $b=g/h$, $c=i/j$, where $e,\ f,\ g,\ i$ and $j$ are integers. $a^2 + b^2 + c ^2=\frac{e^2}{f^2}+\frac{g^2}{h^2}+\frac{i^2}{j^2} =\frac{e^2h^2j^2+g^2f^2j^2+i^2f^2h^2}{f^2h^2j^2}$ March 6th 2008, 01:34 PM Speaking generally there are a lot of different operations one can do to rational numbers without leaving the rational world. The product of any finite number of rational numbers is rational: let $q_1, q_2, ... q_k$ be rational numbers. say $q_i=\frac{n_i}{m_i}, \forall i, \text{ and } m_i, n_i$ are integers. then $n_1n_2...n_k \text{ and } m_1m_2...m_k$ are integers therefore $q_1 q_2 ... q_k$ is rational. Same kind of proofs can be used to show the sum of 2 rationals is rational, which extends inductively to the finite sum of rationals. The division of rationals remaining rational is a direct consequence of the multiplication of rationals remaining rational. So it is not hard to see that any finite composition of such operations on rationals results in a rational. Proving these general statements might be easier than specific examples which can get very messy. March 9th 2008, 02:44 PM iii.) a + a^2 + a^3 + ... +a^10 i could let a = f/e then = (f/e)^2 then no matter what to the 10th power it is rational. is that correct?
{"url":"http://mathhelpforum.com/discrete-math/30134-show-expressions-rational-numbers-print.html","timestamp":"2014-04-17T22:20:58Z","content_type":null,"content_length":"8243","record_id":"<urn:uuid:c29dc53e-b37f-47ed-8793-f313c2351d2e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Find theta from the cross product and dot product of two vectors 1. The problem statement, all variables and given/known data If the cross product of vector v cross vector w = 3i + j + 4k, and the dot product of vector v dot vector w = 4, and theta is the angle between vector v and vector w, find tan(theta) and theta. 2. Relevant equations vector c = |v||w| sin(theta) where vector c is the cross product of v and w. 3. The attempt at a solution I'm assuming you have to split the cross product back into the two original vectors and then calculate the angle but I'm not sure how to go from cross product to 2 vectors. Please help!
{"url":"http://www.physicsforums.com/showthread.php?t=468976","timestamp":"2014-04-16T10:37:00Z","content_type":null,"content_length":"53524","record_id":"<urn:uuid:6ae49f3f-0c06-4ce8-bb0a-13d2527ac9f9>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Complex Dynamics: Twenty-Five Years after the Appearance of the Mandelbrot Set Table of Contents &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp Contemporary Chaotic behavior of (even the simplest) iterations of polynomial maps of the complex plane was known for almost one hundred years due to the pioneering work of Fatou, Julia, and Mathematics their contemporaries. However, it was only twenty-five years ago that the first computer generated images illustrating properties of iterations of quadratic maps appeared. These images of the so-called Mandelbrot and Julia sets immediately resulted in a strong resurgence of interest in complex dynamics. The present volume, based on the talks at the 2006; 206 pp; conference commemorating the twenty-fifth anniversary of the appearance of Mandelbrot sets, provides a panorama of current research in this truly fascinating area of mathematics. Graduate students and research mathematicians interested in complex analysis and dynamical systems. Volume: 396 List Price: US$61 Member Price: Order Code: CONM/
{"url":"http://www.ams.org/bookstore-getitem/item=CONM-396","timestamp":"2014-04-18T01:32:49Z","content_type":null,"content_length":"14767","record_id":"<urn:uuid:270b212c-90ce-40a1-bdc4-a40cefcae4fa>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Walnut Park, CA Algebra Tutor Find a Walnut Park, CA Algebra Tutor ...One thing I always practice in my classroom is starting from a level where the students can understand, and moving forward from there. I believe that building a good foundation is the key to success in any subject. I have come up with many different tricks for helping students remember key idea... 11 Subjects: including algebra 2, algebra 1, physics, geometry ...Thanks and feel free to contact me and ask questions!As a dual citizen of South Korean and the United States, I have lived in both countries and have both English and Korean as my first language. I travel to Korea frequently as I have family there. I also have experience teaching Korean to American business executives and believe that I am qualified to to teach Korean. 25 Subjects: including algebra 1, English, reading, writing I consider myself a natural teacher having inherited the love of teaching from my parents, both educators. I am very patient, and good at explaining difficult concepts to those unfamiliar with them. I am also good with young children. 26 Subjects: including algebra 1, algebra 2, reading, English ...In high school, I was awarded a four-year full-tuition scholarship to The Catholic University of America, where I earned an M.A. in rhetoric and composition and a B.A. in literature. I graduated magna cum laude and was designated a University Scholar for my excellence in the honors program. I a... 25 Subjects: including algebra 1, algebra 2, English, geometry Hello everyone, I'm new to the tutoring business. I've had experience teaching my college sister math so I felt like I can help other students with math or history or fitness since I enjoy all three. I as an individual feel like I can help any one from K-9th grade since I'm still a currently a senior in my last year in high school. 23 Subjects: including algebra 1, algebra 2, Spanish, reading Related Walnut Park, CA Tutors Walnut Park, CA Accounting Tutors Walnut Park, CA ACT Tutors Walnut Park, CA Algebra Tutors Walnut Park, CA Algebra 2 Tutors Walnut Park, CA Calculus Tutors Walnut Park, CA Geometry Tutors Walnut Park, CA Math Tutors Walnut Park, CA Prealgebra Tutors Walnut Park, CA Precalculus Tutors Walnut Park, CA SAT Tutors Walnut Park, CA SAT Math Tutors Walnut Park, CA Science Tutors Walnut Park, CA Statistics Tutors Walnut Park, CA Trigonometry Tutors Nearby Cities With algebra Tutor August F. Haw, CA algebra Tutors Baldwin Hills, CA algebra Tutors Bicentennial, CA algebra Tutors Broadway Manchester, CA algebra Tutors Dockweiler, CA algebra Tutors Firestone Park, CA algebra Tutors Green, CA algebra Tutors Greenmead, CA algebra Tutors Hancock, CA algebra Tutors Hollyglen, CA algebra Tutors Huntington Park algebra Tutors Lennox, CA algebra Tutors Rosewood, CA algebra Tutors Sanford, CA algebra Tutors South Gate algebra Tutors
{"url":"http://www.purplemath.com/Walnut_Park_CA_Algebra_tutors.php","timestamp":"2014-04-19T19:45:26Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:2f612960-f971-46ec-9277-5251de1e8883>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Berlin: Springer (ISBN 978-3-540-34513-8/hbk). xvii, 500 p./v.1; xiii, 575 p./v.2. EUR 119.95/net; SFR 203.00; $ 159.00; £ 92.50 (2007). One of the key aspects of graduate instruction in ‘Real Analysis’ is to produce a $\sigma$-additive set function, or measure, quickly to define and then establish the main Lebesgue-Vitali limit theorems as they are of enormous interest in many applications. The most efficient method now available is the one devised by C. Carathéodory, who produced it after analyzing Lebesgue’s ideas and results. This important and very general approach needs some reflection, which prompted E. Hewitt to express: “how Carathéodory came to think of this definition seems mysterious...(but it) has many useful implications.” However, Dunford-Schwartz and many others make this the central approach to the subject, after putting some light do dispel the ‘mystery’ by way of outlining Lebesgue’s prior successful efforts in introducing the concept of modern measure and integral as distinct from the Riemann-Darboux approach. The main thrust of the very detailed account of this subject by Bogachev in two volumes making up of approximately 1100 pages, with 2038 references listed, is a scholarly rendering of its many sided view, some highlights of which will be commented on here. The two volumes are nearly evenly divided each having five (long) chapters. The first volume concentrates on the basic construction of measures and the (Lebesgue) integral, the fundamental limit theorems, the Radon-Nikodym and Fubini-Tonelli theorems as well as the ${L}^{p}$, $p\ge 1$, spaces and their duality. The last chapter of the first volume contains some specialized material including (the weak or Sobolev-L. Schwartz) differentiation, change of variables, weighted (Hardy-Littlewood type) inequalities and BMO spaces (but not the (${H}^{1}$, BMO) duality). Excluding the last chapter, the material contains a standard graduate course coverage of the subject in the U.S. universities, and is detailed. The author includes many extensions and sidelights in a long section called ‘Supplements and exercises’ for each chapter. The readers are encouraged to glance through the material, if there is no time, to get at least a feeling on how the use of measure and integral enables new insights to both classical and contemporary applications. There are also brief but useful historical accounts of many of the contributors of the subject. A few glaring omissions (or oversights) are the inadequate emphasis on Carathéodory’s construction of outer measures which would explain the significance as well as a clarification of the concept and its power. Also, the crucial aspect of Fubini’s theorem, amplified by M. H. Stone, now called the Fubini-Stone theorem, does not appear here. It has a significant part to play in product measure and integral construction without $\sigma$-finite restriction. On the other hand, the Lebesgue integral being an absolute one and the Riemann’s nonabsolute, the gap between these two was bridged by the (independent) construction of Henstock and Kurzweil around 1960, explains the subject better and the author has included only a brief account of this matter in Chapter 5. Thus Volume 1 contains an essentially complete treatment of the basic subject that every student should know in starting a possible research career in analysis. Turning to Volume 2, it may be observed that its five chapters concentrate on the interplay of measure and topology, as well as topological measures. This naturally covers a vast area of mathematical analysis. Thus Chapter 6 is devoted to systematic study of Baire, Borel, analytic or Souslin classes of sets, and includes some measurable selection theorems. With topology one can consider regularity properties of measures, and particularly those which can be determined from their values on compact subsets of given sets. These are Radon measures. Their tightness properties, and the projective systems (and limits) prompted by Kolmogorov, and Fubini-Jessen theory on products of the spaces are treated in Chapter 7. Here probability theory enters crucially. Alternatively, one can consider the integral as a linear functional on a space of bounded (scalar) functions and start from such functionals and then produce measures which is Daniell’s method. The author indicated this also, although this could have been deduced efficiently and quickly from Choquet’s capacity theory which was sketched, but this possibility was not considered. Only in the supplements, Carathéodory metric outer measures, as well as capacity were briefly discussed. Then Chapter 8 takes up weak convergence of sequences or sets of measures, related to weak${}^{*}$-convergence of Banach spaces, which plays an important part in probability theory, and this chapter is devoted to various aspects of the subject. The whole theory of function spaces such as ${L}^{p}$ can be replaced by spaces of set functions since measurability questions play a lesser role, and some of this is discussed. Also Bochner-Kolmogorov-Prokhorov-Sazonov theory of projective systems fits in here, and its discussion with some of its analysis is included. Chapter 9 deals with (point) transformations on spaces and their (induced) set mappings leading to translation invariance when the space is a locally compact group (Haar measures) and specializations such as ‘quasi-invariance’. There are several supplements. In studying regularity of the mappings and measures, conditional probability functions appear significantly, and they are discussed in the final chapter. Since a conditional measure is really a vector measure, the regularity becomes important so as to treat them as ordinary measures in the form of ‘transition probability functions’. From here on the material leads to more advanced parts of probability theory, including liftings and martingales. These are outlined, to make the discussion understandable, as the true discussion needs many more pages. This description shows that a study of ‘Real Analysis’ leads to most parts of mathematical analysis, and the material grows without bounds so that an author has to restrain the impulse and cover some parts in depth and some superficially. As the late Prof. Mark Kac used to say that Probability is a major customer of Measure theory, and many new results are motivated by it. If an author looks on probability favorably, as the author does, having already written a good sized volume on “Gaussian measures” (1997; Zbl 0883.60032)], then measure theory gets a better than a fair treatment in this sense. The general point of view is somewhat along the lines of the reviewer’s book on the subject [“Measure theory and integration” (1987; Zbl 0619.28001)], second edition (2004; Zbl 1108.28001), but the author covers more topics. There is a good set of references having a balance of Russian works and the western contributions, a fact which unfortunately is not always the case in books by many authors. The treatment is reader friendly and I would recommend that each graduate real analysis student own both volumes, at least volume 1 for study (the publisher should allow this), and they make a good reference set to keep on ones shelf. Each chapter starts with an appealing quote from a stalwart (a mathematician, a poet, or a literary person). It is therefore appropriate to end this review with another quote from a literary giant (R. Tagore): “Where the words come out of the depths of truth; where the mind is led forward by thee into ever-widening thought and action; into that heaven of freedom”; shall we aspire in this subject. 28-02 Research monographs (measure and integration) 28A05 Classes of sets 28A35 Measures and integrals in product spaces 60A10 Probabilistic measure theory 60B10 Convergence of probability measures 60B99 Probability theory on general structures
{"url":"http://zbmath.org/?format=complete&q=an:1120.28001","timestamp":"2014-04-19T22:12:01Z","content_type":null,"content_length":"29342","record_id":"<urn:uuid:a4b30bc6-ee55-4273-9cfa-63c5126d3b1e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter Listing | Return to 1997 Topical Index Multivariate Analysis In the second step of the analysis, multivariate analysis was performed to find out to what extent expected consequences of growth could explain growth willingness, and what specific expectancy variables had smaller or larger effect on growth willingness. The contingencies from the previous analyses were used to divide the sample, since it is possible that the pattern of relationships differ between industries, size brackets and age groups. The results from the regressions are displayed in Table 5. TABLE 4 Comparison Of Means For Growth Willingness And Expected Consequences Of Growth For Firms Of Different Industries. │ │Service n=340│Manufacturing n=571│Retail n=246│ │Growth willingness^a │ │ │ │ │Moderate growth │5.46 │5.39 │5.71^c │ │Substantial growth │4.54 │4.48 │4.82 │ │Expected consequences^b │ │ │ │ │Workload │2.64 │2.77 │2.87^c │ │Work tasks │2.80 │3.12^c │3.05^c │ │Employee well-being │2.53 │2.59 │2.74^c │ │Personal income │3.75 │3.70 │3.78 │ │Control │2.49 │2.54 │2.69^c │ │Independence │3.03 │3.13 │3.25^c │ │Survival of crises │2.44 │2.36 │2.48 │ │Product/service quality │2.71 │2.95^c │3.00^c │ Note: One-way ANOVA with Bonferroni test is used in the analysis. a= growth willingness is measured on a 7-point scale, 1 indicating a strongly negative and 7 a strongly positive attitude. b= expected consequences of growth is measured on a 5-point scale, 1 indicating a strongly negative and 5 a strongly positive attitude. c= p< 0.05 for difference to lowest group. The adjusted explained variance, ranging from 0.16 to 0.28, indicates that expected consequences have an influence on growth willingness and that the proposed model is relevant. The results reveal that non-economic concerns are very important determinants of growth willingness. Personal income is not the most important variable in any regression, suggesting that money is not the most important motivator. In all regressions, employee well-being is the most important explanatory variable, giving a remarkably consistent result. Workload is relatively unimportant in all regressions as well as work-tasks, with the exception of the smallest size bracket. The pattern concerning independence is the opposite, it has an effect in all regressions except in the smallest size bracket and the service industry, where the largest proportion of small firms are found. Significant standardised regression coefficients for the remaining explanatory variables have the same signs across all analyses which indicates that the regressions are stable. However, their rank order and magnitude vary depending on how the sample is divided. Interpretation of these coefficients should be restrictive. Considering the moderate explained variance and the magnitude of the employee well-being coefficient in all regressions, relatively little is explained by other variables in all regressions. TABLE 5 Linear Regression Results For The Effect Of Expected Consequences Of Growth On Growth Willingness When The Sample Is Divided Based On The Three Contingencies Industry, Size And Age. │ │Manuf. n=571│Service n=340│Retail n=246│5-9 emp n=326│10-19 emp n=479│20-49 emp n=353│Old firms n=771│Young firms n=372│ │Workload │.07 │.08 │.00 │.07 │.08 │-.01 │.07* │.07 │ │Work tasks │.04 │.06 │.02 │.13* │.05 │-.05 │.01 │.10 │ │Employee well-being │.23*** │.27*** │.23*** │.30*** │.17*** │.29*** │.28*** │.22** │ │Personal income │.10* │.10* │.10 │.11* │.06 │.13** │.12*** │.05 │ │Control │.08 │.10* │.08 │.07 │.12** │.08 │.10** │.04 │ │Independence │.11* │.09 │.14* │.02 │.15** │.13* │.10** │.13* │ │Survival of crises │.09* │.10 │.04 │.07 │.07 │.13* │.09** │.06 │ │Product/service quality │.06* │.04 │.09 │.06 │.09 │.00 │.03 │.11* │ │Adj. R^2 │.22 │.28 │.16 │.26 │.24 │.21 │.25 │.20 │ Note: Forced entry of independent variables is used. Standardised regression coefficients are displayed in the Table. *= p< 0.05; **= p< 0.01; ***= p< 0.001 Further support for the latter result is provided when the regression is run for the full sample, which is displayed in Table 6. While significant effects are obtained for all expected consequences but work tasks, the magnitude of the coefficients for variables other than employee well-being are small in magnitude. Due to the large number of cases in the full sample, significant results are easier to obtain. When the contingencies are added to the equation as dummy variables they alter the equation only to a small extent. Albeit significant on two instances, their standardised regression coefficients are generally low and the explained variance is not increased. In all, the conclusion is that explanatory variables are not dramatically different in different industries, size brackets or age groups. As mentioned earlier, data were collected from three different samples during a ten year period. There are reasons to analyses the samples separately and compare the results. First, this makes it possible to check the stability of the results, if the results are the same for all three samples, conclusions will be more valid. Second, data were collected during different stages of the business cycle which may affect the attitudes of the respondents. Different explanatory variables may be important during different phases of the business cycle. A pure trend effect over time is also The results of the analyses of the three different samples are displayed in Table 7. Employee well-being is by far the most important explanatory variable in all samples, whereas the magnitude and rank order of all other explanatory variables vary. In all, the relationships are relatively stable over-all, but not in detail. No clear cyclical or trend pattern emerges over this time period. Top of page | Chapter Listing | Return to 1997 Topical Index © 1997 Babson College All Rights Reserved Last Updated 06/01/98
{"url":"http://fusionmx.babson.edu/entrep/fer/papers97/wiklund/wik5.htm","timestamp":"2014-04-18T11:43:07Z","content_type":null,"content_length":"12502","record_id":"<urn:uuid:0e78b18d-eac9-40ee-9cf1-3c274321df2c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] Re: Are type synonym families really equivalent to fundeps? [Haskell-cafe] Re: Are type synonym families really equivalent to fundeps? Tom Schrijvers Tom.Schrijvers at cs.kuleuven.be Tue Sep 4 03:32:11 EDT 2007 > I think one can change my earlier proposal to include this, but there are zero > examples of this syntax, so I am likely making up some syntax that does not exist: > type instance (A_T a ~ A_T (RHS (Var x) a)) => A_B_C (RHS (Var x) a) (x->b) = > A_B_C a b > type instance (A_T a ~ A_T (RHS (Var x) a)) => A_C_B (RHS (Var x) a) c = x -> > A_C_B a c > type instance (A_T a ~ A_T (RHS [t] a)) => A_B_C (RHS [t] a) b = A_B_C a b > type instance (A_T a ~ A_T (RHS [t] a)) => A_C_B (RHS [t] a) c = A_C_B a c I am afraid you are right: this syntax does not exist. Type family instances cannot have a context. They are unconditional. Tom Schrijvers Department of Computer Science K.U. Leuven Celestijnenlaan 200A B-3001 Heverlee tel: +32 16 327544 e-mail: tom.schrijvers at cs.kuleuven.be More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2007-September/031380.html","timestamp":"2014-04-16T22:31:48Z","content_type":null,"content_length":"3912","record_id":"<urn:uuid:235cfa05-221c-456a-ac98-5d950acf50b5>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Andrew Gelman’s Top Statistical Tip Andrew Gelman writes: If I had to come up with one statistical tip that would be most useful to you–that is, good advice that’s easy to apply and which you might not already know–it would be to use transformations. Log, square-root, etc.–yes, all that, but more! I’m talking about transforming a continuous variable into several discrete variables (to model nonlinear patterns such as voting by age) and combining several discrete variables to make something [more] continuous (those “total scores” that we all love). And not doing dumb transformations such as the use of a threshold to break up a perfectly useful continuous variable into something binary. I don’t care if the threshold is “clinically relevant” or whatever–just don’t do it. If you gotta discretize, for Christ’s sake break the variable into 3 categories. I agree (and wrote an article about it). Transforming data is so important that intro stats texts should have a whole chapter on it — but instead barely mention it. A good discussion of transformation would also include use of principal components to boil down many variables into a much smaller number. (You should do this twice — once with your independent variables, once with your dependent variables.) Many researchers measure many things (e.g., a questionnaire with 50 questions, a blood test that measures 10 components) and then foolishly correlate all independent variables with all dependent variables. They end up testing dozens of likely-to-be-zero correlations for significance. Thereby effectively throwing all their data away — when you do dozens of such tests, none can be trusted. My explanation why this isn’t taught differs from Andrew’s. I think it’s pure Veblen: professors dislike appearing useful and like showing off. Statistics professors, like engineering professors, do less useful research than you might expect, so they are less aware than you might expect of how useful transformations are. And because most transformations don’t involve esoteric math, writing about them doesn’t allow you to show off. In my experience, not transforming your data is at least as bad as throwing half of it away, in the sense that your tests will be that much less sensitive. Alex Chernavsky Says: March 30th, 2010 at 6:40 am And speaking of statistics, here’s an interesting debunking of John Gottman’s research into marriage & divorce: Matt Weber Says: March 30th, 2010 at 9:44 am Thanks for the links to the articles. I’m about to run a fairly large test battery using a number of different types of measure (accuracy, RT, differences in RT) and different tests of related abilities, so I’ll be needing to think hard about both transformations and principal components in the weeks to come. In your experience, does using transformations and PCs make reviewers skittish? I could easily imagine people wondering why you transformed the data (cf. “less aware than you might expect of how useful transformations are”), or being disinclined to believe the results of a statistical test that wasn’t significant on the raw data. seth Says: March 31st, 2010 at 6:01 pm About 10-20% of reviewers in my experience are bothered by transformations. I simply explain to the editor the importance and acceptedness of transformations. I haven’t had a problem.
{"url":"http://blog.sethroberts.net/2010/03/30/andrew-gelmans-top-statistical-tip/","timestamp":"2014-04-20T10:48:04Z","content_type":null,"content_length":"62110","record_id":"<urn:uuid:0174581f-4f59-4627-b6e9-fbc56edc0ced>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Overview Package Class Tree Deprecated Index Help PREV CLASS NEXT CLASS FRAMES NO FRAMES SUMMARY: NESTED | FIELD | CONSTR | METHOD DETAIL: FIELD | CONSTR | METHOD Class Ray All Implemented Interfaces: IGeometry, IRay, IRay2, com.esri.arcgis.interop.RemoteObjRef, IClone, ISupportErrorInfo, Serializable public class Ray extends Object implements com.esri.arcgis.interop.RemoteObjRef, IRay, IRay2, IGeometry, IClone, ISupportErrorInfo A 3D ray that begins at a point and extends infinitely along a line in one direction only. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux See Also: │ Constructor Summary │ │ Ray() │ │ │ Constructs a Ray using ArcGIS Engine. │ │ │ Ray(Object obj) │ │ │ Deprecated. As of ArcGIS 9.2, replaced by normal Java casts. │ │ │ Ray theRay = (Ray) obj; │ │ │ Method Summary │ │ void │ assign(IClone src) │ │ │ Assigns the properties of src to the receiver. │ │ boolean │ equals(Object o) │ │ │ Compare this object with another │ │ IClone │ esri_clone() │ │ │ Clones the receiver and assigns the result to *clone. │ │ void │ geoNormalize() │ │ │ Shifts longitudes, if need be, into a continuous range of 360 degrees. │ │ void │ geoNormalizeFromLongitude(double longitude) │ │ │ Normalizes longitudes into a continuous range containing the longitude. │ │ static String │ getClsid() │ │ │ getClsid. │ │ int │ getDimension() │ │ │ The topological dimension of this geometry. │ │ IEnumIntersection │ getEnumIntersect(IGeometry targetGeometry) │ │ │ Not implemented at this release. │ │ IEnvelope │ getEnvelope() │ │ │ Creates a copy of this geometry's envelope and returns it. │ │ int │ getGeometryType() │ │ │ The type of this geometry. │ │ IPoint │ getOrigin() │ │ │ The origin point of the ray. │ │ IPoint │ getPointAtDistance(double distance) │ │ │ Constructs a point at a distance along the ray. │ │ ISpatialReference │ getSpatialReference() │ │ │ The spatial reference associated with this geometry. │ │ IVector3D │ getVector() │ │ │ The direction vector of the ray. │ │ int │ hashCode() │ │ │ the hashcode for this object │ │ void │ interfaceSupportsErrorInfo(GUID riid) │ │ │ interfaceSupportsErrorInfo │ │ void │ intersect(IGeometry targetGeometry, IPointCollection intersectionPoints) │ │ │ Returns a point collection containing all points of intersection, in order along the ray. │ │ boolean │ intersects(IGeometry targetGeometry) │ │ │ Indicates if the ray intersects the target geometry. │ │ boolean │ isEmpty() │ │ │ Indicates whether this geometry contains any points. │ │ boolean │ isEqual(IClone other) │ │ │ Returns TRUE when the receiver and other have the same properties. │ │ boolean │ isIdentical(IClone other) │ │ │ Returns TRUE when the receiver and other are the same object. │ │ void │ project(ISpatialReference newReferenceSystem) │ │ │ Projects this geometry into a new spatial reference. │ │ void │ queryEnvelope(IEnvelope outEnvelope) │ │ │ Copies this geometry's envelope properties into the specified envelope. │ │ void │ queryFirstIntersection(IGeometry targetGeometry, IPoint intersectionPoint) │ │ │ Returns the first point of intersection between the ray and the target geometry. │ │ void │ queryOrigin(IPoint vectorOrigin) │ │ │ Sets a point equal to the ray's origin. │ │ void │ queryPlaneIntersection(_WKSPointZ pPlaneNormal, double d, IPoint pPoint) │ │ │ Returns the point of intersection between the ray and the target plane. │ │ void │ queryPointAtDistance(double distance, IPoint point) │ │ │ Queries a point at a distance along the ray. │ │ void │ queryVector(IVector3D directionVector) │ │ │ Sets a vector equal to a unit vector with the same direction as the ray. │ │ void │ setEmpty() │ │ │ Removes all points from this geometry. │ │ void │ setOrigin(IPoint vectorOrigin) │ │ │ The origin point of the ray. │ │ void │ setSpatialReferenceByRef(ISpatialReference spatialRef) │ │ │ The spatial reference associated with this geometry. │ │ void │ setVector(IVector3D directionVector) │ │ │ The direction vector of the ray. │ │ void │ snapToSpatialReference() │ │ │ Moves points of this geometry so that they can be represented in the precision of the geometry's associated spatial reference system. │ │ Methods inherited from interface com.esri.arcgis.interop.RemoteObjRef │ │ getJintegraDispatch, release │ public Ray() throws IOException, Constructs a Ray using ArcGIS Engine. IOException - if there are interop problems UnknownHostException - if there are interop problems public Ray(Object obj) throws IOException Deprecated. As of ArcGIS 9.2, replaced by normal Java casts. Ray theRay = (Ray) obj; Construct a Ray using a reference to such an object returned from ArcGIS Engine or Server. This is semantically equivalent to casting obj to Ray. obj - an object returned from ArcGIS Engine or Server IOException - if there are interop problems public static String getClsid() public boolean equals(Object o) Compare this object with another equals in class Object public int hashCode() the hashcode for this object hashCode in class Object public void queryOrigin(IPoint vectorOrigin) throws IOException, Sets a point equal to the ray's origin. Returns the Origin of the Ray into the input Point. Note: The output geometry must be co-created prior to the query. The output geometry is not co-created by the method; it is populated. This can be used in performance critical situations. For example, creating the geometry only once outside a loop and use the query method could improve performance. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: queryOrigin in interface IRay vectorOrigin - A reference to a com.esri.arcgis.geometry.IPoint (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public IPoint getOrigin() throws IOException, The origin point of the ray. Returns and sets the Origin of the Ray. The Origin is the starting Point from which the Ray infinitely extends in the direction of its vector. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: getOrigin in interface IRay A reference to a com.esri.arcgis.geometry.IPoint IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void setOrigin(IPoint vectorOrigin) throws IOException, The origin point of the ray. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: setOrigin in interface IRay vectorOrigin - A reference to a com.esri.arcgis.geometry.IPoint (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void queryVector(IVector3D directionVector) throws IOException, Sets a vector equal to a unit vector with the same direction as the ray. Returns the Vector3D of the Ray. The Vector3D determines the direction the Ray extends from its Origin. The Vector of a Ray is always Normalized to a unit vector. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: queryVector in interface IRay directionVector - A reference to a com.esri.arcgis.geometry.IVector3D (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public IVector3D getVector() throws IOException, The direction vector of the ray. Returns and sets the Vector3D of the Ray. The Vector3D determines the direction the Ray extends from its Origin. The Vector of a Ray is always Normalized to a unit vector. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: getVector in interface IRay A reference to a com.esri.arcgis.geometry.IVector3D IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void setVector(IVector3D directionVector) throws IOException, The direction vector of the ray. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: setVector in interface IRay directionVector - A reference to a com.esri.arcgis.geometry.IVector3D (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void queryPointAtDistance(double distance, IPoint point) throws IOException, Queries a point at a distance along the ray. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: queryPointAtDistance in interface IRay distance - The distance (in) point - A reference to a com.esri.arcgis.geometry.IPoint (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public IPoint getPointAtDistance(double distance) throws IOException, Constructs a point at a distance along the ray. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: getPointAtDistance in interface IRay distance - The distance (in) A reference to a com.esri.arcgis.geometry.IPoint IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public boolean intersects(IGeometry targetGeometry) throws IOException, Indicates if the ray intersects the target geometry. Implemented for Points, Multipoints, Polylines, Polygons, Envelopes, and Multipatches. This method is intended to be called against top-level geometries only (Point, Multipoint, Polyline, Polygon, Envelope, MultiPatch). To call this method against a Segment/Path or Ring, first add the part to a Polyline or Polygon container, respectively, and then call this method against the container. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: intersects in interface IRay targetGeometry - A reference to a com.esri.arcgis.geometry.IGeometry (in) The intersectsTarget IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void queryFirstIntersection(IGeometry targetGeometry, IPoint intersectionPoint) throws IOException, Returns the first point of intersection between the ray and the target geometry. The point is set empty if there is no intersection. Implemented for Points, Multipoints, Polylines, Polygons, Envelopes, and Multipatches. This method is intended to be called against top-level geometries only (Point, Multipoint, Polyline, Polygon, Envelope, MultiPatch). To call this method against a Segment/Path or Ring, first add the part to a Polyline or Polygon container, respectively, and then call this method against the container. If a Ray intersects an Envelope and is located within the bounds of the Envelope, the result of QueryFirstIntersection will be the point closest to the Ray origin along the Ray, located on the exterior of the Envelope at which an intersection takes place. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: queryFirstIntersection in interface IRay targetGeometry - A reference to a com.esri.arcgis.geometry.IGeometry (in) intersectionPoint - A reference to a com.esri.arcgis.geometry.IPoint (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void intersect(IGeometry targetGeometry, IPointCollection intersectionPoints) throws IOException, Returns a point collection containing all points of intersection, in order along the ray. Implemented for Points, Multipoints, Polylines, Polygons, Envelopes, and Multipatches. This method is intended to be called against top-level geometries only (Point, Multipoint, Polyline, Polygon, Envelope, MultiPatch). To call this method against a Segment/Path or Ring, first add the part to a Polyline or Polygon container, respectively, and then call this method against the container. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: intersect in interface IRay targetGeometry - A reference to a com.esri.arcgis.geometry.IGeometry (in) intersectionPoints - A reference to a com.esri.arcgis.geometry.IPointCollection (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public IEnumIntersection getEnumIntersect(IGeometry targetGeometry) throws IOException, Not implemented at this release. This method is currently not implemented. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: getEnumIntersect in interface IRay targetGeometry - A reference to a com.esri.arcgis.geometry.IGeometry (in) A reference to a com.esri.arcgis.geometry.IEnumIntersection IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public int getGeometryType() throws IOException, The type of this geometry. esriGeometryNull = 0 esriGeometryPoint = 1 esriGeometryMultipoint = 2 esriGeometryPolyline = 3 esriGeometryPolygon = 4 esriGeometryEnvelope = 5 esriGeometryPath = 6 esriGeometryAny = 7 esriGeometryMultiPatch = 9 esriGeometryRing = 11 esriGeometryLine = 13 esriGeometryCircularArc = 14 esriGeometryBezier3Curve = 15 esriGeometryEllipticArc = 16 esriGeometryBag = 17 esriGeometryTriangleStrip = 18 esriGeometryTriangleFan = 19 esriGeometryRay = 20 esriGeometrySphere = 21 Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: getGeometryType in interface IGeometry A com.esri.arcgis.geometry.esriGeometryType constant IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public int getDimension() throws IOException, The topological dimension of this geometry. Returns the dimension of the geometry object based on the geometry's type. Note: At 9.0, Multipatches are now considered as two dimensional geometry. esriGeometry3Dimension will be used for an upcoming new geometry type. Supported esriGeometryDimensions: -1 esriGeometryNoDimension 1 esriGeometry0Dimension 2 esriGeometry1Dimension 4 esriGeometry2Dimension 5 esriGeometry25Dimension 6 esriGeometry3Dimension Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: getDimension in interface IGeometry A com.esri.arcgis.geometry.esriGeometryDimension constant IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public ISpatialReference getSpatialReference() throws IOException, The spatial reference associated with this geometry. Returns and sets the Spatial Reference in which the geometry exists. If the spatial reference has not been set the property will return an empty ISpatialReference instance. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: getSpatialReference in interface IGeometry A reference to a com.esri.arcgis.geometry.ISpatialReference IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void setSpatialReferenceByRef(ISpatialReference spatialRef) throws IOException, The spatial reference associated with this geometry. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: setSpatialReferenceByRef in interface IGeometry spatialRef - A reference to a com.esri.arcgis.geometry.ISpatialReference (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public boolean isEmpty() throws IOException, Indicates whether this geometry contains any points. IsEmpty returns TRUE when the Geometry object does not contain geometric information beyond its original initialization state. An object may be returned to its original initialization (IsEmpty = TRUE) state using SetEmpty. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: isEmpty in interface IGeometry The isEmpty IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void setEmpty() throws IOException, Removes all points from this geometry. SetEmpty returns the Geometry to its original initialization state by releasing all data referenced by the Geometry. Use the SetEmpty method to clear geometries and release memory. For example, a polygon with 100 rings will have an internal array of 100 pointers to ring objects. That array will go away and Release will be called on each ring. If that polygon had the only reference on those rings, then they'll go away, which releases all their segments, which may also then go away. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: setEmpty in interface IGeometry IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void queryEnvelope(IEnvelope outEnvelope) throws IOException, Copies this geometry's envelope properties into the specified envelope. Returns the unique Envelope that binds the Geometry object. This is the smallest Envelope that Contains the object. Note: The output geometry must be co-created prior to the query. The output geometry is not co-created by the method; it is populated. This can be used in performance critical situations. For example, creating the geometry only once outside a loop and use the query method could improve performance. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: queryEnvelope in interface IGeometry outEnvelope - A reference to a com.esri.arcgis.geometry.IEnvelope (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public IEnvelope getEnvelope() throws IOException, Creates a copy of this geometry's envelope and returns it. Returns the unique Envelope that binds the Geometry object. This is the smallest Envelope that Contains the object. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: getEnvelope in interface IGeometry A reference to a com.esri.arcgis.geometry.IEnvelope IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void project(ISpatialReference newReferenceSystem) throws IOException, Projects this geometry into a new spatial reference. To Project, the geometry needs to have a Spatial Reference set, and not have an UnknownCoordinateSystem. The new spatial reference system passed to the method defines the output coordinate system. If either spatial reference is Unknown, the coordinates are not changed. The Z and measure values are not changed by the Project method. A geometry is not densified before it is projected. This can lead to the output geometries not reflecting the 'true' shape in the new coordinate system. A straight line in one coordinate system is not necessarily a straight line in a different coordinate system. Use IGeometry2::ProjectEx if you want to densify the geometries while they are projected. The Project method must be applied on high-level geometries only. High-Level geometries are point, multipoint, polyline and polygon. To use this method with low-level geometries such as segments (Line, Circular Arc, Elliptic Arc, Bézier Curve), paths or rings, they must be wrapped into high-level geometry types. If a geometry is projected to a projected coordinate system that can't represent the geographic area where the geometry is located (or if trying to move an xy coordinate from outside the projected coordinate system back into geographic), the geometry will be set to empty. Note: This method can only be called upon the top level geometries (Points, Multipoints, Polylines and Polygons). If the from/to spatial references have different geographic coordinate systems, the Project method looks for a GeoTransformationsOperationSet. If the set of Geotransformations is present in memory, Project will use it to perform a geographic/datum Transformation. To use a specific geotransformation, use the IGeometry2::ProjectEx method. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: project in interface IGeometry newReferenceSystem - A reference to a com.esri.arcgis.geometry.ISpatialReference (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void snapToSpatialReference() throws IOException, Moves points of this geometry so that they can be represented in the precision of the geometry's associated spatial reference system. SnapToSpatialReference rounds all coordinates to the resolution defined by the geometry's spatial reference system. This has a similar effect on the geometry as storing the geometry in a Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: snapToSpatialReference in interface IGeometry IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void geoNormalize() throws IOException, Shifts longitudes, if need be, into a continuous range of 360 degrees. GeoNormalize acts on geometries whose geographic system coordinates are below -180 degrees longitude or over +180 degrees longitude or on geometries that span the +-180 degrees longitude. This method requires the geometry to have a valid spatial reference (geographic or projected coordinate system). This method is used internally as part of the projection process for polygons and polylines. It is typically not used by itself. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: geoNormalize in interface IGeometry IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void geoNormalizeFromLongitude(double longitude) throws IOException, Normalizes longitudes into a continuous range containing the longitude. This method is obsolete. This method is obsolete. Use IGeometry::GeoNormalize instead. This method requires the geometry to have a valid spatial reference (geographic or projected coordinate system). Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: geoNormalizeFromLongitude in interface IGeometry longitude - The longitude (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void queryPlaneIntersection(_WKSPointZ pPlaneNormal, double d, IPoint pPoint) throws IOException, Returns the point of intersection between the ray and the target plane. The point is set empty if there is no intersection. Given a plane represented by a point lying in the plane (IPoint pointInPlane) and a vector normal to the plane (IVector3D normalToPlane): ˇ pPlaneNormal represents the X, Y, and Z components of the normal vector packed into a WKSPointZ struct: WKSPointZ pPlaneNormal = new WKSPointZ(); pPlaneNormal.X = normalToPlane.XComponent; pPlaneNormal.Y = normalToPlane.YComponent; pPlaneNormal.Z = normalToPlane.ZComponent; ˇ D represents the dot product of the normal vector and a vector whose X, Y, and Z components are set to the X, Y, and Z coordinates of the point lying in the plane: IVector3D vector3D = new Vector3DClass(); pointInPlane.X, pointInPlane.Y, pointInPlane.Z double D = normalToPlane.DotProduct(vector3D); ˇ pPoint represents the point of intersection, and should be set to a new instance of the PointClass() before it is passed to the method: IPoint point = new PointClass(); Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Specified by: queryPlaneIntersection in interface IRay2 pPlaneNormal - A Structure: com.esri.arcgis.system._WKSPointZ (A com.esri.arcgis.system._WKSPointZ COM typedef) (in) d - The d (in) pPoint - A reference to a com.esri.arcgis.geometry.IPoint (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public IClone esri_clone() throws IOException, Clones the receiver and assigns the result to *clone. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Specified by: esri_clone in interface IClone A reference to a com.esri.arcgis.system.IClone IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void assign(IClone src) throws IOException, Assigns the properties of src to the receiver. Use Assign method to assign the properties of source object to receiver object. Both objects need to have the same CLSIDs. Both source and receiver objects need to be instantiated. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: assign in interface IClone src - A reference to a com.esri.arcgis.system.IClone (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public boolean isEqual(IClone other) throws IOException, Returns TRUE when the receiver and other have the same properties. IsEqual returns True if the receiver and the source have the same properties. Note, this does not imply that the receiver and the source reference the same object. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: isEqual in interface IClone other - A reference to a com.esri.arcgis.system.IClone (in) The equal IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public boolean isIdentical(IClone other) throws IOException, Returns TRUE when the receiver and other are the same object. IsIdentical returns true if the receiver and the source reference the same object. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: isIdentical in interface IClone other - A reference to a com.esri.arcgis.system.IClone (in) The identical IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. public void interfaceSupportsErrorInfo(GUID riid) throws IOException, Indicates whether the interface supports IErrorInfo. Product Availability Available with ArcGIS Engine, ArcGIS Desktop, and ArcGIS Server. Supported Platforms Windows, Solaris, Linux Specified by: interfaceSupportsErrorInfo in interface ISupportErrorInfo riid - A Structure: com.esri.arcgis.support.ms.stdole.GUID (in) IOException - If there are interop problems. AutomationException - If the ArcObject component throws an exception. Overview Package Class Tree Deprecated Index Help PREV CLASS NEXT CLASS FRAMES NO FRAMES SUMMARY: NESTED | FIELD | CONSTR | METHOD DETAIL: FIELD | CONSTR | METHOD
{"url":"http://resources.esri.com/help/9.3/ArcGISEngine/java/api/arcobjects/com/esri/arcgis/geometry/Ray.html","timestamp":"2014-04-20T20:59:48Z","content_type":null,"content_length":"105203","record_id":"<urn:uuid:9dbc2480-9673-43ed-ad10-7f81d6cdaa43>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Polytypic Proof Construction , 2000 "... ... PolyP extends a functional language (a subset of Haskell) with a construct for defining polytypic functions by induction on the structure of user-defined datatypes. Programs in the extended language are translated to Haskell. PolyLib contains powerful structured recursion operators like catamorp ..." Cited by 93 (12 self) Add to MetaCart ... PolyP extends a functional language (a subset of Haskell) with a construct for defining polytypic functions by induction on the structure of user-defined datatypes. Programs in the extended language are translated to Haskell. PolyLib contains powerful structured recursion operators like catamorphisms, maps and traversals, as well as polytypic versions of a number of standard functions from functional programming: sum, length, zip, (==), (6), etc. Both the specification of the library and a PolyP implementation are presented. , 2001 "... We give two nite axiomatizations of indexed inductive-recursive de nitions in intuitionistic type theory. They extend our previous nite axiomatizations of inductive-recursive de nitions of sets to indexed families of sets and encompass virtually all de nitions of sets which have been used in ..." Cited by 44 (16 self) Add to MetaCart We give two nite axiomatizations of indexed inductive-recursive de nitions in intuitionistic type theory. They extend our previous nite axiomatizations of inductive-recursive de nitions of sets to indexed families of sets and encompass virtually all de nitions of sets which have been used in intuitionistic type theory. The more restricted of the two axiomatization arises naturally by considering indexed inductive-recursive de nitions as initial algebras in slice categories, whereas the other admits a more general and convenient form of an introduction rule. - Nordic Journal of Computing , 2003 "... We show how to write generic programs and proofs in MartinL of type theory. To this end we consider several extensions of MartinL of's logical framework for dependent types. Each extension has a universes of codes (signatures) for inductively defined sets with generic formation, introduction, el ..." Cited by 42 (2 self) Add to MetaCart We show how to write generic programs and proofs in MartinL of type theory. To this end we consider several extensions of MartinL of's logical framework for dependent types. Each extension has a universes of codes (signatures) for inductively defined sets with generic formation, introduction, elimination, and equality rules. These extensions are modeled on Dybjer and Setzer's finitely axiomatized theories of inductive-recursive definitions, which also have a universe of codes for sets, and generic formation, introduction, elimination, and equality rules. , 2008 "... Datatype-generic programming is defining functions that depend on the structure, or “shape”, of datatypes. It has been around for more than 10 years, and a lot of progress has been made, in particular in the lazy functional programming language Haskell. There are more than 10 proposals for generic p ..." Cited by 20 (10 self) Add to MetaCart Datatype-generic programming is defining functions that depend on the structure, or “shape”, of datatypes. It has been around for more than 10 years, and a lot of progress has been made, in particular in the lazy functional programming language Haskell. There are more than 10 proposals for generic programming libraries or language extensions for Haskell. To compare and characterize the many generic programming libraries in a typed functional language, we introduce a set of criteria and develop a generic programming benchmark: a set of characteristic examples testing various facets of datatype-generic programming. We have implemented the benchmark for nine existing Haskell generic programming libraries and present the evaluation of the libraries. The comparison is useful for reaching a common standard for generic programming, but also for a programmer who has to choose a particular approach for datatype-generic programming. , 2002 "... Functional generic programming is an area of research concerning programs parameterized by types. Such parameterization is a powerful method of abstraction that allows the programmer to dene and reuse common patterns of computation that work over many dierent datatypes. ..." Cited by 7 (3 self) Add to MetaCart Functional generic programming is an area of research concerning programs parameterized by types. Such parameterization is a powerful method of abstraction that allows the programmer to dene and reuse common patterns of computation that work over many dierent datatypes. , 2000 "... Families of inductive types defined by recursion arise in the formalization of mathematical theories. An example is the family of term algebras on the type of signatures. Type theory does not allow the direct definition of such families. We state the problem abstractly by defining a notion, strong p ..." Cited by 1 (1 self) Add to MetaCart Families of inductive types defined by recursion arise in the formalization of mathematical theories. An example is the family of term algebras on the type of signatures. Type theory does not allow the direct definition of such families. We state the problem abstractly by defining a notion, strong positivity, that characterizes these families. Then we investigate its solutions. First, we construct a model using wellorderings. Second, we use an extension...
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.30.9144","timestamp":"2014-04-18T14:05:13Z","content_type":null,"content_length":"26224","record_id":"<urn:uuid:6f9a8136-2d04-4e38-b357-61a4278a81c1>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
The Excel Printout Next: Multiple Linear Regression Up: More on Simple Regression Previous: More on Simple Regression The Excel Printout Following is the printout for running regression in Excel. Regression Statistics Multiple R 0.640955792 R Square 0.410824328 Standard Error 3283.378116 Observations 12 df SS MS F Significance F Regression 1 75171487.75 75171488 6.972866 0.024707725 Residual 10 107805718.5 10780572 total 11 182977206.3 Coefficients Standard Error t Stat P-value Lower 95% Upper 95% Intercept 8136.150298 1519.8802 5.353152 0.000322 4749.645588 11522.65501 miles -0.051269039 0.0194155 -2.64062 0.024708 -0.09452957 -0.00800851 In the previous section, we noted that 41% of this total price variability is explained by the car's mileage (since R^2=SSR/SSTo=.41). This `R Square' is reported on the second line of the printout under `Regression Statistics'. The correlation between Miles and Price (X and Y) is r=.640955792. The square of this value is r^2=(.640955792)^2=.410824328. This number should look familiar, because it is ALSO the value of SSR/ SSTo, which explains why SSR/SSTo is called `R Square'. Note the use of upper case `R' instead of lower case `r' in the Excel printout. Use of the upper case R is standard notation in regression studies, while lower case r tends to be used in correlation studies. Now move down to the Analysis of Variance (ANOVA) table . It contains three rows; "Regression" (or R for short), "Residual" (or E), and "Total" (or To). The sum of squares column (denoted "SS") gives SSR, SSE, and SSTo respectively. Mathematically, SSR and SSE are not comparable in size because they contain different amounts of information, also called "degrees of freedom" or simply "df". This is where the degrees of freedom column comes in. We have SSE=107.8M, but it contains 10 pieces of information, which averages to 10.78M per piece. On the other hand, SSR=75.1M contains only 1 piece of information, so the average is 75.1M per piece. These "average sum of squares per df" are called "mean squares" and reported under the MS column. Note that MS for Regression is 6.97 times the MS for Residual. This may be interpreted as follows. ``Prices of cars vary partly because they have different mileage, right? How much of the variability in price is due to (or explained by) mileage? Answer: the average variation EXPLAINED by Miles is 6.97 times larger than UNEXPLAINED .'' Is 6.97 statistically significant? The answer is Yes, because there is only a 2.47% chance of getting a ratio that large from pure chance variation. Since this P-value is smaller than 5%, the result is statistically significant. A generic representation of the ANOVA table for simple regression is given below. As usual, the sample size is denoted by n. (For the Saturn price data, n=12.) The formulas for SSR, SSE and SSTo have already been discussed previously. MSR and MSE are obtained by division. F is the ratio between MSR and MSE. The P-value, or observed significance level of the F-ratio, is obtained from an F-table (which is beyond the scope of this class). df SS MS F Significance F Regression 1 SSR MSR=SSR/dfR F=MSR/MSE P-value Residual n-2 SSE MSE=SSE/dfE total n-1 SSTo Below the ANOVA table in the Excel regression printout are the estimates of intercept and slope (under the column "coefficients"). Their standard errors are also reported along with a t-ratio, p-value for the t-ratio, and 95% confidence interval. The standard errors are interpreted the usual way, as follows. If another random sample of 12 cars were selected, the computed slope of the regression line would probably change. By how much? Answer: by approximately .019. The t-ratio is the estimate divided by its standard error, and the P-value measures how likely it is from chance alone to get a ratio that large. Consider the following two statements: 1. The price of a used Saturn car should be around $4999 give or take $4079 or so. 2. The price of a used Saturn car with 80,000 miles should be around $4034 give or take $3283 or so. The first statement gives no information on mileage of the car. In the absence of an X-value, our best guess is the sample mean standard deviation from the mean, which is simply the regular sample standard deviation standard deviation from predicted values, which turns out to be the squareroot of the MSE Next: Multiple Linear Regression Up: More on Simple Regression Previous: More on Simple Regression
{"url":"http://www.stat.wmich.edu/s216/book/node128.html","timestamp":"2014-04-19T14:50:09Z","content_type":null,"content_length":"10380","record_id":"<urn:uuid:17599e89-d595-4a27-af26-41b84f400a1d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Performance Recognition for Sulphur Flotation Process Based on Froth Texture Unit Distribution Mathematical Problems in Engineering Volume 2013 (2013), Article ID 530349, 9 pages Research Article Performance Recognition for Sulphur Flotation Process Based on Froth Texture Unit Distribution School of Information Science and Engineering, Central South University, Changsha, Hunan 410083, China Received 30 August 2012; Accepted 20 December 2012 Academic Editor: Bin Jiang Copyright © 2013 Mingfang He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. As an important indicator of flotation performance, froth texture is believed to be related to operational condition in sulphur flotation process. A novel fault detection method based on froth texture unit distribution (TUD) is proposed to recognize the fault condition of sulphur flotation in real time. The froth texture unit number is calculated based on texture spectrum, and the probability density function (PDF) of froth texture unit number is defined as texture unit distribution, which can describe the actual textual feature more accurately than the grey level dependence matrix approach. As the type of the froth TUD is unknown, a nonparametric kernel estimation method based on the fixed kernel basis is proposed, which can overcome the difficulty when comparing different TUDs under various conditions is impossible using the traditional varying kernel basis. Through transforming nonparametric description into dynamic kernel weight vectors, a principle component analysis (PCA) model is established to reduce the dimensionality of the vectors. Then a threshold criterion determined by the TQ statistic based on the PCA model is proposed to realize the performance recognition. The industrial application results show that the accurate performance recognition of froth flotation can be achieved by using the proposed method. 1. Introduction Sulphur flotation is a complex physical process influenced by multiple operational variables such as inlet air flow, pulp level, and it is naturally hydrophobic to attach to the air bubbles. The objective of sulphur flotation is to separate valuable sulphur minerals from useless materials or other minerals so as to gain the upgraded sulphur minerals [1]. Sulphur concentrate grade depends on flotation separation performance, and it is affected by the accuracy of the performance recognition. It is well recognized that froth visual appearance observed can characterize the combining effect of multiple process conditions on flotation [2], and it is also known as the indicator of flotation separation performance. Recent advances in image processing and computer vision based froth appearance monitoring systems contribute greatly to the feature extraction of visual descriptors [3–5]. Computer-based vision technology is now moving out of the research laboratory and into the plant to become a useful means of monitoring and controlling flotation performance at the cell level [6–8]. The development of base level process control (control of pulp level, air flow rate, etc.) has been significant progress, but automated advanced and optimization flotation control strategies based on computer vision have been more difficult to implement [9]. The performance recognition is available for the optimal control of flotation [10], and flotation performance is closely related to the concentrate grade. Therefore, it is of great importance to improve the sulphur concentrate grade by developing an effective performance recognition method based on computer vision. It is shown that the froth texture is a good indicator to the performance of the flotation cells [11], and texture information is believed to strongly associate with mineral grade [12]. Numerous reported literatures are devoted to the extraction of froth image texture features. The grey-level cooccurrence matrix (GLCM) approach is one of the most popular statistical methods used in practice to measure the textural information of images. Most of the researchers calculated second-order statistics based on GLCM such as angular second moment, entropy, moment of inertia, and moments of deficit and relevance to recognize the flotation performance. GLCM was used as texture descriptor to classify different types of froths, and they also provide qualitative information on the changes in the visual appearance of the froth [13]. However, there is only one angle of displacement used in the GLCM approach, with 0°, 45°, 90°, and 135° calculated to acquire average second-order statistics, neglecting the different variation in different directions and leading to large computing of the high-dimensional matrix. The actual flotation froth texture is more complex, so that a simple statistical property in the GLCM approach is difficult to accurately describe it, leading to the inaccuracy of performance recognition. Based on the proposed concept of texture unit, a new statistical approach to texture analysis, termed the texture spectrum approach, was proposed [14]. The proposed method extracted the textural information of an image with a more complete respect of texture characteristics (simultaneously in all eight directions instead of only one displacement vector used in the GLCM approach). It is worth noticing that the PDF of texture unit number, defined as the texture unit distribution (TUD), is found to be nonnormal. Further exploring of the information indicated by froth structure has shown that the TUD is multipeaky and highly skewed, which neither belongs to any existing mathematical model based distribution. To depict the unknown continuous process of froth flotation, nonparametric estimation provides a credible solution. Commonly used nonparametric estimation techniques include histogram, frequency polygon, shift average histogram, kernel methods [15], wavelet method, and B-spline expansion models. Theoretical researches on tracking the output probability density distribution to a target distribution shape by using various control approaches [16] can also increase the possibility of froth visual features based process control. Nevertheless, the traditional kernel estimation cannot compare the various froth TUDs under different flotation conditions with the varying kernel basis. Therefore, the fixed kernel basis is proposed to describe the TUDs in various froth images. To relate the flotation operation condition with flotation performance, Jampana et al. revealed that the increase in pulp level causes concentrate grade to decrease [17], as the variation of pulp level has great effect on the froth retention time in the flotation cell [18]. The continuous decrease of froth retention time can lead to less collision time between mineral particles and bubbles with decreased gangue drop, which resulting in the deteriorating performance of mineral concentrate grade. Conventionally, industry process performance recognition heavily depends on the frequent inspection of froth views and manipulation of experienced human operators, which is often problematic with strong subjective and unable to regulate the fault performance timely, leading to the unstable flotation process and low concentrate grade. Along with the implementation of online monitoring system of froth visual appearance, quantitative performance recognition becomes highly desired and essential to maintain the operational variables at acceptable rates. Cilliers proposed a quantitative fault detection and diagnosis model which is successfully applied for hydrocyclones [19]. In industrial case studies of aluminum flotation, Xu explored the froth structure by using kernel density estimation technique to approximate the output probability density of surface bubble size distribution rectified by the empirical formula and its application on process fault detection [20]. The froth texture characterizes the roughness of the froth surface, which indicates the mineral contents of froth. When the pulp level is too high with slurry overflow, froth texture is smooth; in this case, the middle value takes a large portion of the texture unit number in the whole image, which results in a high peak in the texture unit distribution curve. On the other hand, when the pulp level becomes too low, the froth cannot overflow, such that the mineral contents in the froth accumulate to a high level. Therefore, the texture becomes coarse and the middle value takes a small portion of the texture unit number in the whole image, resulting in a low peak in the texture unit distribution curve. By transforming the texture unit distribution into the weight vector using the fixed kernel estimator, a weight PCA model can be established to handle the variation in the texture unit distribution. The sulphur froth image contains a great deal of noise because of the acid fog in the sulphur flotation. statistic based on the PCA model can reveal the major variation of the froth texture, and statistic can reveal the noise contained in the image. Thus the proposed new statistical variable is proposed to detect the sulphur flotation fault effectively by considering the influence of noise. The main advantages of the proposed method in this paper are that (i) the texture unit distribution can describe the froth texture feature more completely, by considering eight directions of grey level variation information, compared to the GLCM method. (ii) The mathematical model of texture unit distribution is unknown, as it is nonnormal and multipeaky, so nonparametric estimation method is more suitable to approximate it. The fixed kernel basis can compare the different flotation performance reflected by the weight coefficients of texture unit distribution, compared with the traditional varying kernel basis. (iii) The new statistic can reduce the influence of noise in the accuracy of performance recognition, compared to the traditional and statistics. This work aims to explore the froth texture by using kernel density estimation technique to approximate the surface froth TUD and its application on sulphur flotation process performance recognition. A nonparametric kernel estimator by the fixed kernel basis is designed to approximate texture unit distribution, such that the output TUD is formulated in terms of dynamic weights, on which a principle component analysis (PCA) model is established. Then an effective performance recognition criterion is determined using the proposed statistic based on PCA model. The fault condition is successfully detected on the industrial data of offline froth images. Next section introduces the froth texture unit number calculation oriented texture spectrum scheme. Section 3 presents the output TUD curve modeling by using designed kernel density estimators. The kernel weight vector based PCA model is established, and a threshold criterion determined by the statistic based on the PCA model is proposed to realize the performance recognition in Section 4. Section 5 presents the experimental results and discussion. Conclusion is provided in the last section. 2. Surface Froth Texture Unit Number Calculation Experimental setup consists of RGB camera with resolution of and lens of 35mm, high frequency light source, cover hook protecting camera from dust, acid fog and ambient light, and optical fiber with length over 200m for signal communication to industrial PC computer in operating room. The camera is mounted 96.5cm vertically above the froth surface of the target cell, and froth images with window size cm^2 are captured online at the rate of 15 frames/s. Meanwhile, the corresponding process operational and performance data are collected on industrial scale. Froth images collected from industry field display that various froth texture feature leads to the different performance. The existing texture description method such as texture spectrum, spatial and neighboring grey-level cooccurrence matrix are derived from this fact. Froth image observed is a type of gradient images. Nevertheless, simple second-order statistical variables in the GLCM approach are difficult to accurately describe the froth texture, the texture unit (TU) oriented texture spectrum scheme proposed by [14] is used to describe the texture features. In a froth digital image, each pixel is surrounded by eight neighboring pixels. The local texture information for a pixel can be extracted from a neighborhood of pixels called texture unit, which represents the smallest complete unit (in the sense of having eight directions surrounding the pixel). Given a neighborhood of pixels (which will be denoted by a set containing nine elements: , where represents the intensity value of the central pixel, and is the intensity value of the neighboring pixel ) defines the corresponding texture unit by a set containing eight elements, , where is determined by the following formula: for , and the element occupies the same position as the pixel . As each element of TU has one of three possible values, the combination of all eight elements results in possible texture units in total. There is no unique way to label and order the 6561 texture units. In our study, the 6561 texture units are labeled by using the following formula: where represents the texture unit number, and is the ith element of texture unit set . In addition, the eight elements may be ordered differently. If the eight elements are ordered clockwise as shown in Figure 1, the first element may take eight possible positions from the top left (a) to the middle left (h), and then the 6561 texture units can be labeled by the above formula under eight different ordering ways (from a to h). Figure 2 gives an example of transforming a neighborhood in sulphur flotation froth image to a texture unit with the texture unit number under the ordering way a. TUD is defined as the occurrence frequency for every texture unit number, and it exhibits probability density function (PDF) distribution of froth texture unit number. The online acquired sulphur froth image in cleaner cell in normal condition is shown in Figure 3. Figure 4 shows the froth TUD. The froth texture unit probability density distribution is found to be nonnormal and multipeaky. 3. TUD Curve Modeling The surface sulphur froth TUD is nonnormal. Unlike traditional method applying singular feature such as mean or variance with the assumption that the distribution is normal, probability density distribution is suggested to accurately describe statistical feature of froth texture. The fact that the mathematical model of TUD is unknown makes nonparametric estimation method fitting to depict the unknown continuous process of froth flotation. 3.1. Nonparametric Kernel Estimation Consider a probability density function describing the probability distribution of in as follows: Density estimation accomplishes the fitting of . Though classic nonparametric histogram estimator is good for data presentation, its discontinuity causes difficulty if derivatives of the estimates are required. A continuous version of the histogram is the frequency polygon formed by interpolating the midpoints of a histogram. Histogram based methods seek the balance between estimation accuracy and feature dimensionality, which can be very expensive for large samples. Apart from the histogram, the kernel estimator is most commonly used [15], which is given by where the function is the prespecified kernel function satisfying to ensure a bona fide density estimate. is the corresponding weight of the ith kernel function, and is the window width. Based on the prototype of traditional normal kernel function, a kernel function fitting for froth flotation is constructed as where is the ith kernel function, and is the center of the ith kernel function along the horizontal axis. 3.2. Output TUD Kernel Estimation Supposing there is a dynamic stochastic system with input and output , the probability of output lying in is defined as where the represents the output TUD after froth texture unit number calculation. is control input such as the input amount of pulp level which is a dominant operational condition in the sulphur flotation system. The can be approximated by kernel estimators designed in formula (5) and the corresponding weights . Since , , , it is certain that there are independent weights. So the TUD model is adopted as follows: where is the corresponding weight of . However, the traditional kernel estimation cannot compare the various froth TUDs under different flotation conditions with the varying kernel basis. Therefore, the fixed kernel basis is proposed to describe the TUDs in various froth images, such that the TUD curves can be transformed into dynamic kernel weight vectors, based on which the fault condition can be detected in sulphur flotation. Meanwhile, the computational complexity is also reduced using the designed fixed kernel basis. Adjusting to the range of froth texture unit number, a number of kernel bases are selected to depict the TUD in Figure 5. Its window width is fixed across the entire sample. As the TUD for sulphur froth image is multipeaky and complicated, 25 kernel bases with fixed window width are used to approximate TUD, which is plotted in dashed line in Figure 5. One dotted curve presents the first and second kernel basis multiplying the corresponding weight coefficients. And the estimation result of Figure 3 froth TUD is plotted in solid line. Figure 6 presents the kernel density estimation methods to approximate actual texture unit distribution of sulphur froth image in Figure 3. The results have shown the kernel estimation can accomplish the description of froth texture unit probability density distribution with general low feature dimensionality and high accuracy. 4. Weight PCA Model Based Performance Recognition A fault performance is defined as the departure from an acceptable range of an observed output or operating variable. Timeous detection of fault can determine whether the abnormal condition occurs [ 21]. The information indicated by froth characteristics is a combining effect of multiple operational variables such as pulp level and inlet air flow in sulphur flotation. By retaining the variance of inlet air flow during a short period of time, froth texture in cleaner cell is closely related to concentrate grade which is determined by the regulation of pulp level. Human operators are in capable of performing timeous monitoring of various process variables, and the process manipulation mostly relies on heuristics of their froth view observation. Hodouin used PCA to analyze and interpret the behavior of mineral flotation and grinding circuits in a large mineral processing plant [22]. Kourti summarized the latest developments in multivariate statistical process control (MSPC) and its application for fault detection and isolation (FDI) in industrial processes [23]. 4.1. Weight PCA Model PCA is a multivariate statistical technique used in MSPC and FDI perspectives [23]. PCA uses latent variables instead of every measured variable in the process because they can better explain the behavior of the process. By monitoring the sulphur froth appearance such as froth texture, the process fault performance can be inferred and identified based on established PCA model. The output TUD for sulphur flotation froth can be transformed to dynamic kernel weight vectors through formula (8). PCA reduces the dimensionality of the original weights by projecting it onto a lower dimensionality space. It obtains the principal causes of variability in the sulphur flotation process. If some of these causes change, it can be due to a fault in the process. Consider the weight matrix , containing samples of dynamic kernel weight coefficients collected under normal operation in sulphur flotation. This matrix must be normalized to zero mean and unit variance with the scale parameter vectors and as the mean and variance vectors, respectively. Next step to calculate PCA is to construct the covariance matrix : and performing the SVD decomposition on : where is a diagonal matrix that contains in its diagonal eigenvalues of sorted in decreasing order (). Columns of matrix are the eigenvectors of . The transformation matrix is generated choosing eigenvectors or columns of corresponding to principal eigenvalues. Matrix transforms the space of the measured variables into the reduced dimension space as follows: Columns of matrix are called loadings, and elements of are called scores. Scores are the values of the original measured variables that have been transformed into the reduced dimension space. Operating in (11), the scores can be transformed into the original space as follows: The residual matrix is calculated as Finally the original data space can be calculated as It is very important to choose the number of principal components , because represents the principal sources of variability in the process, and represents the variability corresponding to process noise. There is Cumulative Percent Variance (CPV) approach for determining the number of components to be retained in a PCA model as [24]. The measure of the percent variance () captured by the first principal components is adopted as follows: 4.2. A New Statistical Variable Based Fault Performance Recognition Having established a PCA model based on historical data collected when only common cause variation are present, multivariate control charts based on Hotelling’s and square prediction error (SPE) or can be plotted. The fault performance recognition can be reduced to this two traditional variables ( and ) characterizing two orthogonal subsets of the original space. However, some of sulphur froth images contain a great deal of noise because of the acid fog in the sulphur flotation. The traditional statistic can only describe the variation in the texture information, therefore, the normal image of containing noise caused by acid fog may be detected as a fault image for its disability to handle the noise influence. As the statistic can represent the random noise in the froth texture, by combining statistic and statistic, the new statistic is proposed to detect sulphur flotation fault performance more accurately: where is the regulation factor controlling the value range of . takes values between 99% and 100%. can be calculated as the sum of squares of a new process weight vector as follows: where is a squared matrix formed by the first rows and columns of . The sulphur flotation process is considered normal for a given significance level if where is the critical value of the Fisher-Snedecor distribution with and degrees of freedom and the level of significance. takes values between 90% and 95%. is based on the first principal components, so that it provides a test for derivations in the latent variables that are of greatest importance to the variance of the sulphur flotation process. This statistic will only detect an event if the variation in the latent variables is greater than the variation explained by common causes. New events can be detected by calculating the SPE or of the residuals of a new observation. statistic is calculated as the sum of squares of the residuals. The scalar value is a measurement of goodness of fit of the sample to the model and is directly associated with the noise as follows: with The upper limit of this statistic can be computed as the next form: with where is the value of the normal distribution with α, and the level of significance and are the eigenvalues of the PCA residual covariance matrix . When an unusual event occurs and it produces a change in the covariance structure of the model, it will be detected by a high value of . According to the formulae (18) and (21), the critical value of the new statistical variable can be calculation as Through using the output TUD weight based PCA model, a criterion can be designed to detect the fault. The new statistical variable is calculated on the weight PCA model. Then the critical value of statistical variable is set as the threshold value. When the value of for the new sample is larger than the threshold value evaluated by formula (24), the fault can be detected. 5. Application Results and Discussion To evaluate the proposed weight PCA model based fault detection approach, a series of industrial experiments are carried out in a Chinese sulphur froth flotation plant. In the test runs, froth image videos are captured through the previously introduced monitoring system in the last cleaner flotation cell. Subsequently, the froth videos are processed by the developed image analysis software which is capable of extracting froth features such as TUD online. Figure 7 presents the three types of froth images in different performance, which are collected and analyzed under the same condition in terms of resolution, angle, light condition, position, view scale, and so forth. In practical sulphur flotation process experiments, the air flow rate and feed-in conditions are kept at a steady state so as to stabilize the production process. The adjustment of pulp level (or froth depth) becomes the major manipulating parameter, which directly determines flotation performance. As an indication of flotation performance, the froth texture feature is one of determinants of mineral separation efficiency. Bubbles with relative complex texture generally carry more valuable mineral particles, whose corresponding pulp level value is to be maintained to an acceptable bounded range. When one of the dominant operating variable pulp level is fluctuated, in this case the regulation of slurry underflow, froth surface visual features such as froth texture and color spectral information are reacting to the change of pulp level value. An increase in pulp level was considered, such that its simultaneous effect on froth texture unit distribution can be identified. As is shown in Figure 7, the froth images evolved as pulp level value varied gradually during a period, and the corresponding operational conditions were measured at the same time. As for texture unit number calculation, normal kernel with following basis functions is selected according to formula (5). The window width is set to be 200 as a smoothing parameter, and centered points of each kernel . Since the froth texture unit involved ranges from 0 to 6560, the kernel functions with fixed window width are supposed to cover the entire texture unit value range. Thus, the froth TUD can be approximated by (8), where . The weights of normal kernel expansion have dimension of 25, and only 24 of which are independent. By applying the kernel estimation on the TUDs of froth images in Figure 7, the 3D mesh plot of the output TUD is shown in Figure 8. At an hourly interval, the froth image video is captured at the point since it is reasonable to consider that the froth TUD is representative during a short-time period in this study case. Meanwhile, the process operational conditions are measured correspondingly. As can be seen, the froth TUD tends to shift dramatically with occurring a low peak when the slurry underflow increased at 9:00, which resulted in froth depth value increased from 190mm to 350mm in response. Then the excessive decrease of froth depth to 30mm produces a corresponding upward change of the peak of TUD curve. Accordingly, the separation performance mineral grade deteriorated from 81% at 8:00 to 50% at 11:00. The weight PCA model applied in this case is established as where . According to the formula (24), threshold value can be calculated as by setting . Setting Figure 7(a) as the normal TUD, Figure 9 shows the threshold with solid line and the statistics for froth images in Figure 7. As can be seen, Figures 7(b)-7(c) are clearly identified as fault status because the statistic , which are consistent with the observation results from human operators. Attempts have been made in calculating false alarm rate on a testing database. The testing data consist of 243 offline froth videos captured from the sulphur flotation industry during August of 2011. The fault detection is accomplished by a threshold criterion calculated from formula (24), according to which the statistic above the threshold value indicates that a fault occurs. Table 1 gives the detection performance of the testing database. As can be seen in Figure 9, solid line represents the threshold for fault detection, and asterisks are statistics of each normal video sample. Triangles and diamonds are the samples with fault A and fault B. The total fault detection accuracy on the database is 93.83%. It is possible that the false detection alarm ascribes to the texture unit number calculation malfunction of the captured froth images. 6. Conclusion In this paper the description of texture unit number probability density distribution and its relationship to pulp level operational status are investigated. Unlike traditional discussion of froth texture feature focusing mostly on second-order statistics based on GLCM including angular second moment, entropy, moment of inertia, and moments of deficit and relevance, a nonparametric estimation method is proposed to describe the TUD more accurately based on the fixed normal kernel basis, and the fault performance is detected through the proposed statistic. Desired fault detection for pulp level regulation in froth flotation industry is achieved using the proposed method. This work was financially supported by Key Program of Natural Science Foundation of China under Grant no. 61134006, National Science and Technology Pillar Program of China under Grant no. 2012BAF03B05, and National Science Fund for Distinguished Young Scholars of China under Grant no. 61025015. 1. C. Citir, Z. Aktas, and R. Berber, “Off-line image analysis for froth flotation of coal,” Computers and Chemical Engineering, vol. 28, no. 5, pp. 625–632, 2004. View at Publisher · View at Google Scholar · View at Scopus 2. W. Wang, F. Bergholm, and B. Yang, “Froth delineation based on image classification,” Minerals Engineering, vol. 16, no. 3, pp. 1183–1192, 2003. View at Publisher · View at Google Scholar 3. C. Aldrich, C. Marais, B. J. Shean, and J. J. Cilliers, “Online monitoring and control of froth flotation systems with machine vision: a review,” International Journal of Mineral Processing, vol. 96, no. 1–4, pp. 1–13, 2010. View at Publisher · View at Google Scholar · View at Scopus 4. V. Hasu, J. Hätönen, and H. Hyötyniemi, “Analysis of flotation froth appearance by design of experiment,” in Proceedings of the IFAC Workshop on Future trends in Automation in Mineral and Metal, pp. 22–24, Espoo, Finland, 2000. 5. G. Bonifazi, S. Serranti, F. Volpe, and R. Zuco, “Characteristisation of flotation froth colour and structure by machine vision,” Computers and Geosciences, vol. 27, no. 9, pp. 1111–1117, 2001. View at Publisher · View at Google Scholar · View at Scopus 6. P. N. Holtham and K. K. Nguyen, “On-line analysis of froth surface in coal and mineral flotation using JKFrothCam,” International Journal of Mineral Processing, vol. 64, no. 2-3, pp. 163–180, 2002. View at Publisher · View at Google Scholar · View at Scopus 7. D. W. Moolman, C. Aldrich, and J. S. J. Van Deventer, “The monitoring of froth surfaces on industrial flotation plants using connectionist image processing techniques,” Minerals Engineering, vol. 8, no. 1-2, pp. 23–30, 1995. View at Scopus 8. J. Kaartinen, J. Hätönen, H. Hyötyniemi, and J. Miettunen, “Machine-vision-based control of zinc flotation-A case study,” Control Engineering Practice, vol. 14, no. 12, pp. 1455–1466, 2006. View at Publisher · View at Google Scholar · View at Scopus 9. B. J. Shean and J. J. Cilliers, “A review of froth flotation control,” International Journal of Mineral Processing, vol. 100, no. 3-4, pp. 57–71, 2011. View at Publisher · View at Google Scholar 10. J. J. Liu and J. F. MacGregor, “Froth-based modeling and control of flotation processes,” Minerals Engineering, vol. 21, no. 9, pp. 642–651, 2008. View at Publisher · View at Google Scholar 11. J. M. Hargrave and S. T. Hall, “Diagnosis of concentrate grade and mass flowrate in tin flotation from colour and surface texture analysis,” Minerals Engineering, vol. 10, no. 6, pp. 613–621, 1997. View at Publisher · View at Google Scholar 12. N. Saghatoleslam, H. Karimi, R. Rahimi, and H. H. A. Shirazi, “Modeling of texture and color froth characteristics for evaluation of flotation performance in sarcheshmeh copper pilot plant using image analysis and neural networks,” International Journal of Engineering B, vol. 17, no. 2, pp. 121–130, 2004. View at Scopus 13. G. Bartolacci, P. Pelletier, J. Tessier, C. Duchesne, P. A. Bossé, and J. Fournier, “Application of numerical image analysis to process diagnosis and physical parameter measurement in mineral processes-Part I: flotation control based on froth textural characteristics,” Minerals Engineering, vol. 19, no. 6–8, pp. 734–747, 2006. View at Publisher · View at Google Scholar · View at 14. D. C. He and L. Wang, “Texture unit, texture spectrum, and texture analysis,” IEEE Transactions on Geoscience and Remote Sensing, vol. 28, no. 4, pp. 509–512, 1990. View at Publisher · View at Google Scholar · View at Scopus 15. J. X. Li and L. T. Tran, “Nonparametric estimation of conditional expectation,” Journal of Statistical Planning and Inference, vol. 139, no. 2, pp. 164–175, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 16. L. Guo, H. Wang, and A. P. Wang, “Optimal probability density function control for NARMAX stochastic systems,” Automatica, vol. 44, no. 7, pp. 1904–1911, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 17. P. Jampana, S. L. Shah, and R. Kadali, “Computer vision based interface level control in separation cells,” Control Engineering Practice, vol. 18, no. 4, pp. 349–357, 2010. View at Publisher · View at Google Scholar · View at Scopus 18. D. Beneventi, X. Rousset, and E. Zeno, “Modelling transport phenomena in a flotation de-inking column: focus on gas flow, pulp and froth retention time,” International Journal of Mineral Processing, vol. 80, no. 1, pp. 43–57, 2006. View at Publisher · View at Google Scholar · View at Scopus 19. J. J. Cilliers and C. L. E. Swartz, “Simulation of quantitative fault diagnosis in backfill hydrocyclones,” Minerals Engineering, vol. 8, no. 8, pp. 871–882, 1995. View at Scopus 20. C. H. Xu, W. H. Gui, C. H. Yang, H. Q. Zhu, Y. Q. Lin, and C. Shi, “Flotation process fault detection using output PDF of bubble size distribution,” Minerals Engineering, vol. 26, no. 1, pp. 5–12, 2012. View at Publisher · View at Google Scholar 21. T. Kourti, “Process analysis and abnormal situation detection: from theory to practice,” IEEE Control Systems Magazine, vol. 22, no. 5, pp. 10–25, 2002. View at Publisher · View at Google Scholar · View at Scopus 22. D. Hodouin, J. F. MacGregor, M. Hou, and M. Franklin, “Multivariate statistical analysis of mineral processing plant data,” CIM Bulletin of Mmerul Processing, vol. 86, no. 975, pp. 23–34, 1993. 23. T. Kourti, “Application of latent variable methods to process control and multivariate statistical process control in industry,” International Journal of Adaptive Control and Signal Processing, vol. 19, no. 4, pp. 213–246, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 24. D. Garcia-Alvarez, M. J. Fuente, and G. I. Sainz, “Fault detection and isolation in transient states using principal component analysis,” Journal of Process Control, vol. 22, no. 3, pp. 551–563, 2012. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/mpe/2013/530349/","timestamp":"2014-04-19T09:55:23Z","content_type":null,"content_length":"256668","record_id":"<urn:uuid:7a2d5c72-c366-4668-a670-6dca89f36d26>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
section numlaws Laws about Numbers in Z Sam Valentine Written September 1999. Last updated September 1999. section numlaws parents numdefs, corelaws This section contains the CADi Prelude-based Laws Those laws which depend solely on the definitions in the prelude are given in the file corelaws.z, which is a parent of this one. Theorems about the Natural Numbers We could here state and prove various useful theorems about , both because of their usefulness, and to assist in proving the consistency of the definitions in numdefs. The theorems would mainly be to the effect that is an Abelian monoid under _ * _, and _ * _ distributes through _ + _. TimesDistributesThruPlus == TimesConstInjective == [1] | a * c = b * c IT 18-Feb-2000
{"url":"http://www.cs.york.ac.uk/hise/cadiz/numlaws.html","timestamp":"2014-04-21T07:19:56Z","content_type":null,"content_length":"5161","record_id":"<urn:uuid:5e8d638e-7b15-44dd-9943-1a84babf421f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Resolution: standard / high Figure 3. The percentage of methylated imprinting center of the Prader-Willi locus (PWS-IC) shows a positive correlation with copy number at the 15q locus. (a) Example of methylation-sensitive high-resolution melting-curve analysis showing > 80% methylated PWS-IC for duplication of 15q11-q13 (dup15q) sample 7041 (3:1 expected genetic ratio of the number of maternal copies of 15q11-q13 to paternal copies (M:P)) and sample 7014 (5:1 expected M:P ratio) (blue curves). All control and autism samples showed, on average, a 62% methylated PWS-IC (sample 6184 autism (orange) and sample 1649 control (green) here). The Prader-Willi syndrome uniparental disomy (PWS-UPD) (yellow) and the Angelman syndrome (AS) deletion (dark red) are shown as indicators of completely unmethylated (AS del) or completely methylated (PWS-UPD) signals from the PWS-IC. The blue arrows show the determination of percentage methylation from the percentage relative signal y-axis. (b) This graph presents the normalized methylation ratio (M:P) shown in Figure 1 grouped by genotype. Note that both control and autism samples cluster tightly at the 1:1 ratio in all samples. There was a single interstitial duplication sample (predicted 2:1 ratio) and a single complex isodicentric 15q (idic15) duplication sample (predicted 5:1 ratio), though the majority of cases examined were idic15q with four copies of the locus (predicted 3:1 ratio). Although somewhat variable, the mean ratio for the Dup 3:1 samples was approximately 3:1 and significantly different from both controls (P = 0.0037) and autism cases (P = 0.0035) by t-test using Welch's correction. The individual Dup 2:1 and Dup 5:1 samples showed higher and lower than predicted ratios, respectively. Error bars represent SEM. There was a positive correlation between the percentage methylation of the PWS-IC and the number of copies of the 15q region, which contains the PWS-IC, on the basis of simple regression analysis (P < 0.001). Scoles et al. Molecular Autism 2011 2:19 doi:10.1186/2040-2392-2-19 Download authors' original image
{"url":"http://www.molecularautism.com/content/2/1/19/figure/F3","timestamp":"2014-04-20T23:27:44Z","content_type":null,"content_length":"13762","record_id":"<urn:uuid:ccd3cdcd-60b9-40f3-9a17-ac5c78d0cf48>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Computing Pioneers in Amsterdam by Henk Nieland Quantum computing has become a hot topic in the past five years. Worldwide interest rapidly increased, for example from IBM, Caltech, Lucent, AT&T, and NSA, since existing techniques will soon reach their limits, and several computations could be done much faster on a quantum computer. The Fourth Workshop on Quantum Information Processing was organized by CWI 9-12 January 2001 in Remember that a cat is said to have nine lives? A Quantum Cat does even better: it can be dead and alive at the same time. This wizardry is at the heart of quantum computing, a novel way of computing based on certain characteristics of quantum mechanics. It emerged in the 1980s as a theoretical alternative to traditional computing, which will be faced with its physical limitations soon. The possibilities are very promising, but the field is still in its infancy, and realization of a working quantum computer is still an enormous challenge. Quantum mechanics offers a novel way of computing, enabling computations out of reach for traditional computers, even if these could be miniaturized to the same level. A crucial notion is superposition . Quantum mechanics describes matter as a superposition of all of its possible states, each with a certain amplitude a complex number whose modulus squared is interpreted as a probability. These amplitudes thus have addition properties different from ordinary probabilities a feature that becomes apparent in interference . Precisely this feature together with the superposition principle gives quantum computing its power. The evolution of a system is described by a unitary operation on the superposition which preserves the probability interpretation of the amplitudes. In traditional computing the smallest unit is a bit which can only take the values 0 or 1. Quantum computing is based on qubits which consist of a superposition of the two classical states 0 and 1 each with its own amplitude. Several physical realizations of qubits have been proposed, for example an atom in the ground state (0) or excited state (1). A computation starts with a number of qubits in a well-determined state, on which a series of unitary operations is performed (the algorithm). Because of interference certain superpositions are intensified, whereas others cancel each other out. After a number of steps the final state (the result) is observed. During the intermediate evolution all possible computational paths are followed simultaneously (quantum parallelism), but they remain hidden in a box. In certain cases this form of computation may lead to a tremendous speed-up compared to traditional methods, but at the same time it poses equally tremendous problems: the smallest disturbance from the environment may ruin the delicate superposition and may render the computation meaningless. Schrödinger s Paradox Nobel Prize winner Erwin Schrödinger invented in the 1930s a thought experiment to elucidate a seemingly absurd consequence of quantum mechanics. A cat sits in a closed box together with one radio-active atom. This atom will decay with a certain probability, according to the laws of quantum mechanics. Upon decaying, the emitted nuclear particle crushes a thin glass tube filled with cyanide and the cat dies instantly. Being outside the box, we don t know whether the radio-active particle has decayed and thus whether the cat is dead or alive. Quantum mechanically the animal is in a superposition of both states with a certain probability. If we look inside the box, however, we see only one of the two states: the cat is either dead or alive. An observation of the superposition of states makes it collapse to one of the states with a certain probability. (Drawing by Tobias Baanders, CWI.) An important notion in quantum mechanics and quantum computing is entanglement : two (or more) qubits can be prepared in such a way that, although they are separated in space one could be on Mars and the other here on earth they have correlations that can not be explained by classical probability theory, for example two atomic nuclei having unknown but opposite spins. As soon as one qubit is measured, the content of the other is also known, no matter how far they are apart. This property can be used for error correction during the computation, as well as for more efficient transmission of information and for certain forms of distributed computations. Quantum computing gained momentum after P.W. Shor showed in 1994 how to construct an efficient algorithm for factoring large numbers, which is of crucial importance for, eg, internet security, followed by an algorithm by L.K. Grover (1996) to search a database quadratically faster than any classical algorithm. The Workshop in Amsterdam (http://www.cwi.nl/~qip) drew 150 participants from 24 countries, including several pioneers of the field, such as Charles Bennett and David DiVincenzo (IBM Yorktown Heights), Richard Jozsa (Bristol), Gilles Brassard (Montreal), Umesh Vazirani (Berkeley), as well as Nobel laureate Gerard t Hooft (Utrecht). CWI started the first quantum computing research group in The Netherlands (and one of the first in Europe) in 1995, and has contributed significant discoveries to the field. CWI has applied quantum computing notions to communication complexity, and found general limitations of quantum computers, as well as some new speed-ups. CWI also studies quantum information theory. The European Union has recognized the importance of this research and has given the group substantial support. Please contact: Harry Buhrman - CWI Tel: +31 20 592 4076 / 4078 E-mail: Harry.Buhrman@cwi.nl
{"url":"http://www.ercim.eu/publication/Ercim_News/enw45/nieland.html","timestamp":"2014-04-16T04:51:26Z","content_type":null,"content_length":"7285","record_id":"<urn:uuid:98fe818c-074a-4722-af11-fb22306e008e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
number fields with no unramified extensions? up vote 17 down vote favorite Asked by a colleague: Do we believe that there are an infinite number of number fields that have no unramified extensions? The rational field Q is the most salient example of such a field, and a couple of others are known. But what is conjectured, and what is the philosophy here? 3 I give a large list of examples, and KConrad gives details of how to prove it for some specific cases, here: mathoverflow.net/questions/26491/… – Cam McLeman Oct 5 '10 at 23:41 7 As to the philosophy, note that since such a field has class number 1, this problem is strictly harder than the "there are infinitely many number fields of class number one" problem (which is open, old, and probably very hard). – Cam McLeman Oct 5 '10 at 23:46 1 @Mahesh: If we could show that, then for each such $K$ we could consider its maximal unramified extension, say $L$. Then $L$ has no unramified extensions, and so ranging over all $K$, we obtain an infinite number of number fields with no unramified extensions. So your question is equivalent. – Cam McLeman Oct 6 '10 at 1:37 add comment 1 Answer active oldest votes The question itself is certainly still open. Mostly as an exercise for myself, I'll coalesce my comments above into an answer, and add in some details about where various pieces of the philosophy come from. The starting point is the following philosophy: The ring of integers in any number field with sufficiently small root discriminant admits no non-trivial unramified extensions. This philosophy can occasionally be made precise. For example, Yamamura uses tables of root discriminant bounds from Diaz y Diaz to conclude that for a quadratic imaginary number field $K$ of discriminant $|d|\leq 499$ (or $|d|\leq 2003$ under GRH), the maximal unramified extension of $K$ is a finite extension. This is particularly relevant since each of these maximal unramified extensions clearly has the property that you ask about, that they themselves admit no unramified extensions. The bad news is that the set of number fields with sufficiently small root discriminant to apply these results (at least, without a tremendous of extra effort analyzing carefully constructed extensions) is finite. In particular, results of Odlyzko imply that that there are only finitely many number fields with root discriminant less than $4\pi e^\gamma\approx 22.3$, where $\ gamma$ is the Euler-Mascheroni constant (yeah, that Euler-Mascheroni constant!). Under GRH, this remains true for the larger bound $8\pi e^\gamma$. In fact, as a nice concrete factoid to hold on to, Jones and Roberts have shown that there are exactly 7063 abelian number fields with root discriminant under $8\pi e^\gamma$, and sort these according to their Galois group. up vote 13 down Back to good news: So we now ask ourselves whether or not these numbers $4\pi e^\gamma$ and $8\pi e^\gamma$ can be improved. The answer is a definite yes. If we partition number fields based vote on their proportion of real and complex embeddings, we can get improvements on fields with increased proportion of complex embeddings, up to an improvement factor of $e$ for totally complex number fields. Further, since Odlyzko's argument stems from work of Stark estimating values of $L$-functions, it seems plausible to believe there are analytic improvements to be made as well. So maybe we can keep pushing these bounds higher and higher, enough so that we find infinitely many number fields with smaller root discriminant. More bad news: There's an inherent limit to how good we can make these bounds, coming from the study of class field towers. (Okay, so this is actually really good news for those of us who like to study class field towers, but I digress...) Namely, since root discriminants are unchanged when moving up an unramified extension, fields with an infinite class field tower provide a stopping point for any claim of the form "there are finitely many number fields with root discriminant less than such-and-such bound." This is also something that can be partitioned by proportion of real and complex embeddings, and it's been a hot topic recently to see how limited these Odlyzko-type bounds can get. Recently, Hajir and Maire have further refined this line of thought by considering towers of number fields with tame ramification. So, long story short, from this point of view, the big unknown is whether, once we know optimal bounds on root discriminants, whether or not there will be infinitely many number fields with root discriminants less than that bound. Of course, there's also the possibility that there are other techniques for proving that a number field has no unramified extensions that do not go through root discriminants -- perhaps a form of non-abelian class field theory can come to the rescue, as abelian class field theory can address only the weaker (but still open and fantastically interesting) question of fields with no abelian unramified extensions. 1 There is more discussion about this here mathoverflow.net/questions/31538/… – Mahesh Kakde Oct 6 '10 at 3:53 Since you mentioned about the problem of infinitely many fields of class number one, I would like to mention the conjectures that Coates recently made in a talk in Kyoto. Take the 1 extension of $\mathbb{Q}$ with Galois group $\widehat{\mathbb{Z}}$. Then he conjectures that the set class numbers of all fields in this $\widehat{\mathbb{Z}}$ extension is a bounded set. He also conjectures that for any prime $p$ the class number of all fields in the $\mathbb{Z}_p$ extension of $\mathbb{Q}$ is 1. This was conjectured by Weber for $p=2$. – Mahesh Kakde Oct 6 '10 at 11:36 Interesting, thanks. Do you know of a written reference somewhere? – Cam McLeman Oct 6 '10 at 12:28 1 No. At this moment it is fair to say that Coates raised them as questions based on some numerical evidence though he called them conjectures. There is a very weak numerical evidence for the second part. Some japanese mathematicians (Okazaki, Fukuda, Komatsu were the names mentioned in the talk of Coates) have shown that for p=2,3 the ideal class groups of fields in the $ \mathbb{Z}_p$ extension of $\mathbb{Q}$ has no prime divisor less than a million (I guess). – Mahesh Kakde Oct 6 '10 at 14:20 I am sorry for a vary late comment on Mahesh' one, but recently Morisawa (student of Komatsu, I think) has shown that given a finite set of primes $S$, for each number field $F$ inside $\ 1 mathbb{Q}^{cycl,S}=\prod_{p\in S}\mathbb{Q}^{cycl,p}$, there is a constant $c=c(S,F)$ such that each prime $\ell>c$ whose decomposition field is $F$ does not divide the class number of any other number field contained in $\mathbb{Q}^{cycl,S}$. It appeared in J. Nmber Theory 133 (2013). – Filippo Alberto Edoardo Aug 8 '13 at 2:14 show 1 more comment Not the answer you're looking for? Browse other questions tagged nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/41219/number-fields-with-no-unramified-extensions/41227","timestamp":"2014-04-17T01:37:54Z","content_type":null,"content_length":"64465","record_id":"<urn:uuid:66ea3868-2f2c-4341-b80b-9421ddb5a93b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
The Most Popular Multimedia Posts Year 2012 in Review – The Most Popular Multimedia Posts This is the fourth post in the Year in Review Series. In this post, I will share to you the most popular multimedia posts. These posts include multimedia resources that can be used by teachers and students in teaching and learning mathematics. The Most Popular Multimedia Posts Subscribe to Math and Multimedia If you like Math and Multimedia, you are invited to join more than 2000 subscribers. You may also want to visit my other blogs.
{"url":"http://mathandmultimedia.com/2012/12/29/most-popular-multimedia-posts/","timestamp":"2014-04-18T02:59:55Z","content_type":null,"content_length":"335325","record_id":"<urn:uuid:468d459d-83f9-4176-9c13-ce37c99e54b3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [css3-images] Linear gradients feedback From: Dave Singer <singer@apple.com> Date: Tue, 7 Sep 2010 20:27:03 -0700 Cc: "Tab Atkins Jr." <jackalmage@gmail.com>, Simon Fraser <smfr@me.com>, www-style list <www-style@w3.org> Message-Id: <B3B854AA-1428-4DFB-A375-5FD0D16DBAED@apple.com> To: fantasai <fantasai.lists@inkedblade.net> actually I think I can be vastly clearer and also merge a whole load of suggestions/solutions. and (the devil of people who used to be in research departments) generalize! Try this: Linear gradients. These are drawn between two parallel lines (the 'from' and 'to' lines), which are perpendicular to the gradient vector. The intersection of the 'from' line and the gradient vector is less far along the vector than the intersection of the 'to' line. Each of these lines intersects the shape to be filled at the furthest possible extremity in the negative ('from') and positive ('to') directions along the gradient vector. (Which means we don't need to care about what the colors are before 'from' or after 'to' since they are not visible). This generalizes your diagram for the 'to' line and uses it for the 'from' line. It also covers the degenerate cases where the furthest extremity is a line (0, 90, 180, and 270). It fills from any corner or edge in any stable direction. So, what fill cases does this *not* cover? Well, those whose direction is determined by the geometry of the box that they are filling. Since the box edges are vertical and horizontal (known directions), that leaves us with diagonals. So, the next more general syntax is where the first argument is *either* a vector direction (number), or one of the four vectors bl- tr, tl-br, tr-bl, br-tl (t[op], b[ottom], l[eft], r[ight], obviously). We only need the diagonals as a special case. Ah, but we can deal with some of Elika's incisive question about the axis system in use, if we go for two arguments; then the gradient vector is defined by an angle relative to a base vector. So, then we have a syntax with two arguments; a base direction, specifying two corners plus an angle relative to that base direction the first argument is one of the possibilities: b-t, t-b, l-r, r-l, bl- tr, tl-br, tr-bl, br-tl the second is an angle relative to that base vector the two combined give a computed gradient vector, and after that, everything falls out. Transitions are then defined as interpolating between the computed gradient vectors, of course. Now we only need one syntax and we can interpolate, and so on. linear-gradient( base-direction, relative-angle, from-color, to-color, {stop%, stop-color}* ) where from-color is defined as from 0% and to- color is defined as to 100%. cleaner? clearer? Dave Singer Multimedia and Software Standards, Apple Received on Wednesday, 8 September 2010 03:27:52 GMT This archive was generated by hypermail 2.3.1 : Tuesday, 26 March 2013 17:20:31 GMT
{"url":"http://lists.w3.org/Archives/Public/www-style/2010Sep/0208.html","timestamp":"2014-04-18T23:43:34Z","content_type":null,"content_length":"11805","record_id":"<urn:uuid:8e362251-78a9-49b9-9993-9d40641d8c8a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Parity of a Particle (and Parity of the Higgs in particular) No. CP is an approximate accidental symmetry of the Standard model (SM) lagrangian. Oh, ok, my mistake. The same is true for any accidental symmetry. You don't define the electron,muon,tau to have lepton number=1 before writing the lagranagian, later you see that the symmetry exists with those But still, isn't the accidental lepton number symmetry a little different? Consider the charged current term in the lagrangian: [itex]\mathcal{L}_e^{\textrm{CC}} \propto \overline{\nu_e}\,\gamma^\mu Suppose we assign lepton number to the fields like so: [itex]\quad \overline{\nu_e}\rightarrow -1 \qquad e \rightarrow 1 \qquad W^\mu \rightarrow 0[/itex] Then, looking at the term [itex]\mathcal{L}_e^{\textrm{CC}}[/itex], considering lepton number additive, we get [itex]-1+1+0=0[/itex] and this means the interaction associated with this term conserves lepton number. Lepton number is assigned to the fields and not to the whole term. Consider the Higgs interaction term with the electron, if the Higgs were a pseudoscalar: [itex]\mathcal{L}_e^{\textrm{H}} \propto \overline{e}\,\gamma_5\,e\,H[/itex]. Here, again, one assigns lepton number to the fields: [itex]\quad \overline{e}\rightarrow -1 \qquad e \rightarrow 1 \qquad H \rightarrow 0[/itex] The interaction again conserves lepton number. Lepton number is assigned to the fields and not to the bilinear [itex]\overline{e}\,\gamma_5\,e[/itex] as a whole. Now, regarding CP, it would be different in the sense that [itex]\overline{e}\,\gamma_5\,e[/itex] (a bilinear, not a field) already has a well defined way of transforming under CP: [itex]\overline{e}\,\gamma_5\,e\, \overset{CP}{\rightarrow}\, -\overline{e}\,\gamma_5\,e\ [/itex], because [itex]\det(CP)=-1[/itex]. Assigning a CP number to the field [itex]H[/itex] would then translate our desire to either have a (almost) CP invariant lagrangian or not. So I can't see here how the symmetry exists independently of the assignment, since [itex]\overline{e}\,\gamma_5\,e[/itex] already has a fixed assignment of CP-number ([itex]-1[/itex]). I hope the mistake in my reasoning is easy to spot.
{"url":"http://www.physicsforums.com/showthread.php?p=4168559","timestamp":"2014-04-18T10:40:21Z","content_type":null,"content_length":"70411","record_id":"<urn:uuid:2d4ef864-e6ab-4c2e-a57c-1ef79eaf3cc8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Biomimetic Robotics : Mechanisms and Control ISBN: 9780521895941 | 0521895944 Edition: 1st Format: Hardcover Publisher: Cambridge University Press Pub. Date: 1/26/2009 Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/biomimetic-robotics-mechanisms-control/bk/9780521895941","timestamp":"2014-04-20T03:44:32Z","content_type":null,"content_length":"30852","record_id":"<urn:uuid:657ff5d5-dc74-4ac2-aca7-fc2aa1f44b66>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Research interest: Knot theory and low dimensional topology Office: LS 121 Phone: 478.475.8623 Email: steven.wallace AT maconstate DOT edu Macon State Dept. of Math and Computer Science: math.maconstate.edu Current Semester: Math "ArXiv" search engine: ArXiv • Fall 2011 "KnotInfo" table of knot invariants: KnotInfo Archive of Past Semesters' Class Web Pages "Knots make humans human." WashingtonPostArticle Special Highlight from Spring 2011: Click here to check out some "Mathematical Modeling" student project presentations. Pi Mu Epsilon Mathematics Honorary Society: STUDENTS APPLY TO BE A 2011-2012 MEMBER HERE Go to Favorites for some more cool links. Curriculum Vitae This site was last updated 08/19/11
{"url":"http://math.maconstate.edu/swallace/","timestamp":"2014-04-18T18:40:03Z","content_type":null,"content_length":"12450","record_id":"<urn:uuid:e394660c-c14f-479d-b288-67783d28aed4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalized Molecular Replacement Next: Self-rotation Function Up: Molecular Replacement Previous: A Mathematica Script The following sections show how the 26-10 Fab fragment complexed with digoxin was solved by generalized molecular replacement (Brünger 1991c). The space group of the 26-10 Fab/digoxin crystals is 1. Self-rotation function. 2. Modification of the elbow angle of a known Fab structure. 3. Cross-rotation function with the modified Fab structure. 4. Filtering the rotation function by PC-refinement. 5. Analysis of the PC-refinement. 6. Translation function for molecule A, using the PC-refined model. 7. Translation function for molecule B. 8. Combined translation function to determine the relative position between A,B. 9. Rigid-body refinement. In general, one has to try several different starting elbow angles (spaced approximately 10 PC-refinements of other degrees of freedom. The modification of the Fab example input files should be straightforward. An overview of the strategy is shown in Fig. 17.1. Figure 17.1: Overview of molecular replacement. Sat Mar 11 09:37:37 PST 1995
{"url":"http://www.pasteur.fr/recherche/unites/Binfs/xplor/manual/node342.html","timestamp":"2014-04-18T08:09:01Z","content_type":null,"content_length":"4453","record_id":"<urn:uuid:e28eb32f-9655-4b58-9cf3-1c8758ae217d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: please help me just 20 min left in my exam my question is.Weights of male mountain lions follow the normal distribution with a median of 150 lb and an interquartile range of 8.2 lb. Find the 75th and 95 percentile of the weights. • one year ago • one year ago Best Response You've already chosen the best response. plz can anybody tell Best Response You've already chosen the best response. The scatter diagram of midterm scores and final scores in a large class is football shaped. For about 80% of the students, the regression estimate of final score based on midterm score is correct to within 15 points. For about 95% of the students, the regression estimate of final score based on midterm score is correct to within ___________ points. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51551b3de4b07077e0bfe50f","timestamp":"2014-04-16T22:47:06Z","content_type":null,"content_length":"35658","record_id":"<urn:uuid:a627e609-44ef-4dda-888c-17f1a72a38da>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Kawasaki Disease Aneurysm Z-Scores: Another Smackdown Boston Children’s versus Children’s National Medical Center for a “giant” knockout When the Children’s National (CNMC) coronary artery z-score equations were published in 2008, I briefly compared them to the 2007 Boston data and noted their similarities. In my opinion, the manner in which the CNMC equations handle the “standard deviation” makes these incompatible with some newly proposed cutoffs. Let me explain. Classifying Aneurysms The current AHA criteria for classifying coronary artery aneurysms relies on a combination of z-scores and absolute diameters: • any segment with a z-score of > 2.5 = abnormal • <5 mm = small • 5 – 8 mm = large • ≥8 mm = giant A recent article by Manlhoit et al. points out the folly of using absolute measurements in this instance. They then take the logical next step by introducing a classification system based on z-scores. Using their previously published data (from Boston, see above), the authors advocate the following coronary artery aneurysm z-score classification: • ≥2.5 – <5 = small • ≥5 – <10 = large • ≥10 = giant The clinical science behind establishing these cutoff points is presented in the article and is beyond the scope of what I am trying to do here. However, it is worth noting that (at least to me) the proposed system has a certain elegance and symmetry— it just seems reasonable. Z-Score Equations The Boston and CNMC equations each predict very similar values for the BSA adjusted mean diameter by using an allometric model. The two equations also yield similar results out to about z-scores of +2. But the similarities end where coronary artery abnormalities begin. Using the new criteria proposed for giant aneurysms ( z ≥10 ) and applying the CNMC equations, patients who previously had giant aneurysms ( ≥8 mm) now only have large aneurysms. (See this page for the interactive comparison) The big difference between the two z-score equations (z = [score – mean] / standard deviation ) is in how they deal with the standard deviation. The Boston equations use a separate regression (on BSA) to predict the SD, while the CNMC equations use the regression mean square error (MSE) statistic as a substitute for the SD. I have always been bothered by the patent substitution of the regression MSE (usually, the square root of the MSE i.e., the RMSE) for the population SD— particularly for the purpose of calculating z-scores. While the “transform both sides” strategy is perfectly legitimate for stabilizing the variance (and indeed, for discovering the allometric relationship!), if you play around with the regression residuals and then back-transform (i.e., exponentiate) your calculations — you have just modeled positive skew. Detecting skew shouldn’t be all that hard to do. If the values are distributed normally, then it stands to reason that the residuals (observed - predicted) are also normally distributed. A simple plot of the residuals should show us what is going on. Here is the frequency vs. residuals plot from the recent fetal echo reference values of Lee et al.: (Similar residuals plots are provided by the crazy-cool online curve fitting at ZunZun.com.) That’s not to say that skew doesn’t exist. Indeed, that is part of the point and elegance of the recently applied LMS method. It is imperative that we do something to examine the presence or absence of skew, and then describe how we intended to deal with it. Unfortunately, both of these investigations fail to mention this fundamental data characteristic in their respective manuscripts. Bottom Line Due to unexplored assumptions about the nature of the residuals/skew of the data, z-score cutoff values are not universal and are absolutely dependent upon their underlying reference z-score
{"url":"http://fortuitousconcatenation.blogspot.com/2010/04/kawasaki-disease-aneurysm-z-scores.html","timestamp":"2014-04-19T04:40:55Z","content_type":null,"content_length":"86388","record_id":"<urn:uuid:d5eeb118-0931-4520-a813-db644abb8bde>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
No Current Programmatic Workshops 1. Organizers: Raf Cluckers (Université de Lille I (Sciences et Techniques de Lille Flandres Artois)), LEAD&nbspJonathan Pila (University of Oxford), Thomas Scanlon (University of California, The workshop will feature talks in a range of topics where model theory interacts with other parts of mathematics, especially number theory and arithmetic geometry, including: motivic integration, algebraic dynamics, diophantine geometry, and valued fields. Updated on Apr 03, 2014 09:21 AM PDT This 2-day workshop will showcase the contributions of female mathematicians to the three main themes of the associated MSRI program: Shimura varieties, p-adic automorphic forms, periods and L-functions. It will bring together women who are working in these areas in all stages of their careers, featuring lectures by both established leaders and emerging researchers. In addition, there will be a poster session open to all participants and an informal panel discussion on career issues. Updated on Mar 17, 2014 09:50 AM PDT The goal of this workshop is to give a practical introduction to some of the main topics and techniques related to the August-December 2014 MSRI program, "New geometric methods in number theory and automorphic forms." The workshop is aimed at graduate students and interested researchers in number theory or related fields. There will be lecture series on periods of automorphic forms, Shimura varieties, and representations of p-adic groups,as well as more advanced topics, including p-adic Hodge theory and the cohomology of arithmetic groups. Updated on Mar 18, 2014 01:21 PM PDT Within the broad range of geometric representation theory the Connections Workshop will focus on three research topics in which we expect particularly striking new developments within the next few years: * Categorical and geometric structures in representation theory and Lie superalgebras * Geometric construction of representations via Shimura varieties and related moduli spaces * Hall algebras and representations The workshop will bring together researchers from these different topics within geometric representation theory and will thus facilitate a successful start of the semester program. It will give junior researchers from each of these parts of geometric representation theory a broader picture of possible applications and of new developments, and will establish a closer contact between junior and senior researchers. This workshop is aimed at encouraging and increasing the active participation of women and members of under-represented groups in the MSRI program. Updated on Feb 07, 2014 08:03 PM PST Geometric Representation Theory is a very active field, at the center of recent advances in Number Theory and Theoretical Physics. The principal goal of the Introductory Workshop will be to provide a gateway for graduate students and new post-docs to the rich and exciting, but potentially daunting, world of geometric representation theory. The aim is to explore some of the fundamental tools and ideas needed to work in the subject, helping build a cohort of young researchers versed in the geometric and physical sides of the Langlands philosophy. Updated on Feb 04, 2014 08:42 AM PST The workshop will focus on the role of categorical structures in number theory and harmonic analysis, with an emphasis on the setting of the Langlands program. Celebrated examples of this theme range from Lusztig's character sheaves to Ngo's proof of the Fundamental Lemma. The workshop will be a forum for researchers from a diverse collection of fields to compare problems and strategies for solutions. Updated on Feb 07, 2014 06:38 PM PST L-functions attached to Galois representations coming from algebraic geometry contain subtle arithmetic information (conjectures of Birch and Swinnerton-Dyer, Deligne, Beilinson, Bloch and Kato, Fontaine and Perrin-Riou). Langlands has predicted the existence of a correspondence relating these L-functions to L-functions of automorphic forms which are much better understood. The workshop will focus on recent developments related to Langlands correspondence (construction of Galois representations attached to automorphic forms via the cohomology of Shimura varieties, modularity of Galois representations...) and arithmetic of special values of L-functions. It will be dedicated to Michael Harris as a tribute to his enormous influence on the themes of the workshop. Updated on Jan 28, 2014 07:05 PM PST This two-day workshop will consist of short courses given by prominent female mathematicians in the field. These introductory courses will be appropriate for graduate students, post-docs, and researchers in areas related to the program. The workshop will also include a panel discussion featuring successful women at various stages in their mathematical careers. Updated on Jan 21, 2014 08:10 PM PST The deformation theory of geometric structures on manifolds is a subfield of differential geometry and topology, with a heavy infusion of Lie theory. Its richness stems from close relations to dynamical systems, algebraic geometry, representation theory, Lie theory, partial differential equations, number theory, and complex analysis. The introductory workshop will serve as an overview to the program. It aims to familiarize graduate students, post-docs, and other researchers to the major topics of the program. There will be a number of short courses. Updated on Jan 28, 2014 05:57 PM PST Updated on Jan 21, 2014 08:47 PM PST Updated on Jan 21, 2014 08:52 PM PST The Research Workshop of the ``Dynamics on moduli spaces of geometric structures'' will concentrate on some of the following general interrelated themes: (1) Geometric structures on the spaces of geometric structures which extend and generalize classical constructions on Teichmüller spaces, such as the Weil-Petersoon metric, the pressure metric, the Teichmüller metric and its geodesic flow, Fenchel-Nielsen coordinates, Fock-Goncharov Thurson-Penner coordinates, and the symplectic and Poisson geometries (2) Relations with harmonic maps, Riemann surfaces, complex geometry: specifically Higgs bundles, holomorphic differentials (quadratic, cubic, etc.) as parameters for representations of the fundamental group, hyperkähler and complex symplectic geometry of moduli spaces, lifts of Teichmüller geodesic flows to flat bundles of character varieties (3) Asymptotic properties of higher Teichmüller spaces, including generalized measured geodesic laminations, Culler-Morgan-Shalen asymptotics of character varieties, degenerations of geometric structures and discrete subgroups (4) Actions of mapping class groups and outer automorphism groups, properness criteria for Anosov representations and their generalizations, properness criteria for non-discrete representations, chaotic actions of mapping class groups and the monodromy map from structures to representations (5) Classification of exotic geometric structures, tameness criteria, generalizations of ending lamination-type invariants to higher rank structures, rigidity and flexibility for thin subgroups, arithmeticity conditions, and geometric transitions Updated on Jan 21, 2014 08:15 PM PST The Advances in Homogeneous Dynamics workshop will feature the speakers whose work is at the forefront of the eld. There will be a panel discussion accompanied by an open problem session to lay out possible directions for the research in homogeneous dynamics. Talks will be in a broad range of topics and this will help to build more connections between researchers interested in dynamical systems, number theory and geometry. For example we hope that the involvement of the participants of the other program held at MSRI during the same academic year (Dynamics on Moduli Spaces of Geometric Structures, Spring 2015) would create new connections between the topics. There will be shorter talks presented by early-career researchers Updated on Jan 21, 2014 08:54 PM PST The purpose of this meeting is to help junior female researchers to become familiar with the focus topics of the main MSRI program, and also for the junior researchers to have an opportunity to get acquainted with more senior women researchers in differential geometry. Updated on Mar 10, 2014 08:35 AM PDT The week will be devoted to an introduction to modern techniques in Riemannian geometry. This is intended to help graduate students and younger researchers get a headstart, in order to increase their participation during the main semester programs and research lectures. To increase outreach, the week will focus on Riemannian geometry and should be largely accessible. Some minicourses on topics of recent interest will be included. The workshop will also have semi-expository lectures dealing with aspects of spaces with curvature bounded from below, since such spaces will occur throughout the semester. We expect that many Berkeley mathematicians and students will participate in the introductory workshop. Updated on Jun 07, 2013 02:05 PM PDT The workshop will integrate elements from complex differential geometry with Einstein metrics and their generalizations. The topics will include - Existence of Kähler-Einstein metrics and extremal Kähler metrics. Notions of stability in algebraic geometry such as Chow stability, K-stability, b-stability, and polytope stability. Kähler-Einstein metrics with conical singularities along a divisor. - Calabi-Yau metrics and collapsed limit spaces. Connections with physics and mirror symmetry. - Einstein metrics and their moduli spaces, ε-regularity, noncompact examples such as ALE, ALF, and Poincaré-Einstein metrics. Generalizations of the Einstein condition, such as Bach-flat metrics and Ricci solitons. - Sasaki-Einstein metrics and metrics with special holonomy. New examples and classification problems. Updated on Aug 03, 2013 09:30 AM PDT The workshop will concentrate on parabolic methods in both Riemannian and complex geometry. The topics will include - Ricci flow. Analytic questions about Ricci flow in three dimensions. Possible applications of Ricci flow to 4-manifold topology. Ricci flow in higher dimensions under curvature assumptions. - Kähler-Ricci Flow. Applications to the Kähler-Einstein problem. Connections to the minimal model program. Study of Kähler-Ricci solitons and limits of Kähler-Ricci flow. - Mean curvature flow. Singularity analysis. Generic mean curvature flow. - Other geometric flows such as Calabi flow and pluriclosed flow. Updated on Jun 07, 2013 10:39 AM PDT Past Programmatic Workshops Recent innovations in higher category theory have unlocked the potential to reimagine the basic tools and constructions in algebraic topology. This workshop will explore the interplay between these higher and $\infty$-categorical techniques with classical algebraic topology, playing each off of the other and returning the field to conceptual, geometrical intuition. Updated on Apr 15, 2014 11:30 AM PDT The development of model theory has always been influenced by its potential applications. Recent years have seen a remarkable flowering of that development, with many exciting applications of model theory in number theory and algebraic geometry. The introductory workshop will aim to increase these interactions by exposing the techniques of model theory to the number theorists and algebraic geometers, and the problems of number theory and algebraic geometry to the model theorists. The Connections for Women workshop will focus on presenting current research on the borders of these subjects, with particular emphasis on the contributions of women. In addition, there will be some social occasions to allow young women and men to make connections with established researchers, and a panel discussion addressing the challenges faced by all young researchers, but especially by women, in establishing a career in mathematics. Updated on Feb 12, 2014 09:59 AM PST Model theory is a branch of mathematical logic whose structural techniques have proven to be remarkably useful in arithmetic geometry and number theory. We will introduce in this workshop some of the main themes of the program. In particular, we will be offering the following tutorials: 1. An Introduction to Stability-Theoretic Techniques, by Pierre Simon. 2. Model Theory and Diophantine Geometry, by Antoine Chambert-Loir, Ya'acov Peterzil, and Anand Pillay. 3. Valued Fields and Berkovich Spaces, by Deirdre Haskell and Martin Hils. 4. Model Theory and Additive Combinatorics, by Lou van den Dries. In addition to the tutorials there will be several "state of the art" lectures on the program topics, indicating recent results as well as directions for future work. Speakers include Ekaterina Amerik, Ehud Hrushovski, Alice Medvedev, Terence Tao, and Margaret Thomas. The introductory workshop aims to familiarize graduate students, postdocs, and non-experts to major and new topics of the current program. Though the audience is expected to have a general mathematical background, knowledge of technical terminology and recent findings is not assumed. Updated on Feb 10, 2014 11:01 AM PST Algebraic topology is a rich, vibrant field with close connections to many branches of mathematics. This workshop will describe the state of the field, focusing on major programs, open problems, exciting new tools, and cutting edge techniques. The introductory workshop serves as an overview to the overlying programmatic theme. It aims to familiarize graduate students, postdocs, and non-experts to major and new topics of the current program. Though the audience is expected to have a general mathematical background, knowledge of technical terminology and recent findings is not assumed. Updated on Jan 27, 2014 11:44 AM PST This two-day workshop will consist of short courses given by prominent female mathematicians in the field. These introductory courses will be appropriate for graduate students, post-docs, and researchers in related areas. The workshop will also include a panel discussion featuring successful women at various stages in their mathematical careers. Updated on Jan 21, 2014 01:08 PM PST The purpose of this workshop is to gather researchers working in various areas of geometry in infinite dimensions in order to facilitate collaborations and sharing of ideas. Topics represented include optimal transport and geometries on densities, metrics on shape spaces, Euler-Arnold equations on diffeomorphism groups, the universal Teichmuller space, geometry of random Riemann surfaces, metrics on spaces of metrics, and related areas. The workshop will be held on the campus of University of California Berkeley (60 Evans Hall) the weekend of December 7-8, 2013. It is funded by an NSF grant. Updated on Dec 05, 2013 02:55 PM PST This workshop discusses recent developments both in the study of the properties of initial data for Einstein's equations, and in the study of solutions of the Einstein evolution problem. Cosmic censorship, the formation and stability of black holes, the role of mass and quasi-local mass, and the construction of solutions of the Einstein constraint equations are focus problems for the workshop. We highlight recent developments, and examine major areas in which future progress is likely. Updated on Nov 26, 2013 09:16 AM PST The workshop will be devoted to emerging approaches to fluid mechanical, geophysical and kinetic theoretical flows based on optimal transportation. It will also explore numerical approaches to optimal transportation problems. Updated on Nov 05, 2013 12:34 PM PST Mathematical relativity is a very widely ranging area of mathematical study, spanning differential geometry, elliptic and hyperbolic PDE, and dynamical systems. We introduce in this workshop some of the leading areas of current interest associated with problems in cosmology, the theory of black holes, and the geometry and physics of the Cauchy problem (initial data constraints and evolution) for the Einstein equations. The introductory workshop serves as an overview to the overlying programmatic theme. It aims to familiarize graduate students, postdocs, and non-experts to major and new topics of the current program. Though the audience is expected to have a general mathematical background, knowledge of technical terminology and recent findings is not assumed. Updated on Oct 23, 2013 10:04 AM PDT Ever since the epic work of Yvonne Choquet-Bruhat on the well-posedness of Einstein's equations initiated the mathematical study of general relativity, women have played an important role in many areas of mathematical relativity. In this workshop, some of the leading women researchers in mathematical relativity present their work. Updated on Oct 23, 2013 10:03 AM PDT The workshop is intended to give an overview of the research landscape surrounding optimal transportation, including its connections to geometry, design applications, and fully nonlinear partial differential equations. As such, it will feature some survey lectures or minicourses by distinguished visitors and/or a few of the organizers of the theme semester, amounting to a kind of summer school. These will be complemented by a sampling of research lectures and short presentations from a spectrum of invited guests and other participants, including some who attended the previous week's {\em Connections for Women} workshop. The introductory workshop aims to familiarize graduate students, postdocs, and non-experts to major and new topics of the current program. Though the audience is expected to have a general mathematical background, knowledge of technical terminology and recent findings is not assumed. Updated on Oct 23, 2013 10:02 AM PDT This two-day event aims to connect women graduate students and beginning researchers with more established female researchers who use optimal transportation in their work and can serve as professional contacts and potential role-models. As such, it will showcase a selection of lectures featuring female scientists, both established leaders and emerging researchers. These lectures will be interspersed with networking and social events such as lunch or tea-time discussions led by successful researchers about (a) the particular opportunities and challenges facing women in science---including practical topics such as work-life balance and choosing a mentor, and (b) promising new directions in optimal transportation and related topics. Junior participants will be paired with more senior researchers in mentoring groups, and all participants will be encouraged to stay for the Introductory Workshop the following week, where they will have the opportunity to propose a short research communication. Updated on Oct 02, 2013 08:49 AM PDT The workshop will examine the interplay between measures of singularities coming both from characteristic p methods of commutative algebra, and invariants of singularities coming from birational algebraic geometry. There is a long history of this interaction which arises via the "reduction to characteristic p" procedure. It is only in the last few years, however, that very concrete objects from both areas, namely generalized test ideals from commutative algebra and multiplier ideals from birational geometry, have been shown to be intimately connected. This workshop will explore this connection, as well as other topics used to study singularities such as jets schemes and valuations. Updated on Jun 05, 2013 09:44 AM PDT 14. Organizers: Victor Ginzburg (University of Chicago), Iain Gordon (University of Edinburgh, UK), Markus Reineke (Bergische Universität Wuppertal, Germany), Catharina Stroppel* (University of Bonn, Germany), and James Zhang (University of Washington) In recent years there have been increasing interactions between noncommutative algebra/representation theory on the one hand and algebraic geometry on the other. This workshop would aim to examine these interactions and, as importantly, to encourage the interactions between the three areas. The precise topics will become more precise nearer the time, but will certainly include: Noncommutative algebraic geometry; Noncommutative resolutions of singularities and Calabi-Yau algebras; Symplectic reflection and related algebras; D-module theory; Deformation-quantization Updated on May 14, 2013 12:12 PM PDT 15. Organizers: Luchezar Avramov (University of Nebraska), David Eisenbud (University of California, Berkeley), and Irena Peeva* (Cornell University) The workshop will focus on recent breakthroughs in understanding and applications of free resolutions and on interactions of commutative algebra and representation theory, where algebraic geometry often appears as a third player. A specific goal is to stimulate further interaction between these fields. Updated on Apr 17, 2014 05:18 PM PDT 16. Organizers: Michael Artin (Massachusetts Institute of Technology - MIT), Michel Van den Bergh* (Vrije Universiteit Brussel), and Toby Stafford (University of Manchester) This workshop will provide several short lecture series consisting two or three lectures each to introduce postdocs, graduate students and non-experts to some of the major themes of the conference. While the precise topics may change to reflect developments in the area, it is likely that we will run mini-series in the following subjects: Noncommutative algebraic geometry; D-Module Theory; Derived Categories; Noncommutative Resolutions of Singularities; Deformation-Quantization; Symplectic Reflection Algebras; Growth Functions of Infinite Dimensional Algebras. Updated on Jan 08, 2014 04:52 PM PST 17. Organizers: Georgia Benkart (University of Wisconsin), Ellen Kirkman* (Wake Forest University), and Susan Sierra (Princeton University & University of Edinburgh) The Connections for Women workshop associated to the MSRI program in noncommutative algebraic geometry and representation theory is intended to bring together women who are working in these areas in all stages of their careers. As the first event in the semester, this workshop will feature a "tapas menu" of current research and open questions: light but intriguing tastes, designed to encourage further exploration and interest. Talks will be aimed at a fairly general audience and will cover diverse topics within the theme of the program. In addition, there will be a poster session for graduate students and recent PhD recipients and a panel discussion on career issues, as well as free time for informal discussion. Updated on Apr 09, 2014 03:05 AM PDT 18. Organizers: Winfried Bruns (Universität Osnabrück), Alicia Dickenstein (University of Buenos Aires, Argentina), Takayuki Hibi (Osaka University), Allen Knutson* (Cornell University), and Bernd Sturmfels (University of California, Berkeley) This workshop on Combinatorial Commutative Algebra aims to bring together researchers studying toric algebra and degenerations, simplicial objects such as monomial ideals and Stanley-Reisner rings, and their connections to tropical geometry, algebraic statistics, Hilbert schemes, D-modules, and hypergeometric functions. Updated on Apr 12, 2014 11:23 AM PDT 19. Organizers: Claire Amiot (Université de Strasbourg), Sergey Fomin (University of Michigan), Bernard Leclerc (Université de Caen), and Andrei Zelevinsky* (Northeastern University) Cluster algebras provide a unifying algebraic/combinatorial framework for a wide variety of phenomena in settings as diverse as quiver representations, Teichmuller theory, Poisson geometry, Lie theory, discrete integrable systems, and polyhedral combinatorics. The workshop aims at presenting a broad view of the state-of-the-art understanding of the role of cluster algebras in all these areas, and their interactions with each other. Updated on Feb 15, 2014 09:13 AM PST 20. Organizers: David Eisenbud* (University of California, Berkeley), Bernhard Keller (Universit´e Paris VII, France), Karen Smith (University of Michigan), and Alexander Vainshtein* (University of Haifa, Israel) This workshop will take place at the opening of the MSRI special programs on Commutative Algebra and on Cluster Algebras. It will feature lecture series at different levels, to appeal to a wide variety of participants. There will be minicourses on the basics of cluster algebras, and others developing particular aspects of cluster algebras and commutative algebra. Updated on Apr 12, 2014 11:23 AM PDT 21. Organizers: Claudia Polini (University of Notre Dame), Idun Reiten (Norwegian University of Science and Technology), Karen Smith (University of Michigan), and Lauren Williams* (University of California, Berkeley) This workshop will present basic notions from Commutative Algebra and Cluster Algebras, with a particular focus on providing background material. Additionally, the workshop aims to encourage and facilitate the exchange of ideas between researchers in Commutative Algebra and researchers in Cluster Algebras. Updated on Sep 13, 2013 10:37 AM PDT 22. Organizers: Noam Berger (The Hebrew University of Jerusalem), Nina Gantert (Technical University, Munich), Andrea Montanari (Stanford University), Alain-Sol Sznitman (Swiss Federal Institute of Technology, ETH Zurich), and Ofer Zeitouni* (University of Minnesota/Weizmann Institute) The field of random media has been the object of intensive mathematical research over the last thirty years. It covers a variety of models, mainly from condensed matter physics, physical chemistry, and geology, where one is interested in materials which have defects or inhomogeneities. These features are taken into account by letting the medium be random. It has been found that this randomness can cause very unexpected effects in the large scale behavior of these models; on occasion these run contrary to the prevailing intuition. A feature of this area, which it has in common with other areas of statistical physics, is that what was initially thought to be just a simple toy model has turned out to be a major mathematical challenge. Updated on Jun 07, 2013 03:09 PM PDT 23. Organizers: Philippe Di Francesco* (Commissariat à l' Énergie Atomique, CEA), Andrei Okounkov (Columbia University), Steffen Rohde (University of Washington ), and Scott Sheffield (Massachusetts Institute of Technology, MIT) Our understanding of the scaling limits of discrete statistical systems has shifted in recent years from the physicists' field-theoretical approaches to the more rigorous realm of probability theory and complex analysis. The aim of this workshop is to combine both discrete and continuous approaches, as well as the statistical physics/combinatorial and the probabilistic points of view. Topics include quantum gravity, planar maps, discrete conformal analysis, SLE, and other statistical models such as loop gases. Updated on Jun 07, 2013 03:08 PM PDT 24. Organizers: Geoffrey R. Grimmett (University of Cambridge), Eyal Lubetzky* (Microsoft Research), Jeffrey Steif (Chalmers University of Technology), and Maria E. Vares (Centro Brasileiro de Pesquisas Físicas) Over the last ten years there has been spectacular progress in the understanding of geometrical properties of random processes. Of particular importance in the study of these complex random systems is the aspect of their phase transition (in the wide sense of an abrupt change in macroscopic behavior caused by a small variation in some parameter) and critical phenomena, whose applications range from physics, to the performance of algorithms on networks, to the survival of a biological species. Recent advances in the scope of rigorous scaling limits for discrete random systems, most notably for 2D systems such as percolation and the Ising model via SLE, have greatly contributed to the understanding of both the critical geometry of these systems and the behavior of dynamical stochastic processes modeling their evolution. While some of the techniques used in the analysis of these systems are model-specific, there is a remarkable interplay between them. The deep connection between percolation and interacting particle systems such as the Ising and Potts models has allowed one model to successfully draw tools and rigorous theory from the other. The aim of this workshop is to share and attempt to push forward the state-of-the-art understanding of the geometry and dynamic evolution of these models, with a main focus on percolation, the random cluster model, Ising and other interacting particle systems on lattices. Updated on Jun 07, 2013 03:08 PM PDT 25. Organizers: Cédric Boutillier (Université Pierre et Marie Curie), Tony Guttmann* (University of Melbourne), Christian Krattenthaler (University of Vienna), Nicolai Reshetikhin (University of California, Berkeley), and David Wilson (Microsoft Research) Research at the interface of lattice statistical mechanics and combinatorial problems of ``large sets" has been and exciting and fruitful field in the last decade or so. In this workshop we plan to develop a broad spectrum of methods and applications, spanning the spectrum from theoretical developments to the numerical end. This will cover the behaviour of lattice models at a macroscopic level (scaling limits at criticality and their connection with SLE) and also at a microscopic level (combinatorial and algebraic structures), as well as efficient enumeration techniques and Monte Carlo algorithms to generate these objects. Updated on Jun 07, 2013 02:00 PM PDT 26. Organizers: Beatrice de Tiliere (University Pierre et Marie Curie), Dana Randall* (Georgia Institute of Technology), and Chris Soteros (University of Saskatchewan) This 2-day workshop will bring together researchers from discrete mathematics, probability theory, theoretical computer science and statistical physics to explore topics at their interface. The focus will be on combinatorial structures, probabilistic algorithms and models that arise in the study of physical systems. This will include the study of phase transitions, probabilistic combinatorics, Markov chain Monte Carlo methods, and random structures and randomized algorithms. Since discrete lattice models stand at the interface of these fields, the workshop will start with background talks in each of the following three areas: Statistical and mathematical physics; Combinatorics of lattice models; Sampling and computational issues. These talks will describe the general framework and recent developments in the field and will be followed with shorter talks highlighting recent research in the area. The workshop will celebrate academic and gender diversity, bringing together women and men at junior and senior levels of their careers from mathematics, physics and computer science. Updated on Jun 07, 2013 02:00 PM PDT 27. Organizers: Irit Dinur (Weizmann Institute), Subhash Khot (Courant Institute), Manor Mendel* (Open University of Israel and Microsoft Research), Assaf Naor (Courant Institute), and Alistair Sinclair (University of California, Berkeley) Geometric problems which are inherently quantitative occur in various aspects of theoretical computer science, including a) Algorithmic tasks for geometric questions such as clustering and proximity data structures. b) Geometric methods in the design of approximation algorithms for combinatorial optimization problems, including the analysis of semidefinite programs and embedding methods. c) Geometric questions arising from computational complexity, particularly in hardness of approximation. These include isoperimetric and Fourier analytic problems. These include isoperimetric and Fourier analytic problems. This workshops aims to present recent progress in these directions. Updated on Jun 07, 2013 02:00 PM PDT 28. Organizers: William Johnson* (Texas A&M University), Bruce Kleiner (Yale University and Courant Institute), Gideon Schechtman (Weizmann Institute), Nicole Tomczak-Jaegermann (University of Alberta), and Alain Valette (Université de Neuchâtel) This workshop is devoted to various kinds of embeddings of metric spaces into Banach spaces, including biLipschitz embeddings, uniform embeddings, and coarse embeddings, as well as linear embeddings of finite dimensional spaces into low dimensional $\ell_p^n$ spaces. There will be an emphasis on the relevance to geometric group theory, and an exploration into the use of metric differentiation theory to effect embeddings. Updated on Jun 07, 2013 03:06 PM PDT 29. Organizers: Anna Erschler* (Université Paris-Sud), Assaf Naor (Courant Institute), and Yuval Peres (Microsoft Research) "Probabilistic Reasoning in Quantitative Geometry" refers to the use of probabilistic techniques to prove geometric theorems that do not have any a priori probabilistic content. A classical instance of this approach is the probabilistic method to prove existence of geometric objects (examples include Dvoretzky's theorem, the Johnson-Lindenstrauss lemma, and the use of expanders and random graphs for geometric constructions). Other examples are the use of probabilistic geometric invariants in the local theory of Banach spaces (sums of independent random variables in the context of type and cotype, and martingale-based invariants), the more recent use of such invariants in metric geometry (e.g., Markov type in the context of embedding and extension problems), probabilistic tools in group theory, the use of probabilistic methods to prove geometric inequalities (e.g., maximal inequalities, singular integrals, Grothendieck inequalities), the use of probabilistic reasoning to prove metric embedding results such as Bourgain's embedding theorem (where the embedding is deterministic, but its analysis benefits from a probabilistic interpretation), probabilistic interpretations of curvature and their applications, and the use of probabilistic arguments in the context of isoperimetric problems (e.g., Gaussian, rearrangement, and transportation cost methods). Updated on Jun 07, 2013 03:06 PM PDT 30. Organizers: Keith Ball (University College London), Eva Kopecka* (Mathematical Institute, Prague), Assaf Naor (Courant Institute), and Yuval Peres (Microsoft Research) Quantitative Geometry deals with geometric questions in which quantitative or asymptotic considerations occur. The workshop will provide a mathematical introduction, a foretaste, of the many themes this exciting topic comprises: geometric group theory, theory of Lipschitz functions, large scale and coarse geometry, embeddings of metric spaces, quantitative aspects of Banach space theory, geometric measure theory and of isoperimetry, and more. Updated on Apr 13, 2014 01:03 PM PDT 31. Organizers: Keith Ball* (University College London), Eva Kopecka (Mathematical Institute, Prague), Assaf Naor (Courant Institute), and Yuval Peres (Microsoft Research) This workshop will provide an introduction to the program on Quantitative Geometry. There will be several short lecture series, given by speakers chosen for the accessibility of their lectures, designed to introduce non-specialists or students to some of the major themes of the program. Updated on Dec 11, 2013 11:56 AM PST 32. Organizers: Brian Conrey (American Institute of Mathematics), Barry Mazur (Harvard University), and Michael Rubinstein* (University of Waterloo) Our workshop will highlight some work relevant to or carried out during our program at the MSRI, including statistical results about ranks for elliptic curves, zeros of L-functions, curves over finite fields, as well as algorithms for L-functions, point counting, and automorphic forms. Updated on Jun 07, 2013 03:06 PM PDT 33. Organizers: John King (University of Nottingham), Arshak Petrosyan* (Purdue University), Henrik Shahgholian (Royal Institute of Technology), and Georg Weiss (University of Dusseldorf) Many problems in physics, industry, finance, biology, and other areas can be described by partial differential equations that exhibit apriori unknown sets, such as interfaces, moving boundaries, shocks, etc. The study of such sets, also known as free boundaries, often occupies a central position in such problems. The main objective of the workshop is to bring together experts in various theoretical an applied aspects of free boundary problems. Updated on Jun 07, 2013 03:05 PM PDT 34. Organizers: Barry Mazur (Harvard University), Carl Pomerance (Dartmouth College), and Michael Rubinstein* (University of Waterloo) Our Introductory Workshop will focus largely on the background, recent work, and current problems regarding: Selmer groups and Mordell-Weil groups, and the distribution of their ranks (and "sizes") over families of elliptic curves, including recent work of Manjul Bhargava and Arul Shankar where they have shown that the average size of the 2-Selmer group of an elliptic curve over Q is 3, and thereby obtains information about the average rank of Mordell-Weil groups; related work on the asymptotics of number fields; certain natural families of L-functions, and the statistical distribution of their zeros and values; complementary algorithmic methods and experimental results regarding L-functions, automorphic forms, elliptic curves and number fields; the statistical behavior of eigenvalues of Frobenius elements in Galois representations. Updated on Jun 07, 2013 03:00 PM PDT 35. Organizers: Chantal David (Concordia University) and Nina Snaith* (University of Bristol) The format of this 2-day workshop will be colloquium-style presentations that will introduce some of the major topics touched on by the "Arithmetic Statistics" program. They will be pitched so as to be understandable to researchers with a variety of mathematical backgrounds. The talks are designed broadly as a lead-in to the program's initial workshop (taking place the following week) and will include topics such as the Sato-Tate conjecture, random matrix theory, and enumeration of number fields. The purpose will be to provide background but also to present the exciting areas where progress is happening fast, where major problems have been solved, or where there are significant open questions that need to be tackled. With this we aim to provide motivation for the Connections participants to involve themselves with the remainder of the program. Updated on Jun 07, 2013 01:27 PM PDT 36. Organizers: Tatiana Toro* (University of Washington) Many problems in physics, industry, finance, biology, and other areas can be described by partial differential equations that exhibit a priori unknown sets, such as interfaces, moving boundaries or shocks for example. The study of such sets, also known as free boundaries, often plays a central role in the understanding of such problems. The aim of this workshop is to introduce several free boundary problems arising in completely different areas. Updated on Jun 07, 2013 03:00 PM PDT 37. Organizers: Catherine Bandle (University of Basel), Claudia Lederman (University of Buenos Aires), Noemi Wolanski (University of Buenos Aires) Contributions of women working in areas related to free boundary problems will be presented. It will include survey lectures on current problems and on standard techniques used in this field, as well as more specific new results of individual researchers. One of the major goals besides the scientific aspect, is to encourage women mathematicians to interact and to build networks. It addresses also to graduate students who are very welcome. A discussion on women’s experiences in the mathematical community should help them to find their way in their mathematical career. Updated on Jun 07, 2013 01:59 PM PDT 38. Organizers: Liliana Borcea (Rice University), Carlos Kenig (University of Chicago), Maarten de Hoop (Purdue University), Peter Kuchment (Texas A&M University), Lassi Paivarinta (University of Helsinki), and Gunther Uhlmann* (University of Washington) Inverse Problems are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes, and modelling in the life sciences. The speakers in the workshop will cover a broad range of the most recent developments in the theory and applications of inverse problems. Updated on Jun 07, 2013 02:59 PM PDT 39. Organizers: Robbert Dijkgraaf (Amsterdam), Tohru Eguchi (Kyoto), Yakov Eliashberg* (Stanford), Kenji Fukaya (Kyoto), Yoshiaki Maeda* (Yokohama), Dusa McDuff (Stony Brook), Paul Seidel (Cambridge, MA), Alan Weinstein* (Berkeley). Sponsor: Hayashibara Foundation Symplectic geometry originated as a mathematical language for Hamiltonian mechanics, but during the last 3 decades it witnessed both, spectacuar development of the mathematical theory and discovery of new connections and applications to physics. Meanwhile, non-commutative geometry naturally entered into this picture. Updated on Jun 07, 2013 02:53 PM PDT 40. Organizers: Mark Gross ( University of California San Diego), Kentaro Hori (University of Toronto), Viatcheslav Kharlamov (Université de Strasbourg (Louis Pasteur), Richard Kenyon* (Brown One of the successes of tropical geometry is its applications to a number of different areas of recently developing mathematics. Among these are enumerative geometry, symplectic field theory, mirror symmetry, dimer models/random surfaces, amoebas and algas, instantons, cluster varieties, and tropical compactifications. While these fields appear quite diverse, we believe the common meeting ground of tropical geometry will provide a basis for fruitful interactions between participants. Updated on Jun 07, 2013 01:26 PM PDT 41. Organizers: Eva Maria Feichtner (U Bremen), Ilia Itenberg* (U Strasbourg), Grigory Mikhalkin (U Genève), Bernd Sturmfels (UC Berkeley) This workshop is to lay the foundations for the upcoming program. Mini-courses comprising lectures and exercise/discussion sessions will cover the foundational aspects of tropical geometry as well as its connections with adjacent areas: symplectic geometry, several complex variables, algebraic geometry (in particular enumerative and computational aspects) and geometric combinatorics. The mini-courses will be augmented by research talks on current tropical develpoments to open the scene and set up new goals in the beginning semester. Updated on Jun 07, 2013 01:56 PM PDT 42. Organizers: Alicia Dickenstein* (U Buenos Aires), Eva Maria Feichtner* (U Bremen) The aim of this workshop is to introduce advanced graduate students and postdocs to tropical geometry. Various aspects of this multi-faceted field will be highlighted in two short-courses comprising lectures and exercise/discussion sessions as well as in research talks. The workshop will thus provide the participants with an excellent introduction to the forthcoming events of the program. The scientific part will be complemented by a round table discussion on career issues of female mathematicians. Updated on Jun 07, 2013 02:51 PM PDT 43. Organizers: John Etnyre* (Georgia Institute of Technology), Dusa McDuff (Barnard College, Columbia University), and Lisa Traynor (Bryn Mawr). This workshop aims both to introduce people to a broad swath of the field and to frame its most important problems. Each day will be organized around a basic topic, such as how to count holomorphic curves with boundary on a Lagrangian submanifold (which leads to various versions of Floer theory) or how to understand the general structure of symplectic and contact manifolds. There will also be an introduction to the analytic and algebraic aspects of symplectic field theory, and a discussion of some applications. Updated on Jun 07, 2013 02:51 PM PDT 44. Organizers: Eleny Ionel (Stanford University), Dusa McDuff* (Barnard College, Columbia University). This will form a bridge between the graduate student workshop which will just be ending and the Introductory workshop. After some elementary talks describing some of the main questions in the field, there will be an extended discussion session intended to explain basic concepts to those unfamiliar with the area. There will also be an opportunity for young researchers in the field to present their work, and an evening social event. Updated on Jun 07, 2013 11:05 AM PDT 45. Organizers: Greg Friedman, Eugénie Hunsicker, Anatoly Libgober, and Laurentiu Maxim This workshop will bring together researchers interested in the topology of stratified spaces. It will focus roughly on four topics: topology of complex varieties, signature theory on singular spaces, L^2 and intersection cohomology, and mixed Hodge theory and singularities. Aside from talks on current research, there will be a series of introductory lectures on these themes. These talks will be aimed at strengthening the connections among the various topology research groups and the connections between topology researchers and researchers at the program on Analysis of Singular Spaces, running concurrently. Updated on Jun 07, 2013 02:47 PM PDT 46. Organizers: Ben Green (University of Cambridge), Bryna Kra (Northwestern University), Emmanuel Lesigne (University of Tours), Anthony Quas (University of Victoria), and Mate Wierdl (University of Updated on Jun 07, 2013 11:02 AM PDT 47. Organizers: Ben Green (University of Cambridge), Bryna Kra (Northwestern University), Emmanuel Lesigne (University of Tours), Anthony Quas (University of Victoria), Mate Wierdl (University of Updated on Jun 07, 2013 02:35 PM PDT 48. Organizers: David Benson, Daniel Nakano(chair), Raphael Rouquier Updated on Jun 07, 2013 02:35 PM PDT 49. Organizers: Sergey Fomin, Bernard Leclerc, Vic Reiner (Chair), Monica Vazirani Updated on Jun 07, 2013 02:34 PM PDT 50. Organizers: Alexander Kleshchev, Arun Ram, Richard Stanley (chair), Bhama Srinivasan Updated on Jun 07, 2013 02:34 PM PDT 51. Organizers: Jonathan Alperin(chair), Robert Boltje, Markus Linckelmann Updated on Jun 07, 2013 02:34 PM PDT 52. Organizers: Noel Brady, Mike Davis, Mark Feighn Updated on Jun 07, 2013 11:03 AM PDT 53. Organizers: Mladen Bestvina, Jon McCammond, Michah Sageev, Karen Vogtmann Updated on Jun 07, 2013 01:42 PM PDT 54. Organizers: Ruth Charney, Indira Chatterji, and Karen Vogtmann Updated on Jun 07, 2013 01:45 PM PDT 55. Organizers: Jeff Brock, Richard Canary, Howard Masur, Alan Reid, and Maryam Mirzakhani Updated on Jun 07, 2013 02:32 PM PDT 56. Organizers: Roberto Camassa (UNC - Chapel Hill), Jinqiao Duan (Illinois Institute of Technology - Chicago), Peter E. Kloeden (U of Frankfurt, Germany), Jonathan Mattingly (Duke U), Richard McLaughlin (UNC - Chapel Hill) Complex physical, biological, geophysical and environmental systems display variability over a broad range of spatial and temporal scales. To make progress in understanding and modelling such systems, a combination of computational, analytical, and experimental techniques is required. There are issues that emerge prominently in each of these categories and in all these stochastic methods are playing a fundamental role. Updated on Jun 07, 2013 11:02 AM PDT 57. Organizers: Greg Pavliotis and Andrew Stuart Updated on Jun 07, 2013 11:02 AM PDT 58. Organizers: Jonathan Mattingly (Duke), Igor Mezic (UCSB-Chair), Andrew Stuart (Warwick) Updated on Jun 07, 2013 01:45 PM PDT 59. Organizers: Charles Elliott, Xiaobing Feng, Michael Holst, Hongkai Zhao Updated on Jun 07, 2013 02:30 PM PDT 60. Organizers: Bennett Chow, Gerhard Huisken, Chuu-Lian Terng, and Gang Tian Updated on Jun 07, 2013 02:05 PM PDT 61. Organizers: Chris Jones (U North Carolina), Edgar Knobloch (UC-Berkeley-Physics), Nancy Kopell (Boston U), Lai-Sang Young (chair, Courant) Updated on Jun 07, 2013 02:46 PM PDT 62. Organizers: Debra Lewis (UC Santa Cruz), Mary Pugh (U Toronto), and Mary Lou Zeeman (Bowdoin College) Updated on Jun 07, 2013 10:59 AM PDT 63. Organizers: Panagiota Daskalopoulos, Peter Li and Lei Ni Updated on Jun 07, 2013 02:05 PM PDT 64. Organizers: G. Carlsson, P. Diaconis, R. Jardine, and G. M. Ziegler Updated on Jun 07, 2013 02:44 PM PDT 65. Organizers: G. Carlsson, P. Diaconis, and S. Holmes Updated on Jun 07, 2013 02:44 PM PDT 66. Organizers: Bennett Chow, Peter Li and Gang Tian Updated on Jun 07, 2013 01:44 PM PDT 67. Organizers: Christine Guenther and Panagiota Daskalopoulos Updated on Jun 07, 2013 02:05 PM PDT 68. Organizers: G. Carlsson, P. Diaconis, G. M. Ziegler Updated on Jun 07, 2013 02:43 PM PDT Updated on Jun 07, 2013 10:58 AM PDT 70. Organizers: Mina Aganagic, A. Klemm (Wisconsin), Jun Li (Stanford), R. Pandharipande (Princeton), Yongbin Ruan (Wisconsin) Mirror duality has demonstrated the striking effectiveness of concepts of modern physics in enuerative geometry. It is of the same type as the simple radius inversion duality seen in string compactifications on S1. This type was discovered early because it shows up in every term in the string genus expansion and can be studied in 2d conformal field theory. Updated on Jun 07, 2013 02:43 PM PDT 71. Organizers: Michael Bennett, Chantal David, William Duke, Andrew Granville (co-chair),Yuri Tschinkel (co-chair) This workshop is jointly sponsored by MSRI and CRM and will be held at the Banff International Research Station in Banff, Canada. Updated on Jun 07, 2013 11:43 AM PDT 72. Organizers: Fedor Bogomolov, Antoine Chambert-Loir, Jean-Louis Colliot-Thélène (chair), A. Johan de Jong, Raman Parimala Updated on Jun 07, 2013 02:42 PM PDT 73. Organizers: Yongbin Ruan, H. Nakajima, G. Mason Updated on Jun 07, 2013 02:02 PM PDT 74. Organizers: Jean-Louis Colliot-Thélène, Roger Heath-Brown, János Kollár, Bjorn Poonen (chair), Alice Silverberg, Yuri Tschinkel NOTE: This workshop is to be held at the International House Berkeley on the UC Berkeley campus, at 2299 Piedmont Avenue. Updated on Jun 07, 2013 01:56 PM PDT 75. Organizers: R. Cohen (Stanford), J. Morava (Johns Hopkins), A. Adem (UBC/UW--Madison), Y. Ruan (UW-Madison); Local Organizers: M. Aguilar (UNAM-Mexico City), D. Juan-Pineda (UNAM-Morelia), J.Seade (UNAM-Cuernavaca) The purpose of this program is to introduce new topological concepts in physics to young research mathematicians from both South and North America. The lectures given during the first week will provide the necessary background; these will be supplemented, primarily during the second week, with lectures by leading researchers on recent progress. That week serves as the Opening Workshop for the MSRI program, Spring, 2006, in New Topological Structures in Physics. Updated on Jun 07, 2013 02:42 PM PDT 76. Organizers: Nicolas Burq, Hans Lindblad, Igor Rodnianski, Christopher Sogge, Sijue Wu NOTE: This workshop is to be held at the International House on the UC Berkeley campus, at 2299 Piedmont Avenue. On site registration for the workshop will be at the International House, starting at 8:30 AM Monday and ending at 3:30 PM Monday. Updated on Jun 07, 2013 02:41 PM PDT 77. Organizers: L. Craig Evans (U.C. Berkeley), Wilfrid Gangbo (Georgia Tech), Cristian Gutierrez (Temple University) NOTE: This workshop is to be held at the International House on the UC Berkeley campus, at 2299 Piedmont Avenue, except for the Tuesday session, which will be held at the Lawrence Berkeley National Laboratory. On site registration for the workshop will start at 8:30 AM Monday and end at 3:30 PM Monday. Updated on Jun 07, 2013 02:40 PM PDT Updated on Jun 07, 2013 02:04 PM PDT 79. Organizers: James Colliander (Toronto), Patrick Gerard (Orsay), Herbert Koch (Dortmund), Natasha Pavlovic (Princeton), Daniel Tataru (Berkeley) Updated on Jun 07, 2013 02:40 PM PDT 80. Organizers: David Aldous, Claire Kenyon, Jon Kleinberg, Michael Mitzenmacher, Christos Papadimitriou, Prabhakar Raghavan This workshop seeks to bring together (a) mathematicians studying the math properties of particular models, and (b) experts in various network fields who can survey the successes and challenges of modeling within their field. Updated on Jun 07, 2013 01:29 PM PDT 81. Organizers: Jitendra Malik, Jean-Michel Morel, Song Chun Zhu Updated on Jun 07, 2013 11:44 AM PDT 82. Organizers: Don Geman, Jitendra Malik, Pietro Perona, Cordelia Schmid Updated on Jun 07, 2013 01:24 PM PDT 83. Organizers: Kathryn Leonard , David Mumford This workshop is aimed at faculty who wish to learn about this exciting field and would like to enrich a variety of undergraduate courses with new examples and applications. The workshop is being held in collaboration with the Mathematical Association of America as part of the MAA's Professional Enhancement Program (PREP). See the PREP website for information about registration and participant support. Updated on Jun 07, 2013 11:44 AM PDT 84. Organizers: Dimitris Achlioptas, Elchanan Mossel, Yuval Peres The topics of this workshop include phase transitions in connection to random graphs, boolean functions, satisfiability problems, coding, reconstruction on trees and spinglasses. Special focus will be given to the study of the interplay between the replica method, local weak convergence and algorithmic aspects of Updated on Jun 07, 2013 02:39 PM PDT 85. Organizers: Andrew Blake and Yair Weiss Updated on Jun 07, 2013 11:43 AM PDT 86. Organizers: David Donoho and Bruno Olshausen Updated on Jun 07, 2013 11:43 AM PDT 87. Organizers: Fabio Martinelli, Alistair Sinclair, Eric Vigoda Recent years have seen the rapid development of techniques for the analysis of MCMC algorithms, with applications in all the above areas. These techniques draw from a wide range of mathematical disciplines, including combinatorics, discrete probability, functional analysis, geometry and statistical physics, and there has been significant cross-fertilization between them. This workshop aims to bring together practitioners from all these domains with the aim of furthering this interplay of ideas. Updated on Jun 07, 2013 01:24 PM PDT 88. Organizers: David Donoho, Olivier Faugeras, David B Mumford Updated on Jun 07, 2013 01:23 PM PDT 89. Organizers: Ruzena Bajcsy, Jana Kosecka, Kathryn Leonard Updated on Jun 07, 2013 11:44 AM PDT 90. Organizers: Alistair Sinclair MSRI Program on Probability, Algorithms and Statistical Physics, Spring 2005 --- OPENING DAY, Thursday 13 January, 2005 Updated on Jun 07, 2013 02:40 PM PDT 91. Organizers: Eva Maria Feichtner, Philip Hanlon, Peter Orlik, Alexander Varchenko This workshop will be part of MSRI's Special Semester in Hyperplane Arrangements and Applications. Updated on Jun 07, 2013 02:29 PM PDT 92. Organizers: Daniel C. Cohen, Michael Falk (chair), Peter Orlik, Inna Scherbak, Alexandru Suciu, Hiroaki Terao, Sergey Yuzvinsky This workshop will focus on the following topics: Characteristic varieties and resonance varieties, homotopy types of arrangements, moduli of arrangements, Gauss-Manin connections, KZ and qKZ equations, elliptic hypergeometric functions, and hypergeometric functions associated with curves of arbitrary genus. Updated on Jun 07, 2013 02:28 PM PDT 93. Organizers: Michael Falk, Peter Orlik (Chair), Alexander Suciu, Hiroaki Terao, and SergeyYuzvinsky Updated on Jun 07, 2013 01:29 PM PDT 94. Organizers: Lalo Gonzalez-Vega, Victoria Powers, and Frank Sottile Updated on Jun 07, 2013 02:27 PM PDT 95. Organizers: Denis Auroux, Dan Freed, Helmut Hofer, Francis Kirwan, and Gang Tian Updated on Jun 07, 2013 02:25 PM PDT 96. Organizers: Viatcheslav Kharlamov, Boris Shapiro, and Oleg Viro Updated on Jun 07, 2013 11:36 AM PDT 97. Organizers: Selman Akbulut, Grisha Mikhalkin, Victoria Powers, Boris Shapiro, Frank Sottile, and Oleg Viro Updated on Mar 31, 2014 12:47 PM PDT 98. Organizers: Ben Chow, Peter Li, Richard Schoen (chair), and Richard Wentworth Updated on Jun 07, 2013 02:04 PM PDT 99. Organizers: Jesús A. De Loera, Jacob E. Goodman, János Pach and Günter M. Ziegler Updated on Jun 07, 2013 01:55 PM PDT 100. Organizers: Pankaj Agarwal, Herbert Edelsbrunner, Micha Sharir, and Emo Welzl Updated on Jun 07, 2013 11:36 AM PDT 101. Organizers: Jesús A. De Loera, Herbert Edelsbrunner, Jacob E. Goodman, János Pach, Micha Sharir, Emo Welzl, and Günter M. Ziegler Updated on Jun 07, 2013 01:55 PM PDT 102. Organizers: Robert Bryant (Co-chair), Simon Donaldson, H. Blaine Lawson, Richard Schoen, and Gang Tian (Co-chair) Updated on Jun 07, 2013 02:25 PM PDT 103. Organizers: Robert Bryant LOCATION: The Banff Conference Centre, Banff, Canada Updated on Jun 07, 2013 02:25 PM PDT 104. Organizers: J. Sjostrand, S. Zelditch, and M. Zworski Updated on Jun 07, 2013 01:44 PM PDT 105. Organizers: R. Littlejohn, W.H. Miller, and M. Zworski Updated on Jun 07, 2013 12:16 PM PDT 106. Organizers: Mark Green, Juergen Herzog, and Bernd Sturmfels (chair) To be held at the Banff International Research Station in Banff, Alberta, Canada. Updated on Jun 07, 2013 12:17 PM PDT 107. Organizers: Serkan Hosten, Craig Huneke, Bernd Sturmfels (chair), and Irena Swanson Updated on Jun 07, 2013 02:25 PM PDT 108. Organizers: Luchezar Avramov (chair), Ragnar Buchweitz, and John Greenlees Updated on Jun 07, 2013 02:24 PM PDT 109. Organizers: Steering Committee: Dorit Aharonov, Charles Bennett, Harry Buhrman, Isaac Chuang, Mike Mosca, Umesh Vazirani, and John Watrous Updated on Jun 07, 2013 01:23 PM PDT 110. Organizers: Craig Huneke (chair), Paul Roberts, Karen Smith, and Bernd Ulrich. Updated on Jun 07, 2013 02:24 PM PDT 111. Organizers: Richard Jozsa and Mary Beth Ruskai Updated on Jun 07, 2013 01:22 PM PDT 112. Organizers: David Di Vincenzo (Watson-IBM), and Peter Shor (AT&T), Chair Presented jointly with IPAM, and held in Los Angeles. See IPAM website for details. Updated on Jun 07, 2013 11:35 AM PDT 113. Organizers: Richard Cleve, Peter Shor, and Umesh Vazirani To be held at the Banff Conference Centre in Banff (Alberta), Canada Updated on Jun 07, 2013 11:34 AM PDT 114. Organizers: Luchezar Avramov, Mark Green, Craig Huneke, Karen E. Smith and Bernd Sturmfels Updated on Jun 07, 2013 02:21 PM PDT 115. Organizers: Dorit Aharonov, Leonard Schulman, and Umesh Vazirani Updated on Jun 07, 2013 11:33 AM PDT 116. Organizers: G. Felder, D. Freed, E. Frenkel, V. Kac, T. Miwa, I. Penkov, V. Serganova, I. Singer and G. Zuckerman The first week will focus on Infinite-dimensional Algebras, Conformal Field Theory and Integrable Systems, and the second week would be devoted to Supersymmetry in Mathematics and Physics. Updated on Jun 07, 2013 02:38 PM PDT 117. Organizers: S. Bradlow, O. Garcia-Prada, M. Kapranov, L. Katzarkov, M. Kontsevich, D. Orlov, T. Pantev, C. Simpson, and B. Toen Updated on Jun 07, 2013 02:21 PM PDT 118. Organizers: E. Frenkel, V. Ginzburg, G. Laumon and K. Vilonen Discussion of the important developments in the geometric Langlands correspondence in the last few years Updated on Jun 07, 2013 01:43 PM PDT 119. Organizers: K. Behrend, W. Fulton, L. Katzarkov, M. Kontsevich, Y. Manin, R. Pandharipande, T. Pantev, B. Toen, and A. Vistoli Updated on Jun 07, 2013 02:17 PM PDT 120. Organizers: William Fulton, Ludmil Katzarkov, and Tony Pantev The field of algebraic stacks has gathered a huge momentum and is bound to become one of the main tools of the working mathematician. Updated on Jun 07, 2013 11:22 AM PDT 121. Organizers: Joyce McLaughlin, Adrian Nachman, William Symes, Gunther Uhlmann (chair) and Michael Vogelius The purpose of the workshop will be to bring together people working on different aspects of inverse problems, to appraise the current status of development of the field, and to encourage interaction between mathematicians and scientists and engineers working directly with the applications. Updated on Jun 07, 2013 02:17 PM PDT 122. Organizers: Gunther Uhlmann (chair), David Haynor (Department of Radiology, University of Washington), Gary Margrave (Department of Geophysics, University of Calgary) and Ricardo Weder (Universidad Nacional Autonoma de Mexico) Updated on Jun 07, 2013 02:16 PM PDT 123. Organizers: Leticia Barchini, Oklahoma State University, Roger Zierau, Oklahoma State University. This workshop will concentrate on several topics in representation theory and geometric analysis of homogeneous spaces for which techniques in integral geometry play a key role. Updated on Jun 07, 2013 11:19 AM PDT 124. Organizers: Liliana Borcea, David Colton, Michael Eastwood, Simon Gindikin, Alexander Goncharov and Gunther Uhlmann Updated on Jun 07, 2013 02:16 PM PDT 125. Organizers: Joel Hass and David Hoffman see program webpage at http://zeta.msri.org/calendar/programs/ProgramInfo/52/show_program Updated on Jun 07, 2013 02:15 PM PDT 126. Organizers: Dan Rockmore and Dennis Healy see program webpage at http://zeta.msri.org/calendar/programs/ProgramInfo/51/show_program Updated on Jun 07, 2013 01:40 PM PDT 127. Organizers: Tanya Christiansen, Charles Epstein, Rafe Mazzeo, Richard Melrose This workshop will focus on problems of a scattering theoretic nature for geometric operators on manifolds with asymptotically regular geometries, and also on spectral theory and related questions of invertibility of such operators on singular spaces. The emphasis will be on the consideration of new problems and the dissemination of new techniques. Updated on Jun 07, 2013 02:02 PM PDT 128. Organizers: Man-Duen Choi, Edward G. Effros, George A. Elliott (co-chairman), Vaughan F. R. Jones, Henri Moscovici, Ian F. Putnam (co-chairman), Marc A. Rieffel and Dan-Virgil Voiculescu This meeting will be joint for the first two days with the MSRI workshop on Quantization and Non-commutative Geometry, and during the three-day period April 29 - May 1 will function as a closing conference for the 2000-01 MSRI program on Operator Algebras. Updated on Jun 07, 2013 01:54 PM PDT 129. Organizers: A. Connes, J. Cuntz, N. Higson, G.G. Kasparov, N.P. Landsman, H. Moscovici (chair, Non-commutative Geometry), M.A. Rieffel (chair, Quantization), G. Skandalis, A. Weinstein, M. Wodzicki, S.L. Woronowicz These two topics have been scheduled in a joint workshop because the confluence of their research is likely to influence future advances in both fields. Updated on Jun 07, 2013 02:03 PM PDT 130. Organizers: Jean-Michel Bismut, Tom Branson, S.-Y. Alice Chang and Kate Okikiolu This workshop will study the spectral theory of geometric operators, including: spectral invariants, applications in conformal geometry, classification of 4-manifolds, index theory and scattering Updated on Jun 07, 2013 02:02 PM PDT 131. Organizers: P. Biane, D. Shlyakhtenko, R. Speicher, D. Voiculescu, E. Effros, E. Kirchberg, V. Paulsen, G. Pisier, Z-J. Ruan and A. Sinclair The Free Probability section of the workshop will cover several aspects of the subject: applications to von Neumann algebras and C*-algebras of free product type, connections with random matrix theory, free stochastic processes and free stochastic integration, combinatorial approach via noncrossing partitions, free entropy. The Non-commutative Banach Space section will cover the central concepts of the recently developed theory of operator spaces such as: exactness, local reflexivity and injectivity with applications to C* tensor products, operator algebras and operator modules. The non-commutative Lp-spaces, which play an important role in this theory, provide many points of contact with free Updated on Jun 07, 2013 11:01 AM PDT 132. Organizers: Noam Elkies, William McCallum, Jean-François Mestre, Bjorn Poonen (chair) and René Schoof This workshop will focus on the development of explicit and computational methods in arithmetic geometry, as well as the complexity analysis of existing algorithms. Updated on Jun 07, 2013 01:47 PM PDT 133. Organizers: D. Bisch, V.F.R. Jones, Y. Kawahigashi, S. Popa, R. Borcherds, S. Doplicher, R. Lawrence, P. Goddard and A. Wassermann These two areas have had a strong interaction in the last two decades, leading to exciting and closely related mathematics. Updated on Jun 07, 2013 01:47 PM PDT 134. Organizers: Eric Bach, Dan Boneh, Cynthia Dwork (chair), Shafi Goldwasser, Kevin McCurley and Carl Pomerance This workshop will focus on number-theoretic aspects of cryptography, and will be cross-cultural, where the the cultures in question are "mathematics" and "computer science." Updated on Jun 07, 2013 10:49 AM PDT 135. Organizers: W. Arveson,B. Blackadar,E. Effros,G. Elliott (chair), D. Handelman, E.Kirchberg, I. Putnam,M. Rordam,E. Stormer,M. Takesaki As part of the full-year 2000-2001 program on Operator Algebras, MSRI will host a one-week NATO ADVANCED RESEARCH WORKSHOP on Simple C*-algebras and Non-commutative Dynamical Systems, September 25-29, 2000. Updated on Jun 07, 2013 01:47 PM PDT 136. Organizers: D. Bisch (chair), E.G. Effros, V.F.R. Jones and D.V. Voiculescu This workshop introduces graduate students and other scientists to the exciting area of Operator Updated on Jun 07, 2013 01:43 PM PDT 137. Organizers: David Bailey, Joe Buhler (chair), Cynthia Dwork, Hendrik Lenstra Jr., Andrew Odlyzko, Bjorn Poonen, William Velez and Noriko Yui This workshop will have lecture series covering the basic areas of algorithmic number theory, aimed at graduate students and mathematicians without extensive experience in the field. Updated on Jun 07, 2013 12:00 PM PDT 138. Organizers: M. Artin (MIT), K. R. Goodearl (UC Santa Barbara) and M. Van den Bergh (Limburgs) Updated on Jun 07, 2013 02:14 PM PDT 139. Organizers: G. Benkart (Univ.of Wisconsin), A. Shalev (Hebrew Univ.), E. Zelmanov (Yale Univ.) Updated on Jun 07, 2013 01:54 PM PDT 140. Organizers: Miriam Cohen, Hans-Jurgen Schneider, Susan Montgomery (Chair), Fred Van Oystaeyen For more information about this event, please see the original web page at: Updated on Jun 07, 2013 01:53 PM PDT 141. Organizers: Pierre Debes, Hiroaki Nakamura, Akio Tamagawa Updated on Jun 07, 2013 12:07 PM PDT 142. Organizers: Moshe Jarden (Tel Aviv), Gunter Malle (Kassel), Helmut Voelklein (U. of Florida) Updated on Jun 07, 2013 12:06 PM PDT 143. Organizers: Michael D. Fried, David Harbater and Lance W. Small For more information about this conference, please visit the original web page at Updated on Jun 07, 2013 11:20 AM PDT 144. Organizers: Pavel Bleher, D.A. Hejhal, Andrew Odlyzko, and Peter Sarnak Please see the workshop web page at http://www.msri.org/activities/programs/9899/random/qc/ for more information. Updated on Jun 07, 2013 11:21 AM PDT 145. Organizers: B. Dubrovin, A. Its, M. Mehta (Chair), and N. Reshetikhin Updated on Jun 07, 2013 12:04 PM PDT 146. Organizers: E. Basor (Chair), P. Bleher, A. Its, and C. Tracy Updated on Jun 07, 2013 12:04 PM PDT 147. Organizers: Felipe Cucker and Jim Renegar Updated on Jun 07, 2013 12:06 PM PDT 148. Organizers: Eberhard Becker, Lakshman Yagati, Michael Singer, and Peter Stiller Updated on Jun 07, 2013 02:12 PM PDT 149. Organizers: David H Bailey, Daniel R Grayson, Alyson Reeves and Nobuki Takayama Updated on Jun 07, 2013 12:16 PM PDT 150. Organizers: Jean-Pierre Dedieu, Marie-Francoise Roy, Bernd Sturmfels, and Mike Shub Updated on Jun 07, 2013 01:54 PM PDT 151. Organizers: Arieh Iserles, Marie-Francoise Roy, Teresa Krick, Michael Singer, Andrew Stuart, and Bernd Sturmfels Updated on Jun 07, 2013 02:11 PM PDT Updated on Jun 07, 2013 02:11 PM PDT 153. Organizers: A. Pillay (Chair), C. Steinhorn, D. Haskell Updated on Jun 07, 2013 02:10 PM PDT 154. Organizers: P. Fitzsimmons, D. Nualart Updated on Jun 07, 2013 11:25 AM PDT 155. Organizers: C. Mueller, E. Pardoux, B. Rozovskii Updated on Jun 07, 2013 11:25 AM PDT 156. Organizers: M. Christ, D. Jerison, C. Kenig, J. Pipher, and E. Stein Updated on Jun 07, 2013 01:20 PM PDT 157. Organizers: C. Kenig, F. Ricci, E. Stein Updated on Jun 07, 2013 01:21 PM PDT 158. Organizers: Curtis Greene (Chair), Sergey Fomin, Phil Hanlon, and Sheila Sundaram Updated on Jun 07, 2013 01:22 PM PDT 159. Organizers: D. Elworthy, J. F. Le Gall, J. Rosen Updated on Jun 07, 2013 11:25 AM PDT 160. Organizers: Margaret Bayer, Louis Billera (Chair), Paul Edelman and Gunter M. Ziegler Updated on Jun 07, 2013 11:31 AM PDT 161. Organizers: Joan Birman (Chair), Xiao-Song Lin, Paul Melvin, and Andrei Zelevinsky Updated on Jun 07, 2013 01:40 PM PDT 162. Organizers: Robion Kirby (UC Berkeley), Peter Kronheimer (Harvard), Dusa McDuff (SUNY at Stony Brook), Ronald Stern (Chair, UC Irvine), and Gang Tian (MIT) Updated on Jun 07, 2013 01:46 PM PDT 163. Organizers: Anders Bjorner (Chair), Zoltan Furedi, and Jeffry Kahn Updated on Jun 07, 2013 11:31 AM PDT 164. Organizers: Lynne Butler, Ira Gessel, Rodica Simion (chair), and Michelle Wachs Updated on Jun 07, 2013 11:32 AM PDT 165. Organizers: Andrew Casson (Chair), Allen Hatcher, John Luecke, Walter Neumann, and Abigail Thompson Updated on Jun 07, 2013 10:48 AM PDT Updated on Jun 07, 2013 10:48 AM PDT 167. Organizers: E. Carlen and E. Lutwak. Updated on Jun 07, 2013 10:45 AM PDT 168. Organizers: Eric Bedford, Daniel Burns,Janos Kollar, Robert Lazarsfeld, Michael Schneider (Chair), Domingo Toledo, and Scott Wolpert Updated on Jun 07, 2013 10:44 AM PDT 169. Organizers: L. Lovasz, N. Tomczak-Jaegermann, and A. Pajor Updated on Jun 07, 2013 10:44 AM PDT Updated on Jun 07, 2013 10:43 AM PDT
{"url":"https://www.msri.org/web/msri/scientific/workshops/programmatic-workshops","timestamp":"2014-04-18T13:23:00Z","content_type":null,"content_length":"234108","record_id":"<urn:uuid:93cac9db-805e-4a30-9cfc-6185147f42a9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Rational Functions 2.3: Rational Functions Created by: CK-12 Learning objectives • Find the $x-$$y-$ • Find the horizontal, vertical, and oblique asymptotes • Graph a rational function using $x-$$y-$ Standard Form of Rational Functions and the Domain of Rational Functions Any function that has the form where $P(x)$$Q(x)$$Q(x)e0$rational function. The domain of any rational function includes all real numbers $x$ Example 1 What is the domain of $f(x)=\frac{1}{x}$ Notice that the only input that can make the denominator equal to zero is $x=0$$f(x)$$x=0$$f(x)=\frac{1}{x}$$x$$f(x)$$x$$f(x)$$x-$$y-$horizontal and vertical asymptotes, respectively. Graph Simple Rational Functions Example 2 Graph the function $f(x)=\frac{1}{x}$ Solution: We know that the domain of $f(x)$$x=0$$x=0$$x<0, f(x)<0,$$x>0, f(x)>0$$f(x)$ $& x && 1 && 2 && 10 && \frac{1}{5} && \frac{1}{10} && -1 && -\frac{1}{2} && -\frac{1}{10} && -2 && -10\\& f(x)=\frac{1}{x} && 1 && \frac{1}{2} && \frac{1}{10} && 5 && 10 && -1 && -2 && -10 && -\frac {1}{2} && -\frac{1}{10}$ Example 3 What is the domain of $f(x)=\frac{x+2}{(x-1)(x+3)}$ The domain is all real numbers except at the points that cause the denominator to equal zero, namely, at $x=1$$x=-3$$f(x)$ Vertical and Horizontal Asymptotes An asymptote is a line or curve to which a function's graph draws closer without touching it. Functions cannot cross a vertical asymptote, and they usually approach horizontal asymptotes in their end behavior (i.e. as $x\rightarrow\pm\infty$$f(x)=\frac{1}{x}$$x$$(x\rightarrow\pm\infty)$$(x\rightarrow 0)$ There are three types of asymptotes: horizontal, vertical and oblique. We will analyze each one below. Looking at the graph of $f(x)=\frac{x+2}{(x-1)(x+3)}$vertical asymptotes (the vertical dotted lines), one is at $x=1$$x=-3$ We can find vertical asymptotes by simply equating the denominator to zero and then solving for $x$ Then setting $Q(x)=0$ So if gives the vertical asymptotes at $x=1$$x=-3$ The horizontal asymptote a line parallel to the $x-$$x\rightarrow\infty$$x\rightarrow -\infty$ How to Find the Horizontal Asymptote • Put the rational function in a standard form. That is, expand the numerator and denominator if they are written in a factored form. • Remove all terms except the terms that contain the largest exponents of $x$ • There are three possibilities: □ If the degree of the numerator is smaller than the degree of the denominator, then the horizontal asymptote crosses the $y-$$y=0$$x-$ □ If the degree of the denominator and the numerator are the same, then the horizontal asymptote equals to the ratio of the leading coefficients. □ If the degree of the numerator is larger than the degree of the denominator, then there is no horizontal asymptote. Example 4 Find the vertical and horizontal asymptotes of To find the vertical asymptote(s), set the denominator to zero and then solve for $x$ $3x^{3}-81 & = 0\\3x^3 & = 81\\x^3 & = 27\\x & = \sqrt[3]{27}\\x & = 3$ Thus the graph has a vertical asymptote at $x=3$ To find the horizontal asymptote, we follow the procedure above. Both the numerator and denominator are already written in standard form. Next, remove all terms except the largest exponents of $x$ Notice that the degree of the numerator and the denominator are the same and therefore the horizontal asymptote is the ratio of the coefficients, So the horizontal asymptote is at $y=\frac{2}{3}$ Example 5 Find the asymptotes of Remove all terms except the leading terms, Notice that the degree of the numerator is less than the degree of the denominator. Therefore, the horizontal asymptote is at $y=0$$x-$ $2x^{4}-9 & = 0\\x^4 & = \frac{9}{4}\\x & = \pm \frac{\sqrt{6}}{2}$ Example 6 Another example, consider the rational function Removing all terms except the leading terms of the numerator and denominator, Here, the degree of the numerator is larger than the degree of the denominator. Thus there is no horizontal asymptote. Once you have found the asymptotes, it is relatively easy to graph rational functions. We will illustrate how to graph rational functions with two examples. Example 7 Note that the domain of $T$$xe 1$$x=1$$T(x)$$T$$y-$$x-$ The $y-$$y=T(0)$ Thus the $y-$ The $x-$$y=T(x)=0$ Note that a fraction $\frac{a}{b}=0$$a=0$$T9$ $2x+1 & = 0\\x & = \frac{-1}{2}$ Notice that we could have just set the numerator to zero and found the $x-$$P(x)=0$$x-$$x-$$P(x)=0$$Q(x)=0$$x$ Next, the vertical asymptote. Set $Q(x)=0$ $x-1 & = 0\\x & = 1$ And the horizontal asymptote: Therefore, the vertical asymptote is at $x=1$$y=2$ Example 8 The domain of $g$$\frac{3}{2}$$\left \{ x|xe 0 \ \text{and} \ xe\frac{3}{2} \right \}$$y-$ this tells us that there is no $y-$$x-$ $2x^{2}+1 & = 0\\2x^2 & = -1\\x^2 & = \frac{-1}{2}\\x & = \sqrt{\frac{-1}{2}}$ Notice that this equation has no real solution and therefore, there is no $x-$ The vertical asymptote can be found by setting the denominator to zero, $2x^{2}-3x & = 0\\x(2x-3) & = 0$ The two solutions are $x=0$$x=\frac{3}{2}$ Finally, the horizontal asymptote is found by analyzing the leading terms: That is, $y=1$$g(x)$ Oblique Asymptotes Thus far, we have restricted our discussion of rational functions to those where the degree of the numerator is less than or equal to the degree of the denominator. As our final analysis of graphing rational functions, we will consider the case when the degree of the numerator is greater than the degree of the denominator by one. Example 9 First observe that the vertical asymptote is at $x=2$ Doing the long division here, $& \qquad \qquad x + 2 \\& x-2 \ \big ) \overline{x^{2} + 0x -1 }\\& \qquad \quad \underline{x^{2} -2x \ \ \downarrow}\\& \qquad \qquad \quad 2x -1\\& \qquad \qquad \quad \underline{2x -4}\\& \qquad \qquad \qquad \quad \ 3$ So in this case, the function $g(x)$ The above equation tells us that as $x\to\pm\infty$$g(x)=x+2+\frac{3}{x-2}$$y=x+2$$x$$x=1,000,000$$\frac{3}{999,998}\approx 0$$x+2$oblique asymptote and it is indicated by the dashed line in Figure Example 10 The vertical asymptote here is $x=1$ Notice that the $x-$$x=2$$x=-1$ which indicates an oblique asymptote at $y=x$ Graph Rational Functions Using Transformation Just like polynomials, rational functions can be graphed using transformations. The main point to remember for graphing rational functions by transformations is that some transformations change the asymptotes while others do not. • $r(x)+c$$c$$c<0$ • $r(x-c)$$c$$c<0$ • $a\cdot r(x)$$a$$x-$$a<0$ • $r(a\cdot x)$$y-$$\frac{1}{a}$ • $r(-x)$$y-$ • $-r(x)$$x-$ Example 11 A rational function $r(x)$$r(x)$$r(x)-3$$-r(x)$$r(3-x)$ a) The horizontal asymptote moves down by three units b) The function is reflected about the $x-$ c) $r(3-x)=r(-(x-3))$$r(-x)$$r(-(x-3))$$x=1$ For each of the rational functions below, determine the domain, the asymptotes, the $x-$$y-$ 1. $f(x)=\frac{2x+5}{x-1}$ 2. $f(x)=\frac{x+2}{x^{2}+1}$ 3. $f(x)=\frac{9x^{2}-4}{3x-2}$ 4. $f(x)=\frac{7}{3x^{2}}$ 5. $f(x)=\frac{x^{3}}{x^{3}+1}$ 6. $f(x)=\frac{14}{x^{2}-16}$ 7. $f(x)=\frac{5(x-2)}{x^{2}-3x+2}$ 8. $f(x)=\frac{x^{2}+1}{x-1}$ 9. $f(x)=\frac{x^{3}-3x^{2}-4x}{x^{2}+3x}$ 10. In physics, Boyle's law states that the product of the pressure $P$$V$$PV=\text{constant}$$4000 \ Pa.m^{2}$$P=\frac{4000}{V}$$v>0$ 1. Domain: $xe 1$$x=1$$y=2$ 2. Domain: All real numbers; Vertical asymptote: none; Horizontal asymptote: $y=0$ 3. Domain: $xe\frac{2}{3}$ 4. Domain: $xe 0$$x=0$$y=0$ 5. Domain: $xe -1$$x=-1$$y=1$ 6. Domain: $xe\pm 4$$x=\pm 4$$y=0$ 7. Domain: $xe2, xe1$$x=1$$y=0$ 8. Domain: $xe 1$$y=x+1$$x=1$$y=0$ 9. Domain: $xe0, xe -3$$y=x-6$$x=-3$ Files can only be attached to the latest version of None
{"url":"http://www.ck12.org/book/CK-12-Math-Analysis/r19/section/2.3/","timestamp":"2014-04-21T02:01:31Z","content_type":null,"content_length":"173042","record_id":"<urn:uuid:25a3cc30-8b9b-45ef-8004-0aa0218617bd>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Hiram, GA Trigonometry Tutor Find a Hiram, GA Trigonometry Tutor ...I have post graduate in education as well. I am a certified teacher in PreK-5 and have taught language arts including phonics for 10 years. In addition I have taught reading and phonics in middle school for 8 years. 47 Subjects: including trigonometry, chemistry, English, physics ...I have a year's worth of peer-tutoring experience in Chemistry. Because of my peer-tutoring, I helped others improve their grades in Chemistry. Chemistry is one of my favorite science subjects that I'm great at helping others with. 17 Subjects: including trigonometry, chemistry, calculus, geometry ...Tutored peers on numerous Economic topics while a college student. Awarded Mason Gold Standard Award for contributing to the academic achievement of my peers. Graduated with a BA in Economics from BYU and and an MBA in Finance from the College of William and Mary. 28 Subjects: including trigonometry, calculus, physics, linear algebra ...I enjoy being able to help students, not only with mastering their challenging subjects, but also with instilling confidence in themselves. I continually assess students' ongoing needs and develop customized lesson plans to complement their individual learning styles. I have taught students of diverse ages and backgrounds, including underprivileged and learning-disabled students. 31 Subjects: including trigonometry, English, reading, chemistry ...Mostly, I tutored microeconomics, macroeconomics, econometrics and international economics. Although students seem to understand the concepts, sometimes they tend to forget that those concepts are easily applicable in real life. Students seem to be scared of graphs and some calculations. 36 Subjects: including trigonometry, calculus, geometry, algebra 1 Related Hiram, GA Tutors Hiram, GA Accounting Tutors Hiram, GA ACT Tutors Hiram, GA Algebra Tutors Hiram, GA Algebra 2 Tutors Hiram, GA Calculus Tutors Hiram, GA Geometry Tutors Hiram, GA Math Tutors Hiram, GA Prealgebra Tutors Hiram, GA Precalculus Tutors Hiram, GA SAT Tutors Hiram, GA SAT Math Tutors Hiram, GA Science Tutors Hiram, GA Statistics Tutors Hiram, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/hiram_ga_trigonometry_tutors.php","timestamp":"2014-04-16T07:46:58Z","content_type":null,"content_length":"24040","record_id":"<urn:uuid:a380a01c-01a2-4983-833e-3bc25c7b143d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
for the - MFCS’95, LNCS 969 , 1995 "... One of the most successful techniques for automatic verification is that of model checking. For finite automata there exist since long extremely efficient model-checking algorithms, and in the last few years these algorithms have been made applicable to the verification of real-time automata usi ..." Cited by 52 (7 self) Add to MetaCart One of the most successful techniques for automatic verification is that of model checking. For finite automata there exist since long extremely efficient model-checking algorithms, and in the last few years these algorithms have been made applicable to the verification of real-time automata using the region-techniques of Alur and Dill. In this , 1996 "... . There is a growing need for reliable methods of designing correct reactive systems such as computer operating systems and air traffic control systems. It is widely agreed that certain formalisms such as temporal logic, when coupled with automated reasoning support, provide the most effective a ..." Cited by 39 (2 self) Add to MetaCart . There is a growing need for reliable methods of designing correct reactive systems such as computer operating systems and air traffic control systems. It is widely agreed that certain formalisms such as temporal logic, when coupled with automated reasoning support, provide the most effective and reliable means of specifying and ensuring correct behavior of such systems. This paper discusses known complexity and expressiveness results for a number of such logics in common use and describes key technical tools for obtaining essentially optimal mechanical reasoning algorithms. However, the emphasis is on underlying intuitions and broad themes rather than technical intricacies. 1 Introduction There is a growing need for reliable methods of designing correct reactive systems. These systems are characterized by ongoing, typically nonterminating and highly nondeterministic behavior. Examples include operating systems, network protocols, and air traffic control systems. There is - ETHICS , 1988 "... This chapter introduces modal logic as a tool for talking about graphs, or to use more traditional terminology, as a tool for talking about Kripke models and frames. We want the reader to gain an intuitive appreciation of this perspective, and a firm grasp of the key technical ideas (such as bisimul ..." Cited by 13 (1 self) Add to MetaCart This chapter introduces modal logic as a tool for talking about graphs, or to use more traditional terminology, as a tool for talking about Kripke models and frames. We want the reader to gain an intuitive appreciation of this perspective, and a firm grasp of the key technical ideas (such as bisimulations) which underly it. We introduce the syntax and semantics of basic modal logic, discuss its expressivity at the level of models, examine its computational properties, and then consider what it can say at the level of frames. We then move beyond the basic modal language, examine the kinds of expressivity offered by a number of richer modal logics, and try to pin down what it is that makes them all ‘modal’. We conclude by discussing an example which brings many of the ideas we discuss into play: games. , 1995 "... The propositional µ-calculus as introduced by Kozen in [12] is considered. In that paper ..." , 1995 "... This thesis presents formal techniques to be used when describing and developing reliable real time systems. Three different probabilistic real time logics, used to specify about reliable real time processes in terms of timed probabilistic graphs, are presented. Algorithms for verifying implementati ..." Cited by 5 (0 self) Add to MetaCart This thesis presents formal techniques to be used when describing and developing reliable real time systems. Three different probabilistic real time logics, used to specify about reliable real time processes in terms of timed probabilistic graphs, are presented. Algorithms for verifying implementations with respect to specifications in the developed logics are presented. An algorithm to construct timed probabilistic graphs from specifications in one of the logics is presented. An algorithm "... The propositional mu-calculus is a propositional logic of programs which incorporates a least fixpoint operator and subsumes the propositional dynamic logic of Fischer and Ladner, the infinite looping construct of Streett, and the game logic of Parikh. We give an elementary time decision procedure, ..." Add to MetaCart The propositional mu-calculus is a propositional logic of programs which incorporates a least fixpoint operator and subsumes the propositional dynamic logic of Fischer and Ladner, the infinite looping construct of Streett, and the game logic of Parikh. We give an elementary time decision procedure, using a reduction to the emptiness problem for automata on infinite trees. A small model theorem is obtained as a corollary. 0 1989 Academic Press, Inc. 1. , 2013 "... Dealing with uncertainty in the context of planning has been an active research subject in AI. Addressing the case when uncertainty evolves over time can be difficult. In this work, we provide a solution to this problem by proposing a temporal logic to reason about quantities and probability. For th ..." Add to MetaCart Dealing with uncertainty in the context of planning has been an active research subject in AI. Addressing the case when uncertainty evolves over time can be difficult. In this work, we provide a solution to this problem by proposing a temporal logic to reason about quantities and probability. For this logic, we provide a PSPACE SAT algorithm together with a complete calculus. The algorithm enables us to perform planning under uncertainty via SAT, extending a technique used for classic planning. We can show that any obtained plan will have certain properties (desired or undesired). The calculus can also be used to derive the impossibility of a plan, given a set of specifications. 1 "... Abstract. We define exogenous logics for reasoning about probabilistic systems: a probabilistic state logic EPPL, and its fixpoint extension MEPL, which is enriched with operators from the modal µ-calculus. System states correspond to probability distributions over classical states and the system ev ..." Add to MetaCart Abstract. We define exogenous logics for reasoning about probabilistic systems: a probabilistic state logic EPPL, and its fixpoint extension MEPL, which is enriched with operators from the modal µ-calculus. System states correspond to probability distributions over classical states and the system evolution is modeled by parametrized Kripke structures that capture both stochastic and non–deterministic transitions. We introduce two approaches to the verification of properties expressed in these logics, one syntactic (a weakly complete Hilbert calculus) and the other semantic (a model–checking algorithm). The completeness proof of MEPL builds on the decidability of the existential theory of the real numbers and on a polynomial-space sat algorithm for EPPL. The model checking problem for MEPL is also analysed and the logic is related to previous work. The semantics of EPPL and MEPL are defined in terms of probability distributions over sets of propositional symbols, whereas the usual approaches are designed for reasoning about distributions over paths of possible behaviour. The intended application of our logics is as a specification formalism for properties of probabilistic systems. We illustrate the use of the logics for specifying system properties with some simple examples. 1. , 2010 "... Abstract. We consider exogenous logics for reasoning about probabilistic systems: a variant of probabilistic state logic EPPL[24], and its fixpoint extension MEPL, which is enriched with operators from the modal µ-calculus. System states correspond to probability distributions over classical states ..." Add to MetaCart Abstract. We consider exogenous logics for reasoning about probabilistic systems: a variant of probabilistic state logic EPPL[24], and its fixpoint extension MEPL, which is enriched with operators from the modal µ-calculus. System states correspond to probability distributions over classical states and the system evolution is modeled by parametrized Kripke structures that capture both stochastic and non–deterministic transitions. We introduce two approaches to the verification of properties expressed in these logics, one syntactic (a weakly complete Hilbert calculus) and the other semantic (a model– checking algorithm). The completeness proof of MEPL builds on the decidability of the existential theory of the real numbers and on a polynomial-space sat algorithm for EPPL. The model checking problem for MEPL is also analysed and the logic is related to previous work. The semantics of EPPL and MEPL are defined in terms of probability distributions over sets of propositional symbols, whereas the usual approaches are designed for reasoning about distributions over paths of possible behaviour. The intended application of our logics is as a specification formalism for properties of probabilistic systems. We illustrate the use of the logics for specifying system properties with some simple examples. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=431032","timestamp":"2014-04-20T09:59:58Z","content_type":null,"content_length":"32948","record_id":"<urn:uuid:dda2f418-7aea-489f-bd0b-25d84cf50a41>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Hiram, GA Trigonometry Tutor Find a Hiram, GA Trigonometry Tutor ...I have post graduate in education as well. I am a certified teacher in PreK-5 and have taught language arts including phonics for 10 years. In addition I have taught reading and phonics in middle school for 8 years. 47 Subjects: including trigonometry, chemistry, English, physics ...I have a year's worth of peer-tutoring experience in Chemistry. Because of my peer-tutoring, I helped others improve their grades in Chemistry. Chemistry is one of my favorite science subjects that I'm great at helping others with. 17 Subjects: including trigonometry, chemistry, calculus, geometry ...Tutored peers on numerous Economic topics while a college student. Awarded Mason Gold Standard Award for contributing to the academic achievement of my peers. Graduated with a BA in Economics from BYU and and an MBA in Finance from the College of William and Mary. 28 Subjects: including trigonometry, calculus, physics, linear algebra ...I enjoy being able to help students, not only with mastering their challenging subjects, but also with instilling confidence in themselves. I continually assess students' ongoing needs and develop customized lesson plans to complement their individual learning styles. I have taught students of diverse ages and backgrounds, including underprivileged and learning-disabled students. 31 Subjects: including trigonometry, English, reading, chemistry ...Mostly, I tutored microeconomics, macroeconomics, econometrics and international economics. Although students seem to understand the concepts, sometimes they tend to forget that those concepts are easily applicable in real life. Students seem to be scared of graphs and some calculations. 36 Subjects: including trigonometry, calculus, geometry, algebra 1 Related Hiram, GA Tutors Hiram, GA Accounting Tutors Hiram, GA ACT Tutors Hiram, GA Algebra Tutors Hiram, GA Algebra 2 Tutors Hiram, GA Calculus Tutors Hiram, GA Geometry Tutors Hiram, GA Math Tutors Hiram, GA Prealgebra Tutors Hiram, GA Precalculus Tutors Hiram, GA SAT Tutors Hiram, GA SAT Math Tutors Hiram, GA Science Tutors Hiram, GA Statistics Tutors Hiram, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/hiram_ga_trigonometry_tutors.php","timestamp":"2014-04-16T07:46:58Z","content_type":null,"content_length":"24040","record_id":"<urn:uuid:a380a01c-01a2-4983-833e-3bc25c7b143d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculation Speed You will probably need to switch to Manual Calculation Mode when entering data. Improving calculation speed increases productivity. For response times greater than a tenth of a send but still less than about 1 second, users can successfully keep a train of thought going, although they will notice the response time delay. IBM studies from the 1970s and 1980s showed significant productivity gains for users when response times were less than a second.
{"url":"http://www.decisionmodels.com/optspeed.htm","timestamp":"2014-04-17T12:50:33Z","content_type":null,"content_length":"21541","record_id":"<urn:uuid:8fb34a93-f067-4e80-89db-682442c230e8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Institute for Mathematics and its Applications (IMA) - Singular PDE's and the Single-step Formulation of Feedback Linearization With Pole Placement by Costas Kravaris, University of Michigan Costas Kravaris, University of Michigan The present work proposes a new formulation to the feedback linearization problem. The problem under consideration is not treated within the context of geometric exact feedback linearization, where restrictive conditions arise, but is conveniently formulated in the context of singular PDE theory. In particular, the mathematical formulation of the problem is realized via a system of first-order quasi-linear singular PDE's and a rather general set of necessary and sufficient conditions for solvability is derived, by using Lyapunov's auxiliary theorem on singular PDE's. The solution to the above system of singular PDE's is locally analytic and this enables a series solution method, which is easily programmable with the aid of a symbolic software package. Under a simultaneous implementation of a nonlinear coordinate transformation and a nonlinear state feedback law computed through the solution of the system of PDE's, both feedback linearization and pole-placement design objectives are accomplished in one step, avoiding the restrictions of the other approaches.
{"url":"http://www.ima.umn.edu/dynsys/wkshp_abstracts/kravaris1.html","timestamp":"2014-04-16T16:00:16Z","content_type":null,"content_length":"13913","record_id":"<urn:uuid:6f943abd-100e-4c2e-8e08-6d1c0a412f4b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Mukilteo Math Tutor Find a Mukilteo Math Tutor ...For the past two years, I have been employed at Western Washington University's Tutoring Center and for two years before that I worked at the Math Center at Black Hills High School in Olympia. I am certified level 1 by the College Reading and Learning Association, and have tutored subjects rangi... 13 Subjects: including statistics, linear algebra, algebra 1, algebra 2 ...Physics is another subject I tutor frequently and was my focus as an undergraduate and graduate student (at Washington University in St Louis and the University of Washington - Seattle, respectively). I have assistant-taught introductory college level physics classes for four years. These classe... 17 Subjects: including prealgebra, English, linear algebra, algebra 1 ...I have an immense love for math and Spanish and have been very successful in these subjects for a big part of my life. Being ahead of others in my grade at math shows my love and able to understand it. I won't just give my students the answers, but instead will push them to try and solve the problems on their own after I have shown them how to solve other examples. 15 Subjects: including algebra 2, geometry, precalculus, prealgebra ...I think the best way to communicate my knowledge with others is to use simple examples, and have students complete slightly more difficult problems, with my help. I can address very specific questions, which are the kind I used to have. In my opinion, working with another is the best way to fully grasp the material taught in most advanced Math and Engineering courses. 12 Subjects: including geometry, physics, algebra 1, algebra 2 ...It is also not about comprehension and there are also strategies to use with two comparing readings. I teach many strategies to help students succeed on the SAT. I have been teaching SAT prep for over 5 years both one-on-one and in groups. 5 Subjects: including SAT math, ASVAB, SAT reading, SAT writing
{"url":"http://www.purplemath.com/mukilteo_math_tutors.php","timestamp":"2014-04-17T16:10:07Z","content_type":null,"content_length":"23795","record_id":"<urn:uuid:9bb7c206-96ae-47df-beb7-ca03ee5f8274>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
The Unprovability of Consistency: An Essay in Modal Logic Results 1 - 10 of 14 , 1994 "... MultiLanguage systems (ML systems) are formal systems allowing the use of multiple distinct logical languages. In this paper we introduce a class of ML systems which use a hierarchy of first order languages, each language containing names for the language below, and propose them as an alternative to ..." Cited by 178 (47 self) Add to MetaCart MultiLanguage systems (ML systems) are formal systems allowing the use of multiple distinct logical languages. In this paper we introduce a class of ML systems which use a hierarchy of first order languages, each language containing names for the language below, and propose them as an alternative to modal logics. The motivations of our proposal are technical, epistemological and implementational. From a technical point of view, we prove, among other things, that the set of theorems of the most common modal logics can be embedded (under the obvious bijective mapping between a modal and a first order language) into that of the corresponding ML systems. Moreover, we show that ML systems have properties not holding for modal logics and argue that these properties are justified by our intuitions. This claim is motivated by the study of how ML systems can be used in the representation of beliefs (more generally, propositional attitudes) and provability, two areas where modal logics have been extensively used. Finally, from an implementation point of view, we argue that ML systems resemble closely the current practice in the computer representation of propositional attitudes and metatheoretic theorem proving. - Bulletin of Symbolic Logic , 2001 "... In 1933 G odel introduced a calculus of provability (also known as modal logic S4) and left open the question of its exact intended semantics. In this paper we give a solution to this problem. We find the logic LP of propositions and proofs and show that G odel's provability calculus is nothing b ..." Cited by 114 (22 self) Add to MetaCart In 1933 G odel introduced a calculus of provability (also known as modal logic S4) and left open the question of its exact intended semantics. In this paper we give a solution to this problem. We find the logic LP of propositions and proofs and show that G odel's provability calculus is nothing but the forgetful projection of LP. This also achieves G odel's objective of defining intuitionistic propositional logic Int via classical proofs and provides a Brouwer-Heyting-Kolmogorov style provability semantics for Int which resisted formalization since the early 1930s. LP may be regarded as a unified underlying structure for intuitionistic, modal logics, typed combinatory logic and #-calculus. - CUNY Ph.D. Program in Computer Science , 2007 "... The Logic of Proofs LP captures the invariant propositional properties of proof predicates t is a proof of F with a set of operations on proofs sufficient for realizing the whole modal logic S4 and hence the intuitionistic logic IPC. Some intuitive properties of proofs, however, are not invariant an ..." Cited by 21 (9 self) Add to MetaCart The Logic of Proofs LP captures the invariant propositional properties of proof predicates t is a proof of F with a set of operations on proofs sufficient for realizing the whole modal logic S4 and hence the intuitionistic logic IPC. Some intuitive properties of proofs, however, are not invariant and hence not present in LP. For example, the choice function ‘+ ’ in LP, which is specified by the condition s:F ∨t:F → (s+t):F, is not necessarily symmetric. In this paper, we introduce an extension of the Logic of Proofs, SLP, which incorporates natural properties of the standard proof predicate in Peano Arithmetic: t is a code of a derivation containing F, including the symmetry of Choice. We show that SLP produces Brouwer-Heyting-Kolmogorov proofs with a rich structure, which can be useful for applications in epistemic logic and other areas. 1 - NATIONAL UNIVERSITY OF SINGAPORE , 2005 "... The true belief components of Plato's tripartite definition of knowledge as justified true belief are represented in formal epistemology by modal logic and its possible worlds semantics. At the same time, the justification component of Plato's definition did not have a formal representation. This ..." Cited by 20 (7 self) Add to MetaCart The true belief components of Plato's tripartite definition of knowledge as justified true belief are represented in formal epistemology by modal logic and its possible worlds semantics. At the same time, the justification component of Plato's definition did not have a formal representation. This , 1993 "... This report describes the elimination of the injectivity restriction for functional arithmetical interpretations as used in the systems PF and PFM in the Basic Logic of Proofs. An appropriate axiom system PU in a language with operators "x is a proof of y" is defined and proved to be sound and compl ..." Cited by 17 (13 self) Add to MetaCart This report describes the elimination of the injectivity restriction for functional arithmetical interpretations as used in the systems PF and PFM in the Basic Logic of Proofs. An appropriate axiom system PU in a language with operators "x is a proof of y" is defined and proved to be sound and complete with respect to all arithmetical interpretations based on functional proof predicates. Unification plays a major role in the formulation of the new axioms. , 1992 "... MultiLanguage systems (ML systems) are formal systems allowing the use of multiple distinct logical languages. In this paper we introduce a class of ML systems which use a hierarchy of metatheories, each with a first order language containing names for the language below, and propose them as an a ..." Cited by 3 (3 self) Add to MetaCart MultiLanguage systems (ML systems) are formal systems allowing the use of multiple distinct logical languages. In this paper we introduce a class of ML systems which use a hierarchy of metatheories, each with a first order language containing names for the language below, and propose them as an alternative to modal logics. The motivations of our proposal are technical and epistemological. From a technical point of view, we prove, among other things, that modal logics can be embedded in the corresponding ML systems. Moreover, we show that ML systems have properties not holding for modal logics and argue that these properties are justified by our intuitions. We motivate our claim by studying how they can be used in the representation of beliefs (more generally, propositional attitudes) and provability, two areas where modal logics have been extensively used. 1 "... It is shown that the modal logic S4, simple -calculus and modal -calculus admit a realization in a very simple propositional logical system LP , which has an exact provability semantics. In LP both modality and -terms become objects of the same nature, namely, proof polynomials. The provability inte ..." Cited by 3 (1 self) Add to MetaCart It is shown that the modal logic S4, simple -calculus and modal -calculus admit a realization in a very simple propositional logical system LP , which has an exact provability semantics. In LP both modality and -terms become objects of the same nature, namely, proof polynomials. The provability interpretation of modal -terms presented here may be regarded as a system-independent generalization of the Curry-Howard isomorphism of proofs and -terms. 1 Introduction The Logic of Proofs (LP , see Section 2) is a system in the propositional language with an extra basic proposition t : F for "t is a proof of F ". LP is supplied with a formal provability semantics, completeness theorems and decidability algorithms ([3], [4], [5]). In this paper it is shown that LP naturally encompasses -calculi corresponding to intuitionistic and modal logics, and combinatory logic. In addition, LP is strictly more expressive because it admits arbitrary combinations of ":" and propositional connectives. The "... Explicit modal logic was first sketched by Gödel in [16] as the logic with the atoms "t is a proof of F". The complete axiomatization of the Logic of Proofs LP was found in [4] (see also [6], [7],[18]). In this paper we establish a sort of a functional completeness property of proof polynomials which ..." Cited by 2 (2 self) Add to MetaCart Explicit modal logic was first sketched by Gödel in [16] as the logic with the atoms "t is a proof of F". The complete axiomatization of the Logic of Proofs LP was found in [4] (see also [6],[7], [18]). In this paper we establish a sort of a functional completeness property of proof polynomials which constitute the system of proof terms in LP. Proof polynomials are built from variables and constants by three operations on proofs: "\Delta" (application), "!" (proof checker), and "+" (choice). Here constants stand for canonical proofs of "simple facts", namely instances of propositional axioms and axioms of LP in a given proof system. We show that every operation on proofs that (i) can be specified in a propositional modal language and (ii) is invariant with respect to the choice of a proof system is realized by a proof polynomial. - in Proceedings AiML-II, Philosophical Institute , 1998 "... In 1933 Godel introduced a modal logic of provability (S4) and left open the problem of a formal provability semantics for this logic. Since then numerous attempts have been made to give an adequate provability semantics to Godel's provability logic with only partial success. In this paper we give t ..." Cited by 1 (0 self) Add to MetaCart In 1933 Godel introduced a modal logic of provability (S4) and left open the problem of a formal provability semantics for this logic. Since then numerous attempts have been made to give an adequate provability semantics to Godel's provability logic with only partial success. In this paper we give the complete solution to this problem in the Logic of Proofs (LP). LP implements Godel's suggestion (1938) of replacing formulas "F is provable" by the propositions for explicit proofs "t is a proof of F" (t : F ). LP admits the reflection of explicit proofs t : F ! F thus circumventing restrictions imposed on the provability operator by Godel's second incompleteness theorem. LP formalizes the Kolmogorov calculus of problems and proves the Kolmogorov conjecture that intuitionistic logic coincides with the classical calculus of problems.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=83405","timestamp":"2014-04-19T22:55:43Z","content_type":null,"content_length":"36360","record_id":"<urn:uuid:317cfe95-892d-4b7b-87c5-b6dd5cc80de8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
As shown in Fig. 8-3, the discrete Fourier transform changes an N point input signal into two point output signals. The input signal contains the signal being decomposed, while the two output signals contain the amplitudes of the component sine and cosine waves (scaled in a way we will discuss shortly). The input signal is said to be in the time domain. This is because the most common type of signal entering the DFT is composed of samples taken at regular intervals of time. Of course, any kind of sampled data can be fed into the DFT, regardless of how it was acquired. When you see the term "time domain" in Fourier analysis, it may actually refer to samples taken over time, or it might be a general reference to any discrete signal that is being decomposed. The term frequency domain is used to describe the amplitudes of the sine and cosine waves (including the special scaling we promised to explain). The frequency domain contains exactly the same information as the time domain, just in a different form. If you know one domain, you can calculate the other. Given the time domain signal, the process of calculating the frequency domain is called decomposition, analysis, the forward DFT, or simply, the DFT. If you know the frequency domain, calculation of the time domain is called synthesis, or the inverse DFT. Both synthesis and analysis can be represented in equation form and computer algorithms. The number of samples in the time domain is usually represented by the variable N. While N can be any positive integer, a power of two is usually chosen, i.e., 128, 256, 512, 1024, etc. There are two reasons for this. First, digital data storage uses binary addressing, making powers of two a natural signal length. Second, the most efficient algorithm for calculating the DFT, the Fast Fourier Transform (FFT), usually operates with N that is a power of two. Typically, N is selected between 32 and 4096. In most cases, the samples run from 0 to N-1, rather than 1 to N. Standard DSP notation uses lower case letters to represent time domain signals, such as x[ ], y[ ], and z[ ]. The corresponding upper case letters are used to represent their frequency domains, that is, X[ ], Y[ ], and Z[ ]. For illustration, assume an N point time domain signal is contained in x[n]. The frequency domain of this signal is called X[ ], and consists of two parts, each an array of N/2 +1 samples. These are called the Real part of X[ ] , written as: ReX[ ], and the Imaginary part of X[ ], written as: ImX[ ]. The values in ReX[ ] are the amplitudes of the cosine waves, while the values in ImX[ ] are the amplitudes of the sine waves (not worrying about the scaling factors for the moment). Just as the time domain runs from x[n] to x[N-1], the frequency domain signals run from ReX[0] to ReX[N/2], and from ImX[0] to ImX[N/2]. Study these notations carefully; they are critical to understanding the equations in DSP. Unfortunately, some computer languages don't distinguish between lower and upper case, making the variable names up to the individual programmer. The programs in this book use the array XX[ ] to hold the time domain signal, and the arrays REX[ ] and IMX[ ] to hold the frequency domain signals. The names real part and imaginary part originate from the complex DFT, where they are used to distinguish between real and imaginary numbers. Nothing so complicated is required for the real DFT. Until you get to Chapter 29, simply think that "real part" means the cosine wave amplitudes, while "imaginary part" means the sine wave amplitudes. Don't let these suggestive names mislead you; everything here uses ordinary numbers. Likewise, don't be misled by the lengths of the frequency domain signals. It is common in the DSP literature to see statements such as: "The DFT changes an N point time domain signal into an N point frequency domain signal." This is referring to the complex DFT, where each "point" is a complex number (consisting of real and imaginary parts). For now, focus on learning the real DFT, the difficult math will come soon enough.
{"url":"http://www.dspguide.com/ch8/2.htm","timestamp":"2014-04-17T06:52:42Z","content_type":null,"content_length":"39637","record_id":"<urn:uuid:0952ef38-9f0a-4565-bb2e-e378978f4f0d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
math teaching websites? 7th December 2007 14:45 #4 Senior Member Array Rep Power 7th December 2007 12:43 #3 Regular User Array Rep Power 7th December 2007 12:39 #2 Senior Member Array Rep Power 7th December 2007 12:35 #1 Regular User Array Rep Power Re: math teaching websites? Sounds like you might be biting off a bit more than you can chew. With that said, you can check out these sites: Teachers' notes: Functions and Formulae Java Triangle Puzzle: The (Java) Triangle Puzzle AAA Math (great geometry lessons): AAA Math Math.com - World of Math Online A-Plus Math: Online Math Activities: --- Good luck You can also check the Resource Pool for math links. Thanks for these. I'll check them out. Re: math teaching websites? Sounds like you might be biting off a bit more than you can chew. With that said, you can check out these sites: I-Maths: Teachers' notes: Functions and Formulae Java Triangle Puzzle: The (Java) Triangle Puzzle AAA Math (great geometry lessons): AAA Math Math.com: Math.com - World of Math Online A-Plus Math: Aplusmath.com Online Math Activities: Math --- Good luck You can also check the Resource Pool for math links. Last edited by wangsuda; 7th December 2007 at 12:42. Reason: Automerged Doublepost To view links or images in signatures your post count must be 10 or greater. You currently have 0 posts. math teaching websites? Hi, I'm making the leap into math teaching. Anybody know of any good websites? I'm looking in particular for lesson plans, discussion of lesson planning, and secondary sources (stuff that might be interesting to students to give them some context to what they are working on.) Re: math teaching websites? Excuse me for not taking this seriously ..... Re: math teaching websites? I like this one. Lots of online maths books, and no copyright issues: Centre for Innovation in Mathematics Teaching BOOM BOOM Re: math teaching websites? Thanks for all these suggestions. These'll give me something to chew on.... Re: math teaching websites? Moved to resource pool. Follow the three R’s: Respect for self Respect for others and Responsibility for all your actions. --Erma Bombek 7th December 2007 15:31 #5 Senior Member Array Rep Power 7th December 2007 17:22 #6 Regular User Array Rep Power 7th December 2007 18:45 #7 playing the field... Array Rep Power
{"url":"http://www.ajarnforum.net/vb/lesson-plans/27105-math-teaching-websites.html","timestamp":"2014-04-20T13:41:44Z","content_type":null,"content_length":"70664","record_id":"<urn:uuid:ed935e6b-82fa-4395-911a-9c7d43fe2b35>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
The Negative Binomial Distribution: pmf, mgf, mean and variance. The Negative Binomial Distribution The probability mass function of the negative binomial distribution Consider the situation where one performs a number Bernoulli trials, each trial has a probability of success $p$ , and trials continue until the $r$ th successs occurs. Let $X$ be the random variable which is the number of trials up to and including the $r$ th success. This means that the range of X is the set $\{ r, r+1, \ldots \}$ . Then the pmf would be given by ${{x-1}\choose{r-1}} p^r q^{x-r}.$ Note that there are r successes and x-r previous failures, with the last success at a fixed rth position. Thus the number of possible outcomes is the number of combinations of selecting $(x-1)$ objects taken $(r-1)$ at a time. Top pmf mgf mean variance The moment generating function of the negative binomial distribution What is the mgf of the negative binomial distribution? Let us compute the value of $M_Xt = E(tX) = \sum _{x=r}^{\infty} e^{tx}{{x-1}\choose{r-1}} p^r q^{x-r}.$ Writing the first few terms of the expansion, we get which simplifies to and which upon factoring out $p^re^{tr} = (pe^t)^r$ and further simplication results in It will be shown later that the bracketed terms is equivalent to , see the blog entry A negative binomial series identity and therefore the moment generating function of the negative binomial distribution is given by Note that the numerator in the formula of Spiegel's Statistics in page. 118 should be raised to the power $r$ . Top pmf mgf mean variance The mean of the Negative Binomial Distribution The expectation or mean of the negative binomial distribution with the pmf and mgf above is obtained by differentiating the mgf wrt t and setting t to zero: $E(X) = \mu= M'_X(t)|{t = 0}.$ When differentiated, the derivative is $\frac{(1-qe^t)^r r(pe^t)^{r-1}pe^t + (pe^t)^r r qe^t{(1-qe^t)^{r-1}}}{(1-qe^t)^{2r}}$ and the value at t=0 is $\frac{r p^{2r-1}( p+q)}{p^2r}$ which collapses to $\mu = \frac{r}{p}$ Top pmf mgf mean variance The variance of the Negative Binomial Distribution The second moment or $E(X^2)$ of the negative binomial distribution with the pmf and mgf above is obtained by differentiating the mgf twice wrt t and setting t to zero and the variance is computed as We will leave as an exercise (at the moment since it is so tedious) that the variance is given by $V(X) = \frac{rq}{p^2}$ . This entry is subject to review but the final formulas are all right. Top pmf mgf mean variance Feb. 20, 2010: We missed the square in the denominator! We will redo the presentation for the variance. Ramadan Ally Says: January 21st, 2011 at 4:41 pm It will be better if you will provide to us the derivation of variance of negative binomial distribution by using raw moment.. (I'm from Tanzania} ernie Says: January 21st, 2011 at 4:49 pm Thanks, will take a look at this again before January is over. Reader feedback is important to us. Lee Corbin Says: April 23rd, 2011 at 4:40 am Very nice article. There is a typo in the equation following "which simplifies to" in the exponent of the 3rd term. Thank you. DAV Says: January 1st, 2013 at 8:13 am This is a very big effort. It has simplified my academic work. Keep it up kenny Says: April 28th, 2013 at 6:36 pm
{"url":"http://adorio-research.org/wordpress/?p=4689","timestamp":"2014-04-18T02:58:35Z","content_type":null,"content_length":"77982","record_id":"<urn:uuid:a92290f3-c5b2-4e24-ae25-332d1984a132>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/int/answered/1","timestamp":"2014-04-17T12:41:55Z","content_type":null,"content_length":"96330","record_id":"<urn:uuid:18b61aba-da36-49fc-9801-bb87705443c1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Bike/motorcycle leaning in a curve. What happens to the normal force N? "With a car of the same weight, the larger the contact patch area, the lower the load per unit area of the contact patch, which translates into somewhat higher grip, up to a point, due to something called tire load sensitivity, where the coefficient of friction decreases the with amount of load. There's a point where increasing contact patch area beyond some near optimal size has little additional effect." 1) Why does that result in a higher grip? Because the coefficient of friction decreases as the load factor increases. Wiki article: The wiki article mentions that maximum horizontal force is proportional to the normal force raised to somwhere from .7 to .9 power. If there was no load sensitivity, then it would be just the normal force (or normal force raised to 1.0 power). Are you saying that after a certain contact patch area we start losing friction (traction)? No, only that there's a point of diminishing returns. Also a larger tire involves more mass and in an open wheel car, more aerodynamic drag. Open wheel race cars normally have smaller front tires than rear tires, while closed body race cars like Lemans prototype cars have front and rear tires of similar size. There's also the issue of heat dissipation, and having a larger tire (taller or wider) provides more surface area for heat dissipation. For a non-downforce race car, the camber is set so to keep the surface of the tire closer to parallel to the road surface when under cornering loads. Without the camber setting, the cornering load would lift the inside (of the curve) portion of the tire away from the road. For a car that turns left and right, a bit of negative camber is used on both left and right sides to accomplish this, with a bit less camber on the rear tires (since they also propel the car forwards). In the case of Nascar race cars on a track where there are only left turns, the left side tires will use positive camber (all tires lean a bit to the left). The common way to check the camber setting is to monitor the tire temperatures at the inside, middle, and outside. For a non-downforce race car, the goal is to get the temperatures somewhat even, so camber thrust is not a factor for these cars. For a high downforce race car, such as Formula 1, the camber is set a bit more negative, so that the tire's inside temperatures are a bit higher than the outside temperatures. I'm not sure if the reason for this is to produce camber thrust as opposed to other reasons for the extra bit of negative camber setting. So, does the normal force go from being vertical to being at an angle? The normal force is normally defined to be the force perpendicular to the surface of a road, which may be banked. Does the action of leaning the bike increase the necessary centripetal force? Leaning the bike is primarily done for balance, so that the bike doesn't fall inwards or outwards during a turn. It's common for motorcycle racing riders to hang off on the inside, so that the bike leans less than it would otherwise (note that this would decrease any camber thrust effect). How many forces do we have? Weight W, normal force N, static friction f_s and camber thrust? Camber thrust is a component of the static friction force.
{"url":"http://www.physicsforums.com/showthread.php?t=649369","timestamp":"2014-04-17T09:51:59Z","content_type":null,"content_length":"69715","record_id":"<urn:uuid:37fb2ca6-786b-4d91-b8fd-e467aa912f32>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Descriptive and Computational Complexity, in Results 1 - 10 of 41 - In Proceedings of the Ninth ACM SIGACT-SIGMOD Symposium on Principles of Database Systems , 1990 "... We present a query language called GraphLog, based on a graph representation of both data and queries. Queries are graph patterns. Edges in queries represent edges or paths in the database. Regular expressions are used to qualify these paths. We characterize the expressive power of the language a ..." Cited by 167 (18 self) Add to MetaCart We present a query language called GraphLog, based on a graph representation of both data and queries. Queries are graph patterns. Edges in queries represent edges or paths in the database. Regular expressions are used to qualify these paths. We characterize the expressive power of the language and show that it is equivalent to stratified linear Datalog, first order logic with transitive closure, and non-deterministic logarithmic space (assuming ordering on the domain). The fact that the latter three classes coincide was not previously known. We show how GraphLog can be extended to incorporate aggregates and path summarization, and describe briefly our current prototype implementation. 1 Introduction The literature on theoretical and computational aspects of deductive databases, and the additional power they provide in defining and querying data, has grown rapidly in recent years. Much less work has gone into the design of languages and interfaces that make this additional pow... - Information and Computation , 1998 "... ..." - Journal of Computer and System Sciences "... Given a successor relation S (i.e., a directed line graph), and given two distinguished points s and t, the problem ORD is to determine whether s precedes t in the unique ordering defined by S. We show that ORD is L-complete (via quantifier-free projections). We then show that first-order logic with ..." Cited by 51 (2 self) Add to MetaCart Given a successor relation S (i.e., a directed line graph), and given two distinguished points s and t, the problem ORD is to determine whether s precedes t in the unique ordering defined by S. We show that ORD is L-complete (via quantifier-free projections). We then show that first-order logic with counting quantifiers, a logic that captures TC 0 ([BIS90]) over structures with a built-in total-ordering, can not express ORD. Our original proof of this in the conference version of this paper ([Ete95]) employed an Ehrenfeucht-Fraiss'e Game for first-order logic with counting ([IL90]). Here we show how the result follows from a more general one obtained independently by Nurmonen, [Nur96]. We then show that an appropriately modified version of the EF game is "complete" for the logic with counting in the sense that it provides a necessary and sufficient condition for expressibility in the logic. We observe that the L-complete problem ORD is essentially sparse if we ignore reorderings of v... , 1992 "... The purpose of this thesis is to give a "foundational" characterization of some common complexity classes. Such a characterization is distinguished by the fact that no explicit resource bounds are used. For example, we characterize the polynomial time computable functions without making any direct r ..." Cited by 45 (3 self) Add to MetaCart The purpose of this thesis is to give a "foundational" characterization of some common complexity classes. Such a characterization is distinguished by the fact that no explicit resource bounds are used. For example, we characterize the polynomial time computable functions without making any direct reference to polynomials, time, or even computation. Complexity classes characterized in this way include polynomial time, the functional polytime hierarchy, the logspace decidable problems, and NC. After developing these "resource free" definitions, we apply them to redeveloping the feasible logical system of Cook and Urquhart, and show how this first-order system relates to the second-order system of Leivant. The connection is an interesting one since the systems were defined independently and have what appear to be very different rules for the principle of induction. Furthermore it is interesting to see, albeit in a very specific context, how to retract a second order statement, ("inducti... , 1993 "... The computational complexity of a problem is usually defined in terms of the resources required on some machine model of computation. An alternative view looks at the complexity of describing the problem (seen as a collection of relational structures) in a logic, measuring logical resources such as ..." Cited by 36 (7 self) Add to MetaCart The computational complexity of a problem is usually defined in terms of the resources required on some machine model of computation. An alternative view looks at the complexity of describing the problem (seen as a collection of relational structures) in a logic, measuring logical resources such as the number of variables, quantifiers, operators, etc. A close correspondence has been observed between these two, with many natural logics corresponding exactly to independently defined complexity classes. For the complexity classes that are generally identified with feasible computation, such characterizations require the presence of a linear order on the domain of every structure, in which case the class PTIME is characterized by an extension of first-order logic by means of an inductive operator. No logical characterization of feasible computation is known for unordered structures. We approach this question from two directions. On the one hand, we seek to accurately characterize the - In Structure and Complexity , 1993 "... We establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choices of 1st-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have t ..." Cited by 36 (5 self) Add to MetaCart We establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choices of 1st-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have the complexity classes between P and EXPTIME. Our parameterized fixpoint logics capture the complexity classes P, NP, PSPACE, and EXPTIME, but equality is achieved only over ordered structures. There is, however, an inherent mismatch between complexity and logic -- while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures. To overcome this mismatch, we develop a theory of relational complexity, which bridges tha gap between standard complexity and fixpoint logic. On one hand, we show that questions about containments among standard complexity classes can be translated to questions about containments among relational complex... , 1995 "... This paper is about automated techniques for (modal logic) correspondence theory. The theory we deal with concerns the problem of finding fixpoint characterizations of modal axiom schemata. Given a modal schema and a semantics based method of translating modal formulae into classical ones, we try to ..." Cited by 27 (7 self) Add to MetaCart This paper is about automated techniques for (modal logic) correspondence theory. The theory we deal with concerns the problem of finding fixpoint characterizations of modal axiom schemata. Given a modal schema and a semantics based method of translating modal formulae into classical ones, we try to derive automatically a fixpoint formula characterizing precisely the class of frames validating this schema. The technique we consider can, in many cases, be easily applied without any computer support. Although we mainly concentrate on Kripke semantics, our fixpoint approach is much more general, as it is based on the elimination of second-order quantifiers from formulae. Thus it can be applied in second-order theorem proving as well. We show some application examples for the method which may serve as new, automated proofs of the respective correspondences. - LECTURES IN APPLIED MATHEMATICS , 1996 "... We present a logical approach to complexity over the real numbers with respect to the model of Blum, Shub and Smale. The logics under consideration are interpreted over a special class of two-sorted structures, called R-structures: They consist of a finite structure together with the ordered field ..." Cited by 24 (8 self) Add to MetaCart We present a logical approach to complexity over the real numbers with respect to the model of Blum, Shub and Smale. The logics under consideration are interpreted over a special class of two-sorted structures, called R-structures: They consist of a finite structure together with the ordered field of reals and a finite set of functions from the finite structure into R. They are a special case of the metafinite structures introduced recently by Grädel and Gurevich. We argue that R-structures provide the right class of structures to develop a descriptive complexity theory over R. We substantiate this claim by a number of results that relate logical definability on R-structures with complexity of computations of BSS-machines. - Annals of Pure and Applied Logic , 1996 "... Inexpressibility results in Finite Model Theory are often proved by showing that Duplicator, one of the two players of an Ehrenfeucht game, has a winning strategy on certain structures. In this article a new method is introduced that allows, under certain conditions, the extension of a winning strat ..." Cited by 21 (3 self) Add to MetaCart Inexpressibility results in Finite Model Theory are often proved by showing that Duplicator, one of the two players of an Ehrenfeucht game, has a winning strategy on certain structures. In this article a new method is introduced that allows, under certain conditions, the extension of a winning strategy of Duplicator on some small parts of two finite structures to a global winning strategy. As applications of this technique it is shown that (*) Graph Connectivity is not expressible in existential monadic second-order logic (MonNP), even in the presence of a built-in linear order, (*) Graph Connectivity is not expressible in MonNP even in the presence of arbitrary built-in relations of degree n^o(1), and (*) the presence of a built-in linear order gives MonNP more expressive power than the presence of a built-in successor relation. - Journal of Computer and System Sciences , 1997 "... It is a well-known result of Fagin that the complexity class NP coincides with the class of ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=678471","timestamp":"2014-04-17T23:03:04Z","content_type":null,"content_length":"37598","record_id":"<urn:uuid:60c848a4-fe78-4719-a970-8182dc092dc5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
of Statistic Components of Statistical Thinking and Implications for Instruction and Assessment Beth L. Chance California Polytechnic State University Journal of Statistics Education Volume 10, Number 3 (2002), www.amstat.org/publications/jse/v10n3/chance.html Copyright © 2002 by Beth L. Chance, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor. Key Words: Introductory statistics; Literacy; Reasoning. This paper focuses on a third arm of statistical development: statistical thinking. After surveying recent definitions of statistical thinking, implications for teaching beginning students (including non-majors) are discussed. Several suggestions are given for direct instruction aimed at developing “habits of mind” for statistical thinking in students. The paper concludes with suggestions for assessing students’ ability to think statistically. While these suggestions are primarily aimed at non-majors, many statistics majors would also benefit from further development of these ideas in their undergraduate education. 1. Introduction This paper focuses on the third arm of statistical development: statistical thinking. While having our students “think statistically” sounds desirable, to many instructors it may not be immediately obvious what this involves and whether or not statistical thinking can be developed through direct instruction. Furthermore, what, if any, components of statistical thinking can we expect our beginning students to develop? To help delineate the components of statistical thinking with a guide as how to address these ideas in our teaching, this paper will examine the following questions: • What is statistical thinking? • How can we teach statistical thinking? • How can we determine whether students are thinking statistically? First, the paper provides a survey of recent definitions of “statistical thinking,” focusing on elements involved in this process and attempting to differentiate statistical thinking from statistical literacy and statistical reasoning. Second, implications for instruction are given which focus primarily on the beginning courses for non-statistics majors. Several suggestions provide mechanisms for trying to develop “habits” of statistical thinking in students. While these suggestions are aimed at non-majors, many statistics majors would also be well served by incorporation of these ideas in their introductory courses and reinforcement in subsequent courses. The final section suggests methods and concrete examples for assessing students’ ability to think statistically. While statistical thinking may be distinctly defined, teaching and evaluating thinking greatly overlaps with reasoning and literacy. 2. Definitions of Statistical Thinking Numerous texts and papers utilize the phrase “statistical thinking” in their title. However, few give a formal definition of statistical thinking. Many appear to use “thinking,” “reasoning,” and “literacy” interchangeably in an effort to distinguish the understanding of statistical concepts from the numerical manipulation that too often has characterized statistical use and instruction. Aided by recent advancements in technology, “number crunching” no longer must dominate the landscape of the introductory course. Instead, we have the luxury of allowing our students to focus on the statistical process that precedes the calculations and the interpretation of the results of these calculations. Statistical research, practice, and education are entering a new era, one that focuses on the development and use of statistical thinking. (Snee 1999, p. 255) We want students to see the “big picture.” However, it has not been as clear how to develop this ability in our students, or even exactly what we mean that big picture to be. Realizing the inadequacies of current formulations, several statisticians and committees have made formal attempts to characterize what is meant by statistical thinking: Box, Hunter, and Hunter (1978), p. 2, outline the process of statistical inquiry through the following schematic: Figure 1. "The learning process as a feedback loop." They encourage statisticians to: • Find out as much as you can about the problem • Don’t forget nonstatistical knowledge • Define objectives • Learn from each other, highlighting the interplay between theory and practice Much of this schematic is what researchers are still building on today. Moore (1990) proposed that the core elements include: 1. The omnipresence of variation in processes 2. The need for data about processes 3. The design of data production with variation in mind 4. The quantification of variation 5. The explanation of variation These ideas were used to form the definition provided by the American Statistical Association (ASA) / Mathematical Association of America (MAA) Joint Committee on Undergraduate Statistics (see Cobb 1992) as: • the need for data • the importance of data production • the omnipresence of variability • the measuring and modeling of variability The ASA Working Committee on Statistical Thinking (see Sylwester 1993) proposed: a. the appreciation of uncertainty and data variability and their impact on decision making b. the use of the scientific method in approaching issues and problems In the domain of quality control and process improvement, Snee (1990) defined statistical thinking as: thought processes, which recognize that variation is all around us and present in everything we do, all work is a series of interconnected processes, and identifying, characterizing, quantifying, controlling, and reducing variation provide opportunities for improvement. The American Society for Quality Glossary of Statistical Terms (1996) provides a philosophy of learning and action based on the following fundamental principles: • all work occurs in a system of interconnected processes • variation exists in all processes • understanding and reducing variation are keys to success Mallows (1998) argued that the above definitions were missing the “zeroth problem,” that is, what data might be relevant. He suggested the following definition: ... the relation of quantitative data to a real-world problem, often in the presence of variability and uncertainty. It attempts to make precise and explicit what the data has [sic] to say about the problem of interest (p. 3). Mallows also asked whether we can develop a theory of statistical thinking and applied statistics. Wild and Pfannkuch (1999) attempted to do just that. Their approach was to ask practicing statisticians and students working on projects what they are “doing” in an attempt to identify the key elements of this previously vague but somehow intuitively understood set of ideas. Their interviews led to development of a four-dimensional framework of statistical thinking in empirical enquiry: • Dimension One: The Investigative Cycle • Dimension Two: Types of Thinking • Dimension Three: The Interrogative Cycle • Dimension Four: Dispositions They claim that by understanding the thinking patterns and strategies used by statisticians and practitioners to solve real-world problems, and how they are integrated, we will be better able to improve the necessary problem solving and thinking skills in our students. A theme running throughout their article is that the contextual nature of the statistics problem is an essential element and how models are linked to this context is where statistical thinking occurs. While many of the dispositions desired in statistical thinkers, such as credulousness and skepticism, are gained through experience, Wild and Pfannkuch further argue that problem solving tools and “worry” or “trigger” questions can be taught to students, instead of relying solely on an apprenticeship model. Clearly, development of the models and prescriptive tools they describe will help with identification of and instruction in statistical thinking. In a response to Wild and Pfannkuch, Moore (1999) argued for “selective introduction” of the types of statistical thinking we introduce to beginning students. In clarifying the “Data, Analysis, Conclusions” portion of the investigative cycle, he argued for the following structure: When you first examine a set of data, (1) begin by graphing the data and interpreting what you see; (2) look for overall patterns and for striking deviations from those patterns, and seek explanations in the problem context; (3) based on examination of the data, choose appropriate numerical descriptions of specific aspects; (4) if the overall pattern is sufficiently regular, seek a compact mathematical model for that pattern (p. 251). For more advanced students he would appear to focus more on issues of measurement and problem formulation as discussed by Mallows. In response, Snee (1999) argued that “What data are relevant and how to collect good data are important considerations and might also be considered core competencies of statisticians” (p. 257) and Smith (1999) advocated adding “creativity” as a mode of thinking to Wild and Pfannkuch’s list . Following the approach of Wild and Pfannkuch, it seems that a definition of “statistical thinking” includes “what a statistician does.” These processes clearly involve, but move beyond, summarizing data, solving a particular problem, reasoning through a procedure, and explaining the conclusion. Perhaps what is unique to statistical thinking, beyond reasoning and literacy, is the ability to see the process as a whole (with iteration), including “why,” to understand the relationship and meaning of variation in this process, to have the ability to explore data in ways beyond what has been prescribed in texts, and to generate new questions beyond those asked by the principal investigator. While literacy can be narrowly viewed as understanding and interpreting statistical information presented, for example in the media, and reasoning can be narrowly viewed as working through the tools and concepts learned in the course, the statistical thinker is able to move beyond what is taught in the course, to spontaneously question and investigate the issues and data involved in a specific context. The hope is that by identifying these components, we can attempt to develop them in novice statisticians, instead of relying solely on apprenticeship and experience, and also in our non-majors, encouraging them to appreciate this “wider view” (Wild 1994) of statistics. In a newsletter from the University of Melbourne Statistical Consulting Center, Gordon (1998) stated: “What professional statisticians have, and amateurs do not have, is precisely that broad view, or overall framework, in which to put a particular problem.” Paradoxically, providing a tangible description of this type of insight is very difficult. On the other hand, as Wild argues, we may be able to develop “mental habits” that will allow non-statisticians to better appreciate the role and relevance of statistical thinking in future studies. While we may not be able to directly teach students to “think statistically,” we can provide them with experiences and examples that foster and reinforce the type of strategies we wish them to employ in novel problems. 3. Implications for Instruction - Developing Habits These definitions suggest that there is a more global view of the statistical process, including understanding of variability and the statistical process as whole, that we would like to instill in our students. In the past, it was generally assumed that statisticians would develop this manner of thinking through practice, experience, and working with senior statisticians. Recently, there have been more and more calls for instructing novices, including non-majors, in the mental habits and problem solving skills needed to think statistically. These mental habits include: 1. consideration of how to best obtain meaningful and relevant data to answer the question at hand 2. constant reflection on the variables involved and curiosity for other ways of examining and thinking about the data and problem at hand 3. seeing the complete process with constant revision of each component 4. omnipresent skepticism about the data obtained 5. constant relation of the data to the context of the problem and interpretation of the conclusions in non-statistical terms 6. thinking beyond the textbook The question is whether, and how, these habits can be incorporated into beginning instruction. Does the answer vary depending on whether we are talking about courses for statisticians than for other students? Futhermore, where does this component fit into the framework of statistical development? With recent developments in tools for statistical instruction, including case studies, student projects, new assessment tools (for an overview of these resources, see Moore 2001), it is viable to instill these habits in students. However, the choice of the term “habits” here is quite deliberate, for these skills need to be taught through example and repeated use. Furthermore, they don’t apply in every situation, but students can learn to approach problems with these general guidelines in mind. Below I begin to outline some of these guidelines and how students can be encouraged to develop these habits. The subsequent section provides suggestions for assessing whether students possess these habits. 3.1 Start from the beginning Successful statistical consultants have the ability to ask the necessary questions to extract the appropriate data to address the issue in question. To me the greatest contributions of statistics to scientific enquiry have been at the planning stage. (Smith 1999, p. 249) Typically it has been assumed that statisticians gain this ability through experience and osmosis, that only by experiencing situations where approaches have failed can we learn how to ask the relevant questions. As Wild and Pfannkuch (1999) argue, we can provide more structure in this learning process. For example, students need to be given numerous situations where issues of data collection are examined and are clearly relevant to the conclusions drawn from the data. Perhaps the most obvious approach is to ask students to collect data themselves, such as measuring the diameter of a tennis ball ( Scheaffer, Gnanadesikan, Watkins, and Witmer 1996). Students quickly see the difficulties associated with such a task: Do we have an appropriate measurement tool? What units are we using? How do different methods of measurements contribute to the variability in the measurements? What are other sources of variation in the measurements and can we control them? How does variability among observational units affect our results? How do repeated measurements enable us to better estimate the “true” measurement? Students clearly see the messiness of actual data collection so often ignored in textbook problems. Students also have a higher degree of ownership and engagement with such assignments. One of the key questions is “have we collected the right data?” Students can be given numerous examples where “the right answer to the wrong question,” often referred to a Type III Error, has led to drastic consequences. The Challenger accident has been held up as an example of not examining the relevant data. Even more simply, students can be asked to compare the prices of small sodas at different Major League Baseball stadiums (as in Rossman and Chance 2001). The subsequent analysis should note the fact that the sizes of “small soda” vary from stadium to stadium, and this variation in definition should not be ignored. Or students can compare the percentage of high school students in a state taking the SAT with the average SAT score. Students see that states with lower percentages taking the SAT tend to have higher average scores. They begin to question whether they are looking at the most relevant information for measuring states’ performances in educating In my teaching, one way I emphasize to students that all investigations must begin with examination of data collection issues is by moving these topics to be the first discussed in the course. I believe that this emphasizes to students to start with evaluation of the question asked, consideration of other variables, and careful planning of the data collection. 3.2 Understand the statistical process as a whole Too often, statistical methods are seen as tools that are applied in limited situations. For example, a problem will say “construct a histogram to examine the behavior of these data” or “perform a t -test to assess whether these means are statistically different.” This approach allows students to form a very narrow view of statistical application: pieces are applied in isolation as specified by the problem statement. Or a researcher comes to the consulting statistician, data in hand, querying “what method should I use to get the answer I want?” This is extreme, but too often the role of the statistician at the beginning of the investigation is ignored until it is too late. Instead, instruction should encourage students to view the statistical process in its entirety. Perhaps the most obvious approach is to assign student projects in which students have the primary responsibility of formulating the data collection plan, actively collecting the data, analyzing the data, and then interpreting the data to a general audience. Details of how I structure the project assignments can be found in Chance (1997). In particular, they are designed so students begin planning their study during the second week of the course (since we started by discussing data collection issues) and are expanded as each new stage of the statistical process is discussed in the course. Students are not told which techniques are appropriate but must decide for themselves, choosing among all topics (histograms versus bar graphs, through two sample comparisons, inference for regression, chi-square analyses and ANOVA) discussed in the course. Indeed projects have been used with increasing regularity in statistics course and still stand as the best way of introducing students to the entire process of statistical inquiry. Still, as Wild and Pfannkuch (1999) caution, “let them do projects” is clearly insufficient as the sole tool for developing statistical problem solving strategies. While we can provide students with such experiences, it is paramount to provide them with a mechanism for learning from the experience and transferring this new knowledge to other problems. Thus, my students do several data collection activities throughout the course and receive feedback that they may apply to their projects. Similarly, they submit periodic project reports during the process to receive feedback on their decisions at each stage and to ensure the questions being investigated are appropriate to the purposes of the course. I also structure written assignments where the feedback provided in the grading is expected to be utilized in subsequent assignments. For example, the first writing assignment may ask them to report the mean, median, standard deviation, and quartiles, and comment on the distribution and the interpretations of these statistics. The next assignment merely asks them to describe the distribution, and they are expected to apply their prior knowledge of what constitutes an adequate summary. These suggestions also encourage students to see the statistical process as iterative. Comments on one project report can be used to modify the proposed procedure before data collection begins. Other approaches that can be used to complement the project component of the course in helping students focus on the overall process include questions at the end of a problem relating back to the data collection issues and how they impact the conclusions drawn. For example, students can be asked at the end of an inferential question whether the conclusions appear valid based on the data collection procedures. Similarly, a required component of my project assignment is for students to reflect on the weaknesses of the process and suggest changes or next steps for future project teams. 3.3 Always be skeptical Wild and Pfannkuch (1999) identified skepticism as a disposition of statistical thinkers that may be taught through experience and seeing “ways in which certain types of information can be unsoundly based and turn out to be false” (p. 235). Research in cognition has demonstrated that to effectively instruct students in a new “way of thinking” they need to be given discrediting experiences (see discussion in delMas, Garfield, and Chance 1999). Students can be shown numerous examples where poor data collection techniques have invalidated the results. For example, a poll administered by Roper found that 22% of respondents said “it seemed possible” that the Holocaust never happened. Urschel (1994) outlines that many major newspapers responded with great concern of the growing anti-Semitism and Holocaust denial. However, a follow-up poll taken by Gallup which reworded the question, simplifying the language, allowing for less extreme response, and removing the double negative, found 83% stating that it definitely happened. Similarly, a recent poll by Microsoft was attacked for being “worded in such a way that even market researchers within Microsoft questioned its fairness” ( Brinkley 1999). An infamous example is the Literary Digest poll, whose poor sampling techniques led to an extremely poor prediction of election results. It is also easy to find numerous examples of newspaper headlines that imply causal conclusions with observational studies. (Ramsey and Schafer 1997, provide an especially effective schematic of the statistical inferences permitted with basic study designs, p. 9.) Through discussion of these examples, student should develop “worry questions” (Gal, et al., 1995), such as the source of the data, the actual questions used, and the appropriateness of the conclusions drawn. Students need to also be given sufficient questions requiring them to choose the appropriate analysis procedure. For example, Short, Moriarty, and Cooley (1995) present a data set on reading level of cancer pamphlets and reading ability of cancer patients. The medians of the two data sets are identical, however, looking at graphs of the two distributions reveals that 27% of the patients would not be able to understand the simplest pamphlet. The authors note that: Beginning with the display may ‘spoil the fun’ of thinking about the appropriateness of measuring and testing centers. We have found that constructing the display only after discussing the numerical measures of center highlights the importance of simple displays that can be easily interpreted and that may provide the best analysis for a particular problem. Similarly, no inferential technique should be taught without also examining its limitations. For example, large samples lead to statistical significance only in those cases where all other technical conditions are also met. The Literary Digest had a huge sample size but the results were still meaningless. Conversely, small samples often do not allow application of standard inferential procedures. Students can be taught to appreciate these limitations and understand when they will need to consult a statistician to determine appropriate methods not covered in their introductory Thus, we can integrate such exposures into instruction instead of only providing problems with nice, neat integer solutions. Through repeated exposure and expectations of closer examination, students should learn to generate these questions on their own, whether they want to or not. I knew I had succeeded when one student indicated that she could no longer watch television, as she was now constantly bombarding herself with questions about sampling and question design. These approaches should help instill the constant skepticism Wild and Pfannkuch (1999) observed in their interviews with professional statisticians. 3.4 Think about the variables involved Here three issues are paramount: Are they the right variables? How do I think the variables will behave? Are there other variables of importance? As Mallows (1998) argues, too often we ignore the problem specification in introductory courses, instead starting from the model, assuming the model is correct, and developing our understanding from that point forward. Similarly, Wild and Pfannkuch (1999) argue that we do not teach enough of the mapping between the context and the models. However, particularly in courses for beginning students, these issues are quite relevant and often more of interest to the student. Students are highly motivated to attempt to “debunk” published studies, highlighting areas they feel were not sufficiently examined. This natural inclination to question studies should be rewarded and further developed. Asking students to reflect on whether the relevant data have been collected was discussed in Section 3.1. Students can also be instructed to always conjecture how a variable will behave (considering shape and range of values, for example), before the data have been collected. For example, students can be asked to sketch a graph of measurements of student heights or number of siblings before the data is gathered in class. By anticipating variable behavior, students will better be able to identify unexpected outcomes and problems with data collection. Students will also be able to determine the most appropriate subsequent steps of the analysis based on the shape and behavior of the data. Students also develop a deeper understanding of variation and how it manifests itself in different settings. Students need to be encouraged to think about the problem and understand the problem sufficiently to begin to anticipate what the data will be like. A statistical thinker is also able to look beyond the variables suggested by the practitioner and guard against ignoring influential variables or drawing faulty causal conclusions. For example, Rossman and Chance (2001) present an example demonstrating the strong correlation between average life expectancy in a country and number of people per television in the country. Too often, people tend to jump to causal conclusions. Here, students are able to postulate other variables that could explain this relationship, such as the wealth of the country. Similarly, in the SAT example highlighted in Section 3.1, students should consider geography and state policy as an explanation for the low percentage of students taking the SATs in some states. Overall, students need to realize that they may not be able to anticipate all relevant variables, highlighting the importance of brainstorming prior to data collection, discussion with practitioners, and properly designed 3.5 Always relate the data to the context Students should realize that no numerical answer is sufficient in their statistics course until this answer is related back to the context, to the original question posed. Students should also be encouraged to relate the data in hand to previous experiences and to other outside contexts. Thus, reporting a mean or a p-value should be deemed insufficient presentation of results. Rather, the meaning is provided when these numbers are interpreted in context. For example, data on the weights of the 2000 U.S. Men’s Olympic Rowing team contain an extreme low outlier. Many students will recognize that value as the coxswain and will be able to discuss the role of that observation in the overall data summary. Similarly, data on inter-eruption times of the Old Faithful geyser show two distinct mounds, and students can speculate as to the causes of the two types of eruptions. While not all students will possess the outside knowledge needed in each of these settings, these data can be used in classroom discussions to encourage students to always relate their statistical knowledge to other subjects, geology, biology, and psychology, as examples, instead of learning statistics and other subjects in “separate mental compartments” (Wild 1994). These examples also encourage students in “noticing variation and wondering why” (Mullins in Wild and Pfannkuch 1999). Another example that highlights to students the importance of the problem context is the “Unusual episode” (see Dawson 1995). In this example, students are provided with data on number of people exposed to risk, number of deaths, economic status, age, and gender for 1323 individuals. Based solely on these data tables and yes or no questions of the instructor, students are asked to identify the unusual episode involved. This activity encourages students to think about context, hypothesize explanations, and search for meaning, similar to the sleuthing work done by practicing 3.6 Understand (and believe) the relevance of statistics Extending the previous point, students can be instructed to view statistics in the context of the world around them. Techniques range from having students collect data on themselves and their classmates to having students bring in examples of interest from recent news articles. I often include a graded component in my course where students have to discuss some experience they have with statistics outside of class during the term. For example, students may view a talk in their discipline that utilizes statistics, or may be struck by an interesting statement in the media that they now view differently with their statistical debunking glasses on. Thus, students can be led to appreciate the role of statistics in the world around them. We can also help students see the crucial role statistics and statistical inference play in interpreting information, especially the information represented in popular media. Not only do “data beat anecdotes” (Moore 1998), but using statistical techniques allows us to extract meaning from data we could not otherwise. Still, issues of variability heavily influence the information we can learn. One lesson I try to impart to my students is the role of sample size in our inferential conclusions - we are allowed to make stronger statements with larger sample sizes and must be cautious of spurious results with small sample sizes. Students can be lead to discover the effect of sample size on p-value by using technology to calculate the p-value for the same difference in population proportions, but different sample sizes (Rossman and Chance 2001). Thus, we cannot determine if two sample proportions are significantly different until we know the sample sizes involved. Similarly, we cannot compare averages, such as GPAs of different majors, without knowing the sample sizes and sample standard deviations involved. Statistical methods are necessary to take sampling variability into account before drawing conclusions, and students need to appreciate their role. At the same time, statisticians believe in what they are doing. Before making any conclusion, the statistical thinker immediately asks for the supporting data. I feel I often succeed too well in helping students question conclusions to the point that they never believe any statistical result. The role of randomness in particular is one where the statistical thinker has faith in the outcome and relies on the randomization mechanism, but the novice thinker is untrusting or continues to desire to list and control all variables they can imagine. Again, much of this belief comes from experience, but students can be shown repeatedly what randomization and random sampling accomplish. For example, an exercise in Moore and McCabe (1998) has students pool results from repeated randomization of rats into treatment groups. Students see the long term regularity and equality of the group means prior to treatment and begin to better understand what randomization does and does not accomplish for them. Students should see this idea throughout the course to better understand the “why” of the techniques they are learning. Students can also be instructed in making sure all statements are supported by the data. For example, in grading their initial lab assignments my most common feedback is “Why, how do you know this is true?” as I insist they support their claims. Many of the above examples are constant reinforcements to make sure students do not make claims beyond what is supported by the data in hand. Casual uses of statistics in sports provide great fodder for unsubstantiated claims. For example, at the start of a National Football League playoff game telecast, it was announced that the Tennessee Titans had won 11 of the 12 games in which they had won the coin toss to start the game. The novice merely accepts the data as presented. The statistical thinker immediately looks for the comparison - what was the team’s overall record (13 wins and 3 losses)? Is this really a significant difference (no)? Was this a conjecture developed prior to seeing the data? No, and students need to understand the problems with “searching for significant results.” Students also need to be cautioned against relying excessively on their prior intuitions or opinions. As an example, students can be asked to evaluate a baseball team’s performance based on the average number and standard deviation of errors per game. Often students will respond with their own opinion about the team, ignoring the data presented. With feedback, they can be coached to specify only “what the data say.” Similarly, we can help students learn to jump to the salient point of a problem, instead of meandering in a forest of irrelevant or anecdotal information. 3.7 Think beyond the textbook The examples given in Section 3.2 (questions that say “construct a histogram to examine the behavior of these data” or “perform a t-test to assess whether these means are statistically different”) also highlight the dependency students develop on knowing which section of the book a question comes from. Students learn to apply procedures when directed, but then after the course are at a loss of where to begin when presented with a novel question. Students need to be given questions that are more open and encouraged to examine the question from different directions to build understanding. For example, a histogram of the Old Faithful data mentioned earlier can fail to reveal the bimodal nature of the data with large bin widths. Students should be encouraged to look at more than one visual display. If the ability to explore is an important goal in the course, then this needs to also be built into the assessment. For example, a question on the 1997 Advanced Placement (AP) Statistics exam asked students to choose among several regression models. A question on the 1998 AP exam asked them to produce a histogram from a scatterplot and to comment on features revealed in one display that were much harder to detect in the other. Students blindly following the TI-83 graphing calculator output often did not see as useful a picture as those selecting their own interval limits or using the nature of the data. To help students choose among inference procedures discussed, I often give them a group quiz where the procedures are listed and they are asked to identify the appropriate procedure based solely on the statement of the research question, considering the number and type of variables involved. This helps students see that the focus is on translating the question of interest, not just the 4. Assessing Statistical Thinking The number one mantra to remember when designing assessment instruments is “assess what you value.” If you are serious about requiring students to develop the above habits, then you must incorporate follow-up questions into your assessment instruments, whether final exams or performance assessment components. For example, Wild (1994) claims he is more interested that students ask questions (in relation to background knowledge and beyond the subject matter, as examples) and so usually gives instructions to his graders to “give credit for anything that sounds halfway sensible.” Similarly, in my group project grades, students are rewarded as much for the process as the final product. The experience of participating in the project is my main goal, above the level of sophistication of the final product. This allows students to analyze data using the techniques discussed in the course rather than the sometimes much more complicated but purely correct approach. Still, students are required to discuss potential biases and other weakness in their current analysis and generate future questions. This encourages students to reflect on the process, critique their own work, realize the limitations of what they have learned, and see how theory differs from practice - all key components of statistical Still, much of our assessment must by necessity rely on more traditional exam-based questions. Below are some exam questions (adapted from other resources) that I’ve given in my service courses that attempt to assess students’ ability to apply the above mental habits. ┃ The underlying principle of all statistical inference is that one uses sample statistics to learn something (that is, to infer something) about the population parameters. Convince me that you ┃ ┃ understand this statement by writing a short paragraph describing a situation in which you might use a sample statistic to infer something about a population parameter. Clearly identify the ┃ ┃ sample, population, statistic, and parameter in your example. Be as specific as possible, and do not use any example which we have discussed in class (from Rossman and Chance 2001). ┃ ┃ ┃ ┃ This problem requires students to demonstrate their understanding of the overall statistical process, at least from the point of data collection forward. Students are required to extract a ┃ ┃ general approach from the isolated methods learned in the course. The focus is on the big picture rather than a specific technique. They also have to demonstrate their ability to apply their ┃ ┃ statistical knowledge to answer a question of interest (an individual assessment to complement the group project). ┃ ┃ Given data on calories for several Chinese foods, students are asked to produce a histogram (using technology) and then ┃ ┃ (b) Do you think it is reasonable to use these data to rank the foods from least to most in terms of calorie content? Explain how else you might look at the data if you were interested counting ┃ ┃ calories. ┃ ┃ ┃ ┃ In question (b), I’m hoping students will consider the issue of serving size. This serves as a follow-up question to the small soda costs at baseball games examined in class. This approach should ┃ ┃ be aided by their graph in which egg rolls and soup, the two appetizers, stand out as low outliers. Thus, students are expected to think beyond the statistical method, utilizing context and ┃ ┃ behavior of the data in their answer. ┃ ┃ As part of its twenty-fifth reunion celebration, the Class of ’70 of Central University mails a questionnaire to its members. One of the questions asks the respondent to give his or her total ┃ ┃ income last year. Of the 820 members of the class of ’70, the university alumni office has addresses for 583. Of these, 421 return the questionnaire. The reunion committee computes the mean ┃ ┃ income given in the responses and announces, “The members of the class of ’70 has enjoyed resounded success. The average income of class members is $120,000!” Suggest three different sources of ┃ ┃ bias or misleading information in this result, being explicit about the direction of bias you expect (from Freedman, Pisani, and Purves 1998). ┃ ┃ ┃ ┃ In this problem, students have to apply knowledge from several different parts of the course to critique a statement. This tests students’ ability to evaluate published conclusions while focusing ┃ ┃ on issues of data collection (sampling and nonsampling errors) and resistance. Students are asked to address bias, but are not specifically told to focus on sampling design, questionnaire ┃ ┃ wording, or resistance. ┃ ┃ Four (smoothed out) histograms are sketched below. They are histograms for the following variables (in a study of a small town): ┃ ┃ (a) Heights of all members of households with children where both parents are less than 24 years old ┃ ┃ (b) Heights of both members of all married couples ┃ ┃ (c) Heights of all people ┃ ┃ (d) Heights of all automobiles ┃ ┃ Match the variables with their histograms. Clearly explain your reasoning (from Freedman, et al., 1998). ┃ ┃ ┃ ┃ This question addresses students’ ability to speculate and justify different variable behaviors. Students need to think about the context and observational units involved, not just produce ┃ ┃ graphical displays. Responses are graded on the level of support given to their conjecture of the variable behavior. ┃ ┃ Which set of data is more likely to have a bimodal shape: daily New York City temperatures at noon for the summer months or daily New York City temperatures at noon for an entire year? Explain ┃ ┃ (from Utts 1999; I often replace New York City with a more local city). ┃ ┃ ┃ ┃ This question again asks students to go beyond simply constructing a histogram, but being able to explain the behavior. I find students who can construct a histogram for a set of values still ┃ ┃ struggle with this problem. They may pick the correct answer (entire year), but their explanations often show a lack of understanding of the two axes in a histogram (focusing on time on the ┃ ┃ horizontal and temperature on the vertical axis). ┃ ┃ The FBI reports that nationally 55% of all homicides were the result of gunshot wounds. In a recent random sample taken in one community, 66% of all homicides were the result of gunshot wounds. ┃ ┃ What three possible conclusions can you draw about the percentage from this community compared to the national percentage? What additional information would you need to begin to choose one ┃ ┃ conclusion over another? ┃ ┃ ┃ ┃ In this short question, the main goal is to see if students understand the role of variability in statistics and why conclusions cannot be drawn until that variation is considered. ┃ ┃ A researcher is examining the time for 3 different medicines to register in the blood system (minutes). She wants to test the null hypothesis that the mean times are all the same: p-value to ┃ ┃ largest p-value and explain your choices. Your grade will be based mostly on your explanation (inspired by Cobb 1998). ┃ ┃ ┃ ┃ Again, this problem does not focus on application of a particular technique but rather asks students to consider issues of sample size and variation in determining statistical significance. Also ┃ ┃ notice the emphasis on communication for full credit. I am less concerned with their final ordering, but use a scoring rubric that rates the level of sophistication and integration of these ┃ ┃ components in their explanation (such as, do they only focus on centers, do they understand that, if all else was equal, larger samples have smaller p-values). Thus, students need to understand ┃ ┃ the purpose of statistical inference and to be able to explain the results of the statistical methods. This is similar to the “explain this result to someone who has not taken statistics” ┃ ┃ question that can be added to the end of a statistical analysis question. ┃ ┃ A report based on the Current Population Survey estimates the 1991 median weekly earnings of families of wage and salary works as $664. An approximate 95% confidence interval for the 1991 median ┃ ┃ weekly earnings of all families of wage and salary workers is $657.14 to $670.86. Interpret this interval, and discuss why you believe the researchers are interested in the median instead of the ┃ ┃ mean in this study (from Moore and McCabe 1998). ┃ ┃ ┃ ┃ This sketch of a problem shows that you can ask students to interpret results from methods not discussed in class. This tests if they can apply the overall reasoning of statistical inference to ┃ ┃ their interpretation. It addresses the need for students to be able to recognize the relevance of the tools they learn in the course beyond the specific examples (and methods) discussed in class. ┃ ┃ Furthermore, can students recognize the limitations of the procedures they have learned and when they need to ask for outside consultation? ┃ ┃ A university is interested in studying reasons many of their students were failing to graduate. They found that most attrition was occurring during the first three semesters so they recorded ┃ ┃ various data on the students when they entered the school and their GPA after three semesters. [Students given data set with numerous variables.] ┃ ┃ (a) Describe the distribution of GPA for these students. ┃ ┃ (b) Is SAT-Math score a statistically significant predictor of GPA for students at this school? ┃ ┃ (c) Is there a statistically significant difference between the average GPA values among the majors at this school? ┃ ┃ (adapted from Moore and McCabe 1998). ┃ ┃ ┃ ┃ This type of question is given as a take-home question for the final exam. Students are given one week to identify the relevant statistical methods by reviewing their notes and class examples. ┃ ┃ Students are instructed to work individually. This type of problem has several goals: can students apply the habits of how to examine a data set numerically and graphically, describing shape, ┃ ┃ center, spread, unusual observations, can students identify and execute the relevant statistical technique with minimal prodding (they don’t know what section of the book this question came from ┃ ┃ so they are missing that context), can they recognize the need for statistical inference to generalize from a sample to a population? With respect to the last point, I have added more and more ┃ ┃ direction to help students see the need to compute a p-value to attest to “statistical significance.” To receive full credit for the inference problems students must still accompany each analysis ┃ ┃ with appropriate graphical and numerical summaries (again, they must decide which is appropriate). Students are also required to justify their choice of analysis method. To answer these ┃ ┃ questions, students must decide which variables to examine. This is a complement to giving them a news article and asking them to evaluate the statistical analysis. ┃ While the above questions are aimed primarily at introductory service courses, novice statisticians could be required to analyze the questions like these in greater depth. For example, with my more mathematically inclined students I expect them to develop a confidence interval formula for a new parameter, such as for a variance, based on the basic overall structure learned in the course. We can also rely less on the convenient simplifications we sometimes make with statistics-phobic non-majors (for example, focusing on population over process). Chatfield (1988) provides an excellent resource for providing additional exposure to messy data and developing further problem solving habits in young statisticians. However, beginning statistics majors should also be taught the other mental habits (focus on data collection, question the variables chosen) as well. Our teaching needs to focus “... on the big ideas and general strategies... ” (Moore 1998, p. 1257). Such instruction will also serve to improve literacy and reasoning: Students’ understanding and retention could be significantly enhanced by teaching the overall process of investigation before the tools, by using tangible case studies to introduce and motivate new topics, and by striving for gross (overall) understanding of key concepts (statistical thinking) before fine skills to apply numerical tools.” (Hoerl 1997) Still, evidence of statistical thinking lies in what students do spontaneously, without prompting or cue from the instructor. Students should be given opportunities to demonstrate their “reflexes.” We should see if they demonstrate flexibility in problem solutions and ability to search for meaning with unclear guidelines. These are difficult “skills” to assess and may be beyond what we hope for in the first course for beginning students. However, students can be given more open-ended problems to see how they approach problems on their own and whether they have developed the ability to focus on the critical points of the problem, while still receiving feedback and mentoring from instructors. Recently, “capstone courses” such as this have been incorporated into undergraduate statistics curriculum (see, for example, Spurrier 1999) and texts of case studies (see Peck, Haugh, and Goodman 1998) have further enabled instructors to give students these experiences. 5. Conclusion Applied to beginning students, I would classify many of the above “habits” as statistical thinking, and this may be all we are hoping to accomplish in many introductory service courses. At this level, I think the types of statistical thinking we aim to teach are what is needed for an informed consumer of statistical information. They serve as the first steps of what we would like to develop in all statisticians, but also what we need to develop in every citizen to understand the importance and need of proper scientific investigation. I suspect that these examples stepped on the toes of statistical reasoning as well, as we encourage students to reason with their statistical tools, and to make sure this reasoning includes awareness of data collection issues and interpretation as well. However, it is through repetition and constant reinforcement that these habits develop into an ingrained system of thought. Through a survey I distributed to students two years after finishing my introductory course, I learned that students often “revert” to some of their old habits. To further develop statistical thinking, these habits need to be continually emphasized in follow-up courses, particularly in other disciplines. It is also important to remember that when students step into any mathematics course, often they are not expecting to apply their knowledge in these ways. They are accustomed to calculating one definitive correct answer that can be boxed and then compared to the numbers in the back of the text. Thus, such habits (questioning, justification, writing in their own words) require specific instruction and justification in the introductory statistics course. Instructors also need to be aware of the need to allow, even reward, alternative ways of examining data and interpreting data. Thus, we can specifically address the development of statistical thinking in all students. By providing exposure to and instruction in the types of thinking used by statisticians, we can hasten the development of these ways of approaching problems and applying methods in beginning students. These techniques overlap greatly with improving student literacy and reasoning as well. Delving even further into these examples and providing more open-ended problems will continue this development in future statisticians as well. To determine whether students are applying statistical thinking, problems need to be designed that test student reflexes, thought patterns, and creativity in novel situations. Thanks to Thomas H. Short, Sr. for the electronic rendering of Figure 1. American Society for Quality (1996), Glossary of Statistical Terms, Milwaukee, WI: Author. Box, G. E. P., Hunter, W. G., and Hunter, J. A. (1978), Statistics for Experimenters, New York: John Wiley and Sons. Brinkley, J. (1999), “Microsoft witness attacked for contradictory opinions,” The New York Times, 15 Jan. 1999, C2. Chance, B. (1997), “Experiences with Alternative Assessment Techniques in Introductory Undergraduate Statistics Courses',” Journal of Statistics Education [Online], 5(3). (www.amstat.org/publications Chatfield, C. (1988), Problem Solving: A Statistician’s Guide, London: Chapman and Hall. Cobb, G. (1992), “Teaching Statistics,” in Heeding the Call for Change: Suggestions for Curricular Action, ed. L. A. Steen, MAA Notes, Number 22, Washington, DC: Mathematical Association of America, ----- (1998), “The Objective-Format Question in Statistics: Dead Horse, Old Bath Water, or Overlooked Baby?,” presented at the Annual Meeting of American Educational Research Association, San Diego, Dawson, R. J. M. (1995), “The `Unusual Episode’ Data Revisited,” Journal of Statistics Education [Online], 3(3), (www.amstat.org/publications/jse/v3n3/datasets.dawson.html) delMas, R., Garfield, J., and Chance, B. (1999), “A Model of Classroom Research in Action: Developing Simulation Activities to Improve Students' Statistical Reasoning,” Journal of Statistics Education [Online], 7(3). (www.amstat.org/publications/jse/secure/v7n3/delmas.cfm) Freedman, D., Pisani, R., and Purves, R. (1998), Statistics (3rd ed.), New York: W. W. Norton and Company, Inc. Gal, I., Ahlgren, C., Burrill, G., Landwehr, J., Rich, W., and Begg, A. (1995), “Working Group: Assessment of Interpretive Skills,” Writing Group Draft Summaries Conference on Assessment Issues in Statistics Education, Philadelphia: University of Pennsylvania, 23-35. Gordon, I. (1998), “From the Director,” News and Views [Online], 13. (www.scc.ms.unimelb.edu.au/news/n13.html) Hoerl, R. W. (1997), “Introductory Statistical Education: Radical Redesign is Needed, or is it?,” Newsletter for the Section on Statistical Education of the American Statistical Association [Online], 3(1). (renoir.vill.edu/~short/StatEd/v3n1/Hoerl.html) Mallows, C. (1998), “The Zeroth Problem,” The American Statistician, 52, 1-9. Moore, D. S. (1990), “Uncertainty,” in On the Shoulders of Giants, ed. L. A. Steen, National Academy Press, 95-173. ----- (1998), “Statistics Among the Liberal Arts,” Journal of the American Statistical Association, 93, 1253-1259. ----- (1999), “Discussion: What Shall We Teach Beginners?,” International Statistical Review, 67, 250-252. Moore, D. S., and McCabe, G. P. (1998), Introduction to the Practice of Statistics (3rd ed.), New York: W.H. Freeman and Company. Moore, T. (ed.) (2001), Teaching Statistics: Resources for Undergraduate Instructors, Washington, DC: Mathematical Association of America and American Statistical Association. Peck, R., Haugh, L. D., and Goodman, A. (eds.) (1998), Statistical Case Studies: A Collaboration Between Academe and Industry, Alexandria, VA: American Statistical Association/SIAM. Ramsey, F. L., and Schafer, D. W. (1997), The Statistical Sleuth: A Course in Methods of Data Analysis, Belmont, CA: Duxbury Press. Rossman, A. J., and Chance, B. L. (2001), Workshop Statistics: Discovery with Data (2nd ed.), Emeryville, CA: Key College Publishing. Scheaffer, R., Gnanadesikan, M., Watkins, A., and Witmer, J. (1996), Activity-Based Statistics, New York: Springer-Verlag Publishers. Short, T. H., Moriarty, H., and Cooley, M. E. (1995), “Readability of Educational Materials for Patients with Cancer,” Journal of Statistics Education [Online], 3(2), (www.amstat.org/publications/jse Smith, T. M. F. (1999), “Discussion” in response to Wild and Pfannkuch, International Statistical Review, 67, 248-250. Snee, R. D. (1990), “Statistical Thinking and Its Contribution to Total Quality,” The American Statistician, 44, 116-121. ----- (1999), “Discussion: Development and Use of Statistical Thinking: A New Era,” International Statistical Review, 67, 255-258. Spurrier, J. D. (1999), The Practice of Statistics: Putting the Pieces Together, Belmont, CA: Duxbury Press. Sylwester, D. (1993), “Statistical Thinking,” AMSTAT News, February, . Urschel, J. (1994), “Putting a reality check on ‘Holocaust denial’,” USA Today, January 12, 1994. Utts, J. (1999), Seeing Through Statistics, Belmont, CA: Duxbury Press. Wild, C. J. (1994), “Embracing the ‘Wider View’ of Statistics,” The American Statistician, 48, 163-171. Wild, C. J., and Pfannkuch, M. (1999), “Statistical Thinking in Empirical Enquiry,” International Statistical Review, 67, 223-265. Beth L. Chance Department of Statistics California Polytechnic State University San Luis Obispo, CA 93407 Volume 10 (2002) | Archive | Index | Data Archive | Information Service | Editorial Board | Guidelines for Authors | Guidelines for Data Contributors | Home Page | Contact JSE | ASA Publications
{"url":"http://www.amstat.org/publications/jse/v10n3/chance.html","timestamp":"2014-04-20T18:33:47Z","content_type":null,"content_length":"66413","record_id":"<urn:uuid:262ba898-32b4-4d92-ad59-3dcffd722408>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Binomial error paradox October 26th 2012, 09:10 AM Binomial error paradox In a binomial distribution if the chance of something happening is p and q=(1-p) and the variance for the event occurring is pq. The variance for the event not occurring is also qp. Since the error within a given confidence interval is related to the variance then this error is the same for both the even occurring and not occurring. In my particular case p=0.1 and with a 95% confidence interval the value of p lies between 0.096 and 0.104 or 0.1+-0.004 I am looking to find the relative error associated with something happening which would be 0.004/0.1= 4% I want to keep gathering data until the relative error is below 3% but here is what confuses me. The chance of the event not occurring is 0.9 with the same interval as the chance of it happening (between 0.894 and 0.904 or 0.9+-0.04) and the relative error is 0.004/0.9= 0.44% which is below the 3% I am aiming for. It does not make sense to be content with the error of the event occurring but not be content with the error of it occurring. How do people handle relative error in a binomial distribution? Perhaps take an average, or only look at the error in the confidence interval? □ 2 hours ago □ - 4 days left to answer. October 26th 2012, 05:21 PM Re: Binomial error paradox Hey Shakarri. If you are trying to find the relative difference between the mean and the standard error then simply set up the inequality and solve for n. If you are using a Wald test or a normal approximation then the standard error of the mean with a binomial is se = SQRT(p_hat*(1-p_hat)/n) where p_hat is the estimated value of the proportion which is just the mean of the sample data. So you are looking at [1.96*se]/p_hat < t where t is your threshold (3% = 0.03) so extracting n we get: (1.96)^2*(1-p_hat)/(p_hat*t^2) < n or n > (1.96)^2*(1-p_hat)/(p_hat*t^2) So you can find the first integer satisfying that condition and you have your sample size. If you want to consider that p_hat can fluctuate within a specific range then you will need to do this for the lower and upper bounds and combine both information to get a value for n. October 27th 2012, 01:45 AM Re: Binomial error paradox I am afraid you have misunderstood my question, but thanks for trying. I am using a normal approximation and the standard error multiplied by 1.96 is 0.004 I am using the formula [1.96*se]/p_hat = t as in your response but for finding t for the current sample size n The problem is that when applying this equation to the chance of the event occuring (p_hat= 0.1) t= 0.004/0.1= 0.04 which I consider to be too high and so more data would need to be gathered Applying the same formula to the chance of the event not occurring (p_hat= 0.9) t=0.004/0.9= 0.0044 which is a low enough value of t and no more data would need to be gathered. t in this case is almost 10 times lower than t for the chance of the event occurring. So on one hand I am not sure if 0.1 is accurate for the chance of something happening, but on the other hand I am sure that 0.9 is accurate for the chance of it not happening. I hope you can see why this doesn't make sense. October 27th 2012, 04:00 AM Re: Binomial error paradox Well in terms of entropy, the maximum value is at p = 1/2, but what you could do is again use upper and lower bounds and combine the information to get an estimate. You can make the bounds whatever you like and you can even make them dependent on an existing sample and each new observation you obtain. The reason I mention entropy is that the point of the highest entropy is where the highest point of uncertainty. You also need a relatively lower sample for low points of entropy which is why considering where the highest amount of entropy (and the lowest) is something to really consider when you want to do these kinds of calculations. Apart from either using a guess based on a prior distribution, or updating your guess with each new observation that is added to your appended sample, I can't really suggest anything else. October 27th 2012, 07:35 AM Re: Binomial error paradox This is not about the upper and lower bounds it is about the reference point used when looking at the relative error. It isn't clear what the relative error should be relative to. Forget about the numbers, if the chance of something happening is p and the chance of it not happening is q and I am taking a 95% confidence interval The standard error or the difference between the upper bound and the mean is 1.96*(pq/n)^1/2 This error relative to p is [1.96*(pq/n)^1/2]/p The error relative to q is [1.96*(pq/n)^1/2]/(1-p) Which relative error is correct? If p<0.5 then the first figure is higher than the second. But why should they be higher? Why would I be more certain about the chance of something happening than I am about the chance of it not happening? Since q is related to p I cannot be certain about 1 and uncertain about the other. Ultimately my question is how do people find a relative error that avoids this paradox of being more certain of q than they are of p October 27th 2012, 08:10 AM Re: Binomial error paradox Hint: Read into entropy.
{"url":"http://mathhelpforum.com/statistics/206130-binomial-error-paradox-print.html","timestamp":"2014-04-19T11:15:43Z","content_type":null,"content_length":"10145","record_id":"<urn:uuid:760679eb-72f6-473d-b807-4ff7b9c9135a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Various spatial tree implementations. (require-extension kd-tree) The spatial-tree library is intended to contain a collection of spatial tree implementations. A spatial tree is a data structure for organizing and searching points in an n-dimensional space. The present implementation code implements a single spatial tree structure, the k-d tree. This library currently only supported points in 3D space. [procedure] make-point3d:: DOUBLE * DOUBLE * DOUBLE -> POINT3D 3D point constructor. [procedure] point3d? :: POINT3D -> BOOL 3D point predicate. [procedure] point3d-x :: POINT3D -> DOUBLE [procedure] point3d-y :: POINT3D -> DOUBLE [procedure] point3d-z :: POINT3D -> DOUBLE Accessors for the x,y,z coordinates of a 3D point. A K-D tree (short for k-dimensional tree) is a space-partitioning data structure for organizing points in a k-dimensional space. [procedure] list->kd-tree:: POINT3D LIST -> KD-TREE Given a list of points, constructs and returns a K-D tree object. [procedure] kd-tree? :: KD-TREE -> BOOL Returns #t if the given object is a K-D tree, #f otherwise. [procedure] kd-tree-empty? :: KD-TREE -> BOOL Returns #t if the given K-D tree object is empty, #f otherwise. [procedure] kd-tree-is-valid? :: KD-TREE -> BOOL Checks whether the K-D tree property holds for the given tree. Specifically, it tests that all points in the left subtree lie to the left of the plane, p is on the plane, and all points in the right subtree lie to the right. [procedure] kd-tree-all-subtrees-are-valid? :: KD-TREE -> BOOL Checks whether the K-D tree property holds for the given tree and all subtrees. [procedure] kd-tree->list :: KD-TREE -> POINT3D LIST Returns a list with the points contained in the tree. [procedure] kd-tree->list* :: KD-TREE -> (INT . POINT3D) LIST Returns a list where every element has the form (i . p), where i is the relative index of this point, and p is the point. [procedure] kd-tree-subtrees :: KD-TREE -> KD-TREE LIST [procedure] kd-tree-point :: KD-TREE -> POINT3D [procedure] kd-tree-map [procedure] kd-tree-for-each [procedure] kd-tree-for-each* [procedure] kd-tree-fold-right [procedure] kd-tree-fold-right* Query procedures [procedure] kd-tree-nearest-neighbor [procedure] kd-tree-near-neighbors [procedure] kd-tree-near-neighbors* [procedure] kd-tree-k-nearest-neighbors [procedure] kd-tree-slice [procedure] kd-tree-slice* [procedure] kd-tree-remove About this egg Version history Improvements to internal representation Initial release Copyright 2012 Ivan Raikov and the Okinawa Institute of Science and Technology. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. A full copy of the GPL license can be found at
{"url":"http://wiki.call-cc.org/eggref/4/spatial-trees","timestamp":"2014-04-16T22:23:12Z","content_type":null,"content_length":"8347","record_id":"<urn:uuid:c33e79c5-5448-4b9f-96ae-cfea1771bd21>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
divide polynomials with a ti-84 Author Message n3szn Posted: Thursday 28th of Dec 11:45 Hello math wizards, I need some urgent help. I have a set of math questions that I need to answer and I am hopelessly lost. I don’t know where to begin or how to go about and this paper is due next week. Kindly let me know if you are good in quadratic equations or if there is a good site which can assist me. From: St. Helens, UK kfir Posted: Friday 29th of Dec 11:14 This story sounds familiar to me. Even though I was great in algebra for several years, when I attended Intermediate algebra there were a lot of math topics that seemed so complicated . I remember I got a very bad grade when I took the test on divide polynomials with a ti-84. Now I don't have this issue anymore, I can solve anything quite easily , even linear equations and triangle similarity. I was smart that I didn't spend my money on a tutor, because I heard of Algebrator from a student . I have been using it since then whenever I found something From: egypt Gog Posted: Sunday 31st of Dec 07:58 Algebrator indeed is a very good software to help you learn math, sitting at home . You won’t just get the problem solved but the entire solution as well, that’s how concepts are built . And to score well in math, it’s important to have strong concepts. I would advise you to use this software if you want to finish your assignment on time. From: Austin, CIL Posted: Sunday 31st of Dec 20:21 http://www.www-mathtutor.com/factoring-polynomials-1.html and http://www.www-mathtutor.com/graphing-solution-sets-for-inequalities.html are a couple of authentic resources that give out the Algebrator. But, before placing the order, get to know what it offers and how is it different by reading the feedback online. From my personal experience, I can vouch that you can begin using Algebrator right away without any assistance since the tool is fully easy and very much self- explanatory . From: N 34 3 8 / W 118 14 MoonBuggy Posted: Monday 01st of Jan 14:54 I am a regular user of Algebrator. It not only helps me complete my homework faster, the detailed explanations given makes understanding the concepts easier. I suggest using it to help improve problem solving skills. From: Leeds, MichMoxon Posted: Tuesday 02nd of Jan 11:26 Life can be hard when one has to work along with their studies. Visit http://www.www-mathtutor.com/equations-as-functions.html, I am sure it will be of use to you.
{"url":"http://www.www-mathtutor.com/www-math-algebra/solving-inequalities/divide-polynomials-with-a-ti.html","timestamp":"2014-04-18T14:05:02Z","content_type":null,"content_length":"59945","record_id":"<urn:uuid:e690a3b1-280d-451d-8acb-fbcf912707e4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
West Englewood, NJ Math Tutor Find a West Englewood, NJ Math Tutor ...I've taken the MCATs myself twice scoring above 32 points combined. And I've completed two years of medical school before switching my focus to research and education. Over the past two years, I've been teaching Physics, Chemistry, Biology, Organic Chemistry, and Calculus at the College Level and preparing students for their MCATs, DATs, and GREs. 83 Subjects: including discrete math, SAT math, Java, statistics ...Sincerely, DheerajI taught Algebra 1 at the 9th grade level during my time as a teacher. I am very comfortable with all aspects of the curriculum and have accumulated a wide variety of resources addressing all levels of difficulty. I have seen first-hand the problems students have with differen... 26 Subjects: including trigonometry, algebra 1, algebra 2, calculus ...As an experienced teacher of high school and college level physics courses, I know what your teachers are looking for and I bring all the tools you'll need to succeed! Of course, a big part of physics is math, and I am experienced and well qualified to tutor math from elementary school up throug... 18 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel ...I tutored them in general chemistry, general physics, and other general science courses. I received excellent grades for my all science courses during my undergraduate years. Since then I have fallen in love with tutoring and it has become my passion. 37 Subjects: including calculus, linear algebra, algebra 1, algebra 2 I know first hand that science is one of the most interesting and exciting fields to be a part of. With a physics major and math minor under my belt, coupled with thorough experimental laboratory experience and wide exposure to the environment of physics after presenting research at the American Ph... 25 Subjects: including algebra 1, algebra 2, grammar, SAT math Related West Englewood, NJ Tutors West Englewood, NJ Accounting Tutors West Englewood, NJ ACT Tutors West Englewood, NJ Algebra Tutors West Englewood, NJ Algebra 2 Tutors West Englewood, NJ Calculus Tutors West Englewood, NJ Geometry Tutors West Englewood, NJ Math Tutors West Englewood, NJ Prealgebra Tutors West Englewood, NJ Precalculus Tutors West Englewood, NJ SAT Tutors West Englewood, NJ SAT Math Tutors West Englewood, NJ Science Tutors West Englewood, NJ Statistics Tutors West Englewood, NJ Trigonometry Tutors Nearby Cities With Math Tutor Allerton, NY Math Tutors Bogota, NJ Math Tutors Brookdale, NJ Math Tutors Delawanna, NJ Math Tutors Englewood Cliffs Math Tutors Fort George, NY Math Tutors Hamilton Grange, NY Math Tutors Hillside, NY Math Tutors Inwood Finance, NY Math Tutors Morsemere, NJ Math Tutors Packanack Lake, NJ Math Tutors Peoples Park, NJ Math Tutors Pines Lake, NJ Math Tutors Radburn, NJ Math Tutors Teaneck Math Tutors
{"url":"http://www.purplemath.com/West_Englewood_NJ_Math_tutors.php","timestamp":"2014-04-19T23:47:54Z","content_type":null,"content_length":"24250","record_id":"<urn:uuid:6bdce10f-9228-4c86-9e37-ecf1127eca73>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
The Complexity of Robot Motion Planning Results 1 - 10 of 382 , 1998 "... the rapidly developing systematic connections between FPT and useful heuristic algorithms | a new and exciting bridge between the theory of computing and computing in practice. The organizers of the seminar strongly believe that knowledge of parameterized complexity techniques and results belongs ..." Cited by 894 (63 self) Add to MetaCart the rapidly developing systematic connections between FPT and useful heuristic algorithms | a new and exciting bridge between the theory of computing and computing in practice. The organizers of the seminar strongly believe that knowledge of parameterized complexity techniques and results belongs into the toolkit of every algorithm designer. The purpose of the seminar was to bring together leading experts from all over the world, and from the diverse areas of computer science that have been attracted to this new framework. The seminar was intended as the rst larger international meeting with a specic focus on parameterized complexity, and it hopefully serves as a driving force in the development of the eld. 1 We had 49 participants from Australia, Canada, India, Israel, New Zealand, USA, and various European countries. During the workshop 25 lectures were given. Moreover, one night session was devoted to open problems and Thursday was basically used for problem - IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION , 1996 "... A new motion planning method for robots in static workspaces is presented. This method proceeds in two phases: a learning phase and a query phase. In the learning phase, a probabilistic roadmap is constructed and stored as a graph whose nodes correspond to collision-free configurations and whose edg ..." Cited by 890 (113 self) Add to MetaCart A new motion planning method for robots in static workspaces is presented. This method proceeds in two phases: a learning phase and a query phase. In the learning phase, a probabilistic roadmap is constructed and stored as a graph whose nodes correspond to collision-free configurations and whose edges correspond to feasible paths between these configurations. These paths are computed using a simple and fast local planner. In the query phase, any given start and goal configurations of the robot are connected to two nodes of the roadmap; the roadmap is then searched for a path joining these two nodes. The method is general and easy to implement. It can be applied to virtually any type of holonomic robot. It requires selecting certain parameters (e.g., the duration of the learning phase) whose values depend on the scene, that is the robot and its workspace. But these values turn out to be relatively easy to choose, Increased efficiency can also be achieved by tailoring some components of the method (e.g., the local planner) to the considered robots. In this paper the method is applied to planar articulated robots with many degrees of freedom. Experimental results show that path planning can be done in a fraction of a second on a contemporary workstation (=150 MIPS), after learning for relatively short periods of time (a few dozen seconds) , 1998 "... This article describes the software architecture of an autonomous, interactive tour-guide robot. It presents a modular and distributed software architecture, which integrates localization, mapping, collision avoidance, planning, and various modules concerned with user interaction and Web-based telep ..." Cited by 271 (73 self) Add to MetaCart This article describes the software architecture of an autonomous, interactive tour-guide robot. It presents a modular and distributed software architecture, which integrates localization, mapping, collision avoidance, planning, and various modules concerned with user interaction and Web-based telepresence. At its heart, the software approach relies on probabilistic computation, on-line learning, and any-time algorithms. It enables robots to operate safely, reliably, and at high speeds in highly dynamic environments, and does not require any modifications of the environment to aid the robot's operation. Special emphasis is placed on the design of interactive capabilities that appeal to people's intuition. The interface provides new means for human-robot interaction with crowds of people in public places, and it also provides people all around the world with the ability to establish a "virtual telepresence" using the Web. To illustrate our approach, results are reported obtained in mid-... - IEEE fins. Auto. Control , 1993 "... Abstract--In this paper, we investigate methods for steering systems with nonholonomic constraints between arbitrary configurations. Early work by Brockett derives the optimal controls for a set of canonical systems in which the tangent space to the configuration manifold is spanned by the input vec ..." Cited by 251 (15 self) Add to MetaCart Abstract--In this paper, we investigate methods for steering systems with nonholonomic constraints between arbitrary configurations. Early work by Brockett derives the optimal controls for a set of canonical systems in which the tangent space to the configuration manifold is spanned by the input vector fields and their first order Lie brackets. Using Brockett’s result as motivation, we derive suboptimal trajectories for systems which are not in canonical form and consider systems in which it takes more than one level of bracketing to achieve controllability. These trajectories use sinusoids at integrally related frequencies to achieve motion at a given bracketing level. We define a class of systems which can be steered using sinusoids (chained systems) and give conditions under which a class of two-input systems can be converted into this form. I. , 1992 "... In manufacturing, it is often necessary to orient parts prior to packing or assembly. We say that a planar part is polygonal if its convex hull is a polygon. We consider the following problem: given a list of n vertices describing a polygonal part whose initial orientation is unknown, find the short ..." Cited by 207 (41 self) Add to MetaCart In manufacturing, it is often necessary to orient parts prior to packing or assembly. We say that a planar part is polygonal if its convex hull is a polygon. We consider the following problem: given a list of n vertices describing a polygonal part whose initial orientation is unknown, find the shortest sequence of mechanical gripper actions that is guaranteed to orient the part up to symmetry in its convex hull. We show that such a sequence exists for any polygonal part by giving an O#n log n# algorithm for finding the sequence. Since the gripper actions do not require feedback, this result implies that any polygonal part can be oriented without sensors. - In IEEE Int. Conf. Robot. & Autom , 2000 "... This paper describes a new approach to probabilistic roadmap planners (PRMs). The overall theme of the algorithm, called Lazy PRM, is to minimize the number of collision checks performed during planning and hence minimize the running time of the planner. Our algorithm builds a roadmap in the configu ..." Cited by 189 (14 self) Add to MetaCart This paper describes a new approach to probabilistic roadmap planners (PRMs). The overall theme of the algorithm, called Lazy PRM, is to minimize the number of collision checks performed during planning and hence minimize the running time of the planner. Our algorithm builds a roadmap in the configuration space, whose nodes are the user-defined initial and goal configurations and a number of randomly generated nodes. Neighboring nodes are connected by edges representing paths between the nodes. In contrast with PRMs, our planner initially assumes that all nodes and edges in the roadmap are collision-free, and searches the roadmap at hand for a shortest path between the initial and the goal node. The nodes and edges along the path are then checked for collision. If a collision with the obstacles occurs, the corresponding nodes and edges are removed from the roadmap. Our planner either finds a new shortest path, or first updates the roadmap with new nodes and edges, and then searches for a shortest path. The above process is repeated until a collision-free path is returned. - AI Magazine vol "... This article describes a methodology for programming robots known as probabilistic robotics. The probabilistic paradigm pays tribute to the inherent uncertainty in robot perception, relying on explicit representations of uncertainty when determining what to do. This article surveys some of the progr ..." Cited by 166 (9 self) Add to MetaCart This article describes a methodology for programming robots known as probabilistic robotics. The probabilistic paradigm pays tribute to the inherent uncertainty in robot perception, relying on explicit representations of uncertainty when determining what to do. This article surveys some of the progress in the field, using in-depth examples to illustrate some of the nuts and bolts of the basic approach. Our central conjecture is that the probabilistic approach to robotics scales better to complex real-world applications than approaches that ignore a robot’s uncertainty. 1 , 2000 "... This paper describes Minerva, an interactive tour-guide robot that was successfully deployed in a Smithsonian museum. Minerva's software is pervasively probabilistic, relying on explicit representations of uncertainty in perception and control. This article describes ..." Cited by 153 (42 self) Add to MetaCart This paper describes Minerva, an interactive tour-guide robot that was successfully deployed in a Smithsonian museum. Minerva's software is pervasively probabilistic, relying on explicit representations of uncertainty in perception and control. This article describes , 1994 "... This thesis addresses situated, embodied agents interacting in complex domains. It focuses on two problems: 1) synthesis and analysis of intelligent group behavior, and 2) learning in complex group environments. Basic behaviors, control laws that cluster constraints to achieve particular goals and h ..." Cited by 146 (20 self) Add to MetaCart This thesis addresses situated, embodied agents interacting in complex domains. It focuses on two problems: 1) synthesis and analysis of intelligent group behavior, and 2) learning in complex group environments. Basic behaviors, control laws that cluster constraints to achieve particular goals and have the appropriate compositional properties, are proposed as effective primitives for control and learning. The thesis describes the process of selecting such basic behaviors, formally specifying them, algorithmically implementing them, and empirically evaluating them. All of the proposed ideas are validated with a group of up to 20 mobile robots using a basic behavior set consisting of: safe--wandering, following, aggregation, dispersion, and homing. The set of basic behaviors acts as a substrate for achieving more complex high--level goals and tasks. Two behavior combination operators are introduced, and verified by combining subsets of the above basic behavior set to implement collective flocking, foraging, and docking. A methodology is introduced for automatically constructing higher--level behaviors - Robotics and Autonomous Systems , 1994 "... The problem of synthesizing and analyzing collective autonomous agents has only recently begun to be practically studied by the robotics community. This paper overviews the most prominent directions of research, defines key terms, and summarizes the main issues. Finally, it briefly describes our app ..." Cited by 123 (14 self) Add to MetaCart The problem of synthesizing and analyzing collective autonomous agents has only recently begun to be practically studied by the robotics community. This paper overviews the most prominent directions of research, defines key terms, and summarizes the main issues. Finally, it briefly describes our approach to controlling group behavior and its relation to the field as a whole.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=194","timestamp":"2014-04-17T16:17:08Z","content_type":null,"content_length":"38815","record_id":"<urn:uuid:2da65fdc-fa56-42a3-bc8c-0e57d1336684>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Draw the graphs of f(x) =cos (cos^-1 x) and f(x) = cos^-1(co Draw the graphs of f(x) =cos (cos^-1 x) and f(x) = cos^-1(cos x)? I know it's not really possible to draw the graph on here but try to explain what the graph looks like if you can. I know how to do a regular cosine graph but not an inverted cosine graph. I really need the help, thanks so much! Re: Draw the graphs of f(x) =cos (cos^-1 x) and f(x) = cos^- maddy315 wrote:Draw the graphs of f(x) =cos (cos^-1 x) and f(x) = cos^-1(cos x)? You might want to start by reviewing the definition of the inverse cosine, and then think about the simplified (or, perhaps, "evaluated") form of the functions. What happens when you take the cosine of the inverse cosine of x? What is the result? And so forth.
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=12&t=2216","timestamp":"2014-04-18T19:04:40Z","content_type":null,"content_length":"18554","record_id":"<urn:uuid:9841dcb0-a21e-4a71-963f-9f35b4e0df16>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
GED® Math | Math Approach Home | Writing | Reading | Social Studies | Math | Science GED® MATHEMATICS Before we get into the lessons, let's take a look at some important tips for taking the math section of the GED® test. As with all sections of the GED® test, remember to: • Pace yourself • Answer every question • Eliminate answer choices whenever you can And, above all, • Relax! If you ever feel like you are struggling, relax. Be realistic. Be patient enough to get it right, and focused enough that you work out as many problems as you can to the best of your ability. BEFORE THE TEST 1. Read and understand the directions: The Mathematics test consists of multiple-choice questions intended to measure general mathematics skills and problem-solving ability. The questions are based on short readings that often include a graph, chart, or figure. Work carefully, but do not spend too much time on any one question. Be sure to answer every question. Only some of the questions will require you to use a formula. Not all the formulas given will be Some questions contain more information than you will need to solve the problem; other questions do not give enough information. If the question does not give enough information to solve the problem, the correct answer choice is “Not enough information is given.” [Interpret this piece of information as meaning that you need to focus on the key elements necessary for calculating the problem. Rarely is there information that you don’t need. Even rarer is a problem that does not contain enough information for you to solve it.] Part I: Calculators are allowed. Part II: Calculators are not allowed. Do not use the test booklet as scratch paper or as an answer sheet. The test administrator will give you blank paper for your calculations. Record your answers on the separate answer sheet provided. Be sure all information is properly recorded on the answer sheet. To record your answers, fill in the numbered circle on the answer sheet that corresponds with the answer you selected for each question in the test booklet. For Example: If a grocery bill totaling $15.75 is paid with a $20.00 bill, how much change should be returned? (1) $5.25 (2) $4.75 (3) $4.25 (4) $3.75 (5) $3.25 The correct answer is “$4.25”. Therefore, Answer 3 would be filled in on the answer sheet. Do not rest the point of your pencil on the answer sheet while you are considering your answer. Make no stray or unnecessary marks. If you change an answer, erase your first mark completely. Mark only one answer for each question; multiple answers will be scored as incorrect. Do not fold or crease your answer sheet. All test materials must be returned to the test administrator. 2. Know the formulas Even though you will have a sheet of formulas to refer to, you should know these formulas well beforehand. Your goal should be to memorize as many formulas as possible to cut down on time spent looking for or figuring out a formula during the test. Why else? -Problems often require some insight and adaptation beyond just using the formula in front of you. -Knowing formulas by heart will save time Start with the Geometry formulas (Area, Circumference, Volume, Pythagorean Theorem.) The best way to memorize the formulas is by applying them to practice problems. Additional important formulas will also be highlighted in the lessons. DURING THE TEST 1. Skim the directions at the beginning From this course and your many practice sessions, you will already know the directions. You should still read the directions within the test, but the longer directions at the beginning are basically the same as what you just read above. Know what to do, and you won't have to waste time! 2. Complete the picture If a diagram is not fully labeled with the numbers that are contained in the question, label them yourself in the appropriate places. If a problem describes a shape or form but does not provide a picture, draw it and label the dimensions. The perimeter of a square flower bed is 12 feet. What is the area of the flower bed in square feet? A) 3 B) 12 C) 24 D) 9 E) There is not enough information to solve the problem. First, draw your square. You see that it has four sides of the same length. (Your drawing may not have all sides exactly the same length, but by simply sketching it, you have captured the idea.) Since it has four equal sides, the perimeter must be divided equally among those sides. 12/4 = 3. Now label those sides. The area can be represented by imagining grid lines. You don’t have to draw the lines. Simply having a square in front of you provides a much clearer framework by which to consider the problem. Supplying a picture for the problem also lets you know if enough information has been provided. 3. Narrow down your answer choices For multiple-choice format questions, when finding a distinct answer escapes you or is too time-consuming, you can use the strategy of eliminating answer choices. a. Eliminate impossible choices Eliminate answer choices that obviously don’t fit. A 5-foot ladder is leaning against a 20-foot wall. The bottom end of the ladder is 3 feet from the wall. How many feet above the ground does the ladder touch the wall? A) 0.7 B) 2.5 C) 4 D) 7.5 E) 16 A 5-foot ladder leaning against a wall does not touch the wall at a height of greater than five feet. So you can immediately eliminate D) and E) b. Eliminate answer elements An answer choice can be eliminated on the basis of only part of it being wrong. Which of the following pairs of points both lie on the line whose equation is 3x-y= 2? A) (3,-2) and (1,5) B) (2,4) and (1,5) C) (2,-2) and (1,5) D) (3,7) and (3,-2) E) (2,4) and (3,7) You start by plugging in (3,-2) from Answer A. It doesn't work, so you can eliminate it. (Don't bother trying the second coordinate pair. Even if it works, you can't choose A.) Next, try (2,4) from B. It works. Circle it. Also notice that it is part of E, so circle it there, too. Next, try (2,-2) from C. It does not work. Cross C out. You are left with B and E, so you must try (1,5) or (3,7). You only need to try one of them. If it works, that's your answer. If not, the other choice is the right answer because there are no other choices left. 4. Estimate and round off Again, this applies only to multiple-choice format questions. You can approximate and get close enough to identify the right answer without spending lots of time working out an exact figure. A daredevil is shot out of a cannon a distance of 55 meters. His assistant’s stopwatch times him as being airborne for 12.5 seconds. At what speed did he travel? A) 1.5 B) 3.2 C) 4.4 D) 5.6 E) 6 You can safely approximate, for example, that 12 goes into 50 at least 4 times and less than 5 times, so the answer is most likely C. Back: Math Introduction | Next: Math Lesson 1A Signup! It's Free! | Language Arts | Reading | Social Studies | Math | Science
{"url":"http://www.gedforfree.com/free-ged-course/math/math-approach.html","timestamp":"2014-04-21T02:03:07Z","content_type":null,"content_length":"19862","record_id":"<urn:uuid:095ce565-172a-41bd-9544-b1d10de5abb0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
An Introduction to Machine Vision James Matthews Machine vision is an incredibly difficult task - a task that seems relatively trivial to humans is infinitely complex for computers to perform. This essay should provide a simple introduction to computer vision, and the sort of obstacles that have to be overcome. Data size Why is this important? Well, the first consideration/problem of vision systems is the sheer size of the data it has to deal with. Doing the math, we have 640x480 pixels to begin with (307,200). This is multiplied by three to account for the red, green and blue (RGB) data (921,600). So, with just one image we are looking at 900K of data! So, if we are looking at video of this resolution we would be dealing with 23Mb/sec (or 27Mb/sec in the US) of information! The solution to this is fairly obvious - we just cannot deal with this sort of resolution at this speed at this colour depth! Most vision systems will work with greyscale video with a resolution of 200x150. This greatly reduces the data rate - from 23Mb/sec to 0.72Mb/sec! Most modern day computer can manage this sort of rate very easily. Of course, receiving the data is the smallest problem that vision systems face - it is processing it that takes the time. So how can we simplify the data down further? I'll present two simple methods - edge detection and prototyping. Edge Detection Most vision systems will be determining where and what something is, and for the most part, by detecting the edges of the various shapes in the image should be sufficient to help us on our way. Let us look at two edge detections of our picture: The left picture is generated by Adobe Photoshop's "Edge Detection" filter, and the right picture is generated by Generation5's ED256 program. You can see that both programs picked out the same features, although Photoshop has done a better job of accentuating more prominent features. The process of edge detection is surprisingly simple. You merely look for large changes in intensity between the pixel you are studying and the surrounding pixels. This is achieved by using a filter matrix. The two most common edge detecion matrices are called the Laplacian and Laplacian Approximation matrices. I'll use the Laplacian matrix here since the number are all integers. The Laplacian matrix looks like this: 1 -8 1 Now, let us imagine we are looking at a pixel that is in a region bordering a black-to-white block. So the pixel and its surrounding 8 neighbours would have the following values: Where 255 is white and 0 is black. We then multiply the corresponding values with each other: 255 -2040 255 We then add all of the values together and take the absolute value - giving us the value of 765. Now, if this value is above our threshold (normalling around 20-30, so this is way above the threshold!) then we say that point denotes an edge. Try the above calculation with a matrix that consists of only 255. Experiment with the ED256 program which allows you to play with either the Laplacian or Laplacian Approxmation matrices, even create your own. Prototyping came about through a data classification technique called competitive learning. Competition learning is employed throughout different fields in AI, especially in neural networks or more specificially self-organizing networks. Competitive learning is meant to create x-number of prototypes given a data set. These prototypes are meant to be approximations of groups of data within the Somebody thought it would be neat to apply this sort of technique to an image to see if there are data patterns within an image. Obviously it is different for every image, but on the whole, areas of the image can be classified very well using this techinque. Here a more specific overview of the algorithm: Prototyping Algorithm 1. Take x samples of the image (x is a high number like 1000). In our case, these samples would consist of small region of the image (perhaps 15x15 pixels). 2. Create y number of prototypes (y is normally a smaller number like 9). Again, these prototypes would consist of 15x15 groups of pixels. 3. Initialize these prototypes to random values (noisy images). 4. Cycle through our samples, and try and find the prototype that is closest to the sample. Now, alter our prototype to be a little closer to the sample. This is normally done by a weighted average. ED256 brings the chosen prototype 10% closer to the sample. 5. Do this many times - around 5000. You will find that the prototypes now actually represent groups of pixels that are predominate in the image. 6. Now, you can create a simpler image only made up of y colours by classifying each pixel according to the prototype it is closest too. Here is our picture in greyscale and another that has been fed through the prototyping algorithm built into ED256. We use greyscale to make prototyping a lot simpler. I've also enlarged the prototypes and their corresponding colours to help you visualize the process: Notice how the green corresponds to pixels that have a predominantly white surroundings, most are red because they are similar to the "brick" prototype. For very dark areas (look at the far right window frame) they are classified as dark red. For another example, look at this picture of a F-22 Raptor. Notice how the red corresponds to the edges on the right wing (and the left too for the some reason!) and the dark green for the left trailing edges/intakes and right vertical tail. Dark blue is for horizontal edges, purple for the dark aircraft body and black for the land. How do these techniques really help machine vision systems? It all boils down to simplifying the data that the computer has to deal with. Less data, the more time can be spent extrapolating features. The trade-off is between data size and retaining the features within the image. For example, with the prototyping example, we would have no trouble spotting the buildings in the picture, but the tree and the car are a lot harder to differentiate. The same applies with a computer. In general, edge detection helps when you need to fit a model to a picture - for example, spotting people in a scene. Prototyping helps to classify images, by detecting their prominent features. Prototyping has a lot of uses since it can "spot" traits of an image that humans do not. Submitted: 11/10/2000 Article content copyright © James Matthews, 2000.
{"url":"http://www.generation5.org/content/2000/vision.asp","timestamp":"2014-04-19T14:29:32Z","content_type":null,"content_length":"14649","record_id":"<urn:uuid:06864916-632b-4683-b62b-80cfb36c6bb8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: conservation results FOM: November 1 - November 30, 1999 [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] [FOM Postings] [FOM Home] FOM: conservation results This message is a follow-up to Harvey Friedman's 9/29 posting, FOM: 61: Finitist proofs of conservation Harvey's posting contains a smooth an elegant way of turning a number of model-theoretic proofs of conservation results into finitary, syntactic arguments. In response to a personal request from Harvey, I wrote a note discussing the relationship between model-theoretic and proof-theoretic proofs of conservation results. At Steve Simpson's prompting, I have reworked my response into this posting, in the hope that others may find it interesting or useful. When I say "conservation result," I have in mind a setup that involves two theories, T_1 and T_2, and a class of sentences Gamma in the language of T_1. The theorem then states that whenever T_1 proves some sentence phi in Gamma, then T_2 proves it as well, or, possibly, a related "translation" Results like these can be interesting for a number of reasons. Typically, the theory T_1 formalizes a type of mathematical reasoning that is, prima facie, stronger than that of T_2, in which case the result gives a "reduction" of the stronger theory to the weaker one. For example, one can reduce various kinds of second-order theories (that is, theories in a second-order language) to first-order ones, first-order theories to quantifier-free ones, infinitary theories to finitary ones, impredicative theories to predicative ones, classical theories to constructive ones, nonstandard theories to standard ones, and so on. In short, insofar as T_1 "captures" a certain type of mathematical argumentation (in the sense that ordinary mathematical arguments of a certain kind can be formalized in T_1, in a natural way), a corresponding conservation result shows that this type of reasoning can be reduced to / understood in terms of / interpreted in terms of / justified relative to certain others. Results like these are often surprising and unexpected, since in many cases T_1 and T_2 look very different. I have argued in a talk called "Semantic methods in proof theory," (on my web page, http://www.andrew.cmu.edu/~avigad), that most results in proof theory, when stated in their strongest form, can be viewed as conservation results. For example, one can construe the results of an ordinal analysis as determining, finitarily, that a theory T_1 is conservative over a very weak theory together with principles involving induction or recursion on ordinal notations. (See the discussion in the paper "Ordinal analysis without proofs," also on my web page; I should note that Lev Beklemishev has approached these issues from a similar point of view, but has come up with characterizations of ordinal analysis that are different from mine.) For another example, one obtains combinatorial independences by finding finitary proofs that the theory T_1 is conservative over weak theories together with certain combinatorial principles. Here I will focus on conservation results in the ordinary sense, restricting my attention mainly to cases where T_1 and T_2 are classical theories. One can find much more information on reductions like these (as well as reductions of classical theories to constructive ones) in a trio of articles found together in JSL 53 (2), 1988, 337-384: Sieg, Wilfried, "Hilbert's program sixty years later" Simpson, Stephen, "Partial realizations of Hilbert's program" Feferman, Solomon, "Hilbert's program relativized: proof-theoretical and foundational reductions" An interesting thing about the conservation results described in these articles is that some of them were first discovered by model-theoretic methods, others by proof-theoretic methods. Neither camp can claim clear superiority. Of course, model-theoretic proofs rely on more semantic intuitions. Advantages to the proof-theoretic arguments are that (1) they can be carried out in a weak metatheory, and (2) they involve explicit translations of the proofs in T_1 to T_2, with bounds on the increase in length of proof. As far as (2) is concerned, what typically happens is that either there is an interpretation of T_1 in T_2, in which there is a low-degree polynomial bound on the increase in length of proof, or there is an "essential" use of cut-elimination/normalization, in which case the best bound involves an iterated stack-of twos. The brunt of my message to Harvey was basically this: I don't know of any model-theoretic proof of a conservation result that hasn't been duplicated using proof-theoretic methods as well; and where there isn't a direct interpretation, the superexponential increase in length of proof can be shown to be necessary. To illustrate, I've listed some of my favorite conservation results, and indicated whether or not the first proofs were model-theoretic or proof-theoretic. Keep in mind that I am leaving out a lot -- I haven't even mentioned reductions of classical theories to constructive ones, ordinal analysis, functional interpretations, conservation results for fragments of arithmetic, nonstandard theories, etc. The following theories are mostly concerned with standard, classical representations of Here is the list: WKL_0 (and hence RCA_0 and ISigma_1) over PRA Sigma^1_1-AC_0 (and hence ACA_0) over PA Sigma^1_1-AC over (Pi^0_1-CA)^<epsilon_0 ATR_0 over IR or ID^_<omega Pi^1_1-CA_0 over ID_<omega NBG (von-Neumann, Bernays, Goedel set theory) over ZFC The conservation results hold for formulas in the common language, plus a little more if you read them the right way. For example, the second result holds for Pi^1_2-sentences if you say what it means for such a sentence to be provable in PA (roughly, add free set variables, and ask for an explicit formula witnessing the existential quantifier). All these results can be obtained by model-theoretic methods, but all of them can also be obtained using cut-elimination arguments, in which case the proofs are easily shown to be finitary (taking place in EFA*, aka IDelta_0 + superexponentiation). To the best of my knowledge, the credits are as follows: Mints, Takeuti, and Parsons got ISigma_1 over PRA independently (Parsons used a Dialectica interpretation plus normalization, the other two cut elimination or normalization). RCA_0 is easily interpreted in ISigma_1 (hence without a big increase in length of proofs). WKL_0 over PRA is due to Friedman, using a model-theoretic argument. Later, Sieg did this using cut elimination, and Kohlenbach did it with a Dialectica interpretation (and normalization). Also after Friedman's proof, Harrington gave a forcing argument for the Pi^1_1 conservation of WKL_0 over RCA_0 (yielding Friedman's result as a corollary). Independently, Hajek and I turned this into an interpretation of WKL_0 in RCA_0, so there is no big increase in the length of proofs. ACA_0 over PA and NBG over ZF are trivial model-theoretic arguments, and historically, the latter came first. A number of people seem to have come up with the model-theoretic argument independently -- see the references in Fraenkel, Bar-Hillel, and L\'evy's *Foundations of Set Theory*. I believe Schoenfield had the first syntactic proof, using a method of eliminating special constants. Arguments using cut elimination are also easy, probably first noted by Feferman and Sieg. Sigma^1_1-AC_0 over PA was obtained by Barwise and Schlipf using recursive saturation, but the result was also implicit in Friedman's work (see the discussion of Sigma^1_1-AC). Sieg duplicated the result using cut-elimination, Feferman with a Dialectica interpretation (and Sigma^1_1-AC over (Pi^0_1-CA)^<epsilon0 is due to Friedman. Sieg and Feferman later got it with cut-elimination, and Feferman with a Dialectica interpretation. Here there is no big increase in lengths of proofs. (Note that Sigma^1_1-AC is not finitely axiomatizable, so there are short "local" interpretations.) ATR_0 over IR is in a paper by Friedman, McAloon, and Simpson, and the proof is model-theoretic. Jaeger got a proof-theoretic version using cut-elimination and other methods. ATR_0 over ID^<omega then follows from other known reductions. I gave a direct proof of this last conservation result by noting that (ATR) is equivalent to a second-order version of the fixed-point axioms, after which an easy model-theoretic or cut-elimination argument finishes it off. Pi^1_1-CA_0 over ID<omega similarly follows by an easy model-theoretic or cut-elimination argument, given the Kleene's analysis of Pi^1_1 sets in terms of inductive definitions. Kreisel was probably the first to consider formal theories of inductive definitions; this particular result is due to Unless otherwise noted, in all cases there is necessarily a superexponential increase in the length of proofs. The easiest way to show this is due to Solovay: use his method of "shortening of cuts" to get short proofs of the consistency of big pieces of the second theory in the first theory. For example, ACA_0 has proofs of the consistency of ISigma_{2_n} that are polynomial in n, where 2_n is a stack of n 2's. One can be more specific regarding the measure of length one is using, and get the upper and lower bounds to coincide asymptotically -- see Pudlak's article in the Handbook of Proof Theory. (Incidentally, I like to think of Herbrand's theorem as being a kind of conservation result for logic with quantifiers over logic without. Lower bounds here (I think) are due to Statman and Orevkov independently; see Schwichtenberg and Troelstra's book on Proof Theory for necessary increases in length of proofs in pure [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] [FOM Postings] [FOM Home]
{"url":"http://www.personal.psu.edu/t20/fom/postings/9911/msg00005.html","timestamp":"2014-04-19T00:15:20Z","content_type":null,"content_length":"12144","record_id":"<urn:uuid:ee11a482-c405-4fc8-b3b1-a3de4d7f637b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Percentage Calculators Percentage Calculator makes it easy to calculate the of any number. Just enter the you want to find in the first box, then the number you want to find the of in the second box and click Enter. You can also find what one number is as a of another by picking this option from the drag down menu and clicking Enter. Calculating percentages has never been easier! Percentage Change Calculator You can also use our above calculator to work out percentage changes . Our percentage change calculator does all the work for you! Just select 'as a percentage of' from the drop down menu. Then input your number after the percentage change in the first box and the original number in the second box and click on Enter. Now simply take the result and subtract 100% and hey presto you have your answer! For example 120 as a of 100 = 120% -100% = 20%, so the percentage change from 100 to 120 is 20%! Uses for Percentage Change People use percentage change for a variety of reasons. Some of these may be: • Percentage of weight lost on a diet • Comparing crowd numbers for an event from day to day • Percentage gained or lost in an investment • Comparing sports statistics The amount of change between two numbers is the difference, but just a difference does not mean much to many people. It has more of an impact when you say, “There was a 50 percent increase in attendance at the concert compared to last year,” versus when you say, “There were 100 more people at the concert this year than last.” VAT Calculator VAT Calculator (Value Added Tax Calculator) enables you to work out prices without VAT and even how much the VAT you are paying on an item. Just enter the purchase price of any item, select the VAT percentage from the drop down menu, then click ENTER. Calculating VAT is now easy – start using the online VAT calculator now! Body Fat Calculator Our Body Fat Calculator makes it ultra simple to calculate your body fat percentage. All you have to do to calculate body fat percentage is enter in your current weight in pounds, your waist size measured in inches, and if you are a female then your wrist size, forearm size, and hips size then press the CALCULATE button. Good luck in calculating your body fat percentage with our Body Fat The American Council on Exercise recommends the following percentages: Description Women Men Essential fat 10–13% 2–5% Athletes 14-20% 6–13% Fitness 21-24% 14–17% Just "Average" 25-31% 18-24% Excess fat 32%+ 25%+ how-to-work-out-percentages.co.uk is a simple site with one aim - to show you how to workout percentages,easily. This is an invaluable skill - and is critical if you work in a business of any type or simply if you want to know if you are getting a good deal on that new TV or Car.
{"url":"http://www.how-to-work-out-percentages.co.uk/Percentage%20calculator.html","timestamp":"2014-04-16T15:59:14Z","content_type":null,"content_length":"33876","record_id":"<urn:uuid:6b9d63c8-b11f-4781-aa1b-563195a1b35f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
In the figure above, the point on segment PQ that is twice a Author Message In the figure above, the point on segment PQ that is twice a [#permalink] 18 Sep 2012, 01:51 Expert's post Bunuel C Math Expert D Joined: 02 Sep 2009 E Posts: 17327 Difficulty: Followers: 2876 15% (low) Question Stats: (01:53) correct 23% (01:15) based on 221 sessions Re: In the figure above, the point on segment PQ that is twice a [#permalink] 18 Sep 2012, 01:52 Expert's post In the figure above, the point on segment PQ that is twice as far from P as from Q is (A) (3,1) (B) (2,1) (C) (2,-1) (D) (1.5,0.5) (E) (1,0) Bunuel Options A or C cannot be correct answer since these points aren't even on segment PQ. E (1,0) is clearly closer to P, so it's also out, D is right in the middle of the segment, so only option B is left. Math Expert Answer: B. Joined: 02 Sep 2009 Posts: 17327 NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! Followers: 2876 PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Current Student Re: In the figure above, the point on segment PQ that is twice a [#permalink] 18 Sep 2012, 02:19 Joined: 25 Jun 2012 2 Posts: 71 This post received Location: India Ans (2,1) WE: General Management (Energy and Utilities) making a graph solving it we get (2,1) Followers: 2 Kudos [?]: 30 [2] , given: Re: In the figure above, the point on segment PQ that is twice a [#permalink] 18 Sep 2012, 05:30 This post received Bunuel wrote: Joined: 12 Mar 2012 In the figure above, the point on segment PQ that is twice as far from P as from Q is Posts: 171 (A) (3,1) Location: India (B) (2,1) Concentration: Technology, General Management (C) (2,-1) GMAT Date: 07-23-2012 (D) (1.5,0.5) WE: Programming (E) (1,0) The question ask to divide the line PQ into 2:1 ratio and find the point. Followers: 0 By symmetry, the line segment at x-axis (1,0) will be divided in ratio 1:2. Kudos [?]: 30 [1] , given: 4 Similarly, at (2,1) line will be divided in ration 2:1 Hence B FOCUS..this is all I need! Re: In the figure above, the point on segment PQ that is twice a [#permalink] 18 Sep 2012, 09:32 This post received Bunuel wrote: The Official Guide for GMAT® Review, 13th Edition - Quantitative Questions Project In the figure above, the point on segment PQ that is twice as far from P as from Q is (A) (3,1) (B) (2,1) (C) (2,-1) (D) (1.5,0.5) (E) (1,0) Joined: 02 Jun 2011 Practice Questions Posts: 115 Question: 43 Page: 158 Followers: 0 Difficulty: 600 Kudos [?]: 20 [1] , given: 5 GMAT Club is introducing a new project: The Official Guide for GMAT® Review, 13th Edition - Quantitative Questions Project Each week we'll be posting several questions from The Official Guide for GMAT® Review, 13th Edition and then after couple of days we'll provide Official Answer (OA) to them along with a slution. We'll be glad if you participate in development of this project: 1. Please provide your solutions to the questions; 2. Please vote for the best solutions by pressing Kudos button; 3. Please vote for the questions themselves by pressing Kudos button; 4. Please share your views on difficulty level of the questions, so that we have most precise evaluation. Thank you! By looking at the figure it comes out that option "B" (2,1) is the right choice. Re: In the figure above, the point on segment PQ that is twice a [#permalink] 21 Sep 2012, 02:18 Expert's post In the figure above, the point on segment PQ that is twice as far from P as from Q is (A) (3,1) (B) (2,1) (C) (2,-1) (D) (1.5,0.5) (E) (1,0) Options A or C cannot be correct answer since these points aren't even on segment PQ. E (1,0) is clearly closer to P, so it's also out, D is right in the middle of the segment, so only option B is left. Bunuel Answer: B. Math Expert Kudos points given to everyone with correct Joined: 02 Sep 2009 solution Posts: 17327 . Let me know if I missed someone. Followers: 2876 _________________ NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Director Re: In the figure above, the point on segment PQ that is twice a [#permalink] 26 Oct 2012, 02:25 Status: Gonna rock this twice as far from P as from Q This confused me..I thought teh Qs is asking for the midpoint.. Joined: 22 Jul 2012 Could somebody explain me what it is asking.. Posts: 551 and thanks bunuel for POE method. but how would u solve this algebraically? Location: India GMAT 1: 640 Q43 V34 GMAT 2: 630 Q47 V29 hope is a good thing, maybe the best of things. And no good thing ever dies. WE: Information Technology Who says you need a 700 ?Check this out : http://gmatclub.com/forum/who-says-you-need-a-149706.html#p1201595 (Computer Software) My GMAT Journey : end-of-my-gmat-journey-149328.html#p1197992 Followers: 2 Kudos [?]: 22 [0], given: Re: In the figure above, the point on segment PQ that is twice a [#permalink] 21 Dec 2012, 14:14 Sachin9 wrote: megafan twice as far from P as from Q Manager This confused me..I thought teh Qs is asking for the midpoint.. Joined: 28 May 2009 Could somebody explain me what it is asking.. Posts: 157 and thanks bunuel for POE method. but how would u solve this algebraically? Followers: 4 I concur. I think it would be helpful if we can solve this algebraically using the given y-coordinate and point Q, rather than elimination — similar to Bunel's approach to this problem. Kudos [?]: 68 [0], given: 89 New to GMAT Club? Start here Kindle Flashcards - Quant (Official Gmat Club, miguemick's Quant topics), and Verbal (Official Gmat Club) - in Kindle (azw3) format Re: In the figure above, the point on segment PQ that is twice a [#permalink] 22 Dec 2012, 00:49 Status: Gonna rock this time!!! Please suggest a algebraic approach, Bunuel! Joined: 22 Jul 2012 _________________ Posts: 551 hope is a good thing, maybe the best of things. And no good thing ever dies. Location: India Who says you need a 700 ?Check this out : http://gmatclub.com/forum/who-says-you-need-a-149706.html#p1201595 GMAT 1: 640 Q43 V34 My GMAT Journey : end-of-my-gmat-journey-149328.html#p1197992 GMAT 2: 630 Q47 V29 WE: Information Technology (Computer Software) Followers: 2 Kudos [?]: 22 [0], given: Re: In the figure above, the point on segment PQ that is twice a [#permalink] 22 Dec 2012, 04:04 Math Expert Joined: 02 Sep 2009 This post received Posts: 17327 KUDOS Followers: 2876 Expert's post Kudos [?]: 18410 [1] , given: 2351 Re: In the figure above, the point on segment PQ that is twice a [#permalink] 26 Dec 2012, 02:06 This post received Lets see the algebraic solution. A point P that divides a line AB internally into ratio m1 & m2 is as below. eaakbari A(x1,y1)____________m1_____________P(x3,y3)________________________m2__________________________B(x2,y2) Manager The formula for finding coordinates are as below. Joined: 24 Mar 2010 x3 = (x1*m2 + x2*m1)/(m1 + m2)y3 = (y1*m2 + y2*m1)/(m1 + m2) Posts: 82 Back to our question. Followers: 0 P (0,-1 ) & Q (3,2) Kudos [?]: 14 [2] , given: We fathom that m1 =2 & m2 = 1 x1 = 0 , y1 = -1x2 = 3 , y1 = 2 Plugging values, we obtain answer as (2,1). Hence B Once you know the formula its very easy. P.S. Bunuel's approach is beautifully elegant. However, for me solving algebraically is much faster than figuring the answer choices out. - Stay Hungry, stay Foolish - fozzzy Re: In the figure above, the point on segment PQ that is twice a [#permalink] 29 Dec 2012, 18:44 Director The only way to solve this problem is by plugging in answer choices? Joined: 29 Nov 2012 _________________ Posts: 936 Click +1 Kudos if my post helped... Followers: 11 Amazing Free video explanation for all Quant questions from OG 13 and much more http://www.gmatquantum.com/og13th/ Kudos [?]: 160 [0], given: GMAT Prep software What if scenarios gmat-prep-software-analysis-and-what-if-scenarios-146146.html Re: In the figure above, the point on segment PQ that is twice a [#permalink] 30 Dec 2012, 00:11 fozzzy wrote: The only way to solve this problem is by plugging in answer choices? Joined: 24 Mar 2010 See this Posts: 82 Followers: 0 If you need further elaboration, let me know. Kudos [?]: 14 [0], given: 134 _________________ - Stay Hungry, stay Foolish - Re: In the figure above, the point on segment PQ that is twice a [#permalink] 30 Dec 2012, 01:10 I was thinking along those lines by forming a triangle and then solving it. This is a fairly easy question you can still use the answer choices but if its a bit more Director complicated I would like a fast approach. Joined: 29 Nov 2012 Plane 2.png [ 9.82 KiB | Viewed 3924 times ] Posts: 936 Followers: 11 Click +1 Kudos if my post helped... Kudos [?]: 160 [0], given: 543 Amazing Free video explanation for all Quant questions from OG 13 and much more http://www.gmatquantum.com/og13th/ GMAT Prep software What if scenarios gmat-prep-software-analysis-and-what-if-scenarios-146146.html Re: In the figure above, the point on segment PQ that is twice a [#permalink] 30 Dec 2012, 01:26 fozzzy wrote: I was thinking of using similar triangle property, I was thinking along those lines by forming a triangle and then solving it. You can solve it easily using similar triangles too but you've picked the wrong triangles Joined: 24 Mar 2010 Take the triangle with vertices [ (-1, 0) , (1,1) , (0,0) ] Posts: 82 and the triangle with vertices [ (3, 2) , (1,1) , (3,0) ] Followers: 0 These two triangles are similar since their lengths are in the ration 1:2 Kudos [?]: 14 [0], given: 134 and then you can proceed. - Stay Hungry, stay Foolish - Re: In the figure above, the point on segment PQ that is twice a [#permalink] 19 Jan 2013, 03:24 eaakbari wrote: Lets see the algebraic solution. A point P that divides a line AB internally into ratio m1 & m2 is as below. Senior Manager A(x1,y1)____________m1_____________P(x3,y3)________________________m2__________________________B(x2,y2) Status: Prevent and prepare. The formula for finding coordinates are as below. Not repent and repair!! x3 = (x1*m2 + x2*m1)/(m1 + m2) Joined: 13 Feb 2010 y3 = (y1*m2 + y2*m1)/(m1 + m2) Posts: 277 Back to our question. Location: India P (0,-1 ) & Q (3,2) Concentration: Technology, General Management We fathom that m1 =2 & m2 = 1 GPA: 3.75 x1 = 0 , y1 = -1 WE: Sales x2 = 3 , y1 = 2 Plugging values, we obtain answer as (2,1). Hence B Followers: 9 Once you know the formula its very easy. Kudos [?]: 29 [0], given: 276 P.S. Bunuel's approach is beautifully elegant. However, for me solving algebraically is much faster than figuring the answer choices out. This is called section formula. It will be helpful. Thanks I've failed over and over and over again in my life and that is why I succeed--Michael Jordan Kudos drives a person to better himself every single time. So Pls give it generously Wont give up till i hit a 700+ Re: In the figure above, the point on segment PQ that is twice a [#permalink] 19 Jan 2013, 09:25 Joined: 16 Nov 2012 http://www.teacherschoice.com.au/Maths_ ... Geom_3.htm Posts: 44 go through the page.. Location: United States ans is (2,1) Concentration: Operations, Social Entrepreneurship _________________ GMAT Date: 08-27-2013 ......................................................................................... Please give me kudos if my posts help. GPA: 3.46 WE: Project Management Followers: 0 Kudos [?]: 9 [0], given: 54 essarr Re: In the figure above, the point on segment PQ that is twice a [#permalink] 20 Jan 2013, 01:11 Intern I think with questions like these, the test writers are testing whether you'd quickly jump to using an algebraic approach, which in this case is much more time consuming, as compared to making the answer choices a part of your toolbox for finding the correct answer.. Joined: 22 Jan 2012 The question itself tells us we need to split the line into 3 equal parts with the asked coordinate being 2 parts away from P A quick glance at the graph gives us the slope of 1, which easily shows us which points will cover the three segments Posts: 23 P(0,-1) --> (1,0) --> (2,1) --> Q(3,2) Thus (2,1) being twice as far from P as from Q Followers: 0 True, an algebraic approach might be required for more complex problems where the slope isn't easily determined or the line segment might be split into a different ratio, Kudos [?]: 5 [0], given: 11 but this question isn't testing that Senior Manager Re: In the figure above, the point on segment PQ that is twice a [#permalink] 06 Aug 2013, 00:41 Status: The Best Or Nothing Coordinates of point P = (0,-1) Joined: 27 Dec 2012 Coordinates of point Q = (3,2) Posts: 395 Point required to be found is on segment PQ that is twice as far from P as from Q Location: India So, adding (2,2) to point P (0,-1) Concentration: General Answer = B = (2,1) Management, Technology WE: Information Technology (Computer Software) Kindly press "Kudos" to appreciate Followers: 1 Kudos [?]: 71 [0], given: gmatclubot Re: In the figure above, the point on segment PQ that is twice a [#permalink] 06 Aug 2013, 00:41
{"url":"http://gmatclub.com/forum/in-the-figure-above-the-point-on-segment-pq-that-is-twice-a-139117.html?oldest=1","timestamp":"2014-04-20T18:24:09Z","content_type":null,"content_length":"260107","record_id":"<urn:uuid:5abbb3f5-9baf-45c5-8eda-7d5f51de5c75>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
combining advective reactive transport equation for two solutes October 11th 2010, 07:01 PM combining advective reactive transport equation for two solutes This is a rather basic one sorry, but I am rather stuck.... all I want to do is apply the chain rule to combine the advective transport equation for two solutes such that $dCa/dt =abla \cdot (D abla Ca)-v \cdot abla Ca + \sum Ja$ and $dCb/dt =abla \cdot (D abla Cb)-v \cdot abla Cb + \sum Jb$ where Ca is the concentration of 'a' in the fluid, D is a dispersion coefficient, v is velocity and Ja is the flux of 'a' to the fluid etc to get $d(Ca/Cb)/dt = ?$ can anyone please help? October 12th 2010, 01:30 PM May be you can use quotient rule $<br /> \displaystyle{(\frac {f(x)}{g(x)})'=\frac {f'(x)g(x)-f(x)g'(x)}{g^2(x)}}<br />$ October 13th 2010, 03:01 PM I think I might be getting there.... Taking the one dimensonal case(s) (same for C2) $dC_1/dt=D*(d^2C_1/dz^2)-v\cdot(dC_1/dz)+\sum J_1_,_i$ (1) and using the product rule as a special case of the chain rule $dr/dt=C_2\cdot(dC_1/dt)-C_1\cdot(dC_2/dt)$ (2) where $r=C_1/C_2$ (3) and rearranging for dr/dt in terms of C_1 $dr/dt=(C_1/r)\cdot(dC_1/dt)-C1\cdot(dC_2/dt)$ (4) Substituting (1) for C_1 and C_2 into (4) i get; $dr/dt=(C_1/r)\cdot(D*(d^2C_1/dz^2)-v\cdot(dC_1/dz)+\sum J_1_,_i)-C_1\cdot(D*(d^2C_2/dz^2)-v\cdot(dC_2/dz)+\sum J_2_,_i)$ All I need to do is expand this and rearrange but I am total stuck has how to expand out the brackers, as I suck at math. Can anyone enlighten me? Please. I would be really really grateful.
{"url":"http://mathhelpforum.com/differential-equations/159260-combining-advective-reactive-transport-equation-two-solutes-print.html","timestamp":"2014-04-17T01:47:36Z","content_type":null,"content_length":"7104","record_id":"<urn:uuid:fa30948b-aef1-40f5-9985-a94e28991d34>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 1 - 5 of 5 1. CJM Online first Tate Cycles on Abelian Varieties with Complex Multiplication We consider Tate cycles on an Abelian variety $A$ defined over a sufficiently large number field $K$ and having complex multiplication. We show that there is an effective bound $C = C(A,K)$ so that to check whether a given cohomology class is a Tate class on $A$, it suffices to check the action of Frobenius elements at primes $v$ of norm $ \leq C$. We also show that for a set of primes $v$ of $K$ of density $1$, the space of Tate cycles on the special fibre $A_v$ of the Néron model of $A$ is isomorphic to the space of Tate cycles on $A$ itself. Keywords:Abelian varieties, complex multiplication, Tate cycles Categories:11G10, 14K22 2. CJM 2012 (vol 66 pp. 170) Modular Abelian Varieties Over Number Fields The main result of this paper is a characterization of the abelian varieties $B/K$ defined over Galois number fields with the property that the $L$-function $L(B/K;s)$ is a product of $L$-functions of non-CM newforms over $\mathbb Q$ for congruence subgroups of the form $\Gamma_1(N)$. The characterization involves the structure of $\operatorname{End}(B)$, isogenies between the Galois conjugates of $B$, and a Galois cohomology class attached to $B/K$. We call the varieties having this property strongly modular. The last section is devoted to the study of a family of abelian surfaces with quaternionic multiplication. As an illustration of the ways in which the general results of the paper can be applied we prove the strong modularity of some particular abelian surfaces belonging to that family, and we show how to find nontrivial examples of strongly modular varieties by twisting. Keywords:Modular abelian varieties, $GL_2$-type varieties, modular forms Categories:11G10, 11G18, 11F11 3. CJM 2012 (vol 65 pp. 403) On the Dihedral Main Conjectures of Iwasawa Theory for Hilbert Modular Eigenforms We construct a bipartite Euler system in the sense of Howard for Hilbert modular eigenforms of parallel weight two over totally real fields, generalizing works of Bertolini-Darmon, Longo, Nekovar, Pollack-Weston and others. The construction has direct applications to Iwasawa main conjectures. For instance, it implies in many cases one divisibility of the associated dihedral or anticyclotomic main conjecture, at the same time reducing the other divisibility to a certain nonvanishing criterion for the associated $p$-adic $L$-functions. It also has applications to cyclotomic main conjectures for Hilbert modular forms over CM fields via the technique of Skinner and Urban. Keywords:Iwasawa theory, Hilbert modular forms, abelian varieties Categories:11G10, 11G18, 11G40 4. CJM 2012 (vol 65 pp. 195) Surfaces with $p_g=q=2$, $K^2=6$, and Albanese Map of Degree $2$ We classify minimal surfaces of general type with $p_g=q=2$ and $K^2=6$ whose Albanese map is a generically finite double cover. We show that the corresponding moduli space is the disjoint union of three generically smooth irreducible components $\mathcal{M}_{Ia}$, $\mathcal{M}_{Ib}$, $\mathcal{M}_{II}$ of dimension $4$, $4$, $3$, respectively. Keywords:surface of general type, abelian surface, Albanese map Categories:14J29, 14J10 5. CJM 2011 (vol 63 pp. 1058) $S_3$-covers of Schemes We analyze flat $S_3$-covers of schemes, attempting to create structures parallel to those found in the abelian and triple cover theories. We use an initial local analysis as a guide in finding a global description. Keywords:nonabelian groups, permutation group, group covers, schemes
{"url":"http://cms.math.ca/cjm/kw/abelian","timestamp":"2014-04-16T04:12:19Z","content_type":null,"content_length":"32826","record_id":"<urn:uuid:a43c4eac-b7b3-47c6-a55b-12b2027ff55c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the distance between Buda, Texas and Austin, Texas in miles? You asked: What is the distance between Buda, Texas and Austin, Texas in miles? Assuming you meant • Austin, the capital city of the U.S. state of Texas and the seat of Travis County Did you mean? Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/what_is_the_distance_between_buda,_texas_and_austin,_texas_in_miles","timestamp":"2014-04-19T12:29:58Z","content_type":null,"content_length":"54202","record_id":"<urn:uuid:645103f5-6132-43fb-ace2-20a46125a490>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the equations of the two bisectors of the angles April 16th 2010, 11:48 PM #1 Dec 2008 Find the equations of the two bisectors of the angles Find the equations of the two bisectors of the angles for 3x + 4y = 0 and 5x - 12y + 1 = 0 How do I get started on this? Since the angle bisector of two lines is the locus of all points equidistant from both angle's legs , we must have that any point $(x,y)$ on the bisector must be equidistant from both lines, $\frac{|3x+4y|}{5} = \frac{|5x-12y+1|}{169} \Longrightarrow \frac{3x+4y}{5} = \pm \frac{5x-12y+1}{169}$ , one of these is the bisector's equation of the acute angle and the other one the bisector's eq. of the obtuse one. April 17th 2010, 01:25 AM #2 Oct 2009
{"url":"http://mathhelpforum.com/pre-calculus/139594-find-equations-two-bisectors-angles.html","timestamp":"2014-04-21T04:39:31Z","content_type":null,"content_length":"33526","record_id":"<urn:uuid:f1f24726-a8d4-4fff-a7da-2b21be46ed75>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Bensenville Algebra 1 Tutor ...I love math and helping students understand it. I first tutored math in college and have been tutoring for a couple years independently. My students' grades improve quickly, usually after only a few sessions. 26 Subjects: including algebra 1, Spanish, chemistry, special needs ...As a physics graduate student at San Diego State University I continued my tutoring experience again as a university employee. This time I tended the General Math Studies (GMS) 'Math Lab' where students could drop in for help with math at their own convenience. The lab restricts its services to... 7 Subjects: including algebra 1, calculus, physics, geometry ...My experience includes leading workshops in Biology at the University of Miami as well as coaching Pop-Warner football. I lead workshops for a year at the University. My job was to make sure students understood the lecture material and answer any questions they had. 27 Subjects: including algebra 1, reading, English, chemistry I am an experienced special education teacher and tutor. Being a special education teacher means that I know how to pinpoint exactly how each student learns best and I have the patience and ability to guide students, no matter if they are in special education, gifted education or somewhere in betwe... 33 Subjects: including algebra 1, reading, English, writing ...My students included autistic children and the physically disabled. I currently tutor English via Skype for a friend who lives in Vietnam. This friend is also my Hapkido Grandmaster from whom I earned my Black Belt. 30 Subjects: including algebra 1, reading, chemistry, physics
{"url":"http://www.purplemath.com/bensenville_il_algebra_1_tutors.php","timestamp":"2014-04-18T21:45:07Z","content_type":null,"content_length":"23896","record_id":"<urn:uuid:27f17428-9018-4de5-bffa-d30c393557cd>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Powder Springs, GA Algebra 2 Tutor Find a Powder Springs, GA Algebra 2 Tutor ...I look forward to hearing from you and to helping you achieve your goals!I have tutored in math for elementary, middle, school and high school, up to SAT preparation for the past 15 years. I have a Master's Degree in English from Wake Forest and have taught grammar in community college and high ... 33 Subjects: including algebra 2, English, reading, writing ...I know what's expected from each student and I will create a plan of action to help you achieve your personal goals to better understand mathematics. I am Georgia certified in Mathematics (grades 6-12) with my Masters in Mathematics Education from Georgia State University. I will create a plan of action to help you achieve your personal goals to better understand mathematics. 7 Subjects: including algebra 2, geometry, algebra 1, SAT math Teaching is my dream in life. It is what makes me feel fulfilled and happy. I would say that it all started when I was a Freshman (9th grade) at Santa Monica High School. 14 Subjects: including algebra 2, reading, biology, chemistry I have been a teacher for seven years and currently teach 9th grade Coordinate Algebra and 10th grade Analytic Geometry. I am up to date with all of the requirements in preparation for the EOCT. I am currently finishing up my masters degree from KSU. 4 Subjects: including algebra 2, geometry, algebra 1, prealgebra ...During high school, I was actively involved with several tutoring programs offered for elementary and high school students. Many of these students were not native English speakers, and, since I have been speaking Spanish and Farsi from an early age, I could effectively communicate and help these... 28 Subjects: including algebra 2, Spanish, reading, English Related Powder Springs, GA Tutors Powder Springs, GA Accounting Tutors Powder Springs, GA ACT Tutors Powder Springs, GA Algebra Tutors Powder Springs, GA Algebra 2 Tutors Powder Springs, GA Calculus Tutors Powder Springs, GA Geometry Tutors Powder Springs, GA Math Tutors Powder Springs, GA Prealgebra Tutors Powder Springs, GA Precalculus Tutors Powder Springs, GA SAT Tutors Powder Springs, GA SAT Math Tutors Powder Springs, GA Science Tutors Powder Springs, GA Statistics Tutors Powder Springs, GA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Austell algebra 2 Tutors Chamblee, GA algebra 2 Tutors Clarkdale, GA algebra 2 Tutors Cumming, GA algebra 2 Tutors Dallas, GA algebra 2 Tutors Douglasville algebra 2 Tutors Hiram, GA algebra 2 Tutors Holly Springs, GA algebra 2 Tutors Lilburn algebra 2 Tutors Lithia Springs algebra 2 Tutors Mableton algebra 2 Tutors Marietta, GA algebra 2 Tutors Tyrone, GA algebra 2 Tutors Villa Rica, PR algebra 2 Tutors Winston, GA algebra 2 Tutors
{"url":"http://www.purplemath.com/powder_springs_ga_algebra_2_tutors.php","timestamp":"2014-04-17T21:36:15Z","content_type":null,"content_length":"24283","record_id":"<urn:uuid:461b947a-ac9a-4f1c-9a26-324fcdc7490a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Higher Level Math Question---> Russian Lucky Numbers?! - WyzAnt Answers In Russia you get into a bus, take a ticket, and sometimes say : Wow, a lucky number! Bus tickets are numbered by 6-digit numbers, and a lucky ticket has the sum of 3 first digits being equal to the sum of 3 last digits. When we were in high school we had to write a code that prints out all the lucky tickets' numbers; at least I did, to show my loyalty to the progammers' clan. Now, if you add up all the lucky tickets' numbers you will find out that 13 (the most unlucky number) is a divisor of the result. Can you prove it (without writing a code)? Tutors, please sign in to answer this question. 2 Answers Let first number of {} = 26 and add 13 to every number after 26 stopping at 52, and eliminate any number not ending in an even integer and ending in 0 to get the sum set. Stop at 998 to get 52 as the sum of each 3-number sets. 52/13= 4. There is just 26,39, and 52 as possibilities. 998+998=52 while 999+999=54 and not divisible by 13. And 54 is the highest number one can whereby add adding single digits in a series of three. Thank you for your answer. Unfortunately, I have still have not gotten what I needed. I am still not aware of how.. if you add all the lucky numbers... the sum is divisible by 13. Do you think you can help me prove that with a little more clarification? Thanks! :) I am wondering too since I can't figure why 54 divisible by 17 isn't a possible winninumber red. there seems to be missing criteria. If 13 is necessary, then wheArdis the statement that makes us eliminate 54 as a possible sum to use. Im not sure I understand what you're trying to say. :/ Hello Miss., This question from nearly a week ago seems never to have been resolved favorably, so I will attempt to give a brief exposition of its solution. First, we must be clear on what constitutes a bus ticket. From the wording of the problem, it seems to me that any string of six digits, from 000000 to 999999 is a valid ticket, since this would enable the bus system to have an ordering of tickets. In other words, tickets beginning with one or more zeroes are perfectly acceptable. If we agree to this definition of a ticket, the problem becomes straightforward. Let an arbitrary lucky bus ticket be denoted by its six-digits as abcdef. Write the value of the ticket as follows: 1000abc + def where we think of abc and def as normal three-digit integers. Now, if abcdef is lucky, then so is defabc. Note that if we add together the values of the two lucky tickets abcdef and defabc, we get 1001(abc + def), and since 1001 is divisible by 13, then any pair of lucky tickets of this type sum to a multiple of 13. The last thing to note is that any ticket of the form abcabc will only occur once in the sum of lucky tickets, not as a pair of tickets. However, tickets of this form are already divisible by 13, since they have the value 1000abc + abc = 1001abc. So, the sum of all the lucky bus tickets must be divisible by 13. Since 1001 = 7·11·13, the sum of all the bus tickets is also divisible by 7 and 11, we could say that since 7 and 11 are (maybe) lucky numbers, these outweigh the unlucky influence of 13! I hope this explanation is clear. Hassan H.
{"url":"http://www.wyzant.com/resources/answers/12885/higher_level_math_question_russian_lucky_numbers","timestamp":"2014-04-24T15:32:15Z","content_type":null,"content_length":"45225","record_id":"<urn:uuid:ad5e20b9-ce32-4ed8-8a27-76fc52f041d9>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation. Recent Comments Therese on Just Energy Canada nasty busin… peeterjoot on Found code that fails grade 11… David Galloway on Found code that fails grade 11… David Galloway on A nice example of one reason t… Adsd Asdsads on Public service announcement: h… Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation. Posted by Peeter Joot on December 22, 2009 [Click here for a PDF of this post with nicer formatting] Motivation and notation. In Electrodynamic field energy for vacuum (reworked) [1], building on Energy and momentum for Complex electric and magnetic field phasors [2] a derivation for the energy and momentum density was derived for an assumed Fourier series solution to the homogeneous Maxwell’s equation. Here we move to the continuous case examining Fourier transform solutions and the associated energy and momentum A complex (phasor) representation is implied, so taking real parts when all is said and done is required of the fields. For the energy momentum tensor the Geometric Algebra form, modified for complex fields, is used \begin{aligned}T(a) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \hspace{\stretch{1}}(1.1) The assumed four vector potential will be written \begin{aligned}A(\mathbf{x}, t) = A^\mu(\mathbf{x}, t) \gamma_\mu = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch Subject to the requirement that $A$ is a solution of Maxwell’s equation \begin{aligned}abla (abla \wedge A) = 0.\end{aligned} \hspace{\stretch{1}}(1.3) To avoid latex hell, no special notation will be used for the Fourier coefficients, \begin{aligned}A(\mathbf{k}, t) = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.4) When convenient and unambiguous, this $(\mathbf{k},t)$ dependence will be implied. Having picked a time and space representation for the field, it will be natural to express both the four potential and the gradient as scalar plus spatial vector, instead of using the Dirac basis. For the gradient this is \begin{aligned}abla &= \gamma^\mu \partial_\mu = (\partial_0 - \boldsymbol{abla}) \gamma_0 = \gamma_0 (\partial_0 + \boldsymbol{abla}),\end{aligned} \hspace{\stretch{1}}(1.5) and for the four potential (or the Fourier transform functions), this is \begin{aligned}A &= \gamma_\mu A^\mu = (\phi + \mathbf{A}) \gamma_0 = \gamma_0 (\phi - \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(1.6) The field bivector $F = abla \wedge A$ is required for the energy momentum tensor. This is \begin{aligned}abla \wedge A&= \frac{1}{{2}}\left( \stackrel{ \rightarrow }{abla} A - A \stackrel{ \leftarrow }{abla} \right) \\ &= \frac{1}{{2}}\left( (\stackrel{ \rightarrow }{\partial}_0 - \ stackrel{ \rightarrow }{\boldsymbol{abla}}) \gamma_0 \gamma_0 (\phi - \mathbf{A})- (\phi + \mathbf{A}) \gamma_0 \gamma_0 (\stackrel{ \leftarrow }{\partial}_0 + \stackrel{ \leftarrow }{\boldsymbol {abla}})\right) \\ &= -\boldsymbol{abla} \phi -\partial_0 \mathbf{A} + \frac{1}{{2}}(\stackrel{ \rightarrow }{\boldsymbol{abla}} \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{abla}}) \ This last term is a spatial curl and the field is then \begin{aligned}F = -\boldsymbol{abla} \phi -\partial_0 \mathbf{A} + \boldsymbol{abla} \wedge \mathbf{A} \end{aligned} \hspace{\stretch{1}}(2.7) Applied to the Fourier representation this is \begin{aligned}F = \frac{1}{{(\sqrt{2 \pi})^3}} \int \left( - \frac{1}{{c}} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf {k}.\end{aligned} \hspace{\stretch{1}}(2.8) The energy momentum tensor is then \begin{aligned}T(a) &= -\frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint \left( - \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)+ i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)- i \mathbf{k}' \ wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)a\left( - \frac{1}{{c}} \dot{\mathbf{A}}(\mathbf{k}, t)- i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)e^{i (\ mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(2.9) The tensor integrated over all space. Energy and momentum? Integrating this over all space and identification of the delta function \begin{aligned}\delta(\mathbf{k}) \equiv \frac{1}{{(2 \pi)^3}} \int e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x},\end{aligned} \hspace{\stretch{1}}(3.10) reduces the tensor to a single integral in the continuous angular wave number space of $\mathbf{k}$. \begin{aligned}\int T(a) d^3 \mathbf{x} &= -\frac{\epsilon_0}{2} \text{Real} \int \left( - \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}+ i \mathbf{k} {{\phi}}^{*}- i \mathbf{k} \wedge {\mathbf{A}}^{*}\ right)a\left( - \frac{1}{{c}} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.11) Observing that $\gamma_0$ commutes with spatial bivectors and anticommutes with spatial vectors, and writing $\sigma_\mu = \gamma_\mu \gamma_0$, one has \begin{aligned}\int T(\gamma_\mu) \gamma_0 d^3 \mathbf{x} = \frac{\epsilon_0}{2} \text{Real} \int {\left\langle{{\left( \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}- i \mathbf{k} {{\phi}}^{*}+ i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)\sigma_\mu\left( \frac{1}{{c}} \dot{\mathbf{A}}+ i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)}}\right\rangle}_{{0,1}}d^3 \mathbf{k}.\end{aligned} \hspace{\ The scalar and spatial vector grade selection operator has been added for convenience and does not change the result since those are necessarily the only grades anyhow. The post multiplication by the observer frame time basis vector $\gamma_0$ serves to separate the energy and momentum like components of the tensor nicely into scalar and vector aspects. In particular for $T(\gamma^0)$, one could \begin{aligned}\int T(\gamma^0) d^3 \mathbf{x} = (H + \mathbf{P}) \gamma_0,\end{aligned} \hspace{\stretch{1}}(3.13) If these are correctly identified with energy and momentum then it also ought to be true that we have the conservation relationship \begin{aligned}\frac{\partial {H}}{\partial {t}} + \boldsymbol{abla} \cdot (c \mathbf{P}) = 0.\end{aligned} \hspace{\stretch{1}}(3.14) However, multiplying out (3.12) yields for $H$ \begin{aligned}H &= \frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 ({\left\lvert{\phi}\right\rvert}^2 + {\left\lvert{\ mathbf{A}}\right\rvert}^2 )- {\left\lvert{\mathbf{k} \cdot \mathbf{A}}\right\rvert}^2 + 2 \frac{\mathbf{k}}{c} \cdot \text{Real}( i {{\phi}}^{*} \dot{\mathbf{A}} )\right)\end{aligned} \hspace{\ The vector component takes a bit more work to reduce \begin{aligned}\mathbf{P} &= \frac{\epsilon_0}{2} \int d^3 \mathbf{k} \text{Real} \left(\frac{i}{c} ({{\dot{\mathbf{A}}}}^{*} \cdot (\mathbf{k} \wedge \mathbf{A})+ {{\phi}}^{*} \mathbf{k} \cdot (\ mathbf{k} \wedge \mathbf{A})+ \frac{i}{c} (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \dot{\mathbf{A}}- \phi (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \mathbf{k}\right) \\ &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \text{Real} \left(\frac{i}{c} \left( ({{\dot{\mathbf{A}}}}^{*} \cdot \mathbf{k}) \mathbf{A} -({{\dot{\mathbf{A}}}}^{*} \cdot \mathbf{A}) \mathbf{k} \right)+ {{\phi}}^{*} \left( \mathbf {k}^2 \mathbf{A} - (\mathbf{k} \cdot \mathbf{A}) \mathbf{k} \right)+ \frac{i}{c} \left( ({\mathbf{A}}^{*} \cdot \dot{\mathbf{A}}) \mathbf{k} - (\mathbf{k} \cdot \dot{\mathbf{A}}) {\mathbf{A}}^{*} \ right)+ \phi \left( \mathbf{k}^2 {\mathbf{A}}^{*} -({\mathbf{A}}^{*} \cdot \mathbf{k}) \mathbf{k} \right) \right).\end{aligned} Canceling and regrouping leaves \begin{aligned}\mathbf{P}&=\epsilon_0 \int d^3 \mathbf{k} \text{Real} \left(\mathbf{A} \left( \mathbf{k}^2 {{\phi}}^{*} + \mathbf{k} \cdot {{\dot{\mathbf{A}}}}^{*} \right)+ \mathbf{k} \left( -{{\ phi}}^{*} (\mathbf{k} \cdot \mathbf{A}) + \frac{i}{c} ({\mathbf{A}}^{*} \cdot \dot{\mathbf{A}})\right)\right).\end{aligned} \hspace{\stretch{1}}(3.16) This has no explicit $\mathbf{x}$ dependence, so the conservation relation (3.14) is violated unless ${\partial {H}}/{\partial {t}} = 0$. There is no reason to assume that will be the case. In the discrete Fourier series treatment, a gauge transformation allowed for elimination of $\phi$, and this implied $\mathbf{k} \cdot \mathbf{A}_\mathbf{k} = 0$ or $\mathbf{A}_\mathbf{k}$ constant. We will probably have a similar result here, eliminating most of the terms in (3.15) and (3.16). Except for the constant $\mathbf{A}_\mathbf{k}$ solution of the field equations there is no obvious way that such a simplified energy expression will have zero derivative. A more reasonable conclusion is that this approach is flawed. We ought to be looking at the divergence relation as a starting point, and instead of integrating over all space, instead employing Gauss’s theorem to convert the divergence integral into a surface integral. Without math, the conservation relationship probably ought to be expressed as energy change in a volume is matched by the momentum change through the surface. However, without an integral over all space, we do not get the nice delta function cancellation observed above. How to proceed is not immediately clear. Stepping back to review applications of Gauss’s theorem is probably a good first step. [1] Peeter Joot. Electrodynamic field energy for vacuum. [online]. http://sites.google.com/site/peeterjoot/math2009/fourierMaxVac.pdf. [2] Peeter Joot. {Energy and momentum for Complex electric and magnetic field phasors.} [online]. http://sites.google.com/site/peeterjoot/math2009/complexFieldEnergy.pdf. One Response to “Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation.” 1. December 29, 2009 at 12:26 am [...] Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation… [...]
{"url":"http://peeterjoot.wordpress.com/2009/12/22/energy-and-momentum-for-assumed-fourier-transform-solutions-to-the-homogeneous-maxwell-equation/","timestamp":"2014-04-20T03:09:41Z","content_type":null,"content_length":"113677","record_id":"<urn:uuid:d48555fd-c6a4-4fbe-b7c4-159a2af9de91>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
History of Modern Mathematics Article 4: Complex Numbers Additional Information • Year Published: 1906 • Language: English • Country of Origin: United States of America • Source: Smith, D.E. (1906). Hisotyr of Modern Mathematics. London: Chapman and Hall. • Readability: □ Flesch–Kincaid Level: 12.0 • Word Count: 726 Smith, D. (1906). Article 4: Complex Numbers. History of Modern Mathematics (Lit2Go Edition). Retrieved April 21, 2014, from http://etc.usf.edu/lit2go/103/history-of-modern-mathematics/1729/ Smith, David Eugene. "Article 4: Complex Numbers." History of Modern Mathematics. Lit2Go Edition. 1906. Web. <http://etc.usf.edu/lit2go/103/history-of-modern-mathematics/1729/ article-4-complex-numbers/>. April 21, 2014. David Eugene Smith, "Article 4: Complex Numbers," History of Modern Mathematics, Lit2Go Edition, (1906), accessed April 21, 2014, http://etc.usf.edu/lit2go/103/history-of-modern-mathematics/1729/ Browse Happy and update your internet browser today! />The Theory of Complex Numbers may be said to have attracted attention as early as the sixteenth century in the recognition, by the Italian algebraists, of imaginary or impossible roots. In the seventeenth century Descartes distin- guished between real and imaginary roots, and the eighteenth saw the labors of De Moivre and Euler. To De Moivre is due (1730) the well-known formula which bears his name, (cos = cos , and to Euler (1748) the formula cos e^θ i The geometric notion of complex quantity now arose, and as a result the theory of complex numbers received a notable expansion. The idea of the graphic representation of complex numbers had appeared, however, as early as 1685, in Wallis’s De Algebra tractatus. In the eighteenth century Kühn (1750) and Wessel (about 1795) made decided advances towards the present theory. Wessel’s memoir appeared in the Proceedings of the Copenhagen Academy for 1799, and is exceedingly clear and complete, even in comparison with modern works. He also considers the sphere, and gives a quaternion theory from which he develops a complete spherical trigonometry. In 1804 the Abbé Buée independently came upon the same idea which Wallis had suggested, that ±√–1 should represent a unit line, and its negative, perpendicular to the real axis. Buée’s paper was not published until 1806, in which year Argand also issued a pamphlet on the same subject. It is to Argand’s essay that the scientific foundation for the graphic representation of complex numbers is now generally referred. Nevertheless, in 1831 Gauss found the theory quite unknown, and in 1832 published his chief memoir on the subject, thus bringing it prominently before the mathematical world. Mention should also be made of an excellent little treatise by Mourey (1828), in which the foundations for the theory of directional numbers are scientifically laid. The general acceptance of the theory is not a little due to the labors of Cauchy and Abel, and especially the latter, who was the first to boldly use complex numbers with a success that is well known. The common terms used in the theory are chiefly due to the founders. Argand called cos φ + i sin φ the “direction factor”, and r = √a^2 + b^2 the “modulus”; Cauchy (1828) called cos φ + i sin φ the “reduced form” (l’expression réduite); Gauss used i for √–1, introduced the term “complex number” for a + bi, and called a^2 + b^2 the “norm.” The expression “direction coefficient”, often used for cos φ + i sin φ, is due to Hankel (1867), and “absolute value,” for “modulus,” is due to Weierstrass. Following Cauchy and Gauss have come a number of contributors of high rank, of whom the following may be especially mentioned: Kummer (1844), Kronecker (1845), Scheffler (1845, 1851, 1880), Bellavitis (1835, 1852), Peacock (1845), and De Morgan (1849). Möbius must also be mentioned for his numerous memoirs on the geometric applications of complex numbers, and Dirichlet for the expansion of the theory to include primes, congruences, reciprocity, etc., as in the case of real numbers. Other types^2 have been studied, besides the familiar a + bi, in which i is the root of x^2 + 1 = 0. Thus Eisenstein has studied the type a + bj , j being a complex root of x^3 – 1 = 0. Similarly, complex types have been derived from x^k – 1 = 0 (k prime). This generalization is largely due to Kummer, to whom is also due the theory of Ideal Numbers,^3 which has recently been simplified by Klein (1893) from the point of view of geometry. A further complex theory is due to Galois, the basis being the imaginary roots of an irreducible congruence, F (x) = 0 (mod p, a prime). The late writers (from 1884) on the general theory include Weierstrass, Schwarz, Dedekind, Hölder, Berloty, Poincaré, Study, and Macfarlane. ^1 Riecke, F., Die Rechnung mit Richtungszahlen, 1856, p. 161; Hankel, H., Theorie der komplexen Zahlensysteme, Leipzig, 1867; Holzmüller, G., Theorie der isogonalen Verwandtschaften, 1882, p. 21; Macfarlane, A., The Imaginary of Algebra, Proceedings of American Association 1892, p. 33; Baltzer, R., Einführung der komplexen Zahlen, Crelle, 1882; Stolz, O., Vorlesungen über allgemeine Arithmetik, 2. Theil, Leipzig, 1886. ^2 Chapman, C. H., Weierstrass and Dedekind on General Complex Numbers, in Bulletin New York Mathematical Society, Vol. I, p. 150; Study, E., Aeltere und neuere Untersuchungen über Systeme complexer Zahlen, Mathematical Papers Chicago Congress, p. 367; bibliography, p. 381. ^3 Klein, F., Evanston Lectures, Lect. VIII.
{"url":"http://etc.usf.edu/lit2go/103/history-of-modern-mathematics/1729/article-4-complex-numbers/","timestamp":"2014-04-21T09:47:33Z","content_type":null,"content_length":"17744","record_id":"<urn:uuid:87690743-7d29-499a-97c1-566b544d377c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00157-ip-10-147-4-33.ec2.internal.warc.gz"}